https://en.wikipedia.org/wiki/UC%20Davis%20Department%20of%20Applied%20Science
The Department of Applied Science at the University of California, Davis was a cooperative academic program involving the University of California, Davis and the Lawrence Livermore National Laboratory (LLNL). It was established in the fall of 1963 by Edward Teller, director of LLNL, and Roy Bainer, then dean of the UC Davis College of Engineering. The department was discontinued in 2011.
History
Teller's push for an educational institution associated with the LLNL was part of a general movement championed by Alvin M. Weinberg of Oak Ridge National Laboratory to use the United States Department of Energy National Laboratories to educate scientists, since at the time these laboratories employed roughly 10% of the scientists in the United States. Teller first approached the University of California, Berkeley with his idea, but the faculty there opposed it because of the program's military focus, and the administration wasn't receptive. So he turned, reluctantly, to UC Davis instead. There Bainer and Emil M. Mrak, then chancellor of UC Davis, were more receptive to Teller's plan, although some faculty of the College of Engineering were unhappy with the idea of outsiders teaching their students.
Nicknamed "Teller Tech," the department was established in 1963 by Edward Teller on the grounds of the Lawrence Livermore National Laboratory (LLNL). It was the first graduate education program associated with one of the national laboratories. At the dedication of the new program, then president of the University of California, Clark Kerr, said that the school's "imaginative new curriculum" would allow the department to "build in a short time and at small cost a highly advanced training program of great significance to modern society."
The lab at first shared the facilities at Lawrence Livermore, although the students conducted non-classified research. Teller intended the DAS to educate advanced students in nuclear physics and other subjects applicable to defense industries. The Atomic Energy Commission, which administered LLNL, was worried about allowing DAS to use its facilities if foreign students were enrolled. To meet this objection Teller agreed to limit the number of foreign students attending and to require prospective students to undergo FBI background checks.
Later, the lab was administered by the University of California for the Department of Energy, and students were allowed to participate in classified as well as unclassified projects. As part of the admissions process students were required to fill out a Personnel Security Questionnaire (PSQ) so that the Office of Personnel Management (OPM) could run a background check on them. Once their clearance was granted they were allowed to participate in classified research. Students were required to be US citizens to participate; country of origin was not an issue.
Teller, who had been director of the Lawrence Livermore National Laboratory beginning in 1958, was named the first chairman of the Department of Applied Science. Its main location, built in 1976, was on the grounds of the LLNL in a building paid for with a matching grant of $1 million from the Hertz Foundation and thus called Hertz Hall.
The Department of Applied Science later became more centered at the UC Davis campus. Many of the department's faculty had joint appointments with LLNL or other national laboratories, so that students in the department had access to facilities in both locations.
The UC Davis College of Engineering closed the Department of Applied Science in July 2011 for budgetary reasons after 48 years of operation.
Notable faculty
Berni Alder - cofounder with Teller and National Medal of Science winner.
Edward Teller
References
Nuclear research institutes
University of California, Davis
1963 establishments in California
https://en.wikipedia.org/wiki/Finite%20field%20arithmetic
In mathematics, finite field arithmetic is arithmetic in a finite field (a field containing a finite number of elements), in contrast to arithmetic in a field with an infinite number of elements, like the field of rational numbers.
There are infinitely many different finite fields. Their number of elements is necessarily of the form p^n where p is a prime number and n is a positive integer, and two finite fields of the same size are isomorphic. The prime p is called the characteristic of the field, and the positive integer n is called the dimension of the field over its prime field.
Finite fields are used in a variety of applications, including in classical coding theory in linear block codes such as BCH codes and Reed–Solomon error correction, in cryptography algorithms such as the Rijndael (AES) encryption algorithm, in tournament scheduling, and in the design of experiments.
Effective polynomial representation
The finite field with p^n elements is denoted GF(p^n) and is also called the Galois field of order p^n, in honor of the founder of finite field theory, Évariste Galois. GF(p), where p is a prime number, is simply the ring of integers modulo p. That is, one can perform operations (addition, subtraction, multiplication) using the usual operation on integers, followed by reduction modulo p. For instance, in GF(5), 4 + 3 = 7 is reduced to 2 modulo 5. Division is multiplication by the inverse modulo p, which may be computed using the extended Euclidean algorithm.
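A minimal sketch of this in C (the function name gf_p_inv is illustrative, not from any particular library): the inverse of a modulo a prime p is found with the extended Euclidean algorithm, after which division b / a in GF(p) is just (b * gf_p_inv(a, p)) % p.

/* Multiplicative inverse in GF(p) via the extended Euclidean algorithm.
 * Assumes p is prime and 0 < a < p, so gcd(a, p) = 1 and the inverse exists. */
int gf_p_inv(int a, int p) {
    int t = 0, new_t = 1;        /* Bezout coefficients tracked for the inverse */
    int r = p, new_r = a;        /* remainder sequence of the Euclidean algorithm */
    while (new_r != 0) {
        int q = r / new_r;
        int tmp = t - q * new_t; t = new_t; new_t = tmp;
        tmp = r - q * new_r;     r = new_r; new_r = tmp;
    }
    return t < 0 ? t + p : t;    /* normalize into the range 0..p-1 */
}

For instance, gf_p_inv(3, 5) returns 2, matching the fact that 3 · 2 = 6 is reduced to 1 modulo 5.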
A particular case is GF(2), where addition is exclusive OR (XOR) and multiplication is AND. Since the only invertible element is 1, division is the identity function.
Elements of GF(p^n) may be represented as polynomials of degree strictly less than n over GF(p). Operations are then performed modulo m(x) where m(x) is an irreducible polynomial of degree n over GF(p), for instance using polynomial long division. Addition is the usual addition of polynomials, but the coefficients are reduced modulo p. Multiplication is also the usual multiplication of polynomials, but with coefficients multiplied modulo p and polynomials multiplied modulo the polynomial m(x). This representation in terms of polynomial coefficients is called a monomial basis (a.k.a. 'polynomial basis').
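As an illustrative sketch of this polynomial representation (the name and array conventions are my own, not from a library), the following C routine multiplies two elements of GF(p^n) stored as coefficient arrays a[0..n-1] and b[0..n-1] (a[i] is the coefficient of x^i), reducing modulo a monic irreducible polynomial m of degree n:

/* Multiply a and b in GF(p^n), reducing modulo the monic irreducible m(x),
 * where m[0..n] holds the coefficients of m and m[n] == 1. Sketch only;
 * assumes n <= 8 so the unreduced product fits in t[15]. */
void gf_pn_mul(int p, int n, const int a[], const int b[],
               const int m[], int out[]) {
    int t[15] = {0};                        /* product coefficients, degree <= 2n-2 */
    for (int i = 0; i < n; i++)             /* schoolbook polynomial multiplication */
        for (int j = 0; j < n; j++)
            t[i + j] = (t[i + j] + a[i] * b[j]) % p;
    for (int i = 2 * n - 2; i >= n; i--) {  /* polynomial long division by m(x) */
        int c = t[i];                       /* leading coefficient to eliminate */
        if (c == 0) continue;
        for (int j = 0; j <= n; j++)        /* subtract c * x^(i-n) * m(x); m is monic, so t[i] becomes 0 */
            t[i - n + j] = ((t[i - n + j] - c * m[j]) % p + p) % p;
    }
    for (int i = 0; i < n; i++)
        out[i] = t[i];
}

For the Rijndael field GF(2^8) discussed later, p = 2, n = 8 and m would be {1, 1, 0, 1, 1, 0, 0, 0, 1}, the coefficients of x^8 + x^4 + x^3 + x + 1.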
There are other representations of the elements of GF(p^n); some are isomorphic to the polynomial representation above and others look quite different (for instance, using matrices). Using a normal basis may have advantages in some contexts.
When the prime is 2, it is conventional to express elements of GF(p^n) as binary numbers, with the coefficient of each term in a polynomial represented by one bit in the corresponding element's binary expression. Braces ( "{" and "}" ) or similar delimiters are commonly added to binary numbers, or to their hexadecimal equivalents, to indicate that the value gives the coefficients of a basis of a field, thus representing an element of the field. For example, the following are equivalent representations of the same value in a characteristic 2 finite field:

Polynomial: x^6 + x^4 + x + 1
Binary: {01010011}
Hexadecimal: {53}
Primitive polynomials
There are many irreducible polynomials (sometimes called reducing polynomials) that can be used to generate a finite field, but they do not all give rise to the same representation of the field.
A monic irreducible polynomial of degree n having coefficients in the finite field GF(q), where q = p^t for some prime p and positive integer t, is called a primitive polynomial if all of its roots are primitive elements of GF(q^n). In the polynomial representation of the finite field, this implies that x is a primitive element. There is at least one irreducible polynomial for which x is a primitive element. In other words, for a primitive polynomial, the powers of x generate every nonzero value in the field.
In the following examples it is best not to use the polynomial representation, as the meaning of x changes between the examples. The monic irreducible polynomial x^8 + x^4 + x^3 + x + 1 over GF(2) is not primitive. Let λ be a root of this polynomial (in the polynomial representation this would be x), that is, λ^8 + λ^4 + λ^3 + λ + 1 = 0. Now λ^51 = 1, so λ is not a primitive element of GF(2^8) and generates a multiplicative subgroup of order 51. The monic irreducible polynomial x^8 + x^4 + x^3 + x^2 + 1 over GF(2) is primitive, and all 8 roots are generators of GF(2^8).
GF(2^8) has a total of 128 generators (see Number of primitive elements), and for a primitive polynomial, 8 of them are roots of the reducing polynomial. Having x as a generator for a finite field is beneficial for many computational mathematical operations.
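The order-51 subgroup above can be checked mechanically. The following C sketch computes the multiplicative order of the element x (that is, {02}) under the Rijndael reducing polynomial, reusing the gmul routine defined in the C programming example later in this article:

#include <stdint.h>
#include <stdio.h>

uint8_t gmul(uint8_t a, uint8_t b);   /* GF(2^8) multiply from the C example below */

int main(void) {
    uint8_t v = 0x02;                 /* the element x in the polynomial representation */
    int order = 1;
    while (v != 0x01) {               /* keep multiplying by x until we return to 1 */
        v = gmul(v, 0x02);
        order++;
    }
    printf("multiplicative order of x: %d\n", order);  /* prints 51, so x is not a generator here */
    return 0;
}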
Addition and subtraction
Addition and subtraction are performed by adding or subtracting two of these polynomials together, and reducing the result modulo the characteristic.
In a finite field with characteristic 2, addition modulo 2, subtraction modulo 2, and XOR are identical. Thus,

(x^6 + x^4 + x + 1) + (x^7 + x^6 + x^3 + x) = x^7 + x^4 + x^3 + 1

or, in binary and hexadecimal notation, {01010011} + {11001010} = {10011001}, i.e. {53} + {CA} = {99}.
Under regular addition of polynomials, the sum would contain a term 2x^6. This term has coefficient 2 ≡ 0 (mod 2) and is therefore dropped when the answer is reduced modulo 2.
Here is a table with both the normal algebraic sum and the characteristic 2 finite field sum of a few polynomials:

p                q                p + q (normal algebra)    p + q in GF(2^n)
x^3 + x + 1      x^3 + x^2        2x^3 + x^2 + x + 1        x^2 + x + 1
x^4 + x^2        x^6 + x^2        x^6 + x^4 + 2x^2          x^6 + x^4
x + 1            x^2 + 1          x^2 + x + 2               x^2 + x
x^6 + x          x^6 + x          2x^6 + 2x                 0
x^7 + x^6 + 1    x^7 + x^6 + x    2x^7 + 2x^6 + x + 1       x + 1
In computer science applications, the operations are simplified for finite fields of characteristic 2, also called GF(2^n) Galois fields, making these fields especially popular choices for applications.
Multiplication
Multiplication in a finite field is multiplication modulo an irreducible reducing polynomial used to define the finite field. (I.e., it is multiplication followed by division using the reducing polynomial as the divisor—the remainder is the product.) The symbol "•" may be used to denote multiplication in a finite field.
Rijndael's (AES) finite field
Rijndael (standardised as AES) uses the characteristic 2 finite field with 256 elements, which can also be called the Galois field GF(2^8). It employs the following reducing polynomial for multiplication:
x^8 + x^4 + x^3 + x + 1.
For example, {53} • {CA} = {01} in Rijndael's field because
  (x^6 + x^4 + x + 1)(x^7 + x^6 + x^3 + x)
= (x^13 + x^12 + x^9 + x^7) + (x^11 + x^10 + x^7 + x^5) + (x^8 + x^7 + x^4 + x^2) + (x^7 + x^6 + x^3 + x)
= x^13 + x^12 + x^9 + x^11 + x^10 + x^5 + x^8 + x^4 + x^2 + x^6 + x^3 + x
= x^13 + x^12 + x^11 + x^10 + x^9 + x^8 + x^6 + x^5 + x^4 + x^3 + x^2 + x

(the four x^7 terms cancel in pairs because coefficients are reduced modulo 2)
and
  x^13 + x^12 + x^11 + x^10 + x^9 + x^8 + x^6 + x^5 + x^4 + x^3 + x^2 + x  mod  (x^8 + x^4 + x^3 + x + 1)
= 11111101111110 mod 100011011
= {3F7E} mod {11B}
= {01}
= 1 (decimal)
The latter can be demonstrated through long division, shown below using binary notation, since it lends itself well to the task. Notice that exclusive OR is applied in the example, and not arithmetic subtraction as one might use in grade-school long division:
 11111101111110 (mod) 100011011
^100011011
 01110000011110
 ^100011011
  0110110101110
  ^100011011
   010101110110
   ^100011011
    00100011010
     ^100011011
      000000001
(The elements {53} and {CA} are multiplicative inverses of one another since their product is 1.)
Multiplication in this particular finite field can also be done using a modified version of the "peasant's algorithm". Each polynomial is represented using the same binary notation as above. Eight bits is sufficient because only degrees 0 to 7 are possible in the terms of each (reduced) polynomial.
This algorithm uses three variables (in the computer programming sense), each holding an eight-bit representation. a and b are initialized with the multiplicands; p accumulates the product and must be initialized to 0.
At the start and end of the algorithm, and the start and end of each iteration, this invariant is true: a • b + p is the product. This is obviously true when the algorithm starts. When the algorithm terminates, a or b will be zero so p will contain the product.
Run the following loop eight times (once per bit). It is OK to stop when a or b is zero before an iteration:
If the rightmost bit of b is set, exclusive OR the product p by the value of a. This is polynomial addition.
Shift b one bit to the right, discarding the rightmost bit, and making the leftmost bit have a value of zero. This divides the polynomial by x, discarding the x^0 term.
Keep track of whether the leftmost bit of a is set to one and call this value carry.
Shift a one bit to the left, discarding the leftmost bit, and making the new rightmost bit zero. This multiplies the polynomial by x, but we still need to take account of carry which represented the coefficient of x^7.
If carry had a value of one, exclusive or a with the hexadecimal number 0x1b (00011011 in binary). 0x1b corresponds to the irreducible polynomial with the high term eliminated. Conceptually, the high term of the irreducible polynomial and carry add modulo 2 to 0.
p now has the product
This algorithm generalizes easily to multiplication over other fields of characteristic 2, changing the lengths of a, b, and p and the value 0x1b appropriately.
Multiplicative inverse
The multiplicative inverse for an element a of a finite field can be calculated a number of different ways:
By multiplying a by every number in the field until the product is one. This is a brute-force search.
Since the nonzero elements of GF(p^n) form a finite group with respect to multiplication, a^(p^n − 1) = 1 (for a ≠ 0), thus the inverse of a is a^(p^n − 2). This algorithm is a generalization of the modular multiplicative inverse based on Fermat's little theorem; see the sketch after this list.
The multiplicative inverse based on Fermat's little theorem can also be interpreted using the multiplicative norm function in the finite field. This viewpoint leads to an inverse algorithm based on the additive trace function in the finite field.
By using the extended Euclidean algorithm.
By making logarithm and exponentiation tables for the finite field, subtracting the logarithm from p^n − 1 and exponentiating the result.
By making a modular multiplicative inverse table for the finite field and doing a lookup.
By mapping to a composite field where inversion is simpler, and mapping back.
By constructing a special integer (in case of a finite field of a prime order) or a special polynomial (in case of a finite field of a non-prime order) and dividing it by a.
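A sketch of the Fermat's-little-theorem method for GF(2^8) follows (function names are illustrative; gmul is the routine from the C programming example below). The inverse of a nonzero a is a^(2^8 − 2) = a^254, computed by square-and-multiply:

#include <stdint.h>

uint8_t gmul(uint8_t a, uint8_t b);   /* GF(2^8) multiply from the C example below */

/* a^e in GF(2^8) by square-and-multiply */
uint8_t gpow(uint8_t a, unsigned e) {
    uint8_t r = 1;
    while (e != 0) {
        if (e & 1)
            r = gmul(r, a);           /* fold in the bits of e that are set */
        a = gmul(a, a);               /* square */
        e >>= 1;
    }
    return r;
}

/* multiplicative inverse via Fermat's little theorem; requires a != 0 */
uint8_t ginv(uint8_t a) {
    return gpow(a, 254);
}

Here ginv(0x53) evaluates to 0xCA, consistent with the worked Rijndael example above in which {53} • {CA} = {01}.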
Implementation tricks
Generator based tables
When developing algorithms for Galois field computation on small Galois fields, a common performance optimization approach is to find a generator g and use the identity:

a • b = g^(log_g(a) + log_g(b))

to implement multiplication as a sequence of table look-ups for the log_g(a) and g^y functions and an integer addition operation. This exploits the property that every finite field contains generators. In the Rijndael field example, the polynomial x + 1 (or {03}) is one such generator. A necessary but not sufficient condition for a polynomial to be a generator is to be irreducible.
An implementation must test for the special case of a or b being zero, as the product will also be zero.
This same strategy can be used to determine the multiplicative inverse with the identity:

a^(−1) = g^(−log_g(a)) = g^(|g| − log_g(a))
Here, the order of the generator, |g|, is the number of non-zero elements of the field. In the case of GF(2^8) this is 2^8 − 1 = 255. That is to say, for the Rijndael example: a^(−1) = g^(255 − log_g(a)). So this can be performed with two look-up tables and an integer subtract. Using this idea for exponentiation also derives benefit:

a^k = g^((k · log_g(a)) mod |g|)
This requires two table look ups, an integer multiplication and an integer modulo operation. Again a test for the special case must be performed.
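A sketch of this table-based scheme for the Rijndael field, with g = {03} (table and function names are illustrative; the tables are filled once using the gmul routine from the C programming example below):

#include <stdint.h>

uint8_t gmul(uint8_t a, uint8_t b);    /* GF(2^8) multiply from the C example below */

static uint8_t exp_table[256];         /* exp_table[i] = g^i with g = {03} */
static uint8_t log_table[256];         /* log_table[g^i] = i, defined for nonzero elements */

void init_tables(void) {
    uint8_t v = 0x01;
    for (int i = 0; i < 255; i++) {    /* the 255 nonzero elements are g^0 .. g^254 */
        exp_table[i] = v;
        log_table[v] = (uint8_t)i;
        v = gmul(v, 0x03);             /* step to the next power of the generator */
    }
}

uint8_t gmul_table(uint8_t a, uint8_t b) {
    if (a == 0 || b == 0)              /* special case: the logarithm of 0 is undefined */
        return 0;
    return exp_table[(log_table[a] + log_table[b]) % 255];
}

uint8_t ginv_table(uint8_t a) {        /* a != 0; implements a^(-1) = g^(255 - log_g(a)) */
    return exp_table[(255 - log_table[a]) % 255];
}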
However, in cryptographic implementations, one has to be careful with such implementations since the cache architecture of many microprocessors leads to variable timing for memory access. This can lead to implementations that are vulnerable to a timing attack.
Carryless multiply
For binary fields GF(2^n), field multiplication can be implemented using a carryless multiply such as the CLMUL instruction set, which is good for n ≤ 64. A multiplication uses one carryless multiply to produce a product (up to 2n − 1 bits), another carryless multiply of a pre-computed inverse of the field polynomial to produce a quotient = ⌊product / (field polynomial)⌋, a multiply of the quotient by the field polynomial, then an xor: result = product ⊕ ((field polynomial) ⌊product / (field polynomial)⌋). The last 3 steps (pclmulqdq, pclmulqdq, xor) are used in the Barrett reduction step for fast computation of CRC using the x86 pclmulqdq instruction.
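The product-then-reduce structure can be sketched portably for GF(2^8); the scalar loop below stands in for a hardware carryless multiply, and the simple shift-based reduction stands in for the Barrett step described above (function names are illustrative):

#include <stdint.h>

/* carryless 8 x 8 -> 15-bit multiply (a scalar stand-in for a CLMUL instruction) */
uint16_t clmul8(uint8_t a, uint8_t b) {
    uint16_t r = 0;
    for (int i = 0; i < 8; i++)
        if (b & (1u << i))
            r ^= (uint16_t)a << i;     /* XOR-accumulate shifted copies, no carries */
    return r;
}

/* reduce a 15-bit product modulo x^8 + x^4 + x^3 + x + 1 (0x11B) */
uint8_t reduce11b(uint16_t p) {
    for (int i = 15; i >= 8; i--)
        if (p & (1u << i))
            p ^= (uint16_t)(0x11B << (i - 8));  /* cancel bit i with an aligned copy of the field polynomial */
    return (uint8_t)p;
}

uint8_t gmul_cl(uint8_t a, uint8_t b) {
    return reduce11b(clmul8(a, b));    /* gmul_cl(0x53, 0xCA) == 0x01, as in the worked example */
}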
Composite exponent
When k is a composite number, there will exist isomorphisms from a binary field GF(2^k) to an extension field of one of its subfields, that is, GF((2^m)^n) where k = m n. Utilizing one of these isomorphisms can simplify the mathematical considerations, as the degree of the extension is smaller, with the trade off that the elements are now represented over a larger subfield. To reduce gate count for hardware implementations, the process may involve multiple nesting, such as mapping from GF(2^8) to GF(((2^2)^2)^2).
Program examples
C programming example
Here is some C code which will add and multiply numbers in the characteristic 2 finite field of order 2^8, used for example by the Rijndael algorithm or Reed–Solomon codes, using the Russian peasant multiplication algorithm:
#include <stdint.h>

/* Add two numbers in the GF(2^8) finite field */
uint8_t gadd(uint8_t a, uint8_t b) {
    return a ^ b;
}

/* Multiply two numbers in the GF(2^8) finite field defined
 * by the modulo polynomial relation x^8 + x^4 + x^3 + x + 1 = 0
 * (the other way being to do carryless multiplication followed by a modular reduction)
 */
uint8_t gmul(uint8_t a, uint8_t b) {
    uint8_t p = 0; /* accumulator for the product of the multiplication */
    while (a != 0 && b != 0) {
        if (b & 1) /* if the polynomial for b has a constant term, add the corresponding a to p */
            p ^= a; /* addition in GF(2^m) is an XOR of the polynomial coefficients */

        if (a & 0x80) /* GF modulo: if a has a nonzero term x^7, then it must be reduced when it becomes x^8 */
            a = (a << 1) ^ 0x11b; /* subtract (XOR) the primitive polynomial x^8 + x^4 + x^3 + x + 1 (0b1_0001_1011) – you can change it but it must be irreducible */
        else
            a <<= 1; /* equivalent to a*x */
        b >>= 1;
    }
    return p;
}
This example has cache, timing, and branch prediction side-channel leaks, and is not suitable for use in cryptography.
D programming example
This D program will multiply numbers in Rijndael's finite field and generate a PGM image:
/**
Multiply two numbers in the GF(2^8) finite field defined
by the polynomial x^8 + x^4 + x^3 + x + 1.
*/
ubyte gMul(ubyte a, ubyte b) pure nothrow {
    ubyte p = 0;

    foreach (immutable ubyte counter; 0 .. 8) {
        p ^= -(b & 1) & a;
        auto mask = -((a >> 7) & 1);
        // 0b1_0001_1011 is x^8 + x^4 + x^3 + x + 1.
        a = cast(ubyte)((a << 1) ^ (0b1_0001_1011 & mask));
        b >>= 1;
    }

    return p;
}

void main() {
    import std.stdio, std.conv;

    enum width = ubyte.max + 1, height = width;

    auto f = File("rijndael_finite_field_multiplication.pgm", "wb");
    f.writefln("P5\n%d %d\n255", width, height);
    foreach (immutable y; 0 .. height)
        foreach (immutable x; 0 .. width) {
            immutable char c = gMul(x.to!ubyte, y.to!ubyte);
            f.write(c);
        }
}
This example does not use any branches or table lookups in order to avoid side channels and is therefore suitable for use in cryptography.
See also
Zech's logarithm
References
External links
Wikiversity: Reed–Solomon for Coders – Finite Field Arithmetic
Arithmetic
Articles with example D code
Articles with example C code
https://en.wikipedia.org/wiki/Atriplex
Atriplex is a plant genus of about 250 species, known by the common names of saltbush and orache (also spelled orach). It belongs to the subfamily Chenopodioideae of the family Amaranthaceae s.l.
The genus is quite variable and widely distributed. It includes many desert and seashore plants and halophytes, as well as plants of moist environments.
The generic name originated in Latin and was applied by Pliny the Elder to the edible oraches. The name saltbush derives from the fact that the plants retain salt in their leaves; they are able to grow in areas affected by soil salination.
Description
Species of plants in genus Atriplex are annual or perennial herbs, subshrubs, or shrubs. The plants are often covered with bladderlike hairs, which later collapse and form a silvery, scurfy or mealy surface, rarely with elongate trichomes. The leaves are arranged alternately along the branches, rarely in opposite pairs, either sessile or on a petiole, and are sometimes deciduous. The leaf blade is variably shaped and may be entire, toothed or lobed.
The flowers are borne in leaf axils or on the ends of branches, in spikes or spike-like panicles. The flowers are unisexual; some species are monoecious, others dioecious. Male flowers have 3-5 perianth lobes and 3-5 stamens. Female flowers usually lack a perianth, but are enclosed by 2 leaf-like bracteoles, and have a short style and 2 stigmas.
After flowering, the bracteoles sometimes enlarge, thicken or become appendaged, enclosing the fruit but without adhering to it.
The chromosome base number is x = 9, except for Atriplex lanfrancoi, which is x=10.
A few Atriplex species are C3-plants, but most species are C4-plants, with a characteristic leaf anatomy, known as kranz anatomy.
Taxonomy
The genus Atriplex was first formally described in 1753 by Carl Linnaeus in Species Plantarum. The genus name was used by Pliny for orach, or mountain spinach (A. hortensis).
Phylogeny
The genus evolved in the Middle Miocene; the C4-photosynthesis pathway developed about 14.1–10.9 million years ago (mya), when the climate became increasingly dry. The genus diversified rapidly and spread over the continents. The C4 Atriplex colonized North America probably from Eurasia during the Middle/Late Miocene, about 9.8–8.8 mya, and later spread to South America. Australia was colonized twice by two C4 lineages, one from Eurasia or America about 9.8–7.8 mya, and one from Central Asia about 6.3–4.8 mya. The last lineage diversified rapidly, and became the ancestor of most Australian Atriplex species.
Systematics
The type species (lectotype) is Atriplex hortensis. The name is derived from Ancient Greek ἀτράφαξυς (atraphaxys), "orach", itself a Pre-Greek substrate loanword.
Atriplex is an extremely species-rich genus and comprises about 250-300 species, with new species still being discovered. An example includes Atriplex yeelirrie, formally described in 2015.
Traditional taxonomy of Atripliceae based on morphological features has been controversial. Molecular studies have found that many genera are not true clades. One such study found that Atripliceae could be divided into two main clades, Archiatriplex, with a few, scattered species, and the larger Atriplex clade, which is highly diverse and found around the world. After phylogenetic research, Kadereit et al. (2010) excluded Halimione from Atriplex, recognizing it as a distinct sister genus. The remaining Atriplex species were grouped into several clades.
The following is a cladogram with estimated divergence times for the tribe Atripliceae. To infer the phylogeny, an ITS matrix composed of spacer ITS-1, the 5.8S subunit, and spacer ITS-2 was amplified and sequenced for each specimen. Not all species in the genus Atriplex are presented in the cladogram (based on page 7 of the source study). This work suggested that the Americas were colonised by C4 Atriplex from Eurasia or Australia. Furthermore, that in the Americas Atriplex first appeared in South America, where two lineages underwent in situ diversification and evolved sympatrically. North America was then colonised by Atriplex from South America, then one lineage later moved back to South America.
Atriplex lanfrancoi/cana-Clade:
Atriplex lanfrancoi (Brullo & Pavone) G. Kadereit et Sukhor. (Syn.: Cremnophyton lanfrancoi Brullo & Pavone): endemic to Malta and Gozo.
Atriplex cana C.A. Mey.: from Eastern European Russia to western China.
Atriplex section Atriplex: annual C3-plants.
Atriplex aucheri Moq.: in Eastern Europe and West Asia.
Atriplex hortensis L. – Garden orache, red orach, mountain spinach, French spinach: in Asia, cultivated or naturalized in Europe.
Atriplex oblongifolia Waldst. & Kit. – Oblong-leaved orache: in Eurasia.
Atriplex sagittata Borkh. (Syn.: Atriplex nitens Schkuhr): in Eurasia
Atriplex section Teutliopsis Dumort.: annual C3-plants.
Atriplex australasica Moq.
Atriplex calotheca (Rafn) Fr.: in Northern Europe.
Atriplex davisii Aellen: from southern Europe to Egypt.
Atriplex glabriuscula Edmondston – Northeastern saltbush, Babington's orache, smooth orache, Scotland orache, glabrous orache: In central and northern Europe.
Atriplex gmelinii C.A. Mey. ex Bong. – Gmelin's saltbush: in Asia and North America.
Atriplex intracontinentalis Sukhor.: from Central Europe to Asia.
Atriplex laevis C.A. Mey.: in Asia, naturalized in eastern Europe.
Atriplex latifolia Wahlenb.: in Eurasia.
Atriplex littoralis L. – Grass-leaved orache: in Eurasia and North Africa.
Atriplex longipes Drejer – Long-stalked orache: in northern Europe.
Atriplex micrantha C.A. Mey.: in Asia, naturalized in Europe.
Atriplex nudicaulis Boguslaw – Baltic saltbush: in Eurasia.
Atriplex patula L. – Common orache, spreading orache: in Eurasia and North Africa.
Atriplex praecox Hülph. – Early orache: in northern Europe.
Atriplex prostrata Moq. – Spear-leaved orache, thin-leaved orache, triangle orache, fat hen: in Eurasia and North Africa.
C4-Atriplex-Clade: containing the majority of species. The traditional classification into sections (sect. Obione, sect. Pterochiton, sect. Psammophila, sect. Sclerocalymma, sect. Stylosa) did not reflect the phylogenetical relationships and was rejected by Kadereit et al. (2010).
Atriplex acanthocarpa (Torr.) S. Watson: in North America.
Atriplex acutibractea Anderson: in Australia.
Atriplex altaica Sukhor.: in Asia.
Atriplex angulata Benth.: in Australia.
Atriplex billardierei (Moq.) Hook. f.: in Australia.
Atriplex canescens (Pursh) Nutt. – Chamiso, chamiza, four-winged saltbush, grey sagebrush: in North America.
Atriplex centralasiatica Iljin: in Asia.
Atriplex cinerea Poir. – Grey saltbush, truganini: in Australia
Atriplex codonocarpa P.G. Wilson: in Australia.
Atriplex conduplicata F. Muell.: in Australia.
Atriplex confertifolia (Torr. & Frém.) S. Watson – Shadscale (saltbush): in North America.
Atriplex cordobensis Gand. & Stuck.: in South America.
Atriplex deserticola Phil.: in South America.
Atriplex dimorphostegia Kar. & Kir.: in North Africa.
Atriplex eardleyae Aellen: in Australia
Atriplex elachophylla F. Muell.: in Australia.
Atriplex fissivalvis F. Muell.: in Australia
Atriplex flabellum Bunge ex Boiss.: in Eurasia.
Atriplex gardneri (Moq.) D. Dietr. – Gardner's saltbush, moundscale: in North America
Atriplex glauca L.: in Portugal, Spain and in North Africa.
Atriplex halimus L. – Mediterranean saltbush, sea orache, shrubby orache: in south Europe, North Africa and southwest Asia.
Atriplex herzogii Standl.: in North America.
Atriplex holocarpa F. Muell.: in Australia.
Atriplex hymenelytra (Torr.) S. Watson – Desert holly: in North America.
Atriplex hymenotheca Moq.: in Australia.
Atriplex imbricata (Moq.) D. Dietr.: in South America.
Atriplex inamoena Aellen: in Eurasia.
Atriplex intermedia Anderson: in Australia.
Atriplex isatidea Moq.: in Australia.
Atriplex laciniata L. – Frosted orache: In western and northern Europe.
Atriplex lampa (Moq.) Gillies ex Small: in South America.
Atriplex lehmanniana Bunge: in Eurasia.
Atriplex lentiformis (Torr.) S. Watson – Quail bush: in North America.
Atriplex leptocarpa F. Muell.: in Australia.
Atriplex leucoclada Boiss.: in Eurasia.
Atriplex leucophylla (Moq.) D. Dietr.: in North America
Atriplex lindleyi Moq.: in Australia.
Atriplex moneta Bunge ex Boiss.: in Eurasia.
Atriplex muelleri Benth.: in Australia.
Atriplex nessorhina S.W.L. Jacobs: in Australia.
Atriplex nummularia Lindl. – Old man saltbush, giant saltbush: in Australia.
Atriplex obovata Moq.: in North America.
Atriplex pamirica Iljin: in Eurasia.
Atriplex parishii S. Watson: in North America
Atriplex parryi S. Watson: in North America
Atriplex parvifolia Kunth: in South America.
Atriplex patagonica (Moq.) D. Dietr.: in South America.
Atriplex phyllostegia (Torr. ex S. Watson) S. Watson: in North America.
Atriplex polycarpa (Torr.) S. Watson – Allscale (saltbush), desert saltbush, cattle saltbush, cattle spinach: in North America.
Atriplex powellii S. Watson – Powell's saltbush: in North America.
Atriplex pseudocampanulata Aellen: in Australia.
Atriplex quinii F. Muell.: in Australia.
Atriplex recurva d'Urv.: in Eurasia, endemic to areas around the Aegean.
Atriplex rhagodioides F. Muell.: in Australia.
Atriplex rosea L. – Tumbling orache: in Eurasia and North Africa.
Atriplex rusbyi Britton ex Rusby: in South America.
Atriplex schugnanica Iljin: in Asia.
Atriplex semibaccata R. Br. – Australian saltbush, berry saltbush, creeping saltbush: in Australia.
Atriplex semilunaris Aellen: in Australia.
Atriplex serenana A. Nelson ex Abrams: in North America
Atriplex sibirica L.; in Asia, naturalized in Europe.
Atriplex sphaeromorpha Iljin: in Russia, Ukraine and Caucasus.
Atriplex spinibractea Anderson: in Australia.
Atriplex spongiosa F. Muell.: in Australia.
Atriplex stipitata Benth.: in Australia.
Atriplex sturtii S.W.L. Jacobs: in Australia.
Atriplex suberecta I. Verd. – Sprawling saltbush, lagoon saltbush: in Australia.
Atriplex tatarica Aellen: in Europe, North Africa and Asia.
Atriplex turbinata (Anderson) Aellen: in Australia.
Atriplex undulata (Moq.) D. Dietr.: in South America.
Atriplex velutinella F. Muell.: in Australia.
Atriplex vesicaria Heward ex Benth. – Bladder saltbush: in Australia.
Distribution and habitat
The genus Atriplex is distributed nearly worldwide from subtropical to temperate and to subarctic regions. Most species-rich are Australia, North America, South America and Eurasia. Many species are halophytes and are adapted to dry environments with salty soils.
Ecology
Atriplex species are used as food plants by the larvae of some Lepidoptera species; see the list of Lepidoptera which feed on Atriplex. They are also sometimes consumed by camels. For spiders such as Phidippus californicus and other arthropods, saltbush plants offer opportunities to hide and hunt in habitat that is otherwise often quite barren.
It has been proposed that genus Atriplex was a main food source in the diet of the extinct giant kangaroo Procoptodon goliah. Stable isotopic data suggested that their diet consisted of plants that used the C4 photosynthetic pathway, and due to their semi-arid distribution, chenopod saltbushes were likely responsible.
Uses
The favored species for human consumption is now usually garden orache (A. hortensis), but many species are edible, and the use of Atriplex as food has been known since at least the late Epipaleolithic (Mesolithic).
Common orache (A. patula) is attested as an archaeophyte in northern Europe, and the Ertebølle culture is presumed to have used it as a food. Its seed has been found among apparent evidence of cereal preparation and cooking at Late Iron Age villages in Britain. In the biblical Book of Job, mallûaḥ (מַלּ֣וּחַ, probably Mediterranean saltbush, A. halimus, the major culinary saltbush in the region) is mentioned as food eaten by social outcasts (Job 30:4). Grey saltbush (A. cinerea) has been used as bushfood in Australia since prehistoric times.
Chamiso (A. canescens) and shadscale (A. confertifolia) were eaten by Native Americans, and spearscale (A. hastata) was a food in rural Eurasia.
Studies on Atriplex species demonstrated their potential use in agriculture. Meat from sheep which have grazed on saltbush has surprisingly high levels of vitamin E, is leaner and more hydrated than regular lamb and has consumer appeal equal to grain-fed lamb. The vitamin E levels could have animal health benefits while extending the shelf-life and maintaining the fresh red colour of saltbush lamb. This effect has been demonstrated for old man saltbush (A. nummularia) and river saltbush (A. amnicola). For reasons unknown, sheep seem to prefer the more fibrous, less nutritious river saltbush.
A study on A. nummularia discovered the species has a nitrogen content of 2.5–3.5%, and could potentially be used as a protein supplement for grazing if palatable. A subsequent study allowed sheep and goats to voluntarily feed on Atriplex halimus and aimed to determine if the saltbush was palatable, and if so, whether it provided enough nutrients to supplement the diet of these animals. In this study they determined that when goats and sheep are given as much A. halimus as they like, they do obtain enough nutrients to supplement their diet, unless the animals' requirements are higher during pregnancy and milk production.
Saltbushes are also used as an ornamental plant in landscaping and can be used to prevent soil erosion in coastal areas. Old man saltbush (Atriplex nummularia) has also been successfully used to rehabilitate old mining sites around Lightning Ridge (Australia).
See also
Barbara Hulme, producer of Atriplex hybrids
References
Halophytes
Drought-tolerant plants
Garden plants
Amaranthaceae genera
Taxa named by Carl Linnaeus
Pseudocereals
Chenopodioideae
https://en.wikipedia.org/wiki/Meander%20%28mathematics%29
In mathematics, a meander or closed meander is a self-avoiding closed curve which crosses a given line a number of times, meaning that it intersects the line while passing from one side to the other. Intuitively, a meander can be viewed as a meandering river with a straight road crossing the river over a number of bridges. The points where the line and the curve cross are therefore referred to as "bridges".
Meander
Given a fixed line L in the Euclidean plane, a meander of order n is a self-avoiding closed curve in the plane that crosses the line at 2n points. Two meanders are equivalent if one meander can be continuously deformed into the other while maintaining its property of being a meander and leaving the order of the bridges on the road, in the order in which they are crossed, invariant.
Examples
The single meander of order 1 intersects the line twice:
This meander intersects the line four times and thus has order 2:
There are two meanders of order 2. Flipping the image vertically produces the other.
Here are two non-equivalent meanders of order 3, each intersecting the line six times:
Meandric numbers
The number of distinct meanders of order n is the meandric number Mn. The first fifteen meandric numbers are given below.
M1 = 1
M2 = 2
M3 = 8
M4 = 42
M5 = 262
M6 = 1828
M7 = 13820
M8 = 110954
M9 = 933458
M10 = 8152860
M11 = 73424650
M12 = 678390116
M13 = 6405031050
M14 = 61606881612
M15 = 602188541928
Meandric permutations
A meandric permutation of order n is defined on the set {1, 2, ..., 2n} and is determined as follows:
With the line oriented from left to right, each intersection of the meander is consecutively labelled with the integers, starting at 1.
The curve is oriented upward at the intersection labelled 1.
The cyclic permutation with no fixed points is obtained by following the oriented curve through the labelled intersection points.
In the diagram on the right, the order 4 meandric permutation is given by (1 8 5 4 3 6 7 2). This is a permutation written in cyclic notation and not to be confused with one-line notation.
If π is a meandric permutation, then π2 consists of two cycles, one containing all the even symbols and the other all the odd symbols. Permutations with this property are called alternate permutations, since the symbols in the original permutation alternate between odd and even integers. However, not all alternate permutations are meandric, because it may not be possible to draw them without introducing a self-intersection in the curve. For example, the order 3 alternate permutation, (1 4 3 6 5 2), is not meandric.
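This splitting is easy to check mechanically. A short C sketch applies π twice to each symbol for the order-4 meandric permutation (1 8 5 4 3 6 7 2) from the example above and prints the cycles of π2:

#include <stdio.h>

int main(void) {
    /* pi[i] is the image of i under the cycle (1 8 5 4 3 6 7 2): 1->8, 8->5, ..., 2->1 */
    int pi[9] = {0, 8, 1, 6, 3, 4, 7, 2, 5};
    int seen[9] = {0};
    for (int s = 1; s <= 8; s++) {
        if (seen[s]) continue;
        printf("cycle of pi^2: (");
        for (int i = s; !seen[i]; i = pi[pi[i]]) {  /* follow pi^2 until the cycle closes */
            seen[i] = 1;
            printf(" %d", i);
        }
        printf(" )\n");
    }
    return 0;   /* prints ( 1 5 3 7 ) and ( 2 8 4 6 ): one odd cycle, one even cycle */
}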
Open meander
Given a fixed line L in the Euclidean plane, an open meander of order n is a non-self-intersecting curve in the plane that crosses the line at n points. Two open meanders are equivalent if one can be continuously deformed into the other while maintaining its property of being an open meander and leaving the order of the bridges on the road, in the order in which they are crossed, invariant.
Examples
The open meander of order 1 intersects the line once:
The open meander of order 2 intersects the line twice:
Open meandric numbers
The number of distinct open meanders of order n is the open meandric number mn. The first fifteen open meandric numbers are given below.
m1 = 1
m2 = 1
m3 = 2
m4 = 3
m5 = 8
m6 = 14
m7 = 42
m8 = 81
m9 = 262
m10 = 538
m11 = 1828
m12 = 3926
m13 = 13820
m14 = 30694
m15 = 110954
Semi-meander
Given a fixed oriented ray R (a closed half line) in the Euclidean plane, a semi-meander of order n is a non-self-intersecting closed curve in the plane that crosses the ray at n points. Two semi-meanders are equivalent if one can be continuously deformed into the other while maintaining its property of being a semi-meander and leaving the order of the bridges on the ray, in the order in which they are crossed, invariant.
Examples
The semi-meander of order 1 intersects the ray once:
The semi-meander of order 2 intersects the ray twice:
Semi-meandric numbers
The number of distinct semi-meanders of order n is the semi-meandric number M̄n (denoted with an overline). The first fifteen semi-meandric numbers are given below.
M1 = 1
M2 = 1
M3 = 2
M4 = 4
M5 = 10
M6 = 24
M7 = 66
M8 = 174
M9 = 504
M10 = 1406
M11 = 4210
M12 = 12198
M13 = 37378
M14 = 111278
M15 = 346846
Properties of meandric numbers
There is an injective function from meandric to open meandric numbers:
Mn = m2n−1
Each meandric number can be bounded by semi-meandric numbers:
M̄n ≤ Mn ≤ M̄2n
For n > 1, meandric numbers are even:
Mn ≡ 0 (mod 2)
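These identities can be spot-checked against the values listed above; a short C sketch:

#include <stdio.h>
#include <assert.h>

int main(void) {
    /* meandric numbers M1..M8 and open meandric numbers m1..m15, from the lists above */
    long M[] = {0, 1, 2, 8, 42, 262, 1828, 13820, 110954};
    long m[] = {0, 1, 1, 2, 3, 8, 14, 42, 81, 262, 538, 1828, 3926, 13820, 30694, 110954};
    for (int n = 1; n <= 8; n++) {
        assert(M[n] == m[2 * n - 1]);       /* Mn = m(2n-1) */
        if (n > 1)
            assert(M[n] % 2 == 0);          /* Mn is even for n > 1 */
    }
    printf("identities verified for n = 1..8\n");
    return 0;
}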
External links
"Approaches to the Enumerative Theory of Meanders" by Michael La Croix
Combinatorics
Integer sequences
https://en.wikipedia.org/wiki/SoaML
SoaML (Service-Oriented Architecture Modeling Language) is an open source specification project from the Object Management Group (OMG), describing a Unified Modeling Language (UML) profile and metamodel for the modeling and design of services within a service-oriented architecture.
Description
SoaML has been created to support the following modeling capabilities:
Identifying services, dependencies between them and services requirements
Specifying services (functional capabilities, consumer expectations, the protocols and message exchange patterns)
Defining service consumers and providers
The policies for using and providing services
Services classification schemes
Integration with OMG Business Motivation Model
A foundation for further extensions, related both to integration with other OMG metamodels like BPDM and BPMN 2.0, and to SBVR, OSM, ODM and others.
The existing models and metamodels (e.g. TOGAF) for describing system architectures turned out to be insufficient to describe SOA in a precise and standardized way. The UML itself proved too general for the purpose of describing SOA and needed clarification and standardization of even basic terms like provider and consumer.
See also
Systems Modeling Language
Unified Modeling Language
Further reading
SoaML Wiki. "SoaML Wiki". SoaML and OMG, 03 Nov 2009.
SoaML OMG Specification http://www.omg.org/spec/SoaML/
OASIS SOA Reference Model Technical Committee http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=soa-rm
References and notes
Notes
Citations
External articles
Using SoaML services architecture by Jim Amsden, a co-author of the OMG SoaML standard.
Modeling with SoaML, the services-oriented architecture modeling language, a five-part series by Jim Amsden.
Unified Modeling Language
Data modeling languages
Specification languages
Service-oriented (business computing)
Enterprise modelling
Modeling languages
https://en.wikipedia.org/wiki/Jean-Claude%20Sikorav
Jean-Claude Sikorav (born 21 June 1957) is a French mathematician. He is professor at the École normale supérieure de Lyon. He is specialized in symplectic geometry.
Main contributions
Sikorav is known for his proof, joint with François Laudenbach, of the Arnold conjecture for Lagrangian intersections in cotangent bundles, as well as for introducing generating families in symplectic topology.
Selected publications
Sikorav is one of fifteen members of a group of mathematicians who published the book Uniformisation des surfaces de Riemann under the pseudonym of Henri Paul de Saint-Gervais.
He has also written a survey and several research papers in symplectic geometry and topology.
Honors
Sikorav is a Knight of the Ordre des Palmes Académiques.
References
External links
Home page at the École Normale Supérieure de Lyon
1957 births
École Normale Supérieure alumni
Living people
French mathematicians
Chevaliers of the Ordre des Palmes Académiques
Topologists
Lycée Louis-le-Grand alumni
https://en.wikipedia.org/wiki/Kalai%20%28process%29
The art of kalai (kalhai or qalai) is the process of coating an alloy surface such as copper or brass by deposition of metal tin on it. The word "kalai" is derived from the Sanskrit kalya lepa, which means "white wash or tin". A cultural Sanskrit work by Keladi Basava called "Sivatattva Ratnakara" (1699) mentions "kalaya-lepa" in the chapter on cookery, or "supashashtra", meaning applying kalai on utensils. People practicing the art of kalai are called Kalaiwala or Kalaigar; they are community craftsmen.
History
Vessels with kalai, on both their interior and exterior, have been found in the excavations of Bramhapuri at Kolhapur, Maharashtra, which adds to the archeological evidence of kalai art. From this evidence, P. K. Gode, who studied tin coating on metallic vessels in India, stated that the history of tin coating dates back to 1300 C.E. The history of kalai is also recorded in "Parsibhashanushasana" of Vikaramasimha (before Samvat 1600, i.e. C.E. 1544) and also in the famous Ain-i-Akbari (C.E. 1590) by Abul Fazal.
Spiritual approach
Copper vessels with kalai were earlier used to store water and cook food because of a spiritual belief that copper attracts and transmits a divine consciousness, also called "Chaitanya". The spiritual approach to the use of copper vessels to store water is that copper and tin have a Sattva-Raja component (basic components of creation, or the universe) that is transferred to the water.
Scientific approach
Earlier, copper and brass vessels were used because of their high conductivity. High conductivity of copper vessels reduces the fuel cost. However, a chemical reaction between copper and oxygen called oxidization turns the copper vessels black. Copper also reacts with the moisture in air and creates copper carbonate, which can be noticed as light green rust on the surface. Copper carbonate is poisonous and can make a person severely ill if it gets mixed with food. The copper can get dissolved in water in trace amounts when the water is stored in copper vessels for a long period of time. The process is known as the “oligodynamic effect”. Kalai protects from food poisoning and blackening of copper vessels by preventing direct contact of air with the copper or brass surface. Tin is also a good conductor of heat like copper, hence applying kalai does not result in loss of heat conductivity for the utensil.
The kalai is required to be done on the vessels approximately every two months. Tin will melt if the temperature rises above 425 degrees Fahrenheit (about 218 degrees Celsius). Also, the tin coating wears away with time. In order to protect the coating, one should use wooden or silicone spatulas and avoid cooking acidic foods.
Process
Kalai can be done in various ways. Virgin grade tin (called ‘ranga’ in Hindi), caustic soda, sal ammoniac (ammonium chloride, called ‘nausadar’ powder in Hindi), and water are used in the process.
The first step of kalai is to clean the utensil with water. There are two ways of cleaning the utensil further to remove any impurities such as dust. The first is to clean it with caustic soda. The other is to wash it with dilute acid solution which contains a gold purifying compound known as ‘sufa’. If the latter is used, the utensil should be cleaned immediately after applying the dilute acidic solution as it may bear a mark if not done immediately.
After the cleaning, the vessel is heated on burning coal for about 2 to 3 minutes. The Kalaiwala, Kalaigar or Kalaikar then digs a small pit in the ground to burn the coal. He/she prepares a temporary blast furnace to do kalai and blows air through bellows. After the vessel turns pinkish hot, virgin grade tin (in the form of strips) is applied on the hot vessel. This step is called ‘casting’ by the Kalaigars. The ‘nausadar’ powder is sprinkled on the vessel. The tin melts rapidly which is then rubbed evenly on the utensil with the help of a cotton cloth or a swab of cotton. The rubbing process is known as ‘majaay’ in Hindi. A whitish smoke with the peculiar smell of ammonia is released when the ‘nausadar’ powder is rubbed on the utensil. A silvery lining appears on the vessel with a shine. The final step of kalai is to dip the utensil in cold water.
Present scenario
Kalai was earlier done with silver instead of tin but now it would be too expensive. As stainless steel and aluminum ware came into being, the usage of copper and brass utensils decreased, which led the Kalaigars to suffer losses. Nowadays only some hotels and a very few people use vessels with kalai. As a result, there are a very few Kalaigars left and the art of kalai is vanishing.
References
Alloys
Copper
Brass
Tin
https://en.wikipedia.org/wiki/Rejuvelac
Rejuvelac is a kind of grain water that was invented and promoted by Ann Wigmore, born in Cropos, Lithuania. The beverage is closely related to a traditional Romanian drink, called borș, a fermented wheat bran that can be used to make a sour soup called ciorbă or as the basis for vegan cheeses.
Rejuvelac is a raw food made by soaking a grain or pseudocereal (usually sprouted) in water for about two days at room temperature and then reserving the liquid. A second batch can be made from the grain/pseudocereal, this time requiring only about one day to ferment. A third batch is possible but the flavor may be disagreeable. The spent grain/pseudocereal is usually discarded afterward.
References
Bacteriology
Dietary supplements
Fermented drinks
https://en.wikipedia.org/wiki/Acetogenin
Acetogenins are a class of polyketide natural products found in plants of the family Annonaceae. They are characterized by linear 32- or 34-carbon chains containing oxygenated functional groups including hydroxyls, ketones, epoxides, tetrahydrofurans and tetrahydropyrans. They are often terminated with a lactone or butenolide. Over 400 members of this family of compounds have been isolated from 51 different species of plants. Many acetogenins are characterized by neurotoxicity.
Examples include:
Annonacin
Annonins
Bullatacin
Uvaricin
Structure
Structurally, acetogenins are a series of C-35/C-37 compounds usually characterized by a long aliphatic chain bearing a terminal methyl-substituted α,β-unsaturated γ-lactone ring, as well as one to three tetrahydrofuran (THF) rings. These THF rings are located along the hydrocarbon chain, along with a number of oxygenated moieties (hydroxyls, acetoxyls, ketones, epoxides) and/or double bonds.
Research
Acetogenins have been investigated for their biological properties, but are a concern due to neurotoxicity. Purified acetogenins and crude extracts of the common North American pawpaw (Asimina triloba) or the soursop (Annona muricata) remain under laboratory studies.
Mechanism of action
Acetogenins inhibit NADH dehydrogenase, a key enzyme in energy metabolism.
References
External links
Fatty alcohols
Polyketides
NADH dehydrogenase inhibitors
Plant toxins
https://en.wikipedia.org/wiki/Xenon%20tetroxide
Xenon tetroxide is a chemical compound of xenon and oxygen with molecular formula XeO4, remarkable for being a relatively stable compound of a noble gas. It is a yellow crystalline solid that is stable below −35.9 °C; above that temperature it is very prone to exploding and decomposing into elemental xenon and oxygen (O2).
All eight valence electrons of xenon are involved in the bonds with the oxygen, and the oxidation state of the xenon atom is +8. Oxygen is the only element that can bring xenon up to its highest oxidation state; even fluorine can only give XeF6 (+6).
Two other short-lived xenon compounds with an oxidation state of +8, XeO3F2 and XeO2F4, are accessible by the reaction of xenon tetroxide with xenon hexafluoride. XeO3F2 and XeO2F4 can be detected with mass spectrometry. The perxenates are also compounds where xenon has the +8 oxidation state.
Reactions
At temperatures above −35.9 °C, xenon tetroxide is very prone to explosion, decomposing into xenon and oxygen gases with ΔH = −643 kJ/mol:
XeO4 → Xe + 2 O2
Xenon tetroxide dissolves in water to form perxenic acid and in alkalis to form perxenate salts:
XeO4 + 2 H2O → H4XeO6
XeO4 + 4 NaOH → Na4XeO6 + 2 H2O
Xenon tetroxide can also react with xenon hexafluoride to give xenon oxyfluorides:
XeO4 + XeF6 → XeOF4 + XeO3F2
XeO4 + 2XeF6 → XeO2F4 + 2 XeOF4
Synthesis
All syntheses start from the perxenates, which are accessible from the xenates through two methods. One is the disproportionation of xenates to perxenates and xenon:
2 HXeO4− + 2 OH− → XeO64− + Xe + O2 + 2 H2O
The other is oxidation of the xenates with ozone in basic solution:
HXeO4− + O3 + 3 OH− → XeO64− + O2 + 2 H2O
Barium perxenate is reacted with sulfuric acid and the unstable perxenic acid is dehydrated to give xenon tetroxide:
Ba2XeO6 + 2 H2SO4 → 2 BaSO4 + H4XeO6
H4XeO6 → 2 H2O + XeO4
Any excess perxenic acid slowly undergoes a decomposition reaction to xenic acid and oxygen:
2 H4XeO6 → 2 H2XeO4 + O2 + 2 H2O
References
Xenon(VIII) compounds
Inorganic compounds
Oxides
https://en.wikipedia.org/wiki/Axis%20mundi
In astronomy, axis mundi is the Latin term for the axis of Earth between the celestial poles. In a geocentric coordinate system, this is the axis of rotation of the celestial sphere. Consequently, in ancient Greco-Roman astronomy, the axis mundi is the axis of rotation of the planetary spheres within the classical geocentric model of the cosmos.
In 20th-century comparative mythology, the term – also called the cosmic axis, world axis, world pillar, center of the world, or world tree – has been greatly extended to refer to any mythological concept representing "the connection between Heaven and Earth" or the "higher and lower realms". Mircea Eliade introduced the concept in the 1950s. The axis mundi closely relates to the mythological concept of the omphalos (navel) of the world or cosmos.
Items adduced as examples of the axis mundi by comparative mythologists include plants (notably a tree but also other types of plants such as a vine or stalk), a mountain, a column of smoke or fire, or a product of human manufacture (such as a staff, a tower, a ladder, a staircase, a maypole, a cross, a steeple, a rope, a totem pole, a pillar, a spire). Its proximity to heaven may carry implications that are chiefly religious (pagoda, temple mount, minaret, church) or secular (obelisk, lighthouse, rocket, skyscraper). The image appears in religious and secular contexts. The symbol may be found in cultures utilizing shamanic practices or animist belief systems, in major world religions, and in technologically advanced "urban centers". In Mircea Eliade's opinion: "Every Microcosm, every inhabited region, has a Centre; that is to say, a place that is sacred above all."
Specific examples of cosmic mountains or centers include one from Egyptian texts described as providing support for the sky, Mount Mashu from the Epic of Gilgamesh, Adam's Peak, which is a sacred mountain in Sri Lanka associated with Adam or Buddha in Islamic and Buddhist traditions respectively, Mount Qaf in other Islamic and Arabic cosmologies, the mountain Harā Bərəz in Zoroastrian cosmology, Mount Meru in Hindu, Jain, and Buddhist cosmologies, Mecca as a cosmic center in Sufi cosmology (with minority traditions placing it as Medina or Jerusalem), and, in Tenrikyo, the Jiba at the Tenrikyo Church Headquarters in Tenri, Nara, Japan. In pre-Islamic Arabia, some central temples, including the Temple of Awwam, were cosmic centers.
Background
There are multiple interpretations about the origin of the concept of the axis mundi. One psychological and sociological interpretation suggests that the symbol originates in a natural and universal psychological perception – i.e., that the particular spot that one occupies stands at "the center of the world". This space serves as a microcosm of order because it is known and settled. Outside the boundaries of the microcosm lie foreign realms that – because they are unfamiliar or not ordered – represent chaos, death, or night. From the center, one may still venture in any of the four cardinal directions, make discoveries, and establish new centers as new realms become known and settled. The name of China — meaning "Middle Nation" (中国, pinyin: Zhōngguó) – is often interpreted as an expression of an ancient perception that the Chinese polity (or group of polities) occupied the center of the world, with other lands lying in various directions relative to it.
A second interpretation suggests that ancient symbols such as the axis mundi lie in a particular philosophical or metaphysical representation of a common and culturally shared philosophical concept, which is that of a natural reflection of the macrocosm (or existence at grand scale) in the microcosm (which consists of either an individual, community, or local environment that shares the same principles and structures as the macrocosm). In this metaphysical representation of the universe, mankind is placed into an existence that serves as a microcosm of the universe or the entire cosmic existence, and who – in order to achieve higher states of existence or liberation into the macrocosm – must gain necessary insights into universal principles that can be represented by his life or environment in the microcosm. In many religious and philosophical traditions around the world, mankind is seen as a sort of bridge between either: two worlds, the earthly and the heavenly (as in Hindu, and Taoist philosophical and theological systems); or three worlds, namely the earthly, heavenly, and the "sub-earthly" or "infra-earthly" (e.g., the underworld, as in the Ancient Greek, Incan, Mayan, and Ancient Egyptian religious systems). Spanning these philosophical systems is the belief that man traverses a sort of axis, or path, which can lead from man's current central position in the intermediate realms into heavenly or sub-earthly realms. Thus, in this view, symbolic representations of a vertical axis represent a path of "ascent" or "descent" into other spiritual or material realms, and often capture a philosophy that considers human life to be a quest in which one develops insights or perfections in order to move beyond this current microcosmic realm and to engage with the grand macrocosmic order.
In other interpretations, an axis mundi is more broadly defined as a place of connection between the heavenly and the earthly realms – often a mountain or other elevated site. Tall mountains are often regarded as sacred and some have shrines erected at the summit or base. Mount Kunlun fills a similar role in China. Mount Kailash is holy to Hinduism and several religions in Tibet. The Pitjantjatjara people in central Australia consider Uluru to be central to both their world and culture. The Teide volcano was for the Canarian aborigines (Guanches) a kind of axis mundi. In ancient Mesopotamia, the cultures of ancient Sumer and Babylon built tall platforms, or ziggurats, to elevate temples on the flat river plain. Hindu temples in India are often situated on high mountains – e.g., Amarnath, Tirupati, Vaishno Devi, etc. The pre-Columbian residents of Teotihuacán in Mexico erected huge pyramids, featuring staircases leading to heaven. These Amerindian temples were often placed on top of caves or subterranean springs, which were thought to be openings to the underworld. Jacob's Ladder is an axis mundi image, as is the Temple Mount. For Christians, the Cross on Mount Calvary expresses this symbol. The Middle Kingdom, China, had a central mountain, Kunlun, known in Taoist literature as "the mountain at the middle of the world". To "go into the mountains" meant to dedicate oneself to a spiritual life.
As the abstract concept of the axis mundi is present in many cultural traditions and religious beliefs, it can be thought to exist in any number of locales at once. Mount Hermon was regarded as the axis mundi in Canaanite tradition, from where the sons of God are introduced descending in 1 Enoch 6:6. The ancient Armenians had a number of holy sites, the most important of which was Mount Ararat, which was thought to be the home of the gods as well as the center of the universe. Likewise, the ancient Greeks regarded several sites as places of Earth's omphalos (navel) stone, notably the oracle at Delphi, while still maintaining a belief in a cosmic world tree and in Mount Olympus as the abode of the gods. Judaism has the Temple Mount; Christianity has the Mount of Olives and Calvary; and Islam has the Ka'aba (said to be the first building on Earth), as well as the Temple Mount (Dome of the Rock). In Hinduism, Mount Kailash is identified with the mythical Mount Meru and regarded as the home of Shiva; in Vajrayana Buddhism, Mount Kailash is recognized as a similarly sacred place. In Shinto, the Ise Shrine is the omphalos.
Sacred places can constitute world centers (axes mundi), with an altar or place of prayer as the axis. Altars, incense sticks, candles, and torches form the axis by sending a column of smoke, and prayer, toward heaven. It has been suggested by Romanian religious historian Mircea Eliade that architecture of sacred places often reflects this role: "Every temple or palace – and by extension, every sacred city or royal residence – is a Sacred Mountain, thus becoming a Centre." Pagoda structures in Asian temples take the form of a stairway linking earth and heaven. A steeple in a church or a minaret in a mosque also serve as connections of earth and heaven. Structures such as the maypole, derived from the Saxons' Irminsul, and the totem pole among indigenous peoples of the Americas also represent world axes. The calumet, or sacred pipe, represents a column of smoke (the soul) rising from a world center. A mandala creates a world center within the boundaries of its two-dimensional space analogous to that created in three-dimensional space by a shrine.
In the classical elements and the Vedic Pancha Bhoota, the axis mundi corresponds to Aether, the quintessence.
Plants
Plants often serve as images of the axis mundi. The image of the Cosmic Tree provides an axis symbol that unites three planes: sky (branches), earth (trunk), and underworld (roots). In some Pacific Island cultures, the banyan tree – of which the Bodhi tree is of the Sacred Fig variety – is the abode of ancestor spirits. In Hindu religion, the banyan tree is considered sacred ("Of all trees I am the banyan tree" – Bhagavad Gita). It represents eternal life because of its seemingly ever-expanding branches. The Bodhi tree is also the name given to the tree under which Gautama Siddhartha, the historical Buddha, sat on the night he attained enlightenment. The Mesoamerican world tree connects the planes of the underworld and the sky with that of the terrestrial realm. The Yggdrasil, or World Ash, functions in much the same way in Norse mythology; it is the site where Odin found enlightenment. Other examples include Jievaras in Lithuanian mythology and Thor's Oak in the myths of the pre-Christian Germanic peoples. The Tree of Life and the Tree of Knowledge of Good and Evil in Genesis present two aspects of the same image. Each is said to stand at the center of the paradise garden from which four rivers flow to nourish the whole world. Each tree confers a boon. Bamboo, the plant from which Asian calligraphy pens are made, represents knowledge and is regularly found on Asian college campuses. The Christmas tree, which can be traced in its origins back to pre-Christian European beliefs, represents an axis mundi. In Yoruba religion, the oil palm is the axis mundi (though not necessarily a "world tree") that Ọrunmila climbs to alternate between heaven and earth.
Human figure
The human body can express the symbol of the world axis. Some of the more abstract Tree of Life representations, such as the sefirot in Kabbalism and the chakra system recognized by Hinduism and Buddhism, merge with the concept of the human body as a pillar between heaven and earth. Disciplines such as yoga and tai chi begin from the premise of the human body as axis mundi. The Buddha represents a world center in human form. Large statues of a meditating figure unite the human form with the symbolism of the temple and tower. Astrology in all its forms assumes a connection between human health and affairs and celestial-body orientation. World religions regard the body itself as a temple and prayer as a column uniting earth and heaven. The ancient Colossus of Rhodes combined the role of the human figure with those of portal and skyscraper. The Renaissance image known as the Vitruvian Man represented a symbolic and mathematical exploration of the human form as world axis.
Homes
Secular structures can also function as axes mundi. In Navajo culture, the hogan acts as a symbolic cosmic center. In some Asian cultures, houses were traditionally laid out in the form of a square oriented toward the four compass directions. A traditional home was oriented toward the sky through feng shui, a system of geomancy, just as a palace would be. Traditional Arab houses are also laid out as a square surrounding a central fountain that evokes a primordial garden paradise. Mircea Eliade noted that "the symbolism of the pillar in [European] peasant houses likewise derives from the 'symbolic field' of the axis mundi. In many archaic dwellings the central pillar does in fact serve as a means of communication with the heavens, with the sky." The nomadic peoples of Mongolia and the Americas more often lived in circular structures. The central pole of the tent still operated as an axis, but a fixed reference to the four compass points was avoided.
Shamanic function
A common shamanic concept, and a universally told story, is that of the healer traversing the axis mundi to bring back knowledge from the other world. It may be seen in the stories from Odin and the World Ash Tree to the Garden of Eden and Jacob's Ladder to Jack and the Beanstalk and Rapunzel. It is the essence of the journey described in The Divine Comedy by Dante Alighieri. The epic poem relates its hero's descent and ascent through a series of spiral structures that take him through the core of the earth, from the depths of hell to celestial paradise. It is also a central tenet in the Southeastern Ceremonial Complex.
Anyone or anything suspended on the axis between heaven and earth becomes a repository of potential knowledge. A special status accrues to the thing suspended: a serpent, a rod, a fruit, mistletoe. Derivations of this idea find form in the Rod of Asclepius, an emblem of the medical profession, and in the caduceus, an emblem of correspondence and commercial professions. The staff in these emblems represents the axis mundi, while the serpents act as guardians of, or guides to, knowledge.
Modern expressions
A modern artistic representation of the axis mundi is the Coloana fără sfârșit (The Endless Column, 1938), an abstract sculpture by the Romanian artist Constantin Brâncuși. The column takes the form of a "sky pillar" (coloana cerului) upholding the heavens even as its rhythmically repeating segments invite climb and suggest the possibility of ascension.
See also
Celestial sphere
Comparative mythology
History of the center of the Universe
Hyperborea
North Pole
Potomitan
Religious cosmology
Sacred natural site
Taiji (philosophy)
References
Sources
Comparative mythology
Esoteric cosmology
Geographical centres
Religious cosmologies
Mythological places
Panentheism
Pantheism
Religious symbols
Shamanism
Spirituality | Axis mundi | Physics,Mathematics | 3,005 |
901,291 | https://en.wikipedia.org/wiki/Sodium%20benzoate | Sodium benzoate also known as benzoate of soda is the sodium salt of benzoic acid, widely used as a food preservative (with an E number of E211) and a pickling agent. It appears as a white crystalline chemical with the formula C6H5COONa.
Production
Sodium benzoate is commonly produced by the neutralization of sodium hydroxide (NaOH) with benzoic acid (C6H5COOH), which is itself produced commercially by partial oxidation of toluene with oxygen.
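The overall neutralization can be written as:
C6H5COOH + NaOH -> C6H5COONa + H2O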
Reactions
Sodium benzoate can be decarboxylated with strong base and heat, yielding benzene:
C6H5COONa + NaOH -> C6H6 + Na2CO3
Natural occurrence
Sodium benzoate is not a naturally occurring substance. However, many foods are natural sources of benzoic acid, its salts, and its esters. Fruits and vegetables can be rich sources, particularly berries such as cranberry and bilberry. Other sources include seafood, such as prawns, and dairy products.
Uses
As a preservative
Sodium benzoate can act as a food preservative. It is most widely used in acidic foods such as salad dressings (for example acetic acid in vinegar), carbonated drinks (carbonic acid), jams and fruit juices (citric acid), pickles (acetic acid), condiments, and frozen yogurt toppings. It is also used as a preservative in medicines and cosmetics. Under these conditions it is converted into benzoic acid (E210), which is bacteriostatic and fungistatic. Benzoic acid is generally not used directly due to its poor water solubility.
Concentration as a food preservative is limited by the FDA in the U.S. to 0.1% by weight. Sodium benzoate is also allowed as an animal food additive at up to 0.1%, per the Association of American Feed Control Officials.
Sodium benzoate has been replaced by potassium sorbate in the majority of soft drinks in the United Kingdom.
In the early 20th century, sodium benzoate as a food ingredient was investigated by Harvey W. Wiley with his 'Poison Squad' as part of the US Department of Agriculture. This led to the 1906 Pure Food and Drug Act, a key event in the early history of food regulation in the United States.
In pharmaceuticals
Sodium benzoate is used as a treatment for urea cycle disorders due to its ability to bind amino acids. This leads to excretion of these amino acids and a decrease in ammonia levels. Recent research shows that sodium benzoate may be beneficial as an add-on therapy (1 gram/day) in schizophrenia. Total Positive and Negative Syndrome Scale scores dropped by 21% compared to placebo.
Sodium benzoate, along with phenylbutyrate, is used to treat hyperammonemia.
Sodium benzoate, along with caffeine, is used to treat postdural puncture headache, respiratory depression associated with overdosage of narcotics, and with ergotamine to treat vascular headache.
Other uses
Sodium benzoate is also used in fireworks as a fuel in whistle mix, a powder that emits a whistling noise when compressed into a tube and ignited.
Mechanism of food preservation
The mechanism starts with the absorption of benzoic acid into the cell. If the intracellular pH falls to 5 or lower, the anaerobic fermentation of glucose through phosphofructokinase decreases sharply, which inhibits the growth and survival of microorganisms that cause food spoilage.
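To illustrate why low pH matters for this mechanism, the fraction of benzoate present as the membrane-permeant benzoic acid can be estimated from the Henderson-Hasselbalch equation. The short sketch below assumes the textbook pKa of benzoic acid (about 4.2) and is purely illustrative:

def protonated_fraction(pH, pKa=4.2):
    # Henderson-Hasselbalch: pH = pKa + log10([A-]/[HA])
    # => fraction present as HA = 1 / (1 + 10**(pH - pKa))
    return 1.0 / (1.0 + 10 ** (pH - pKa))

for pH in (3.0, 4.2, 6.0):
    print(f"pH {pH}: {protonated_fraction(pH):.1%} as benzoic acid")
# pH 3.0 -> ~94%, pH 4.2 -> 50.0%, pH 6.0 -> ~2%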
Health and safety
In the United States, sodium benzoate is designated as generally recognized as safe (GRAS) by the Food and Drug Administration. The International Programme on Chemical Safety found no adverse effects in humans at doses of 647–825 mg/kg of body weight per day.
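For scale, converting the IPCS figures to a whole-body dose for an illustrative 70 kg adult gives 647 mg/kg × 70 kg ≈ 45 g/day to 825 mg/kg × 70 kg ≈ 58 g/day, far above typical dietary intake under the 0.1% concentration limit in food.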
Cats have a significantly lower tolerance against benzoic acid and its salts than rats and mice.
The human body rapidly clears sodium benzoate by combining it with glycine to form hippuric acid which is then excreted. The metabolic pathway for this begins with the conversion of benzoate by butyrate-CoA ligase into an intermediate product, benzoyl-CoA, which is then metabolized by glycine N-acyltransferase into hippuric acid.
Association with benzene in soft drinks and pepper sauces
In combination with ascorbic acid (vitamin C, E300), sodium benzoate and potassium benzoate may form benzene. In 2006, the Food and Drug Administration tested 100 beverages available in the United States that contained both ascorbic acid and benzoate. Four had benzene levels that were above the 5 ppb Maximum Contaminant Level set by the Environmental Protection Agency for drinking water. Most of the beverages that tested above the limit have been reformulated and subsequently tested below the safety limit. Heat, light and shelf life can increase the rate at which benzene is formed. Hot peppers naturally contain vitamin C ("nearly as much as in one orange") so the observation about beverages applies to pepper sauces containing sodium benzoate, like Texas Pete.
ADHD and hyperactivity
Research, including a 2007 study for the UK's Food Standards Agency (FSA), suggests that certain artificial colors, when paired with sodium benzoate, may be linked to hyperactive behavior and other ADHD symptoms. The results were inconsistent regarding sodium benzoate, so the FSA recommended further study. The Food Standards Agency concluded that the observed increases in hyperactive behavior, if real, were more likely to be linked to the artificial colors than to sodium benzoate. The report's author, Jim Stevenson from Southampton University, said: "The results suggest that consumption of certain mixtures of artificial food colours and sodium benzoate preservative are associated with increases in hyperactive behaviour in children. ... Many other influences are at work but this at least is one a child can avoid."
Compendial status
British Pharmacopoeia
European Pharmacopoeia
Food Chemicals Codex
Japanese Pharmacopoeia
United States Pharmacopeia
See also
Acceptable daily intake
List of investigational antipsychotics
Potassium benzoate
References
External links
International Programme on Chemical Safety - Benzoic Acid and Sodium Benzoate report
Safety data for sodium benzoate
Antiseptics
Benzoates
Preservatives
Organic sodium salts
E-number additives
| Sodium benzoate | Chemistry | 1,359 |
39,288,495 | https://en.wikipedia.org/wiki/Mitochondrial%20fusion | Mitochondria are dynamic organelles with the ability to fuse and divide (fission), forming constantly changing tubular networks in most eukaryotic cells. These mitochondrial dynamics, first observed over a hundred years ago are important for the health of the cell, and defects in dynamics lead to genetic disorders. Through fusion, mitochondria can overcome the dangerous consequences of genetic malfunction. The process of mitochondrial fusion involves a variety of proteins that assist the cell throughout the series of events that form this process.
Process overview
When cells experience metabolic or environmental stresses, mitochondrial fusion and fission work to maintain functional mitochondria. An increase in fusion activity leads to mitochondrial elongation, whereas an increase in fission activity results in mitochondrial fragmentation. The components of this process can influence programmed cell death and lead to neurodegenerative disorders such as Parkinson's disease. Such cell death can be caused by disruptions in the process of either fusion or fission.
The shapes of mitochondria in cells are continually changing via a combination of fission, fusion, and motility. Specifically, fusion assists in modifying stress by integrating the contents of slightly damaged mitochondria as a form of complementation. By enabling genetic complementation, fusion of the mitochondria allows for two mitochondrial genomes with different defects within the same organelle to individually encode what the other lacks. In doing so, these mitochondrial genomes generate all of the necessary components for a functional mitochondrion.
With mitochondrial fission
The combined effects of continuous fusion and fission give rise to mitochondrial networks. The mechanisms of mitochondrial fusion and fission are regulated by proteolysis and posttranslational modifications. The actions of fission, fusion and motility cause the shapes of mitochondria to continually change.
The changes in balance between the rates of mitochondrial fission and fusion directly affect the wide range of mitochondrial lengths that can be observed in different cell types. Rapid fission and fusion of the mitochondria in cultured fibroblasts has been shown to promote the redistribution of mitochondrial green fluorescent protein (GFP) from one mitochondrion to all of the other mitochondria. This process can occur in a cell within a time period as short as an hour.
The significance of mitochondrial fission and fusion is distinct for nonproliferating neurons, which are unable to survive without mitochondrial fission. Fusion defects in such nonproliferating neurons underlie two human diseases, dominant optic atrophy and Charcot-Marie-Tooth disease type 2A. Though the importance of these processes is evident, it is still unclear why mitochondrial fission and fusion are necessary for nonproliferating cells.
Regulation
Many gene products that control mitochondrial fusion have been identified, and can be reduced to three core groups which also control mitochondrial fission. These groups of proteins include mitofusins, OPA1/Mgm1, and Drp1/Dnm1. All of these molecules are GTP hydrolyzing proteins (GTPases) that belong to the dynamin family. Mitochondrial dynamics in different cells are understood by the way in which these proteins regulate and bind to each other. These GTPases in control of mitochondrial fusion are well conserved between mammals, flies, and yeast. Mitochondrial fusion mediators differ between the outer and inner membranes of the mitochondria. Specific membrane-anchored dynamin family members, Mfn1 and Mfn2, mediate fusion between mitochondrial outer membranes. These two proteins are the mitofusins found in humans, and they can alter the morphology of affected mitochondria when over-expressed. However, a single dynamin family member known as OPA1 in mammals mediates fusion between mitochondrial inner membranes. These regulating proteins of mitochondrial fusion are organism-dependent; therefore, in Drosophila (fruit flies) and yeast, the process is controlled by the mitochondrial transmembrane GTPase Fzo. In Drosophila, Fzo is found in postmeiotic spermatids, and the dysfunction of this protein results in male sterility. A deletion of Fzo1 in budding yeast results in smaller, spherical mitochondria, accompanied by the loss of mitochondrial DNA (mtDNA).
Apoptosis
The balance between mitochondrial fusion and fission in cells is dictated by the up-and-down regulation of mitofusins, OPA1/Mgm1, and Drp1/Dnm1. Apoptosis, or programmed cell death, begins with the breakdown of mitochondria into smaller pieces. This process results from up-regulation of Drp1/Dnm1 and down-regulation of mitofusins. Later in the apoptosis cycle, an alteration of OPA1/Mgm1 activity within the inner mitochondrial membrane occurs. The role of the OPA1 protein is to protect cells against apoptosis by inhibiting the release of cytochrome c. Once this protein is altered, there is a change in the cristae structure, release of cytochrome c, and activation of the destructive caspase enzymes. These resulting changes indicate that inner mitochondrial membrane structure is linked with regulatory pathways in influencing cell life and death. OPA1 plays both a genetic and molecular role in mitochondrial fusion and in cristae remodeling during apoptosis. OPA1 exists in two forms: the first is soluble and found in the intermembrane space, and the second is an integral inner membrane form; the two forms work together to restructure and shape the cristae during and after apoptosis. OPA1 blocks intramitochondrial cytochrome c redistribution, which precedes remodeling of the cristae. OPA1 functions to protect cells with mitochondrial dysfunction due to Mfn deficiencies, including those lacking both Mfn1 and Mfn2, but it plays a greater role in cells with only Mfn1 deficiencies as opposed to Mfn2 deficiencies. Therefore, this supports the view that OPA1 function depends on the amount of Mfn1 present in the cell to promote mitochondrial elongation.
In mammals
Both proteins, Mfn1 and Mfn2, can act either together or separately during mitochondrial fusion. Mfn1 and Mfn2 are 81% similar to each other and about 51% similar to the Drosophila protein Fzo. Results published for a study to determine the impact of fusion on mitochondrial structure revealed that Mfn-deficient cells displayed either elongated mitochondria (the majority) or small, spherical mitochondria.
The Mfn protein has three different methods of action: Mfn1 homotypic oligomers, Mfn2 homotypic oligomers and Mfn1-Mfn2 heterotypic oligomers. It has been suggested that the type of cell determines the method of action, but it has yet to be concluded whether Mfn1 and Mfn2 perform the same function in the process or have separate roles. Cells lacking this protein are subject to severe cellular defects, such as poor cell growth, heterogeneity of mitochondrial membrane potential, and decreased cellular respiration.
Mitochondrial fusion plays an important role in the process of embryonic development, as shown through the Mfn1 and Mfn2 proteins. Using Mfn1 and Mfn2 knock-out mice, which die in utero at midgestation due to a placental deficiency, mitochondrial fusion was shown not to be essential for cell survival in vitro, but necessary for embryonic development and cell survival throughout later stages of development. Mfn1/Mfn2 double knock-out mice, which die even earlier in development, were distinguished from the "single" knock-out mice. Mouse embryo fibroblasts (MEFs) derived from the double knock-out mice do survive in culture even though there is a complete absence of fusion; however, parts of their mitochondria show a reduced mitochondrial DNA (mtDNA) copy number and lose membrane potential. This series of events causes problems with adenosine triphosphate (ATP) synthesis.
The Mitochondrial Inner/Outer Membrane Fusion (MMF) Family
The Mitochondrial Inner/Outer Membrane Fusion (MMF) Family (TC# 9.B.25) is a family of proteins that play a role in mitochondrial fusion events. This family belongs to the larger Mitochondrial Carrier (MC) Superfamily. The dynamic nature of mitochondria is critical for function. Chen and Chan (2010) have discussed the molecular basis of mitochondrial fusion, its protective role in neurodegeneration, and its importance in cellular function. The mammalian mitofusins Mfn1 and Mfn2, GTPases localized to the outer membrane, mediate outer-membrane fusion. OPA1, a GTPase associated with the inner membrane, mediates subsequent inner-membrane fusion. Mutations in Mfn2 or OPA1 cause neurodegenerative diseases. Mitochondrial fusion enables content mixing within a mitochondrial population, thereby preventing permanent loss of essential components. Cells with reduced mitochondrial fusion show a subpopulation of mitochondria that lack mtDNA nucleoids. Such mtDNA defects lead to respiration-deficient mitochondria, and their accumulation in neurons leads to impaired outgrowth of cellular processes and consequent neurodegeneration.
Family members
A representative list of the proteins belonging to the MMF family is available in the Transporter Classification Database.
9.B.25.1.1 - The mitochondrial inner/outer membrane fusion complex, Fzo/Mgm1/Ugo1. Only the Ugo1 protein is a member of the MC superfamily.
9.B.25.2.1 - The mammalian mitochondrial membrane fusion complex, Mitofusin 1 (Mfn1)/Mfn2/Optical Atrophy Protein 1 (OPA1) complex. This subfamily includes mitofusins 1 and 2.
Mitofusins: Mfn1 and Mfn2
Mfn1 and Mfn2 (TC# 9.B.25.2.1; Q8IWA4 and O95140, respectively) are both required for mitochondrial fusion in mammalian cells, yet they possess functional distinctions. For instance, the formation of tethered structures in vitro occurs more readily when mitochondria are isolated from cells overexpressing Mfn1 than Mfn2. In addition, Mfn2 specifically has been shown to associate with Bax and Bak (Bcl-2 family, TC#1.A.21), resulting in altered Mfn2 activity, indicating that the mitofusins possess unique functional characteristics. Lipidic holes may open on opposing bilayers as intermediates, and fusion in cardiac myocytes is coupled with outer mitochondrial membrane destabilization that is opportunistically employed during the mitochondrial permeability transition.
Mutations in Mfn2 (but not Mfn1) result in the neurological disorder Charcot-Marie-Tooth syndrome. These mutations can be complemented by the formation of Mfn1–Mfn2CMT2A hetero-oligomers but not homo-oligomers of Mfn2+–Mfn2CMT2A. This suggests that within the Mfn1–Mfn2 hetero-oligomeric complex, each molecule is functionally distinct, and that control of the expression levels of each protein likely represents the most basic form of regulation to alter mitochondrial dynamics in mammalian tissues. Indeed, the expression levels of Mfn1 and Mfn2 vary according to cell or tissue type, as does the mitochondrial morphology.
Yeast mitochondrial fusion proteins
In yeast, three proteins are essential for mitochondrial fusion. Fzo1 (P38297) and Mgm1 (P32266) are conserved guanosine triphosphatases that reside in the outer and inner membranes, respectively. At each membrane, these conserved proteins are required for the distinct steps of membrane tethering and lipid mixing. The third essential component is Ugo1, an outer membrane protein with a region homologous to, but distantly related to, a region in the Mitochondrial Carrier (MC) family. Hoppins et al. (2009) showed that Ugo1 is a modified member of this family, containing three transmembrane domains and existing as a dimer, a structure that is critical for the fusion function of Ugo1. Their analyses of Ugo1 indicate that it is required for both outer and inner membrane fusion after membrane tethering, indicating that it operates at the lipid-mixing step of fusion. This role is distinct from that of the fusion dynamin-related proteins and thus demonstrates that at each membrane, a single fusion protein is not sufficient to drive the lipid-mixing step. Instead, this step requires a more complex assembly of proteins. The formation of a fusion pore has not yet been demonstrated. The Ugo1 protein is a member of the MC superfamily.
See also
Mitochondrial fission
Mitochondrial carriers
MFN1
MFN2
OPA1
DNM1
Transporter Classification Database
References
Mitochondrial genetics
Cell anatomy
Cell biology
Protein families
Membrane proteins
Transmembrane proteins
Transmembrane transporters
Transport proteins
Integral membrane proteins | Mitochondrial fusion | Biology | 2,748 |
41,686 | https://en.wikipedia.org/wiki/Security%20management | Security management is the identification of an organization's assets i.e. including people, buildings, machines, systems and information assets, followed by the development, documentation, and implementation of policies and procedures for protecting assets.
An organization uses such security management procedures for information classification, threat assessment, risk assessment, and risk analysis to identify threats, categorize assets, and rate system vulnerabilities.
Loss prevention
Loss prevention focuses on identifying an organization's critical assets and how to protect them. A key component of loss prevention is assessing the potential threats to the successful achievement of the goal. This must include the potential opportunities that further the object (why take the risk unless there's an upside?). The probability and impact of each threat are then balanced in order to determine and implement measures that minimize or eliminate those threats.
Security management includes the theories, concepts, ideas, methods, procedures, and practices that are used to manage and control organizational resources in order to accomplish security goals. Policies, procedures, administration, operations, training, awareness campaigns, financial management, contracting, resource allocation, and dealing with problems like security degradation are all included in this vast sector.
Security risk management
The management of security risks applies the principles of risk management to the management of security threats. It consists of identifying threats (or risk causes), assessing the effectiveness of existing controls to face those threats, determining the risks' consequence(s), prioritizing the risks by rating the likelihood and impact, classifying the type of risk, and selecting an appropriate risk option or risk response. In 2016, a universal standard for managing risks was developed in The Netherlands. In 2017, it was updated and named: Universal Security Management Systems Standard 2017.
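As a minimal sketch of the prioritization step described above, risks are often scored on simple ordinal scales; the 1-5 scales and example entries below are illustrative assumptions, not part of any cited standard:

from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain), illustrative scale
    impact: int      # 1 (negligible) .. 5 (severe), illustrative scale

    @property
    def rating(self) -> int:
        # classic risk-matrix score: likelihood x impact
        return self.likelihood * self.impact

risks = [
    Risk("external criminal act", likelihood=3, impact=4),
    Risk("supplier contract breach", likelihood=2, impact=3),
    Risk("regulatory change", likelihood=4, impact=2),
]

# prioritize the risks by rating, highest first
for r in sorted(risks, key=lambda r: r.rating, reverse=True):
    print(f"{r.name}: {r.rating}")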
Types of risks
External
Strategic: Competition and customer demand.
Operational: Regulations, suppliers, and contract.
Financial: FX and credit.
Hazard: Natural disasters, cyber, and external criminal acts.
Compliance: New regulatory or legal requirements are introduced, or existing ones are changed, exposing the organization to a non-compliance risk if measures are not taken to ensure compliance.
Internal
Strategic: R&D.
Operational: Systems and processes (H&R, Payroll).
Financial: Liquidity and cash flow.
Hazard: Safety and security; employees and equipment.
Compliance: Concrete or potential changes in an organization's systems, processes, suppliers, etc. may create exposure to a legal or regulatory non-compliance.
Risk options
Risk avoidance
The first choice to be considered is the possibility of eliminating the existence of criminal opportunity, or avoiding the creation of such an opportunity, provided that this action does not itself create additional considerations or factors that would result in a greater risk. For example, removing all the cash flow from a retail outlet would eliminate the opportunity for stealing the money, but it would also eliminate the ability to conduct business.
Risk reduction
When avoiding or eliminating the criminal opportunity conflicts with the ability to conduct business, the next step is reducing the opportunity of potential loss to the lowest level consistent with the function of the business. In the example above, the application of risk reduction might result in the business keeping only enough cash on hand for one day's operation.
Risk spreading
Assets that remain exposed after the application of reduction and avoidance are the subjects of risk spreading. This is the concept that limits loss or potential losses by exposing the perpetrator to the probability of detection and apprehension prior to the consummation of the crime through the application of perimeter lighting, barred windows, and intrusion detection systems. The idea is to reduce the time available for thieves to steal assets and escape without apprehension.
Risk transfer
The two primary methods of accomplishing risk transfer are to insure the assets or to raise prices to cover the loss in the event of a criminal act. Generally speaking, when the first three steps have been properly applied, the cost of transferring risks is much lower.
Risk acceptance
All of the remaining risks must simply be assumed by the business as a part of doing business. Included with these accepted losses are deductibles, which form part of the insurance coverage.
Security policy implementations
Intrusion detection
Alarm device.
Access control
Locks, simple or sophisticated, such as biometric authentication and keycard locks.
Physical security
Environmental elements (e.g., mountains, trees, etc.).
Barricade.
Security guards (armed or unarmed) with wireless communication devices (e.g., two-way radio).
Security lighting (spotlight, etc.).
Security Cameras.
Motion Detectors.
IBNS containers for cash in transit.
Procedures
Coordination with law enforcement agencies.
Fraud management.
Risk Management.
CPTED.
Risk Analysis.
Risk Mitigation.
Contingency Planning.
See also
Alarm management
IT risk
IT risk management
ITIL security management, an information security management system standard based on ISO/IEC 27001
Physical security
Retail loss prevention
Security
Security policy
Gordon–Loeb model for cyber security investments
References
Further reading
BBC NEWS | In Depth. BBC News - Home. Web. 18 Mar. 2011. <http://news.bbc.co.uk/2/shared/spl/hi/guides/456900/456993/html/>.
Rattner, Daniel. "Loss Prevention & Risk Management Strategy." Security Management. Northeastern University, Boston. 5 Mar. 2010. Lecture.
Rattner, Daniel. "Risk Assessments." Security Management. Northeastern University, Boston. 15 Mar. 2010. Lecture.
Rattner, Daniel. "Internal & External Threats." Security Management. Northeastern University, Boston. 8 April. 2010. Lecture.
Asset Protection and Security Management Handbook, POA Publishing LLC, 2003, p. 358
ISO 31000 Risk management — Principles and guidelines, 2009, p. 7
Universal Security Management Systems Standard 2017 - Requirements and guidance for use, 2017, p. 50
Security Management Training & TSCM Training
Network management
Computer security procedures | Security management | Engineering | 1,183 |
27,032,885 | https://en.wikipedia.org/wiki/Peel-Raam%20Line | The Peel-Raam Line (Dutch: Peel-Raamstelling) was a Dutch defence line built in 1939 and attacked and conquered on 10 May 1940 by the German forces.
The defence line lay behind the Maas Line (about 9 km to 21 km away) and started at Grave, where a barrack complex was built as part of the Peel-Raam line. From there, the line passed Mill and ran through the Peel region along the Zuid-Willemsvaart to the Belgian border near Weert. In the north, the defence line was connected to the Grebbe Line. The line benefited from the natural protection of the swamps, rivers and canals in the area. In the north, an artificial barrier, the Defensiekanaal (Defence Canal), was dug. The line was made up of casemates (200 m apart) and barbed-wire obstructions. The railway bridge over the Defensiekanaal near Mill also had a spargel obstruction (a precursor of the Rommelspargel which the German Army used from 1943 onwards). On the first day of the German invasion, 10 May 1940, a German train crashed into this spargel obstruction.
There were not many communication lines between the casemates and the main force of the infantry was far behind the line of casemates.
The Dutch would have liked to connect their defence line with the one along the Albert Canal in Belgium, but the Belgian army wanted a new defence line, the Orange Line (Dutch: Oranjelinie), along the Tilburg-Waalwijk line and the Bergsche Maas. That meant the Peel-Raam Line was vulnerable, as the enemy could go around it by crossing Belgian soil.
The Peel-Raam Line is, for the most part, intact, particularly the northern part. The stretch between Griendtsveen and De Peel Air Base and the spot near Mill feature several visible remains. The fortifications and the casemates in the municipalities of Deurne, Venray and Mill en Sint Hubert are protected as national monuments.
See also
Defense Line of Amsterdam
Dutch Water Line
Grebbe Line
IJssel Line
Defense lines of the Netherlands
References
articles on TracesOfWar.com by Wilco Vermeer, copyright STIWOT
article on waroverholland.com copyright Stichting de Greb / Stichting Kennispunt Mei 1940''
Military history of the Netherlands
World War II defensive lines
World War II sites in the Netherlands
History of Limburg (Netherlands)
History of North Brabant | Peel-Raam Line | Engineering | 523 |
76,808,462 | https://en.wikipedia.org/wiki/Supersilver%20ratio | In mathematics, the supersilver ratio is a geometrical proportion close to . Its true value is the real solution of the equation
The name supersilver ratio results from analogy with the silver ratio, the positive solution of the equation x² = 2x + 1, and the supergolden ratio.
Definition
Two quantities a > b > 0 are in the supersilver ratio-squared if
((2a + b)/a)² = a/b.
The ratio a/b is here denoted ς².
Based on this definition, one has (2a + b)/a = 2 + 1/ς² = ς, so that
ς³ = 2ς² + 1.
It follows that the supersilver ratio is found as the unique real solution of the cubic equation ς³ − 2ς² − 1 = 0.
The decimal expansion of the root begins as 2.2055694304... .
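The root can be checked numerically; the following minimal sketch applies Newton's method to f(x) = x³ − 2x² − 1:

def supersilver(tol=1e-15):
    # Newton's method on f(x) = x**3 - 2*x**2 - 1
    x = 2.0  # starting guess near the root
    while True:
        f = x**3 - 2 * x**2 - 1
        df = 3 * x**2 - 4 * x
        x_next = x - f / df
        if abs(x_next - x) < tol:
            return x_next
        x = x_next

print(supersilver())  # 2.2055694304...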
The minimal polynomial for the reciprocal root is the depressed cubic x³ + 2x − 1, thus the simplest solution with Cardano's formula is
1/ς = ∛(1/2 + √(59/108)) + ∛(1/2 − √(59/108)),
or, using the hyperbolic sine,
1/ς = 2√(2/3) sinh((1/3) arsinh((3/4)√(3/2))).
The reciprocal root 1/ς is the superstable fixed point of the Newton iteration x ← (2x³ + 1)/(3x² + 2).
Rewrite the defining equation as ς = ∛(2ς² + 1), then the iteration x ← ∛(2x² + 1) results in the continued radical
ς = ∛(1 + 2∛(1 + 2∛(1 + ⋯)²)²).
Dividing the defining trinomial x³ − 2x² − 1 by x − ς, one obtains x² + (ς − 2)x + 1/ς, and the conjugate elements of ς are
x₁,₂ = (2 − ς)/2 ± i·√(1/ς − (2 − ς)²/4),
with x₁ + x₂ = 2 − ς and x₁·x₂ = 1/ς.
Properties
The growth rate of the average value of the n-th term of a random Fibonacci sequence is ς − 1 ≈ 1.2056.
The defining equation can be written ς = 2 + 1/ς².
The supersilver ratio can be expressed in terms of itself as fractions
ς = 2 + 1/ς² = 2 + 1/(2 + 1/ς²)² = ⋯
Similarly as the infinite geometric series
ς³ = 1/(1 − 2/ς) = 1 + 2/ς + 4/ς² + 8/ς³ + ⋯,
in comparison to the silver ratio identities
σ = 2 + 1/σ and σ² = 1/(1 − 2/σ) = 1 + 2/σ + 4/σ² + ⋯.
For every integer n one has ςⁿ = 2ς^(n−1) + ς^(n−3).
From this an infinite number of further relations can be found.
Continued fraction pattern of a few low powers
The supersilver ratio is a Pisot number. Because the absolute value of the algebraic conjugates is smaller than 1, powers of generate almost integers. For example: After ten rotation steps the phases of the inward spiraling conjugate pair – initially close to – nearly align with the imaginary axis.
The minimal polynomial of the supersilver ratio has discriminant −59. The imaginary quadratic field K = Q(√−59) has class number 3; thus, the Hilbert class field of K can be formed by adjoining ς.
With argument a generator for the ring of integers of , the real root of the Hilbert class polynomial is given by
The Weber-Ramanujan class invariant is approximated with error by
while its true value is the single real root of the polynomial
The elliptic integral singular value has closed form expression
(which is less than 1/294 the eccentricity of the orbit of Venus).
Third-order Pell sequences
These numbers are related to the supersilver ratio as the Pell numbers and Pell-Lucas numbers are to the silver ratio.
The fundamental sequence is defined by the third-order recurrence relation
S(n) = 2S(n−1) + S(n−3) for n > 2,
with initial values S(0) = 1, S(1) = 2, S(2) = 4.
The first few terms are 1, 2, 4, 9, 20, 44, 97, 214, 472, 1041, 2296, 5064,... .
The limit ratio between consecutive terms is the supersilver ratio.
The first 8 indices n for which S(n) is prime are n = 1, 6, 21, 114, 117, 849, 2418, 6144. The last number has 2111 decimal digits.
The sequence can be extended to negative indices using S(n) = S(n+3) − 2S(n+2).
The generating function of the sequence is given by
1/(1 − 2x − x³) = S(0) + S(1)x + S(2)x² + ⋯.
The third-order Pell numbers are related to sums of binomial coefficients by
S(n) = Σ_{k=0..⌊n/3⌋} C(n − 2k, k)·2^(n−3k).
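A quick consistency check of the recurrence, the listed terms, and the binomial-sum identity (a sketch; indexing assumes S(0) = 1, S(1) = 2, S(2) = 4 as above):

from math import comb

def pell3_rec(n):
    # third-order recurrence S(n) = 2*S(n-1) + S(n-3)
    s = [1, 2, 4]
    for i in range(3, n + 1):
        s.append(2 * s[i - 1] + s[i - 3])
    return s[n]

def pell3_binom(n):
    # binomial-sum identity for the same numbers
    return sum(comb(n - 2 * k, k) * 2 ** (n - 3 * k) for k in range(n // 3 + 1))

assert [pell3_rec(n) for n in range(12)] == [1, 2, 4, 9, 20, 44, 97, 214, 472, 1041, 2296, 5064]
assert all(pell3_rec(n) == pell3_binom(n) for n in range(30))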
The characteristic equation of the recurrence is x³ − 2x² − 1 = 0. If the three solutions are the real root ς and the conjugate pair x₁ and x₂, the supersilver numbers can be computed with the Binet formula
S(n) = a·ςⁿ + b·x₁ⁿ + c·x₂ⁿ,
with real a = ς/(3ς − 4) and conjugates b = x₁/(3x₁ − 4) and c = x₂/(3x₂ − 4), where x₁ and x₂ are the roots of x² + (ς − 2)x + 1/ς.
Since |b·x₁ⁿ + c·x₂ⁿ| < 1/2 for all n ≥ 0, the number S(n) is the nearest integer to a·ςⁿ, with a ≈ 0.8429.
Coefficients a = b = c = 1 result in the Binet formula for the related sequence A(n) = ςⁿ + x₁ⁿ + x₂ⁿ.
The first few terms are 3, 2, 4, 11, 24, 52, 115, 254, 560, 1235, 2724, 6008,... .
This third-order Pell-Lucas sequence has the Fermat property: if p is prime, A(p) ≡ A(1) ≡ 2 (mod p). The converse does not hold, but the small number of odd pseudoprimes makes the sequence special. The 14 odd composite numbers below to pass the test are n = 3, 5, 5, 315, 99297, 222443, 418625, 9122185, 3257, 11889745, 20909625, 24299681, 64036831, 76917325.
The third-order Pell numbers are obtained as integral powers of a matrix with real eigenvalue ς:
Q = [[2, 0, 1], [1, 0, 0], [0, 1, 0]],
with Qⁿ having S(n) as its upper-left entry.
The trace of Qⁿ gives the above A(n).
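The matrix form can be verified directly; the sketch below uses the companion-matrix arrangement of Q given above (one standard choice, assumed here) and checks that the traces of its powers reproduce 3, 2, 4, 11, 24, 52:

def mat_mul(A, B):
    # 3x3 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def mat_pow(M, n):
    R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity
    for _ in range(n):
        R = mat_mul(R, M)
    return R

Q = [[2, 0, 1],
     [1, 0, 0],
     [0, 1, 0]]

# trace(Q^n) reproduces the Pell-Lucas-type sequence A(n)
traces = [sum(mat_pow(Q, n)[i][i] for i in range(3)) for n in range(6)]
assert traces == [3, 2, 4, 11, 24, 52]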
Alternatively, Q can be interpreted as the incidence matrix for a D0L Lindenmayer system on the alphabet {a, b, c} with corresponding substitution rule
a → aac, b → a, c → b,
and initiator w(0) = a. The series of words w(n) produced by iterating the substitution have the property that the numbers of each letter occurring in w(n) are equal to successive third-order Pell numbers. The lengths of these words are given by
|w(n)| = S(n) + S(n−1) + S(n−2).
Associated to this string rewriting process is a compact set composed of self-similar tiles called the Rauzy fractal, that visualizes the combinatorial information contained in a multiple-generation three-letter sequence.
Supersilver rectangle
Given a rectangle of height 1, length ς and diagonal length √(ς² + 1). The triangles on the diagonal have altitudes ς/√(ς² + 1); each perpendicular foot divides the diagonal in ratio ς².
On the right-hand side, cut off a square of side length and mark the intersection with the falling diagonal. The remaining rectangle now has aspect ratio (according to ). Divide the original rectangle into four parts by a second, horizontal cut passing through the intersection point.
The parent supersilver rectangle and the two scaled copies along the diagonal have linear sizes in the ratios The areas of the rectangles opposite the diagonal are both equal to with aspect ratios (below) and (above).
If the diagram is further subdivided by perpendicular lines through the feet of the altitudes, the lengths of the diagonal and its seven distinct subsections are in ratios
Supersilver spiral
A supersilver spiral is a logarithmic spiral that gets wider by a factor of ς for every quarter turn. It is described by the polar equation r(θ) = a·exp(kθ), with initial radius a and parameter k = 2·ln(ς)/π. If drawn on a supersilver rectangle, the spiral has its pole at the foot of altitude of a triangle on the diagonal and passes through vertices of rectangles with aspect ratio ς which are perpendicularly aligned and successively scaled by a factor 1/ς.
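Under this parameterization (with k = 2·ln(ς)/π, as reconstructed above), the quarter-turn growth factor can be confirmed numerically:

import math

sigma = 2.2055694304  # supersilver ratio, approximate
k = 2 * math.log(sigma) / math.pi

def r(theta, a=1.0):
    # logarithmic spiral r = a * exp(k * theta)
    return a * math.exp(k * theta)

# one quarter turn widens the spiral by a factor of sigma
assert math.isclose(r(math.pi / 2) / r(0), sigma)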
See also
Solutions of equations similar to x³ = 2x² + 1:
Silver ratio – the only positive solution of the equation x² = 2x + 1
Golden ratio – the only positive solution of the equation x² = x + 1
Supergolden ratio – the only real solution of the equation x³ = x² + 1
References
Cubic irrational numbers
Mathematical constants
History of geometry
Integer sequences | Supersilver ratio | Mathematics | 1,284 |
1,674,555 | https://en.wikipedia.org/wiki/Vibration%20theory%20of%20olfaction | The vibration theory of smell proposes that a molecule's smell character is due to its vibrational frequency in the infrared range. This controversial theory is an alternative to the more widely accepted docking theory of olfaction (formerly termed the shape theory of olfaction), which proposes that a molecule's smell character is due to a range of weak non-covalent interactions between its protein odorant receptor (found in the nasal epithelium), such as electrostatic and Van der Waals interactions as well as H-bonding, dipole attraction, pi-stacking, metal ion, Cation–pi interaction, and hydrophobic effects, in addition to the molecule's conformation.
Introduction
The current vibration theory has recently been called the "swipe card" model, in contrast with "lock and key" models based on shape theory. As proposed by Luca Turin, the odorant molecule must first fit in the receptor's binding site. Then it must have a vibrational energy mode compatible with the difference in energies between two energy levels on the receptor, so electrons can travel through the molecule via inelastic electron tunneling, triggering the signal transduction pathway. The vibration theory is discussed in a popular but controversial book by Chandler Burr.
The odor character is encoded in the ratio of activities of receptors tuned to different vibration frequencies, in the same way that color is encoded in the ratio of activities of cone cell receptors tuned to different frequencies of light. An important difference, though, is that the odorant has to be able to become resident in the receptor for a response to be generated. The time an odorant resides in a receptor depends on how strongly it binds, which in turn determines the strength of the response; the odor intensity is thus governed by a similar mechanism to the "lock and key" model. For a pure vibrational theory, the differing odors of enantiomers, which possess identical vibrations, cannot be explained. However, once the link between receptor response and duration of the residence of the odorant in the receptor is recognised, differences in odor between enantiomers can be understood: molecules with different handedness may spend different amounts of time in a given receptor, and so initiate responses of different intensities.
Because some aroma molecules of different shapes smell the same (e.g., benzaldehyde, which gives the same scent to both almonds and cyanide), the shape "lock and key" model is not quite sufficient to explain what is going on. Experiments with olfaction that take quantum mechanics into consideration suggest that ultimately both theories might work in harmony: first the scent molecules need to fit, as in the docking theory of olfaction model, but then the molecular vibrations of the chemical bonds take over. In essence, the sense of smell may be much more like the sense of hearing, with the nose 'listening' to the vibrational modes of aroma molecules.
Some studies support vibration theory while others challenge its findings.
Major proponents and history
The theory was first proposed by Malcolm Dyson in 1928 and expanded by Robert H. Wright in 1954, after which it was largely abandoned in favor of the competing shape theory. A 1996 paper by Luca Turin revived the theory by proposing a mechanism, speculating that the G-protein-coupled receptors discovered by Linda Buck and Richard Axel were actually measuring molecular vibrations using inelastic electron tunneling as Turin claimed, rather than responding to molecular keys fitting molecular locks, working by shape alone. In 2007 a Physical Review Letters paper by Marshall Stoneham and colleagues at University College London and Imperial College London showed that Turin's proposed mechanism was consistent with known physics and coined the expression "swipe card model" to describe it. A PNAS paper in 2011 by Turin, Efthimios Skoulakis, and colleagues at MIT and the Alexander Fleming Biomedical Sciences Research Center reported fly behavioral experiments consistent with a vibrational theory of smell. The theory remains controversial.
Support
Isotope effects
A major prediction of Turin's theory is the isotope effect: that the normal and deuterated versions of a compound should smell different, although they have the same shape. A 2001 study by Haffenden et al. showed humans able to distinguish benzaldehyde from its deuterated version. However, this study has been criticized for lacking double-blind controls to eliminate bias and because it used an anomalous version of the duo-trio test. In another study, tests with animals have shown fish and insects able to distinguish isotopes by smell.
Deuteration changes the heats of adsorption and the boiling and freezing points of molecules (boiling points: 100.0 °C for H2O vs. 101.42 °C for D2O; melting points: 0.0 °C for H2O, 3.82 °C for D2O), pKa (i.e., dissociation constant: 9.71×10⁻¹⁵ for H2O vs. 1.95×10⁻¹⁵ for D2O, cf. Heavy water) and the strength of hydrogen bonding. Such isotope effects are exceedingly common, and so it is well known that deuterium substitution will indeed change the binding constants of molecules to protein receptors. Any binding interaction of an odorant molecule with an olfactory receptor will therefore be likely to show some isotope effect upon deuteration, and the observation of an isotope effect in no way argues exclusively for a vibrational theory of olfaction.
A study published in 2011 by Franco, Turin, Mershin and Skoulakis shows both that flies can smell deuterium, and that to flies, a carbon-deuterium bond smells like a nitrile, which has a similar vibration. The study reports that drosophila melanogaster (fruit fly), which is ordinarily attracted to acetophenone, spontaneously dislikes deuterated acetophenone. This dislike increases with the number of deuteriums. (Flies genetically altered to lack smell receptors could not tell the difference.) Flies could also be trained by electric shocks either to avoid the deuterated molecule or to prefer it to the normal one. When these trained flies were then presented with a completely new and unrelated choice of normal vs. deuterated odorants, they avoided or preferred deuterium as with the previous pair. This suggested that flies were able to smell deuterium regardless of the rest of the molecule. To determine whether this deuterium smell was actually due to vibrations of the carbon-deuterium (C-D) bond or to some unforeseen effect of isotopes, the researchers looked to nitriles, which have a similar vibration to the C-D bond. Flies trained to avoid deuterium and asked to choose between a nitrile and its non-nitrile counterpart did avoid the nitrile, lending support to the idea that the flies are smelling vibrations. Further isotope smell studies are under way in fruit flies and dogs.
Explaining differences in stereoisomer scents
Carvone presented a perplexing situation to vibration theory. Carvone has two isomers, which have identical vibrations, yet one smells like mint and the other like caraway (for which the compound is named).
An experiment by Turin filmed by the 1995 BBC Horizon documentary "A Code in the Nose" consisted of mixing the mint isomer with butanone, on the theory that the shape of the G-protein-coupled receptor prevented the carbonyl group in the mint isomer from being detected by the "biological spectroscope". The experiment succeeded with the trained perfumers used as subjects, who perceived that a mixture of 60% butanone and 40% mint carvone smelled like caraway.
The sulfurous smell of boranes
According to Turin's original paper in the journal Chemical Senses, the well documented smell of borane compounds is sulfurous, though these molecules contain no sulfur. He proposes to explain this by the similarity in frequency between the vibration of the B-H bond and the S-H bond. However, it has been pointed out that for o-carborane, which has a very strong B-H stretch at 2575 cm⁻¹, the "onion-like odor of crude commercial o-carborane is replaced by a pleasant camphoraceous odor on careful purification, reflecting the method for commercial preparation of o-carborane from reactions promoted by onion-smelling diethyl sulfide, which is removed on purification."
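For scale, a vibrational wavenumber can be converted to an energy quantum via E = hcν (standard physical constants; a minimal sketch):

H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e10     # speed of light, cm/s
EV = 1.602176634e-19  # joules per electronvolt

def wavenumber_to_ev(nu_cm):
    # E = h * c * nu for a wavenumber nu in cm^-1
    return H * C * nu_cm / EV

print(wavenumber_to_ev(2575))  # ~0.32 eV for the B-H stretch cited above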
Consistency with physics
Biophysical simulations published in Physical Review Letters in 2006 suggest that Turin's proposal is viable from a physics standpoint. However, Block et al. in their 2015 paper in Proceedings of the National Academy of Sciences indicate that their theoretical analysis shows that "the proposed electron transfer mechanism of the vibrational frequencies of odorants could be easily suppressed by quantum effects of nonodorant molecular vibrational modes".
Correlating odor to vibration
A 2004 paper published in the journal Organic Biomolecular Chemistry by Takane and Mitchell shows that odor descriptions in the olfaction literature correlate with EVA descriptors, which loosely correspond to the vibrational spectrum, better than with descriptors based on the two dimensional connectivity of the molecule. The study did not consider molecular shape.
Lack of antagonists
Turin points out that traditional lock-and-key receptor interactions deal with agonists, which increase the receptor's time spent in the active state, and antagonists, which increase the time spent in the inactive state. In other words, some ligands tend to turn the receptor on and some tend to turn it off. As an argument against the traditional lock-and-key theory of smell, very few olfactory antagonists have been found.
In 2004, a Japanese research group published that an oxidation product of isoeugenol is able to antagonize, or prevent, mice olfactory receptor response to isoeugenol.
Additional challenges to the docking theory of olfaction
Similarly shaped molecules with different molecular vibrations have different smells (metallocene experiment and deuterium replacement of molecular hydrogen). However this challenge is contrary to the results obtained with silicon analogues of bourgeonal and lilial, which despite their differences in molecular vibrations have similar smells and similarly activate the most responsive human receptor, hOR17-4, and with studies showing that the human musk receptor OR5AN1 responds identically to deuterated and non-deuterated musks. In the metallocene experiment, Turin observes that while ferrocene and nickelocene have nearly the same molecular sandwich structures, they possess distinct odors. He suggests that "because of the change in size and mass, different metal atoms give different frequencies for those vibrations that involve the metal atoms," an observation which is compatible with the vibration theory. However it has been noted that, in contrast to ferrocene, nickelocene rapidly decomposes in air and the cycloalkene odor observed for nickelocene, but not for ferrocene, could simply reflect decomposition of nickelocene giving trace amounts of hydrocarbons such as cyclopentadiene.
Differently shaped molecules with similar molecular vibrations have similar smells (replacement of carbon double bonds by sulfur atoms and the disparate shaped amber odorants)
Hiding functional groups does not hide the group's characteristic odor. However this is not always the case, since ortho-substituted arylisonitriles and thiophenols have far less offensive odors than the parent compounds.
Challenges
Three predictions by Luca Turin on the nature of smell, using concepts of vibration theory, were addressed by experimental tests published in Nature Neuroscience in 2004 by Vosshall and Keller. The study failed to support the prediction that isotopes should smell different, with untrained human subjects unable to distinguish acetophenone from its deuterated counterpart. This study also pointed to experimental design flaws in the earlier study by Haffenden. In addition, Turin's description of the odor of long-chain aldehydes as alternately (1) dominantly waxy and faintly citrus and (2) dominantly citrus and faintly waxy was not supported by tests on untrained subjects, despite anecdotal support from fragrance industry professionals who work regularly with these materials. Vosshall and Keller also presented a mixture of guaiacol and benzaldehyde to subjects, to test Turin's theory that the mixture should smell of vanillin. Vosshall and Keller's data did not support Turin's prediction. However, Vosshall says these tests do not disprove the vibration theory.
In response to the 2011 PNAS study on flies, Vosshall acknowledged that flies could smell isotopes but called the conclusion that smell was based on vibrations an "overinterpretation" and expressed skepticism about using flies to test a mechanism originally ascribed to human receptors. For the theory to be confirmed, Vosshall stated there must be further studies on mammalian receptors. Bill Hansson, an insect olfaction specialist, raised the question of whether deuterium could affect hydrogen bonds between the odorant and receptor.
In 2013, Turin and coworkers confirmed Vosshall and Keller's experiments showing that even trained human subjects were unable to distinguish acetophenone from its deuterated counterpart. At the same time Turin and coworkers reported that human volunteers were able to distinguish cyclopentadecanone from its fully deuterated analog. To account for the different results seen with acetophenone and cyclopentadecanone, Turin and coworkers assert that "there must be many C-H bonds before they are detectable by smell. In contrast to acetophenone which contains only 8 hydrogens, cyclopentadecanone has 28. This results in more than 3 times the number of vibrational modes involving hydrogens than in acetophenone, and this is likely essential for detecting the difference between isotopomers." Turin and coworkers provide no quantum mechanical justification for this latter assertion. Note that the correct term for compounds differing in the number of isotopic substitutions is isotopologue; isotopomers differ only in the position of the substitutions.
Vosshall, in commenting on Turin's work, notes that "the olfactory membranes are loaded with enzymes that can metabolise odorants, changing their chemical identity and perceived odour. Deuterated molecules would be poor substrates for such enzymes, leading to a chemical difference in what the subjects are testing. Ultimately, any attempt to prove the vibrational theory of olfaction should concentrate on actual mechanisms at the level of the receptor, not on indirect psychophysical testing." Richard Axel co-recipient of the 2004 Nobel prize for physiology for his work on olfaction, expresses a similar sentiment, indicating that Turin's work "would not resolve the debate – only a microscopic look at the receptors in the nose would finally show what is at work. Until somebody really sits down and seriously addresses the mechanism and not inferences from the mechanism... it doesn't seem a useful endeavour to use behavioural responses as an argument".
In response to the 2013 paper on cyclopentadecanone, Block et al. report that the human musk-recognizing receptor, OR5AN1, identified using a heterologous olfactory receptor expression system and robustly responding to cyclopentadecanone and muscone (which has 30 hydrogens), fails to distinguish isotopologues of these compounds in vitro. Furthermore, the mouse (methylthio)methanethiol-recognizing receptor, MOR244-3, as well as other selected human and mouse olfactory receptors, responded similarly to normal, deuterated, and carbon-13 isotopologues of their respective ligands, paralleling results found with the musk receptor OR5AN1. Based on these findings, the authors conclude that the proposed vibration theory does not apply to the human musk receptor OR5AN1, mouse thiol receptor MOR244-3, or other olfactory receptors examined. Additionally, theoretical analysis by the authors shows that the proposed electron transfer mechanism of the vibrational frequencies of odorants could be easily suppressed by quantum effects of nonodorant molecular vibrational modes. The authors conclude: "These and other concerns about electron transfer at olfactory receptors, together with our extensive experimental data, argue against the plausibility of the vibration theory."
In commenting on this work, Vosshall writes "In PNAS, Block et al.... shift the "shape vs. vibration" debate from olfactory psychophysics to the biophysics of the ORs themselves. The authors mount a sophisticated multidisciplinary attack on the central tenets of the vibration theory using synthetic organic chemistry, heterologous expression of olfactory receptors, and theoretical considerations to find no evidence to support the vibration theory of smell." While Turin comments that Block used "cells in a dish rather than within whole organisms" and that "expressing an olfactory receptor in human embryonic kidney cells doesn't adequately reconstitute the complex nature of olfaction...", Vosshall responds "Embryonic kidney cells are not identical to the cells in the nose ... but if you are looking at receptors, it's the best system in the world." In a Letter to the Editor of PNAS, Turin et al. raise concerns about Block et al. and Block et al. respond.
Recently, Saberi and Allaei have suggested that a functional relationship exists between molecular volume and the olfactory neural response. The molecular volume is an important factor, but it is not the only factor that determines the response of olfactory receptor neurons. The binding affinity of an odorant-receptor pair is affected by their relative sizes. The maximum affinity can be attained when the molecular volume of an odorant matches the volume of the binding pocket. A recent study describes the responses of primary olfactory neurons in tissue culture to isotopes and finds that a small fraction of the population (<1%) clearly discriminates between isotopes, some even giving an all-or-none response to H or D isotopologues of octanal. The authors attribute this to differences in hydrophobicity between normal and deuterated odorants.
See also
Odotope theory
Docking theory of olfaction
Quantum biology
References
Olfactory system
Quantum biology
Theories | Vibration theory of olfaction | Physics,Biology | 3,790 |
30,303,953 | https://en.wikipedia.org/wiki/Truncus%20%28mathematics%29 | In analytic geometry, a truncus is a curve in the Cartesian plane consisting of all points (x,y) satisfying an equation of the form
y = a / (x + b)² + c,
where a, b, and c are given constants. The two asymptotes of a truncus are parallel to the coordinate axes. The basic truncus y = 1/x² has asymptotes at x = 0 and y = 0, and every other truncus can be obtained from this one through a combination of translations and dilations.
For the general truncus form above, the constant a dilates the graph by a factor of a from the x-axis; that is, the graph is stretched vertically when a > 1 and compressed vertically when 0 < a < 1. When a < 0 the graph is reflected in the x-axis as well as being stretched vertically. The constant b translates the graph horizontally left b units when b > 0, or right when b < 0. The constant c translates the graph vertically up c units when c > 0 or down when c < 0.
The asymptotes of a truncus are found at x = -b (for the vertical asymptote) and y = c (for the horizontal asymptote).
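To make the transformation and asymptote rules above concrete, here is a minimal Python sketch (the function names are illustrative, not from any source):

```python
def truncus(x, a=1.0, b=0.0, c=0.0):
    """General truncus y = a / (x + b)**2 + c; undefined at x = -b."""
    return a / (x + b) ** 2 + c

def asymptotes(a, b, c):
    """A truncus has a vertical asymptote at x = -b and a horizontal one at y = c."""
    return {"vertical": -b, "horizontal": c}

# Example: y = 2 / (x - 1)**2 + 3 is the basic truncus y = 1/x**2
# stretched vertically by a factor of 2, shifted right 1 unit and up 3 units.
print(asymptotes(a=2, b=-1, c=3))    # {'vertical': 1, 'horizontal': 3}
print(truncus(2.0, a=2, b=-1, c=3))  # 5.0, since 2/(2 - 1)**2 + 3 = 5
```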
This function is more commonly known as a reciprocal squared function, particularly in the case of the basic example y = 1/x².
See also
Rational functions
Multiplicative inverse
References
Curves | Truncus (mathematics) | Mathematics | 291 |
40,458,248 | https://en.wikipedia.org/wiki/IC%202560 | IC 2560 is a spiral galaxy lying over 110 million light-years away from Earth in the constellation of Antlia. It was discovered by Lewis Swift in 1897.
The luminosity class of IC 2560 is II with a broad HI line containing regions of ionized hydrogen. Moreover, IC 2560 is an active Type 2 Seyfert Galaxy. It has a distinct bar structure in the center with the supermassive black hole at the core having a mass of .
One supernova, SN 2020ejm (type Ia, mag. 16), was discovered in IC 2560 on 11 March, 2020.
NGC 3223 group
IC 2560 is a member of the NGC 3223 Group. There are 15 other galaxies in the group including NGC 3223, NGC 3224, NGC 3258, NGC 3268, NGC 3289, IC 2552 and IC 2559. Together, the group is part of the Antlia Cluster.
References
External links
Barred spiral galaxies
Antlia
2560
029993
-05-25-001
375-4
10140-3318
Seyfert galaxies
Discoveries by Lewis Swift
Astronomical objects discovered in 1897 | IC 2560 | Astronomy | 237 |
13,622,958 | https://en.wikipedia.org/wiki/Bryant%20surface | In Riemannian geometry, a Bryant surface is a 2-dimensional surface embedded in 3-dimensional hyperbolic space with constant mean curvature equal to 1. These surfaces take their name from the geometer Robert Bryant, who proved that every simply-connected minimal surface in 3-dimensional Euclidean space is isometric to a Bryant surface by a holomorphic parameterization analogous to the (Euclidean) Weierstrass–Enneper parameterization.
References
Hyperbolic geometry
Riemannian geometry
Minimal surfaces | Bryant surface | Chemistry | 102 |
1,022,948 | https://en.wikipedia.org/wiki/Detroit%E2%80%93Windsor%20tunnel | The Detroit–Windsor tunnel (), also known as the Detroit–Canada tunnel, is an international highway tunnel connecting the cities of Detroit, Michigan, United States and Windsor, Ontario, Canada. It is the second-busiest crossing between the United States and Canada, the first being the Ambassador Bridge, which also connects the two cities, which are situated on the Detroit River.
Overview
The tunnel is nearly a mile long. At its lowest point, the two-lane roadway is below the river surface. There is a wide no-anchor zone enforced on river traffic around the tunnel.
The tunnel has three main levels. The bottom level brings in fresh air under pressure, which is forced into the mid level, where the traffic lanes are located. The ventilation system forces vehicle exhaust into the third level, from which it is vented at each end of the tunnel.
History
Construction
Construction began on the tunnel in the summer of 1928.
The Detroit–Windsor tunnel was built by the firm Parsons, Klapp, Brinckerhoff and Douglas (the same firm that built the Holland Tunnel). The executive engineer was Burnside A. Value, the engineer of design was Norwegian-American engineer Søren Anton Thoresen, while fellow Norwegian-American Ole Singstad consulted, and designed the ventilation.
Three different methods were used to construct the tunnel. The approaches were constructed using the cut-and-cover method. Beyond the approaches, a tunneling shield method was used to construct hand-bored tunnels. Most of the river section used the immersed tube method, in which steam-powered dredgers dug a trench in the river bottom; prefabricated tube sections were then sunk into the trench and covered over with mud. The tunnel comprises nine such tube sections.
The Detroit–Windsor tunnel was completed in 1930 at a total cost of approximately $25 million (around $ in dollars). It was the third underwater vehicular tunnel constructed in the United States, following the Holland Tunnel, between Jersey City, New Jersey, and downtown Manhattan, New York, and the Posey Tube, between Oakland and Alameda, California.
Its creation followed the opening of cross-border rail freight tunnels including the St. Clair Tunnel between Port Huron, Michigan, and Sarnia, Ontario, in 1891 and the Michigan Central Railway Tunnel between Detroit and Windsor in 1910.
The cities of Detroit and Windsor hold the distinction of jointly creating both the second and third tunnels between two nations in the world. The Detroit–Windsor tunnel is the world's third tunnel between two nations, and the first international vehicle tunnel. The Michigan Central Railway Tunnel, also under the Detroit River, was the second tunnel between two nations. The St. Clair Tunnel, between Port Huron, Michigan, and Sarnia, Ontario, under the St. Clair River, was the first.
Operations since 2007
In 2007, billionaire Manuel Moroun, owner of the nearby Ambassador Bridge, attempted to purchase the American side of the tunnel. In 2008, the City of Windsor controversially attempted to purchase the American side for $75 million, but the deal fell through after a scandal involving then-Detroit Mayor Kwame Kilpatrick.
Soon afterward, the city's finances were badly hit in a recession and the tunnel's future was in question. Following Detroit's July 2013 bankruptcy filing, Windsor Mayor Eddie Francis said that his city would consider purchasing Detroit's half of the tunnel if it was offered for sale.
On July 25, 2013, the lessor, manager and operator of the tunnel, Detroit Windsor Tunnel LLC, and its parent company, American Roads, LLC, voluntarily filed for chapter 11 bankruptcy protection in the United States Bankruptcy Court for the Southern District of New York. The American lease was eventually purchased by Syncora Guarantee, a Bermuda-based insurance company. Soon afterward, the lease with Detroit was extended to 2040. Both Syncora and Windsor retained the Windsor-Detroit Tunnel Corporation to manage the daily operations and upkeep of the tunnel. In May 2018, Syncora sold its interest in American Roads, LLC for $220 million to DIF Capital Partners, a Dutch-based investment fund management company specializing in infrastructure assets.
A $21.6 million renovation of the tunnel began in October 2017 to replace the aging concrete ceiling, along with other improvements to the infrastructure. Completion of the project was initially scheduled for June 2018, but is ongoing as of 2021.
Usage
The Detroit–Windsor tunnel crosses the Canada–United States border; an International Boundary Commission plaque marking the boundary in the tunnel is between flags of the two countries. The tunnel is the second-busiest crossing between the United States and Canada after the nearby Ambassador Bridge. A 2004 Border Transportation Partnership study showed that 150,000 jobs in the region and $13 billion (U.S.) in annual production depend on the Windsor-Detroit international border crossing. Between 2001 and 2005, profits from the tunnel peaked, with the cities receiving over $6 million annually. A steep decline in traffic eliminated profits from the tunnel from 2008 until 2012, with a modest recovery in the years since.
Traffic
About 13,000 vehicles a day use the tunnel, even though it has only one lane in each direction and does not allow large trucks. Historically, the tunnel carried a smaller amount of commercial traffic than other nearby crossings because of physical and cargo restraints, as well as limits on accessing roadways. Passenger automobile traffic on the tunnel increased from 1972 until it peaked in 1999 at just under 10 million vehicle crossings annually. After 1999, automobile crossings through the tunnel declined, dropping under 5 million for the first time in over three decades in 2007. Traffic on the tunnel recovered slightly in the following years as the economy began to improve after 2008.
Tolls
Tolls were last increased on the Canadian side in July 2021, by 37% for those paying in Canadian currency and 11% for those paying in American currency. Standard tolls for non-commercial Canada-bound vehicles are US$7.50 or C$7.50; United States-bound tolls are US$6.75 or C$6.75. Frequent crossers can use the Nexpress Toll Card for cheaper rates. Commercial vehicles and buses are charged higher rates. Motorcycles, scooters and bicycles are prohibited.
Features
Tunnel truck for disabled vehicles
When the tunnel first opened in the 1930s, the operators kept a unique rescue vehicle that could tow out disabled vehicles without having to back in or turn around. The vehicle had two driving positions, one facing in the opposite direction from the other: it was driven in, the disabled vehicle was hooked up, and the driver facing the other way then drove it out. This emergency vehicle also carried a power-driven water hose and chemical fire extinguishers.
CKLW, WJR and the tunnel
In the late 1960s, Windsor radio station CKLW AM 800 engineered a wiring setup which has allowed the station's signal to be heard clearly by automobiles traveling through the tunnel. Currently Detroit radio station WJR AM 760 can be heard clearly in the tunnel.
Ventilation
The upper and lower levels of the tunnel are used as exhaust and intake air ducts. One-hundred-foot ventilation towers at both ends of the tunnel enable air exchange once every 90 seconds.
Photo gallery
See also
Ambassador Bridge
Gordie Howe International Bridge, a second bridge crossing currently under construction
Detroit International Riverfront
Transportation in metropolitan Detroit
Detroit–Windsor
References
External links
Windsor Detroit Borderlink Limited (Windsor Plaza)
Detroit Windsor Tunnel LLC (Detroit Plaza)
Tunnel Bus
Detroit News archives: The Building of the Detroit–Windsor Tunnel
Transport in Windsor, Ontario
Transportation buildings and structures in Detroit
Tunnels in Michigan
Road tunnels in Ontario
Crossings of the Detroit River
Toll tunnels in the United States
Canada–United States border crossings
Historic Civil Engineering Landmarks
Tunnels completed in 1930
Buildings and structures in Windsor, Ontario
Toll tunnels in Canada
Articles containing video clips
Road tunnels in the United States
Immersed tube tunnels in Canada
Immersed tube tunnels in the United States
International tunnels
1930 establishments in Michigan
1930 establishments in Ontario | Detroit–Windsor tunnel | Engineering | 1,599 |
59,573,391 | https://en.wikipedia.org/wiki/Conservation%20paleobiology | Conservation paleobiology is a field of paleontology that applies knowledge of the geological and paleoecological record to the conservation and restoration of biodiversity and ecosystem services. Although the influence of paleontology on the ecological sciences can be traced back at least to the 18th century, the current field was established by the work of K.W. Flessa and G.P. Dietl in the first decade of the 21st century. The discipline utilizes paleontological and geological data to understand how biotas respond to climate and other natural and anthropogenic environmental change. This information is then used to address the challenges faced by modern conservation biology, such as understanding the extinction risk of endangered species, providing baselines for restoration and modelling future scenarios for the contraction or expansion of species ranges.
Description of the discipline
The main strength of conservation paleobiology is the availability of long-term data on species, communities and ecosystems that exceeds the timeframe of direct human experience. The discipline takes one of two approaches: near-time and deep-time.
Near-time conservation paleobiology
The near-time approach uses the recent fossil record (usually from the Late Pleistocene or the Holocene) to provide a long-term context for extant ecosystem dynamics. The fossil record is, in many cases, the only source of information on conditions prior to human impacts. These records can be used as reference baselines to identify targets for restoration ecology, to analyze species responses to perturbations (natural and anthropogenic), to understand historical species distributions and their variability, to discriminate natural from non-natural changes in biological populations, and to identify ecological legacies explicable only by reference to past events or conditions.
Example - Conservation of the European bison
The European bison or wisent (Bison bonasus) is a large herbivore once widespread in Europe whose range contracted over the last thousand years; it survived only in Central European forests, with the last wild population going extinct in the Bialowieza forest in 1921. Starting in 1929, reintroduction of animals from zoos allowed the species to recover in the wild. The historical range of Bison bonasus was limited to forested areas, so since at least the sixteenth century conservation measures to preserve the species were based on the assumption that forest is its optimal habitat. Ecological, morphological and paleoecological evidence, however, shows that B. bonasus is best adapted to open or mixed environments, indicating that the species was "forced" into a suboptimal habitat by human influences such as habitat loss, competition with livestock, diseases and hunting. This information has recently been applied to adopt measures more suitable for the conservation of the species.
Deep-time conservation paleobiology
The deep-time approach uses examples of species, community and ecosystem responses to environmental change across the longer geologic record, treating that record as an archive of natural ecological and evolutionary experiments. This approach provides examples for inferring possible outcomes of climate warming, the introduction of invasive species and cultural eutrophication. It also permits the identification of species responses to perturbations of various types and scales to serve as models for future scenarios, for example abrupt climate change or volcanic winters. Given its deep-time nature, this approach allows testing of how organisms or ecosystems react to a larger set of conditions than is observable in the modern world or the recent past.
Example - Insect damage and increasing temperatures
A pressing issue related to current global warming is the potential expansion in the range of tropical and subtropical crop pests; however, the signal of this poleward expansion is not clear. The analysis of the fossil record from past warm intervals of Earth's history (such as the Paleocene–Eocene Thermal Maximum) provides an adequate comparison for testing this hypothesis. Data show that, during warmer climates, the frequency and diversity of insect damage to North American plants increased significantly, supporting the hypothesis of pest expansion under global warming.
Relevance to conservation biology
Over the years, numerous attempts have been made to increase the synergy between paleobiologists and conservation scientists and managers. Despite being recognized as a useful tool for addressing current biodiversity problems, fossil data are still rarely included in contemporary conservation-related research, with the vast majority of studies focusing on short timescales. However, a few authors have compared extinctions in the geologic past with taxon losses in modern times, providing important perspectives on the severity of the modern biodiversity crisis.
Marine conservation paleobiology is an interdisciplinary field that applies the tools of paleontology to marine conservation biology. Its reliance on the deep-time fossil record distinguishes it from historical ecology.
References
Paleontology
Conservation biology | Conservation paleobiology | Biology | 960 |
35,927,838 | https://en.wikipedia.org/wiki/Britten%E2%80%93Davidson%20model | The Britten–Davidson model, also known as the gene-battery model, is a hypothesis for the regulation of protein synthesis in eukaryotes. Proposed by Roy John Britten and Eric H. Davidson in 1969, the model postulates four classes of DNA sequence: an integrator gene, a producer gene, a receptor site, and a sensor site. The sensor site regulates the integrator gene, which is responsible for the synthesis of activator RNA; the integrator gene cannot synthesize activator RNA unless the sensor site is activated. Activation and deactivation of the sensor site are controlled by external stimuli, such as hormones. The activator RNA then binds to a nearby receptor site, which stimulates the synthesis of mRNA at the producer (structural) gene.
This theory would explain how several different integrators could be concurrently synthesized, and would account for the pattern, observed in genes, of repetitive DNA sequences followed by a unique DNA sequence.
See also
Transcriptional regulation
Operon
References
Genetics
Biology theories | Britten–Davidson model | Biology | 204 |
42,214,280 | https://en.wikipedia.org/wiki/Chlorophyllum%20nothorachodes | Chlorophyllum nothorachodes is a species of agaric fungus in the family Agaricaceae. Found in Australia, it was officially described in 2003 from a collection made from a garden in Stirling, Australian Capital Territory. The fruit bodies of the fungus have caps up to wide covered with dark brown patches and small scales. The gills are free from attachment to the stipe and closely crowded. The spores are thick walled and measure 9–12 by 6–8 μm; the basidia (spore-bearing cells) are four-spored, lack clamps at their bases, and have dimensions of 29–36 by 9–11 μm. Cheilocystidia, which also lack a clamp at the base, measure 22–44 by 6.5–17 μm. The species epithet derives from the Ancient Greek νόθος ("false") and rachodes, referring to its resemblance to Chlorophyllum rhacodes.
References
External links
Agaricaceae
Fungi described in 2003
Fungi of Australia
Fungus species | Chlorophyllum nothorachodes | Biology | 218 |
71,051,857 | https://en.wikipedia.org/wiki/Crepidotus%20variabilis | Crepidotus variabilis is a species of saprophytic fungus in the family Crepidotaceae. It is commonly known as the variable oysterling in the United Kingdom, where it is seen in autumn. It may occur solitary, but more often grows in small scattered groups from summer to autumn on twigs and other woody debris of broad-leaved trees. It is very common but often confused with Crepidotus cesatii.
Description
Cap: The cap (pileus) of C. variabilis is generally about 0.5 to 2 cm in diameter. It is white and emerges kidney-shaped, soon becoming irregular and wavy and forming patches of overlapping fruit bodies. The surface is very finely downy to velvety with a more or less smooth margin.
Gills: On the underside, the gills (lamellae) appear somewhat fringed and are classified as free, as there is no stipe for them to connect to. The colour of the gills depends on maturity, ranging from off-white when young to ochraceous flesh-coloured as the spores mature.
Spores: The spore print is pinkish-buff, reflecting the colour of the gills. The ellipsoid basidiospores of C. variabilis measure 5–7 by 3–3.5 μm.
Absent features: No stipe (stem) or annulus (ring).
References
Crepidotaceae
Fungus species | Crepidotus variabilis | Biology | 283 |
31,104,249 | https://en.wikipedia.org/wiki/Straight-seven%20engine | A straight-seven engine or inline-seven engine is a straight engine with seven cylinders. It is more common in marine applications because these engines are usually based on a modular design, with individual heads per cylinder.
Marine engines
Straight-seven engines produced for marine usage include:
Wärtsilä-Sulzer RTA96-C two-stroke crosshead diesel engine
Wärtsilä 32 trunk piston engines
MAN Diesel IMO two-stroke crosshead diesel engine
Burmeister & Wain 722VU37 two-stroke diesel engine (commenced 1937, used in the Danish Havmanden-class submarines)
Sulzer 7QD42 diesel engine (1939-1940, used in the Dutch O 21-class submarines).
Land use
The AGCO Sisu 98HD is a straight-seven diesel engine that was released in 2008. Intended for farming machinery, the engine shares various components with the company's straight-six engine.
References
Straight-07
Seven-cylinder engines
Straight-07 | Straight-seven engine | Engineering | 201 |
22,685,072 | https://en.wikipedia.org/wiki/RR%20Lyrae | RR Lyrae is a variable star in the constellation Lyra, located in its western part near the border with Cygnus. As the brightest star in its class, it became the eponym for the RR Lyrae variable class of stars, and it has been extensively studied by astronomers. RR Lyrae variables serve as important standard candles that are used to measure astronomical distances. The period of pulsation of an RR Lyrae variable depends on its mass, luminosity and temperature, while the difference between a star's measured apparent brightness and its actual luminosity allows its distance to be determined via the inverse-square law. Hence, understanding the period-luminosity relation for a local set of such stars allows the distances of more distant stars of this type to be determined.
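To make the standard-candle logic concrete, here is a minimal Python sketch of the distance-modulus calculation implied by the inverse-square law (the magnitudes below are hypothetical placeholders, not measured values for RR Lyrae itself):

```python
def distance_parsecs(apparent_mag, absolute_mag):
    """Invert the distance modulus m - M = 5 * log10(d / 10 pc),
    which encodes the inverse-square dimming of a standard candle."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Hypothetical example: a pulsator whose period-luminosity relation
# implies absolute magnitude M = +0.6, observed at apparent magnitude
# m = 10.6, lies at 10**((10.6 - 0.6 + 5) / 5) = 10**3 = 1000 parsecs.
print(distance_parsecs(10.6, 0.6))  # 1000.0
```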
History
The variable nature of RR Lyrae was discovered by the Scottish astronomer Williamina Fleming at Harvard Observatory in 1901.
The distance of RR Lyrae remained uncertain until 2002 when the Hubble Space Telescope's fine guidance sensor was used to determine the distance of RR Lyrae within a 5% margin of error, yielding a value of . When combined with measurements from the Hipparcos satellite and other sources, the result is a distance estimate of .
Variable star class
This type of low-mass star has consumed the hydrogen at its core, evolved away from the main sequence, and passed through the red giant stage. Energy is now being produced by the thermonuclear fusion of helium at its core, and the star has entered an evolutionary stage called the horizontal branch (HB). The effective temperature of an HB star's outer envelope will gradually increase over time. When its resulting stellar classification enters a range known as the instability strip—typically at stellar class A—the outer envelope can begin to pulsate. RR Lyrae shows just such a regular pattern of pulsation, which is causing its apparent magnitude to vary between 7.06 and 8.12 over a short cycle lasting 0.567 days (13 hours, 36 minutes). Each radial pulsation causes the radius of the star to vary between 5.1 and 5.6 times the Sun's radius.
This star belongs to a subset of RR Lyrae-type variables that show a characteristic behavior called the Blazhko effect, named after Russian astronomer Sergey Blazhko. This effect is observed as a periodic modulation of a variable star's pulsation strength or phase, or sometimes both. It causes the light curve of RR Lyrae to change from cycle to cycle. In 2014, time-series photometric observations demonstrated the physical origin of this effect.
Other stellar classifications
As with other RR Lyrae-type variables, RR Lyrae itself has a low abundance of elements other than hydrogen and helium – what astronomers term its metallicity. It belongs to the Population II category of stars that formed during the early period of the Universe, when there was a lower abundance of metals in star-forming regions.
The trajectory of this star is carrying it along an orbit that is close to the plane of the Milky Way, taking it no more than above or below this plane. The Blazhko period for RR Lyrae is . The orbit has a high eccentricity, bringing RR Lyrae as close as to the Galactic Center at periapsis, and taking it as far as at apoapsis.
References
External links
image RR Lyrae
182989
Lyra
RR Lyrae variables
F-type giants
Lyrae, RR
A-type giants
095497
J19252793+4247040
BD+42 3338
TIC objects | RR Lyrae | Astronomy | 749 |
950,454 | https://en.wikipedia.org/wiki/Electric%20fish | An electric fish is any fish that can generate electric fields, whether to sense things around them, for defence, or to stun prey. Most fish able to produce shocks are also electroreceptive, meaning that they can sense electric fields. The only exception is the stargazer family (Uranoscopidae). Electric fish, although a small minority of all fishes, include both oceanic and freshwater species, and both cartilaginous and bony fishes.
Electric fish produce their electrical fields from an electric organ. This is made up of electrocytes, modified muscle or nerve cells specialized for producing strong electric fields, used to locate prey, for defence against predators, and for signalling, such as in courtship. Electric organ discharges are of two types, pulse and wave, and vary both by species and by function.
Electric fish have evolved many specialised behaviours. The predatory African sharptooth catfish eavesdrops on its weakly electric mormyrid prey to locate it when hunting, driving the prey fish to develop electric signals that are harder to detect. Bluntnose knifefishes produce an electric discharge pattern similar to the electrolocation pattern of the dangerous electric eel, probably a form of Batesian mimicry to dissuade predators. Glass knifefish that are using similar frequencies move their frequencies up or down in a jamming avoidance response; African knifefish have convergently evolved a nearly identical mechanism.
Evolution and phylogeny
All fish, indeed all vertebrates, use electrical signals in their nerves and muscles. Cartilaginous fishes and some other basal groups use passive electrolocation with sensors that detect electric fields; the platypus and echidna have separately evolved this ability. The knifefishes and elephantfishes actively electrolocate, generating weak electric fields to find prey. Finally, fish in several groups have the ability to deliver electric shocks powerful enough to stun their prey or repel predators. Among these, only the stargazers, a group of marine bony fish, do not also use electrolocation.
In vertebrates, electroreception is an ancestral trait, meaning that it was present in their last common ancestor. This form of ancestral electroreception is called ampullary electroreception, from the name of the receptive organs involved, ampullae of Lorenzini. These evolved from the mechanical sensors of the lateral line, and exist in cartilaginous fishes (sharks, rays, and chimaeras), lungfishes, bichirs, coelacanths, sturgeons, paddlefish, aquatic salamanders, and caecilians. Ampullae of Lorenzini were lost early in the evolution of bony fishes and tetrapods. Where electroreception does occur in these groups, it has secondarily been acquired in evolution, using organs other than and not homologous with ampullae of Lorenzini. Most common bony fish are non-electric. There are some 350 species of electric fish.
Electric organs have evolved eight times, four of these being organs powerful enough to deliver an electric shock. Each such group is a clade. Most electric organs evolved from myogenic tissue (which forms muscle); however, one group of Gymnotiformes, the Apteronotidae, derived its electric organ from neurogenic tissue (which forms nerves). In Gymnarchus niloticus (the African knifefish), the tail, trunk, hypobranchial, and eye muscles are incorporated into the organ, most likely to provide rigid fixation for the electrodes while swimming. In some other species, the tail fin is lost or reduced. This may reduce lateral bending while swimming, allowing the electric field to remain stable for electrolocation. There has been convergent evolution in these features among the mormyrids and gymnotids. Electric fish species that live in habitats with few obstructions, such as some bottom-living fish, display these features less prominently. This implies that convergence for electrolocation is indeed what has driven the evolution of the electric organs in the two groups.
(Cladogram legend: actively electrolocating fish are marked with a small yellow lightning flash; fish able to deliver electric shocks are marked with a red lightning flash; non-electric and purely passively electrolocating species are not shown.)
Weakly electric fish
Weakly electric fish generate a discharge that is typically less than one volt. These are too weak to stun prey and instead are used for navigation, electrolocation in conjunction with electroreceptors in their skin, and electrocommunication with other electric fish. The major groups of weakly electric fish are the Osteoglossiformes, which include the Mormyridae (elephantfishes) and the African knifefish Gymnarchus, and the Gymnotiformes (South American knifefishes). These two groups have evolved convergently, with similar behaviour and abilities but different types of electroreceptors and differently sited electric organs.
Strongly electric fish
Strongly electric fish, namely the electric eels, the electric catfishes, the electric rays, and the stargazers, have an electric organ discharge powerful enough to stun prey or be used for defence, and navigation. The electric eel, even when very small in size, can deliver substantial electric power, and enough current to exceed many species' pain threshold. Electric eels sometimes leap out of the water to electrify possible predators directly, as has been tested with a human arm.
The amplitude of the electrical output from these fish can range from 10 to 860 volts, with a current of up to 1 ampere, according to the surroundings, for example the different conductances of salt water and freshwater. To maximize the power delivered to the surroundings, the impedances of the electric organ and the water must be matched (see the sketch after this list):
Strongly electric marine fish give low voltage, high current electric discharges. In salt water, a small voltage can drive a large current limited by the internal resistance of the electric organ. Hence, the electric organ consists of many electrocytes in parallel.
Freshwater fish have high voltage, low current discharges. In freshwater, the power is limited by the voltage needed to drive the current through the large resistance of the medium. Hence, these fish have numerous cells in series.
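As a rough illustration of this impedance-matching argument, here is a minimal Python sketch under idealized assumptions (the organ is modelled as a battery-like source with a fixed internal resistance, and all numbers are hypothetical): power delivered to the water peaks when the water's resistance equals the organ's internal resistance.

```python
def power_to_water(emf_volts, r_internal_ohms, r_water_ohms):
    """Power dissipated in the surrounding water, modelling the electric
    organ as an EMF source in series with an internal resistance."""
    current = emf_volts / (r_internal_ohms + r_water_ohms)
    return current ** 2 * r_water_ohms

# Hypothetical organ: 500 V EMF with 1000 ohm internal resistance.
# Low water resistance stands in for salt water, high for freshwater.
for r_water in (100, 1000, 10000):
    print(r_water, round(power_to_water(500, 1000, r_water), 1))
# Prints 20.7 W at 100 ohm, 62.5 W at 1000 ohm (matched), 20.7 W at 10000 ohm.
# Stacking electrocytes in parallel lowers the organ's internal resistance
# (marine fish); stacking them in series raises it (freshwater fish).
```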
Electric organ
Anatomy
Electric organs vary widely among electric fish groups. They evolved from excitable, electrically active tissues that make use of action potentials for their function: most derive from muscle tissue, but in some groups the organ derives from nerve tissue. The organ may lie along the body's axis, as in the electric eel and Gymnarchus; it may be in the tail, as in the elephantfishes; or it may be in the head, as in the electric rays and the stargazers.
Physiology
Electric organs are made up of electrocytes, large, flat cells that create and store electrical energy, awaiting discharge. The anterior ends of these cells react to stimuli from the nervous system and contain sodium channels. The posterior ends contain sodium–potassium pumps. Electrocytes become polar when triggered by a signal from the nervous system. Neurons release the neurotransmitter acetylcholine; this triggers acetylcholine receptors to open and sodium ions to flow into the electrocytes. The influx of positively charged sodium ions causes the cell membrane to depolarize slightly. This in turn causes the gated sodium channels at the anterior end of the cell to open, and a flood of sodium ions enters the cell. Consequently, the anterior end of the electrocyte becomes highly positive, while the posterior end, which continues to pump out sodium ions, remains negative. This sets up a potential difference (a voltage) between the ends of the cell. After the voltage is released, the cell membranes go back to their resting potentials until they are triggered again.
Discharge patterns
Electric organ discharges (EODs) need to vary with time for electrolocation, whether with pulses, as in the Mormyridae, or with waves, as in the Torpediniformes and Gymnarchus, the African knifefish. Many electric fishes also use EODs for communication, while strongly electric species use them for hunting or defence. Their electric signals are often simple and stereotyped, the same on every occasion.
Electrocommunication
Weakly electric fish can communicate by modulating the electrical waveform they generate. They may use this to attract mates and in territorial displays.
Sexual behaviour
In sexually dimorphic signalling, as in the brown ghost knifefish (Apteronotus leptorhynchus), the electric organ produces distinct signals to be received by individuals of the same or other species. The electric organ fires to produce a discharge with a certain frequency, along with short modulations termed "chirps" and "gradual frequency rises", both varying widely between species and differing between the sexes. For example, in the glass knifefish genus Eigenmannia, females produce a nearly pure sine wave with few harmonics, males produce a far sharper non-sinusoidal waveform with strong harmonics.
Male bluntnose knifefishes (Brachyhypopomus) produce a continuous electric "hum" to attract females; this consumes 11–22% of their total energy budget, whereas female electrocommunication consumes only 3%. Large males produced signals of larger amplitude, and these are preferred by the females. The cost to males is reduced by a circadian rhythm, with more activity coinciding with night-time courtship and spawning, and less at other times.
Antipredator behaviour
Electric catfish (Malapteruridae) frequently use their electric discharges to ward off other species from their shelter sites, whereas with their own species they have ritualized fights with open-mouth displays and sometimes bites, but rarely use electric organ discharges.
The electric discharge pattern of bluntnose knifefishes is similar to the low voltage electrolocative discharge of the electric eel. This is thought to be a form of bluffing Batesian mimicry of the powerfully protected electric eel.
Fish that prey on electrolocating fish may "eavesdrop" on the discharges of their prey to detect them. The electroreceptive African sharptooth catfish (Clarias gariepinus) may hunt the weakly electric mormyrid, Marcusenius macrolepidotus in this way. This has driven the prey, in an evolutionary arms race, to develop more complex or higher frequency signals that are harder to detect.
Jamming avoidance response
It had been theorized as early as the 1950s that electric fish near each other might experience some type of interference. In 1963, Akira Watanabe and Kimihisa Takeda discovered the jamming avoidance response in Eigenmannia. When two fish are approaching one another, their electric fields interfere. This sets up a beat with a frequency equal to the difference between the discharge frequencies of the two fish. The jamming avoidance response comes into play when fish are exposed to a slow beat. If the neighbour's frequency is higher, the fish lowers its frequency, and vice versa. A similar jamming avoidance response was discovered in the distantly related Gymnarchus niloticus, the African knifefish, by Walter Heiligenberg in 1975, in a further example of convergent evolution between the electric fishes of Africa and South America. Both the neural computational mechanisms and the behavioural responses are nearly identical in the two groups.
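The avoidance rule described above is, at its core, a simple sign comparison; the following minimal Python sketch (illustrative only, not a model of the underlying neural computation) captures the behavioural logic:

```python
def jamming_avoidance_shift(own_hz, neighbour_hz, step_hz=2.0, slow_beat_hz=20.0):
    """Shift the fish's discharge frequency away from a neighbour's when
    the beat frequency |own - neighbour| is slow enough to cause jamming:
    go down if the neighbour is higher, up if the neighbour is lower."""
    beat = abs(own_hz - neighbour_hz)
    if beat == 0 or beat > slow_beat_hz:
        return own_hz  # no slow beat detected; no response
    return own_hz - step_hz if neighbour_hz > own_hz else own_hz + step_hz

print(jamming_avoidance_shift(400.0, 404.0))  # 398.0: neighbour higher, shift down
print(jamming_avoidance_shift(400.0, 396.0))  # 402.0: neighbour lower, shift up
```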
See also
Feature detection (nervous system)
References
Electroreceptive animals
Neuroethology
Articles containing video clips
Bioelectricity | Electric fish | Biology | 2,370 |
266,339 | https://en.wikipedia.org/wiki/Vending%20machine | A vending machine is an automated machine that dispenses items such as snacks, beverages, cigarettes, and lottery tickets to consumers after cash, a credit card, or other forms of payment are inserted into the machine or payment is otherwise made. The first modern vending machines were developed in England in the early 1880s and dispensed postcards. Vending machines exist in many countries and, in more recent times, specialized vending machines that provide less common products compared to traditional vending machine items have been created.
History
The earliest known reference to a vending machine is in the work of Hero of Alexandria, an engineer and mathematician in first-century Roman Egypt. His machine accepted a coin and then dispensed wine or holy water. When the coin was deposited, it fell upon a pan attached to a lever. The lever opened a valve which let some water flow out. The pan continued to tilt with the weight of the coin until it fell off, at which point a counterweight snapped the lever up and turned off the valve.
Coin-operated machines that dispensed tobacco were being operated as early as 1615 in the taverns of England. The machines were portable and made of brass. An English bookseller, Richard Carlile, devised a newspaper dispensing machine for the dissemination of banned works in 1822. Simon Denham was awarded British Patent no. 706 for his stamp dispensing machine in 1867, the first fully automatic vending machine.
Modern vending machines
The first modern coin-operated vending machines were introduced in London, England in the early 1880s, dispensing postcards. The machine was invented by Percival Everitt in 1883 and soon became a widespread feature at railway stations and post offices, dispensing envelopes, postcards, and notepaper. The Sweetmeat Automatic Delivery Company was founded in 1887 in England as the first company to deal primarily with installing and maintaining vending machines. Also at about that time in England, Dixon Henry Davies and inventor John Mensy Tourtel patented a coin-operated reading lamp for use on trains and founded the Railway Automatic Electric Light Syndicate, Ltd. The system ran off batteries and delivered 30 minutes of light for 1d., but was not a long-term success. Tourtel also invented a similarly coin-operated gas meter. In 1893, Stollwerck, a German chocolate manufacturer, was selling its chocolate in 15,000 vending machines. It set up separate companies in various territories to manufacture vending machines to sell not just chocolate, but cigarettes, matches, chewing gum, and soap products.
The first vending machine in the U.S. was built in 1888 by the Thomas Adams Gum Company, selling gum on New York City train platforms. The idea of adding games to these machines as a further incentive to buy came in 1897 when the Pulver Manufacturing Company added small figures, which would move around whenever somebody bought some gum from their machines. This idea spawned a whole new type of mechanical device known as the "trade stimulators".
Growth
The vending machine industry in the United States is a multi-billion dollar sector. In 2023, it was estimated to be worth $18.2 billion, with approximately 3 million machines generating an average monthly revenue of $525. These figures are averages, however, and the industry is trending toward more sophisticated and automated vending machines, particularly in North America.
This trend is driven by the increasing demand for convenience and the development of advanced technologies. For instance, the hot food vending machine sector is valued at $4.8 billion and is seeing significant growth as robotics companies introduce automated solutions for dispensing pasta, burgers, and groceries. The broader fresh food vending segment is projected to reach $8 billion by 2029, offering consumers more options for nutritious and convenient meals and snacks.
Mechanisms
Internal communication in vending machines is typically based on the MDB standard, supported by National Automatic Merchandising Association (NAMA) and European Vending & Coffee Service Association (EVA).
After payment has been tendered, a product may become available by:
the machine releasing it, so that it falls in an open compartment at the bottom, or into a cup, either released first, or put in by the customer, or
the unlocking of a door, drawer, or turning of a knob.
Some products need to be prepared to become available. For example, tickets are printed or magnetized on the spot, and coffee is freshly concocted. One of the most common forms of vending machine, the snack machine, often uses a metal coil which when ordered rotates to release the product.
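As a purely hypothetical sketch of this flow (not any real machine's control software), a minimal transaction handler for a coil-based snack machine might look like the following in Python:

```python
class SnackMachine:
    """Hypothetical coil-based snack machine: each slot holds a price and
    an item count, and a successful sale spins the slot's coil once so
    that one item drops into the open compartment at the bottom."""

    def __init__(self, slots):
        self.slots = slots  # e.g. {"A1": {"price": 150, "count": 5}} (cents)

    def vend(self, slot_id, paid_cents):
        slot = self.slots.get(slot_id)
        if slot is None or slot["count"] == 0:
            return ("refund", paid_cents)  # unknown or sold-out slot
        if paid_cents < slot["price"]:
            return ("refund", paid_cents)  # insufficient payment
        slot["count"] -= 1                 # rotate the coil: item falls
        return ("dispense", paid_cents - slot["price"])  # item plus change

machine = SnackMachine({"A1": {"price": 150, "count": 5}})
print(machine.vend("A1", 200))  # ('dispense', 50): item drops, 50 cents change
print(machine.vend("B2", 200))  # ('refund', 200): no such slot
```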
The main example of a vending machine giving access to all merchandise after paying for one item is a newspaper vending machine (also called vending box) found mainly in the U.S. and Canada. It contains a pile of identical newspapers. After a sale the door automatically returns to a locked position. A customer could open the box and take all of the newspapers or, for the benefit of other customers, leave all of the newspapers outside of the box, slowly return the door to an unlatched position, or block the door from fully closing, each of which are frequently discouraged, sometimes by a security clamp. The success of such machines is predicated on the assumption that the customer will be honest (hence the nickname "honor box"), and need only one copy.
Common vending machines
Change machine
A change machine is a vending machine that accepts large denominations of currency and returns an equal amount of currency in smaller bills or coins. Typically these machines are used to provide coins in exchange for paper currency, in which case they are also often known as bill changers.
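At the heart of such a machine is a change-making routine. For canonical coin systems such as U.S. currency, a simple greedy pass from the largest denomination down is sufficient; here is a minimal Python sketch, assuming an unlimited coin inventory:

```python
def make_change(amount_cents, denominations=(100, 25, 10, 5, 1)):
    """Greedy change-making, largest denominations first. This is optimal
    for canonical coin systems like U.S. coins (it can be suboptimal for
    unusual denomination sets) and ignores coin-stock limits."""
    change = {}
    for coin in denominations:
        count, amount_cents = divmod(amount_cents, coin)
        if count:
            change[coin] = count
    return change

# Breaking a $5 bill (500 cents):
print(make_change(500))          # {100: 5} -> five dollar coins
print(make_change(500, (25,)))   # {25: 20} -> twenty quarters
print(make_change(287))          # {100: 2, 25: 3, 10: 1, 1: 2}
```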
Cigarette vending
In the past, cigarettes were commonly sold in the United States through these machines, but this is increasingly rare due to concerns about underage buyers. Sometimes a pass has to be inserted in the machine to prove one's age before a purchase can be made. In the United Kingdom, legislation banning them outright came into effect on 1 October 2011. In Germany, Austria, Italy, the Czech Republic, and Japan, cigarette machines are still common.
Since 2007, however, age verification has been mandatory in Germany and Italy – buyers must be 18 or over. The various machines installed in pubs and cafés, other publicly accessible buildings, and on the street accept one or more of the following as proof of age: the buyer's identity card, bank debit card (smart card), or European Union driver's license. In Japan, age verification has been mandatory since 1 July 2008 via the Taspo card, issued only to persons aged 20 or over. The Taspo card uses RFID, stores monetary value, and is contactless.
Birth control and condom vending machines
A birth control machine is a vending machine for the sale of birth control, such as condoms or emergency contraception. Condom machines are often placed in public toilets, subway stations, airports, or schools as a public health measure to promote safe sex. Many pharmacies also keep one outside, for after-hours access. Rare examples exist that dispense female condoms or the morning after pill.
Food and snack vending machines
Various types of food and snack vending machines exist in the world. Food vending machines that provide shelf-stable foods such as chips, cookies, cakes, and other such snacks are common. Some food vending machines are refrigerated or frozen, such as for chilled soft drinks and ice cream treats, and some machines provide hot food.
Some unique food vending machines exist that are specialized and less common, such as the French fry vending machine and hot pizza vending machines, such as Let's Pizza. The Beverly Hills Caviar Automated Boutique dispenses frozen caviar and other high-end foods.
Bulk candy and gumball vending
The profit margins in the bulk candy business can be quite high – gumballs, for instance, can be purchased in bulk for around 2 cents per piece and sold for 25 cents in gumball machines in the U.S. and other countries. Gumballs and candy have a relatively long shelf life, enabling vending machine operators to manage many machines without too much time or cost involved. In addition, the machines are typically inexpensive compared to soft drink or snack machines, which often require power and sometimes refrigeration to work. Many operators donate a percentage of the profits to charity so that locations will allow them to place the machines for free.
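Those margin figures are easy to verify; a short arithmetic check in Python, using the per-piece cost and price quoted above:

```python
cost, price = 0.02, 0.25                # per gumball, in U.S. dollars
print(f"{(price - cost) / price:.0%}")  # 92% gross margin per gumball sold
```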
Bulk vending may be a more practical choice than soft drink/snack vending for an individual who also works a full-time job, since the restaurants, retail stores, and other locations suitable for bulk vending may be more likely to be open during the evening and on weekends than venues such as offices that host soft drink and snack machines.
The Bulk vending machines of today provide many different vending choices with the use of adjustable gumball and candy wheels. Adjustable gumball wheels allow an operator to not only offer the traditional 1-inch gumball, but they can also vend larger gumballs, and non-edible items such as toy capsules and bouncy balls. Adjustable candy wheels allow an operator to offer a variety of pressed candies, jelly candy, chocolates and even nuts.
Full-line vending
A full-line vending company may set up several types of vending machines that sell a wide range of products. Products may include candy, cookies, chips, fresh fruit, milk, cold food, coffee and other hot drinks, bottles and cans of soda and other drinks, and even frozen products like ice cream. These products can be sold from machines that include hot coffee, snack, cold food, and bottle machines. In the United States, almost all machines accept bills, with more and more machines accepting $5 bills, along with payment from traditional debit and credit cards, or a mobile payment system. This is an advantage to the vendor because it virtually eliminates the need for a bill changer. Larger corporations with cafeterias will often request full-line vending to supplement their food service.
Newspaper vending machine
A newspaper vending machine or newspaper rack is a vending machine designed to distribute newspapers. Newspaper vending machines are used worldwide, and they can be one of the main distribution methods for newspaper publishers. According to the Newspaper Association of America, in recent times in the United States, circulation via newspaper vending machines has dropped significantly: in 1996, around 46% of single-sale newspapers were sold in newspaper boxes, and in 2014, only 20% of newspapers were sold in the boxes.
Photo booth
A photo booth is a vending machine or modern kiosk that contains an automated, usually coin-operated, camera and film processor. Today, the vast majority of photo booths are digital. Traditionally, photo booths contain a seat or bench designed to seat the one or two patrons being photographed. The seat is typically surrounded by a curtain of some sort to allow for some privacy and help avoid outside interference during the photo session. Once the payment is made, the photo booth will take a series of photographs and the customer is then provided with prints. Older photo booth vending machines used film and involved the process of developing the film using liquid chemicals.
Stamp vending machine
A stamp vending machine is a mechanical, electrical or electro-mechanical device which can be used to automatically vend postage stamps to users in exchange for a pre-determined amount of money, normally in coin.
Ticket machines
A ticket machine is a vending machine that produces tickets. For instance, ticket machines dispense train tickets at railway stations, transit tickets at metro stations and tram tickets at some tram stops and in some trams. The typical transaction consists of a user using the display interface to select the type and quantity of tickets and then choosing a payment method of either cash, credit/debit card or smartcard. The ticket or tickets are then printed and dispensed to the user.
Specialized vending machines
From 2000 to 2010, the specialization of vending machines became more common. Vending extended increasingly into non-traditional areas like electronics, or even artwork or short stories. Machines of this new category are generally called automated retail kiosks. When using an automated retail machine, consumers select products, sometimes using a touchscreen interface, pay for purchases using a credit or debit card, and the product is then dispensed, sometimes via an internal robotic arm in the machine. The trend of specialization and proliferation of vending machines is perhaps most apparent in Japan, where vending machines sell products from toilet paper to hot meals and pornography, and there is one vending machine per 23 people.
Automobile vending machine
In November 2013, online auto retailer Carvana opened the first car vending machine in the U.S., located in Atlanta dispensing various models of used cars.
In late 2016, Autobahn Motors, a car dealership in Singapore, opened a 15-story-tall luxury car vending machine containing 60 cars, dispensing Ferrari and Lamborghini vehicles.
Bait vending machine
A bait machine is a vending machine that dispenses live fishing bait, such as worms and crickets, for fishing.
Book vending machine
Book vending machines dispense books, which may be full-sized. Some libraries use book vending machines. GoLibrary is a book lending vending machine used by libraries in Sweden and the U.S. state of California. The Biblio-Mat is a random antiquarian book vending machine located at The Monkey's Paw bookstore in Toronto, Canada.
Burger vending machine
In 2022 RoboBurger introduced a machine to cook and vend a fresh hamburger.
Cotton candy vending machine
The cotton candy vending machine is a vending machine that dispenses freshly spun cotton candy.
French fry vending machine
A French fry vending machine is a vending machine that dispenses hot French fries, also known as chips. The first known French fry vending machine was developed circa 1982 by the defunct Precision Fry Foods Pty Ltd. in Australia. A few companies have developed and manufactured French fry vending machines and prototypes; a prototype machine was also developed at Wageningen University in the Netherlands.
Pizza vending machine
Let's Pizza is the name of a vending machine that makes fresh pizza from scratch. It was developed in 2009 by the Italian company Sitos srl. The machine combines water, flour, tomato sauce, and fresh ingredients to make a pizza in approximately three minutes. It includes windows so customers can watch the pizza as it is made. The pizza is cooked in an infrared oven. The device was invented by Claudio Torghele, an entrepreneur in Rovereto, Italy. The machine debuted in Italy and has since spread to the United Kingdom, where it is becoming popular.
Life insurance
From the 1950s until the 1970s, vending machines were used at American airports to sell life insurance policies covering death, in case the buyer's flight crashed. However, this practice gradually disappeared due to the tendency of American courts to strictly construe such policies against their sellers, such as the Fidelity and Casualty Company of New York (which later became part of CNA Financial).
Marijuana vending machine
The marijuana vending machine originally found a niche market for selling or dispensing cannabis. In the early 21st century with legalization of cannabis in many countries, marijuana vending machines became widespread, selling products such as marijuana, hemp and CBD based products and smoke paraphernalia. The first experiments in distributing marijuana through vending machines started in the early 2010s, when they were already in use in the United States and Canada. The primary challenge faced in selling restricted or controlled merchandise like cannabis is to verify the identity of the buyer, which is overcome by the application of biometrics and smart vending software technology, the same technology used to verify the buyer's age in the automatic sales of tobacco.
Mold-A-Rama
The Mold-A-Rama is a brand name for a type of vending machine that makes blow-molded plastic figurines. Mold-A-Rama machines debuted in late 1962 and grew in prominence at the 1964 New York World's Fair. The machines can still be found operating in dozens of museums and zoos.
Fresh-squeezed orange juice
This type of machine contains fresh oranges and a mechanism to cut and squeeze them in order to produce fresh juice.
Prize vending machine
This type of machine sells a container that may contain a prize. Some such machines advertise the possible prizes that may be won. Examples include smart phones, holiday packages, and toys.
Social-networked vending machine
With the rise of social networks, vending machines have been integrated with social media in order to extend the interaction between machine and user beyond the physical machine. In a common application, the user connects a social account to a platform designated by the vending machine and receives a reward in return, typically a free item dispensed by the machine.
Make-Up vending machines
Entrepreneurs also use vending machines to sell cosmetics, offering an easy convenience to customers on the go.
Giving Machine
A Giving Machine is a specialized "reverse vending machine" that allows people to purchase donations for various nonprofits. They are placed in various public areas by the Church of Jesus Christ of Latter-day Saints during the Christmas and holiday season.
Popularity in Japan
Vending machines are a common sight in Japan and are considerably popular. There are more than 5.5 million machines installed throughout the nation, and Japan has the highest ratio of vending machines to people of any country, with one machine for every twenty-three people.
Thanks to the development of advanced technology, Japanese vending machines provide a broad range of services and sell many kinds of products. Food, smartphones, SIM cards, and even clothing can be found in these machines. Apart from the most popular drink vending machines, Japanese vending machines also offer products suited to the demands of particular locations. For example, products like sanitary napkins and tampons can be found in vending machines in female restrooms, while machines selling condoms are usually located in male restrooms.
Convenience, low running costs, security, and stability seem to be the main reasons for Japan's investment in vending machines.
A patent for an "automatic goods vending machine" was filed in 1888 in Japan; early surviving vending machines from around the 1900s include one that dispenses stamps and postcards, and one that dispenses sake. Confectionery vending machines became widespread in the 1920s, and juice vending machines became popular in the late 1950s and 1960s. By 2000, the number of vending machines in Japan had grown to 5.6 million. However, from around the early 2000s, the number of vending machines in Japan decreased slightly to 5.03 million, and the sales amount also decreased gradually, in part due to the rise of digital technology and market competition. In recent years, attention has been drawn towards older machines, such as the collection of vintage vending machines installed at the Sagamihara Vending Machine Park.
In 2024, it was reported that a sizeable portion of the vending machines in Japan would require updates to their acceptors in order to accept the new designs for the Japanese yen banknotes that were due to be released that year.
Smart vending machines
Similar to the development of traditional mobile phones into smartphones, vending machines have also progressively, though at a much slower pace, evolved into smart vending machines. Newer technologies at a lower cost of adoption, such as large digital touch displays, internet connectivity, cameras and various types of sensors, more cost-effective embedded computing power, digital signage, various advanced payment systems, and a wide range of identification technology (NFC, RFID, etc.) have contributed to this development. These smart vending machines enable a more interactive user experience and reduce operating costs while improving the efficiency of the vending operations through remote manageability and intelligent back-end analytics. Integrated sensors and cameras also represent a source of data such as customer demographics, purchase trends, and other locality-specific information. They also enable better customer engagement for brands through interactive multimedia and social media connectivity. Smart vending machines were ranked number 79 by JWT Intelligence on its list of 100 Things to Watch in 2014. According to market research by Frost & Sullivan, global shipments of smart vending machines were forecast to reach around 2 million units by 2018, and 3.6 million units by 2020 with a penetration rate of 20.3 percent.
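As a purely illustrative example of the remote-manageability side, a networked machine might report each vend together with sensor readings to a back-end service as a small structured event. All field names and values below are invented for this sketch, not taken from any real vending system:

```python
import json
from datetime import datetime, timezone

def sale_event(machine_id, slot, product, price, stock_left, temp_c):
    """Build a telemetry payload for one vend (hypothetical schema)."""
    return json.dumps({
        "machine_id": machine_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "slot": slot,
        "product": product,
        "price": price,
        "stock_left": stock_left,    # drives restocking routes
        "cabinet_temp_c": temp_c,    # enables remote fault detection
    })

print(sale_event("VM-0042", "B3", "sparkling water", 1.50, 7, 4.2))
```

Streams of such events are what the back-end analytics described above aggregate into purchase trends and machine-health dashboards.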
See also
Automat – a fast food restaurant where simple foods and drink are served by vending machines
Arcade game
Automated charging machine
Automated retail
Automated teller machine
Capitol Hill's mystery soda machine
ChargeBox
Charging station
Coffee vending machine
Death by vending machine
Eu'Vend – a vending industry trade show
Fortune teller machine
Freedom Toaster
Gashapon
Gold to Go
Gumball machine
Interactive kiosk
Jukebox
Kiddie ride
Love tester machine
Parking meter
Pinball machine
Reverse vending machine
Self-service
Slot machine
Slug (coin)
Stamp vending machines in the United Kingdom
Strength tester machine
Telephone booth
Ticket machine
Tower viewer
Types of retail outlets
Vending Times – a trade magazine focusing on the U.S. vending industry
Washing machine
Water cooler
References
Further reading
Krug, Bryon. (2003). Vending Business-in-a-Box. BooksOnStuff.
External links
In Praise of Vending Machines - slideshow by Life magazine
Retail formats
Commercial machines
Hellenistic engineering
Ancient inventions
Egyptian inventions
Greek inventions
Ancient Egyptian technology
Ancient Greek technology
1888 introductions
Confectionery
Soft drinks
Newspaper distribution
Articles containing video clips
Dispensers | Vending machine | Physics,Technology,Engineering | 4,483 |
51,880,209 | https://en.wikipedia.org/wiki/Grandin%20brothers | The Grandin Brothers; John Livingston Grandin (December 20, 1836 – September 10, 1912), William James Grandin (August 16, 1838 – December 7, 1904) and Elijah Bishop Grandin (December 20, 1840 – December 3, 1917) were a sibling trio of American entrepreneurs who were among the first to begin business ventures in commercial oil prospecting in the United States, and who later became involved in banking and Bonanza wheat farming. They eventually became titans of the wheat industry, operating the largest corporate wheat farm in the Dakota Territory (in Grandin, North Dakota) in the late 19th century.
Historical background
Grandin family ancestors reportedly came to America from the Isle of Jersey in the early 1700s. The first generations of Grandins in America found success in the mercantile industry. Samuel Grandin (1800-1888) was born in Sussex County, New Jersey, where he was educated only until age 8 or 10, and then left school to apprentice as a tailor and follow his family into mercantile work. In 1822, Samuel Grandin decided to move to Pennsylvania in search of new opportunities. He continued mercantile work there for another 18 years. During this time he married Sarah Ann Henry in 1832, and they had seven children, five sons and two daughters: Morris Worts Grandin (deceased in infancy), Stephen Girard Grandin (b.1835; deceased at 16), John Livingston Grandin (b.1836), William James Grandin (b.1838), Elijah Bishop Grandin (b.1840), Maria Jane Grandin (b.1843), and Emma Ann Grandin (b.1849; deceased during childhood).
Early life
The Grandin brothers Stephen Girard Grandin (b.1835), John Livingston Grandin (b.1836), and William James Grandin (b.1838) were all born in Pleasantville, Venango County, Pennsylvania. In 1840 their father moved the family to the nearby town of Tidioute. In Tidioute, Elijah Bishop Grandin was born (b.1840), and the boys' father ended his career as a tailor and entered the lumber industry, buying 33 acres of land and building a lumber mill. He began shipping timber down the Allegheny River, much of which came from his land. At young ages the Grandin boys quickly followed their father into lumber work. In 1851, Stephen Girard tragically drowned at the age of 16, leaving only three of the five brothers surviving. John Livingston and William James worked early on in the lumber business and at the general store in Tidioute. Their younger brother Elijah Bishop left home in early adulthood and went to work for the Hyde Bros. Lumber Company in Hydetown, Crawford County, Pennsylvania.
Second oil well, oil business
On August 27, 1859, Edwin Drake dug a successful commercial oil well in Titusville, Pennsylvania, which is commonly credited as the first oil well dug specifically as a commercial well in the United States. Days after the Drake well came in, news reached the Grandin general store in Tidioute, approximately 20 miles away. After hearing that Drake was selling barrels of oil at 75 cents each, John Livingston (then aged 23), who knew of petroleum seepage in the area, immediately began buying up small tracts of land surrounding an "oil spring" he knew of. On August 31, 1859, 4 days after the Drake well, John Livingston set up a spring-pole well, a simpler setup than Drake's technique. This well is credited as the second well dug specifically for commercial oil drilling purposes in the United States. Despite drilling down 134 feet, nearly twice the depth of Drake's well, in an area where Grandin had seen petroleum seepage at surface level, Grandin's first well was unsuccessful and dry. John Livingston then recruited his brother William James, and after that first unsuccessful attempt they dug several more wells that were successful and turned extremely profitable. Around this time Elijah Bishop returned to Pennsylvania to join his brothers in their oil business. Together they set up pipelines and containers. Along with Edwin Drake's, their efforts were a precursor to the Pennsylvania oil rush. According to legend, the Grandin Brothers' oil endeavors eventually became so successful that John Livingston set up an appointment with John D. Rockefeller, another oil prospector at the time, to negotiate a partnership. Rockefeller kept him waiting, and after a period of time Grandin, refusing to wait any longer, walked out before Rockefeller arrived, possibly forfeiting a chance to get in on the ground floor with Standard Oil.
Grandin Brothers Bank
In 1868, after several years of very successful oil endeavors, John Livingston Grandin and a business partner, A. Clark Baum, started a general banking business in Tidioute, founded with oil money. Two years later, John Livingston's brother William James purchased Baum's stake in the business and it became the Grandin Brothers Bank. Around this time the Philadelphia Financier Jay Cooke was undertaking a campaign to sell Northern Pacific Railway securities in his role as principal financier of the project. The Northern Pacific had turned to Cooke after his very successful efforts raising money for the Union Army during the Civil War by selling Civil War securities to investors in England. As the Grandin Bros. Bank grew its capital, the Grandin Brothers decided to use Jay Cooke & Company as one of their bank's depositories. However, in 1873 Cooke & Company went bankrupt, a major cause of the Panic of 1873. Cooke then told his creditors he could only repay them 15 cents on the dollar. Rather than accept 15 cents on the dollar for the considerable sum they were owed, and in turn lose a large portion of their oil profits, the Grandin Brothers decided to consider accepting the only collateral security available on the money. This collateral security was Northern Pacific Railway bonds with certain land purchasing rights to government land grants in the Dakota Territory made to the railroad. The theory for these bonds had been that once the railroad was completed, thus making the surrounding land much more valuable, the investors of the railway could exercise these bonds for a profit. However, since funding for the railroad ceased in 1873, so did the construction, and this land was thousands of acres of uninhabited prairie and wilderness in the Dakota Territory and beyond with no finished railway to reach them.
Western exploration, Dakota bonanza farming
In 1875, John Livingston went out to the Red River Valley of the Dakota Territory to inspect this land himself. He traveled to the farthest settlement into the territory, Fargo, which was then just a town of tents. At Fargo he hired an experienced colonel, and together they trekked fifty miles north in the Red River Valley to the government railroad land. Along the way they encountered a frontiersman living on the Red River and growing a small amount of wheat for himself. Considering what could be done with the land, John Livingston noted the rich surface soil and thick clay sub-soil, and recognized the good conditions for growing more wheat. He surveyed and marked off two townships, with the possibility of buying the odd-numbered sections. On reporting this survey to the Department of the Interior office in Moorhead, Minnesota, the survey was accepted. Deciding to exercise the Grandin Brothers' rights under the railroad bonds with their government land purchasing rights, he then purchased the railroad land. He also bought additional government land at a discount, by promising the U.S. Government at the Department of the Interior to set aside land for a town, thus helping to develop the country, as there were no known settlers living on the prairie there at that time. (The town they put up became Grandin, North Dakota.)
Knowing little about farming and interested in getting back to the banking, oil and lumber businesses in Pennsylvania, John Livingston contacted Oliver Dalrymple, a land speculator also from Pennsylvania who was cultivating half a section of wheat in nearby Minnesota, the largest known farm in the area at the time. The Grandin Brothers hired Dalrymple to get a corporate wheat farm up and running on a large portion of their land. As their success grew, they purchased more land, and at its height the Grandin Farm possessed, by some accounts, some of the best wheat land on the American continent and employed over 400 workers. Not a single year passed in which their profits did not exceed the original amount owed to them by Jay Cooke & Company.
Since the Grandin Brothers now owned several miles of land along the Red River, in order to quickly get the cultivated wheat to market they set up the Grandin Steamboat Line of steamboats and barges to transport both wheat and passengers down to Fargo. In Fargo they set up a grain elevator on the railroad line to carry the wheat to market. By 1888 however, much of the steamboat traffic had been replaced by trains.
Grandin Farm land liquidation
After amassing sizable fortunes, the Grandin Brothers began to slowly reduce the Grandin Farm holdings by selling off half sections and full sections at a time, in a by-then developed North Dakota, for vastly more than they had originally paid. By their deaths in the early 20th century, much of the land had been sold off. In 1920, John Livingston Grandin Jr., the son of John Livingston Grandin, sold the remaining land as the final disposition of the Grandin land. However, the buyer died 3 years later, and with the mortgage unpaid the land was repossessed. It was then supervised by a member of the Grandin family and rented out from 1923 until 1934, when the land was conveyed to the Grandin Land Trust, to be sold when another buyer was found. The land was finally sold outright in 1948, marking the true final disposition of the Grandin land.
Descendants
John Livingston Grandin Jr. (son of John Livingston Grandin) continued the family business endeavors until his death in 1963. He attended Harvard University; in 1905 was a passenger on the first purpose built cruise ship, the Prinzessin Victoria Luise; and during WWI was the Director for the Red Cross Bureau of Supplies for the Northeast Division.
Temple Grandin (great-granddaughter of John Livingston Grandin), professor of Animal Science and autism advocate.
See also
Grandin, Missouri
References
Petroleum engineers
Cass County, North Dakota
Traill County, North Dakota
Brother trios
Business families of the United States
Grandin family | Grandin brothers | Engineering | 2,105 |
77,676,319 | https://en.wikipedia.org/wiki/List%20of%20star%20systems%20within%20200%E2%80%93250%20light-years | This is a list of star systems within 200–250 light years of Earth.
See also
List of star systems within 150–200 light-years
List of star systems within 250–300 light-years
References
Lists by distance
Star systems
Lists of stars | List of star systems within 200–250 light-years | Physics,Astronomy | 51 |
77,911,947 | https://en.wikipedia.org/wiki/HotDog%20domain | In molecular biology, the HotDog domain is a protein structural motif found in a diverse superfamily of enzymes, primarily thioesterases and dehydratases. The name "HotDog" refers to its characteristic structure, where a central α-helix (the "sausage") is wrapped by a curved β-sheet (the "bun").
Structure
The HotDog domain consists of a central α-helix (typically 5 turns long) and an antiparallel β-sheet (usually 5-7 strands) that wraps around the α-helix. The basic structural unit of HotDog domain proteins is typically a homodimer, formed by the association of two monomers or two tandem copies of the domain. However, more complex quaternary structures, including tetramers and hexamers, have been observed.
Function
Proteins containing the HotDog domain are primarily involved in thioester hydrolysis, various dehydration reactions, and acyl transfer reactions. HotDog fold proteins play roles in various metabolic pathways, such as fatty acid biosynthesis and degradation, polyketide biosynthesis, and phenylacetic acid degradation.
Enzyme families
The HotDog domain superfamily includes several enzyme families, such as:
4-hydroxybenzoyl-CoA thioesterases
FabA-like dehydratases
YbgC-like acyl-CoA thioesterases
TesB-like thioesterases
MaoC dehydratase-like enzymes
Catalytic mechanism
The catalytic mechanism of HotDog domain enzymes varies depending on the specific enzyme and reaction. However, many of these enzymes share common features in their active sites: a conserved catalytic triad or dyad, often including aspartate, glutamate, or serine residues; a nucleophilic attack mechanism, typically involving an activated water molecule; and substrate binding sites that accommodate the CoA moiety and the acyl group.
Evolution and distribution
HotDog domain proteins are found in all three domains of life: Bacteria, Archaea, and Eukaryota. Their widespread distribution suggests an ancient evolutionary origin. Despite low overall sequence similarity, the structural conservation of the HotDog fold implies a common ancestor for these diverse enzymes.
See also
Protein fold
Thioesterase
Dehydratase
Fatty acid synthesis
References
External links
InterPro: HotDog domain superfamily (IPR029069)
Protein domains
Protein folds
Protein superfamilies
Enzymes | HotDog domain | Biology | 499 |
17,238,630 | https://en.wikipedia.org/wiki/Multiple-try%20Metropolis | Multiple-try Metropolis (MTM) is a sampling method that is a modified form of the Metropolis–Hastings method, first presented by Liu, Liang, and Wong in 2000.
It is designed to help the sampling trajectory converge faster,
by increasing both the step size and the acceptance rate.
Background
Problems with Metropolis–Hastings
In Markov chain Monte Carlo, the Metropolis–Hastings algorithm (MH) can be used to sample from a probability distribution which is difficult to sample from directly. However, the MH algorithm requires the user to supply a proposal distribution, which can be relatively arbitrary. In many cases, one uses a Gaussian distribution centered on the current point in the probability space, of the form $Q(x'; x^t) = \mathcal{N}(x^t, \sigma^2 I)$. This proposal distribution is convenient to sample from and may be the best choice if one has little knowledge about the target distribution, $\pi(x)$. If desired, one can use the more general multivariate normal distribution, $Q(x'; x^t) = \mathcal{N}(x^t, \Sigma)$, where $\Sigma$ is the covariance matrix which the user believes is similar to the target distribution.
Although this method must converge to the stationary distribution in the limit of infinite sample size, in practice the progress can be exceedingly slow. If $\sigma^2$ is too large, almost all steps under the MH algorithm will be rejected. On the other hand, if $\sigma^2$ is too small, almost all steps will be accepted, and the Markov chain will be similar to a random walk through the probability space. In the simpler case of $\Sigma = \sigma^2 I$, we see that $N$ steps only take us a distance of roughly $\sigma\sqrt{N}$. In this event, the Markov chain will not fully explore the probability space in any reasonable amount of time. Thus the MH algorithm requires reasonable tuning of the scale parameter ($\sigma^2$ or $\Sigma$).
Problems with high dimensionality
Even if the scale parameter is well-tuned, as the dimensionality of the problem increases, progress can still remain exceedingly slow. To see this, again consider $\Sigma = \sigma^2 I$ with $\sigma = 1$. In one dimension, this corresponds to a Gaussian distribution with mean 0 and variance 1. For one dimension, this distribution has a mean step of zero; however, the mean squared step size is given by

$$\langle x^2 \rangle = \int_{-\infty}^{\infty} \frac{x^2}{\sqrt{2\pi}}\, e^{-x^2/2}\, dx = 1$$

As the number of dimensions increases, the expected step size becomes larger and larger. In $N$ dimensions, the probability of moving a radial distance $r$ is related to the chi distribution, and is given by

$$P(r) \propto r^{N-1} e^{-r^2/2}$$

This distribution is peaked at $r = \sqrt{N-1}$, which is approximately $\sqrt{N}$ for large $N$. This means that the step size will increase as roughly the square root of the number of dimensions. For the MH algorithm, large steps will almost always land in regions of low probability, and therefore be rejected.
If we now add the scale parameter $\sigma$ back in, we find that to retain a reasonable acceptance rate, we must make the transformation $\sigma \to \sigma/\sqrt{N}$. In this situation, the acceptance rate can now be made reasonable, but the exploration of the probability space becomes increasingly slow. To see this, consider a slice along any one dimension of the problem. By making the scale transformation above, the expected step size in any one dimension is not $\sigma$ but instead is $\sigma/\sqrt{N}$. As this step size is much smaller than the "true" scale of the probability distribution (assuming that $\sigma$ is somehow known a priori, which is the best possible case), the algorithm executes a random walk along every parameter.
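The scaling argument above is easy to check numerically. Below is a minimal sketch (the setup, seed, and names are illustrative assumptions, not from the original text) showing that the mean radial length of an $N$-dimensional Gaussian step grows roughly like $\sigma\sqrt{N}$:

```python
# Illustrative check of the step-size scaling discussed above: an
# N-dimensional Gaussian proposal with per-axis scale sigma moves a
# radial distance of roughly sigma * sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0

for n_dims in (1, 10, 100, 1000):
    steps = rng.normal(0.0, sigma, size=(100_000, n_dims))
    radial = np.linalg.norm(steps, axis=1)     # chi-distributed lengths
    print(f"N={n_dims:5d}  mean |step| = {radial.mean():7.3f}"
          f"  sigma*sqrt(N) = {sigma * np.sqrt(n_dims):7.3f}")
```

Keeping the radial step comparable to the scale of the target therefore forces the per-axis proposal width down to $\sigma/\sqrt{N}$, which is the slow random-walk regime described above.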
The multiple-try Metropolis algorithm
Suppose $Q(x, y)$ is an arbitrary proposal function. We require that $Q(x, y) > 0$ only if $Q(y, x) > 0$. Additionally, $\pi(x)$ is the likelihood function.
Define $w(x, y) = \pi(x)\, Q(x, y)\, \lambda(x, y)$, where $\lambda(x, y)$ is a non-negative symmetric function in $x$ and $y$ that can be chosen by the user.
Now suppose the current state is $x$. The MTM algorithm is as follows:
1) Draw $k$ independent trial proposals $y_1, \ldots, y_k$ from $Q(x, \cdot)$. Compute the weights $w(y_j, x)$ for each of these.
2) Select $y$ from the $y_j$ with probability proportional to the weights.
3) Now produce a reference set by drawing $x_1, \ldots, x_{k-1}$ from the distribution $Q(y, \cdot)$. Set $x_k = x$ (the current point).
4) Accept $y$ with probability

$$r = \min\left(1,\; \frac{w(y_1, x) + \cdots + w(y_k, x)}{w(x_1, y) + \cdots + w(x_k, y)}\right)$$

It can be shown that this method satisfies the detailed balance property and therefore produces a reversible Markov chain with $\pi(x)$ as the stationary distribution.
If $Q$ is symmetric (as is the case for the multivariate normal distribution), then one can choose $\lambda(x, y) = \frac{1}{Q(x, y)}$, which gives $w(x, y) = \pi(x)$.
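As an illustration, here is a minimal sketch of one MTM step for a symmetric Gaussian proposal, using the simplification above ($\lambda(x, y) = 1/Q(x, y)$, so $w(x, y) = \pi(x)$). The function names, the target `pi`, and all parameters are assumptions made for this example, not part of the original presentation:

```python
import numpy as np

rng = np.random.default_rng(1)

def mtm_step(x, pi, k=5, sigma=1.0):
    """One multiple-try Metropolis step; pi is an unnormalized target density."""
    # 1) Draw k independent trial proposals y_1..y_k from Q(x, .).
    trials = x + sigma * rng.normal(size=(k, x.size))
    w_trials = np.array([pi(y) for y in trials])   # w(y_j, x) = pi(y_j)
    if w_trials.sum() == 0.0:
        return x                                   # every trial had zero density
    # 2) Select y among the trials with probability proportional to the weights.
    y = trials[rng.choice(k, p=w_trials / w_trials.sum())]
    # 3) Reference set: x_1..x_{k-1} drawn from Q(y, .), plus x_k = x.
    refs = y + sigma * rng.normal(size=(k - 1, x.size))
    denom = sum(pi(z) for z in refs) + pi(x)       # sum of w(x_j, y) = pi(x_j)
    # 4) Accept y with probability min(1, sum w(y_j, x) / sum w(x_j, y)).
    return y if rng.random() < min(1.0, w_trials.sum() / denom) else x
```

Iterating `mtm_step` from some starting point, with for example `pi = lambda x: np.exp(-0.5 * x @ x)` as the target, yields the chain; a larger `k` allows a larger `sigma` while keeping the acceptance rate reasonable.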
Disadvantages
Multiple-try Metropolis needs to compute the energy of $2k - 1$ other states at every step.
If the slow part of the process is calculating the energy, then this method can be slower.
If the slow part of the process is finding neighbors of a given point, or generating random numbers, then again this method can be slower.
It can be argued that this method only appears faster because it puts much more computation into a "single step" than Metropolis-Hastings does.
See also
Markov chain Monte Carlo
Metropolis–Hastings algorithm
Detailed balance
References
Liu, J. S., Liang, F. and Wong, W. H. (2000). The multiple-try method and local optimization in Metropolis sampling, Journal of the American Statistical Association, 95(449): 121–134 JSTOR
Monte Carlo methods
Markov chain Monte Carlo | Multiple-try Metropolis | Physics | 974 |
1,637,397 | https://en.wikipedia.org/wiki/Biorefinery | A biorefinery is a refinery that converts biomass to energy and other beneficial byproducts (such as chemicals). The International Energy Agency Bioenergy Task 42 defined biorefining as "the sustainable processing of biomass into a spectrum of bio-based products (food, feed, chemicals, materials) and bioenergy (biofuels, power and/or heat)". As refineries, biorefineries can provide multiple chemicals by fractioning an initial raw material (biomass) into multiple intermediates (carbohydrates, proteins, triglycerides) that can be further converted into value-added products. Each refining phase is also referred to as a "cascading phase". The use of biomass as feedstock can provide a benefit by reducing the impacts on the environment, as lower pollutants emissions and reduction in the emissions of hazard products. In addition, biorefineries are intended to achieve the following goals:
Supply the current fuels and chemical building blocks
Supply new building blocks for the production of novel materials with disruptive characteristics
Creation of new jobs, including rural areas
Valorization of waste (agricultural, urban, and industrial waste)
Achieve the ultimate goal of reducing GHG emissions
Classification of biorefinery systems
Biorefineries can be classified based in four main features:
Platforms: Refers to key intermediates between raw material and final products. The most important intermediates are:
Biogas from anaerobic digestion
Syngas from gasification
Hydrogen from water-gas shift reaction, steam reforming, water electrolysis and fermentation
C6 sugars from hydrolysis of sucrose, starch, cellulose and hemicellulose
C5 sugars (e.g., xylose, arabinose: C5H10O5), from hydrolysis of hemicellulose and food and feed side streams
Lignin from the processing of lignocellulosic biomass.
Liquid from pyrolysis (pyrolysis oil)
Products: Biorefineries can be grouped into two main categories according to whether biomass is converted into an energetic or a non-energetic product. In this classification the main market must be identified:
Energy-driven biorefinery systems: The main product is a secondary energy carrier such as biofuels, power, or heat.
Material-driven biorefinery systems: The main product is a biobased product
Feedstock: Dedicated feedstocks (Sugar crops, starch crops, lignocellulosic crops, oil-based crops, grasses, marine biomass); and residues (oil-based residues, lignocellulosic residues, organic residues and others)
Processes: Conversion process to transform biomass into a final product:
Mechanical/physical: The chemical structure of the biomass components is preserved. These operations include pressing, milling, separation, and distillation, among others
Biochemical: Processes under low temperature and pressure, using microorganisms or enzymes.
Chemical processes: The substrate undergoes change through the action of an external chemical agent (e.g., hydrolysis, transesterification, hydrogenation, oxidation, pulping)
Thermochemical: Severe conditions are applied to the feedstock (high pressure and high temperature, with or without a catalyst).
The aforementioned features are used to classify biorefinery systems according to the following method:
Identify the feedstock, the main technologies included in the process, platform, and the final products
Draw the scheme of the refinery using the features identified in step 1.
Label the refinery system by citing the number of platforms, the products, feedstocks, and processes involved
Compile a table with the features identified and the source of internal energy demand
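To make the labeling in step 3 concrete, here is a minimal illustrative sketch (the class and field names are invented for this example, not an established schema); it encodes the four features and produces a label in the style of the example classifications listed next:

```python
from dataclasses import dataclass

@dataclass
class BiorefinerySystem:
    platforms: list    # key intermediates, e.g. ["C6 sugar"]
    products: list     # final products, e.g. ["bioethanol", "animal feed"]
    feedstocks: list   # e.g. ["starch crops"]
    processes: list    # e.g. ["hydrolysis", "fermentation"]

    def label(self) -> str:
        # Step 3: cite the platforms, products, and feedstocks involved.
        return (f"{' and '.join(self.platforms)} platform biorefinery for "
                f"{' and '.join(self.products)} from {' and '.join(self.feedstocks)}")

mill = BiorefinerySystem(
    platforms=["C6 sugar"],
    products=["bioethanol", "animal feed"],
    feedstocks=["starch crops"],
    processes=["hydrolysis", "fermentation"],
)
print(mill.label())
# -> C6 sugar platform biorefinery for bioethanol and animal feed from starch crops
```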
Some examples of classifications are:
C6 sugar platform biorefinery for bioethanol and animal feed from starch crops.
Syngas platform biorefinery for FT-diesel and phenols from straw
C6 and C5 sugar and syngas platform biorefinery for bioethanol, FT-diesel and furfural from saw mill residues.
Economic viability of biorefinery systems
Techno-economic assessment (TEA) is a methodology used to evaluate whether a technology or process is economically attractive. TEA research has been conducted to assess the performance of the biorefinery concept in diverse production systems such as sugarcane mills, biodiesel production, pulp and paper mills, and the treatment of industrial and municipal solid waste.
Bioethanol plants and sugarcane mills are well-established processes where the biorefinery concept can be implemented, since sugarcane bagasse is a feasible feedstock to produce fuels and chemicals; lignocellulosic bioethanol (2G) is produced in Brazil in two plants with capacities of 40 and 84 Ml/y (about 0.4% of the production capacity in Brazil). TEA of ethanol production using mild liquefaction of bagasse plus simultaneous saccharification and co-fermentation shows a minimum selling price between 50.38 and 62.72 US cents/L, which is comparable with the market price. The production of xylitol, citric acid and glutamic acid from sugarcane lignocellulose (bagasse and harvesting residues), each in combination with electricity, has been evaluated; the three biorefinery systems were simulated as annexed to an existing sugar mill in South Africa. The production of xylitol and glutamic acid showed economic feasibility, with internal rates of return (IRR) of 12.3% and 31.5%, exceeding the IRR of the base case (10.3%). Likewise, the production of ethanol, lactic acid, or methanol and ethanol-lactic acid from sugarcane bagasse has been studied; lactic acid proved to be economically attractive by showing the greatest net present value (M$476–1278); in the same way, the production of ethanol with lactic acid as a co-product was found to be a favorable scenario (net present value between M$165 and M$718), since this acid has applications in the pharmaceutical, cosmetic, chemical and food industries.
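TEA studies such as those above rank scenarios using discounted cash-flow metrics like net present value (NPV) and internal rate of return (IRR). As a reminder of how these figures are derived, here is a minimal sketch; the cash flows are invented for illustration, not taken from any study cited here:

```python
def npv(rate, cash_flows):
    """Net present value of annual cash flows, with year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return by bisection: the discount rate at which
    NPV = 0 (assumes one sign change: an outlay followed by revenues)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical project: a 100 M$ investment returning 20 M$/year for 15 years.
flows = [-100.0] + [20.0] * 15
print(f"NPV at 10%: {npv(0.10, flows):.1f} M$,  IRR: {irr(flows):.1%}")
```

A scenario is considered attractive when its NPV is positive at the chosen discount rate, or equivalently when its IRR exceeds the hurdle rate, as with the base-case comparison of 10.3% above.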
As for biodiesel production, this industry also has the potential to integrate biorefinery systems to convert residual biomasses and wastes into biofuel, heat, electricity and bio-based green products. Glycerol is the main co-product in biodiesel production and can be transformed into valuable products through chemocatalytic technologies; the valorization of glycerol for the production of lactic acid, acrylic acid, allyl alcohol, propanediols, and glycerol carbonate has been evaluated; all glycerol valorization routes were shown to be profitable, the most attractive being the manufacture of glycerol carbonate. Palm empty fruit bunches (EFB) are an abundant lignocellulosic residue from the palm oil/biodiesel industry; the conversion of this residue into ethanol, heat and power, and cattle feed was evaluated according to techno-economic principles, and the scenarios under study showed reduced economic benefits, although their implementation represented a reduction in environmental impact (climate change and fossil fuel depletion) compared to traditional biodiesel production. The economic feasibility of bio-oil production from EFB via fast pyrolysis using a fluidized bed was studied; crude bio-oil can potentially be produced from EFB at a product value of 0.47 $/kg, with a payback period of 3.2 years and a return on investment of 21.9%. The integration of microalgae and Jatropha as a viable route for the production of biofuels and biochemicals has been analyzed in the United Arab Emirates (UAE) context. Three scenarios were examined; in all of them, biodiesel and glycerol are produced; in the first scenario, biogas and organic fertilizer are produced by anaerobic fermentation of Jatropha fruit cake and seedcake; the second scenario includes the production of lipids from Jatropha and microalgae to produce biodiesel, as well as animal feed, biogas and organic fertilizer; the third scenario involves the production of lipids from microalgae for biodiesel, with hydrogen and animal feed as final products; only the first scenario was profitable.
In regard to the pulp and paper industry, lignin is a natural polymer co-generated in pulping and is generally used as boiler fuel to generate heat or steam to cover the energy demand of the process. Since lignin accounts for 10–30 wt% of the available lignocellulosic biomass and is equivalent to ~40% of its energy content, the economics of biorefineries depend on cost-effective processes to transform lignin into value-added fuels and chemicals. The conversion of an existing Swedish kraft pulp mill to the production of dissolving pulp, electricity, lignin, and hemicellulose has been studied; self-sufficiency in terms of steam and the production of excess steam was a key factor for the integration of a lignin separation plant; in this case, the digester has to be upgraded to preserve the same production level, which represents 70% of the total investment cost of conversion. The potential of using the kraft process for producing bioethanol from softwoods in a repurposed or co-located kraft mill has been studied; a sugar recovery higher than 60% enables the process to be competitive for the production of ethanol from softwood. The repurposing of a kraft pulp mill to produce both ethanol and dimethyl ether has been investigated; in the process, cellulose is separated by an alkaline pretreatment and then hydrolyzed and fermented to produce ethanol, while the resulting liquor containing dissolved lignin is gasified and refined to dimethyl ether; the process was shown to be self-sufficient in terms of hot utility (fresh steam) demand but with a deficit of electricity; the process can be economically feasible but is highly dependent on the development of biofuel prices. An exergetic and economic evaluation of the production of catechol from lignin was performed to determine its feasibility; the results showed a total capital investment of 4.9 M$ based on a plant capacity of 2,544 kg/d of feedstock; the catechol price was estimated at 1,100 $/t and the valorization ratio was found to be 3.02.
The high generation of waste biomass makes it an attractive source for conversion to valuable products, and several biorefinery routes have been proposed to upgrade waste streams into valuable products. The production of biogas from banana peel (Musa x paradisiaca) under the biorefinery concept is a promising alternative, since it is possible to obtain biogas and other co-products including ethanol, xylitol, syngas, and electricity; this process also provides high profitability at large production scales. The economic assessment of integrating organic waste anaerobic digestion with other mixed-culture anaerobic fermentation technologies was studied; the highest profit is obtained by dark fermentation of food waste with separation and purification of acetic and butyric acids (47 USD/t of food waste). The technical feasibility, profitability and extent of investment risk of producing sugar syrups from food and beverage waste were analyzed; the returns on investment were satisfactory for the production of fructose syrup (9.4%), HFS42 (22.8%) and glucose-rich syrup (58.9%); the sugar syrups also have high cost competitiveness, with relatively low net production costs and minimum selling prices. The valorization of municipal solid waste through integrated mechanical biological chemical treatment (MBCT) systems for the production of levulinic acid has been studied; the revenue from resource recovery and product generation (without the inclusion of gate fees) is more than enough to outweigh the waste collection fees and annual capital and operating costs.
Environmental impact of biorefinery systems
One of the main goals of biorefineries is to contribute to a more sustainable industry through the conservation of resources and the reduction of greenhouse gas emissions and other pollutants. Nevertheless, other environmental impacts may be associated with the production of biobased products, such as land use change, eutrophication of water, pollution of the environment with pesticides, or higher energy and material demands that lead to environmental burdens. Life cycle assessment (LCA) is a methodology to evaluate the environmental load of a process, from the extraction of raw materials to the end use. LCA can be used to investigate the potential benefits of biorefinery systems; multiple LCA studies have been conducted to analyse whether biorefineries are more environmentally friendly than conventional alternatives.
Feedstock is one of the main sources of environmental impacts in biofuel production; these impacts stem from the field operations needed to grow, handle and transport the biomass to the biorefinery gate. Agricultural residues are the feedstock with the lowest environmental impact, followed by lignocellulosic crops, and finally by first-generation arable crops, although the environmental impacts are sensitive to factors such as crop management practices, harvesting systems, and crop yields. The production of chemicals from biomass feedstock has shown environmental benefits; bulk chemicals from biomass-derived feedstocks have been studied, showing savings in non-renewable energy use and greenhouse gas emissions.
The environmental assessment for 1G and 2G ethanol shows that these two biorefinery systems are able to mitigate climate change impacts in comparison to gasoline, but higher climate change benefits are achieved with 2G ethanol production (up to 80% reduction). The conversion of palm empty fruit bunches into valuable products (ethanol, heat and power, and cattle feed) reduces the impacts on climate change and fossil fuel depletion compared to traditional biodiesel production, but the benefits for toxicity and eutrophication are limited. Propionic acid produced by fermentation of glycerol leads to a significant reduction of GHG emissions compared to fossil fuel alternatives; however, the energy input is double and the contribution to eutrophication is significantly higher. An LCA for the integration of butanol from prehydrolysate in a Canadian kraft dissolving pulp mill shows that the carbon footprint of this butanol may be 5% lower compared to gasoline, but is not as low as that of corn butanol (23% lower than gasoline).
The majority of LCA studies on the valorization of food waste have focused on the environmental impacts of biogas or energy production, with only a few on the synthesis of high value-added chemicals; hydroxymethylfurfural (HMF) has been listed as one of the top 10 bio-based chemicals by the US Department of Energy. An LCA of eight food waste valorization routes for the production of HMF shows that the most environmentally favorable option uses a less polluting catalyst (AlCl3) and co-solvent (acetone) and provides the highest yield of HMF (27.9 Cmol%); metal depletion and toxicity impacts (marine ecotoxicity, freshwater toxicity, and human toxicity) were the categories with the highest values.
Biorefinery in the pulp and paper industry
The pulp and paper industry is considered the first industrialized biorefinery system; in this industrial process other co-products are produced, including tall oil, rosin, vanillin, and lignosulfonates. Apart from these co-products, the system includes energy generation (in the form of steam and electricity) to cover its internal energy demand, and it has the potential to feed heat and electricity to the grid.
This industry is the largest consumer of biomass; it uses not only wood as feedstock but is also capable of processing agricultural residues such as bagasse, rice straw and corn stover. Other important features of this industry are well-established logistics for biomass production, avoidance of competition with food production for fertile land, and higher biomass yields.
Examples
The fully operational Blue Marble Energy company has multiple biorefineries located in Odessa, WA and Missoula, MT.
Canada's first Integrated Biorefinery, developed on anaerobic digestion technology by Himark BioGas is located in Alberta. The biorefinery utilizes Source Separated Organics from the metro Edmonton region, open pen feedlot manure, and food processing waste.
Chemrec's technology for black liquor gasification and production of second-generation biofuels such as biomethanol or BioDME is integrated with a host pulp mill and utilizes a major sulfate or sulfite process waste product as feedstock.
Novamont has converted old petrochemical factories into biorefineries, producing protein, plastics, animal feed, lubricants, herbicides and elastomers from cardoon.
C16 Biosciences produces synthetic palm oil from carbon-containing waste (i.e. food waste, glycerol) by means of yeast.
MacroCascade aims to refine seaweed into food and fodder, and then products for healthcare, cosmetics, and fine chemicals industries. The side streams will be used for the production of fertilizer and biogas. Other seaweed biorefinery projects include MacroAlgaeBiorefinery (MAB4), SeaRefinery and SEAFARM.
FUMI Ingredients produces foaming agents, heat-set gels and emulsifiers from micro-algae with the help of micro-organisms such as brewer's yeast and baker's yeast.
The BIOCON platform is researching the processing of wood into various products. More precisely, their researchers are looking at transforming lignin and cellulose into various products. Lignin for example can be transformed into phenolic components which can be used to make glue, plastics and agricultural products (e.g. crop protection). Cellulose can be transformed into clothes and packaging.
In South Africa, Numbitrax LLC bought a Blume Biorefinery system for producing bioethanol as well as additional high-return offtake products from local and readily available resources such as the prickly pear cactus.
Circular Organics (part of Kempen Insect Valley) grows black soldier fly larvae on waste from the agricultural and food industry (i.e. fruit and vegetable surplus, remaining waste from fruit juice and jam production). These larvae are used to produce protein, grease, and chitin. The grease is usable in the pharmaceutical industry (cosmetics, surfactants for shower gel), replacing other vegetable oils such as palm oil, or it can be used in fodder.
Biteback Insect makes insect cooking oil, insect butter, fatty alcohols, insect frass protein and chitin from superworm (Zophobas morio).
See also
Microalgae
Food waste: can be made into PHA (thus a 2nd generation feedstock bioplastic)
Tomato: can be made into tomato flesh (food), tomato seeds (containing fatty acids) and tomato peel (containing lycopene)
Biomaterials use in sustainable textile
Tobacco: GM tobacco could provide industrial enzymes for biofuel production. Tobacco can also supply nicotine (i.e. as used in e-liquids).
Citrus: can be made into juice (food) and citrus peel (containing succinic acid, pectin, essential oil, cellulose; also just usable as zest)
Biomass (can be used in CHP systems)
Gasification
Carbon neutrality
Renewable energy commercialization
Maggot farming
References
External links
Tactical Biorefinery
Saccharification
Biosynergy
Biorefinery from biomass
Aqueous-Phase Reforming.
Wisconsin Biorefining Development Initiative.
Biorefinery Film
Active Biorefinery Facilities
Top Value Added Chemicals from Biomass: list of chemicals that can be extracted from biomass
Biofuels technology
Oil refineries
Sustainable technologies
Bright green environmentalism | Biorefinery | Chemistry,Biology | 4,170 |
4,522,192 | https://en.wikipedia.org/wiki/Oxacillin | Oxacillin (trade name Bactocill) is a narrow-spectrum second-generation beta-lactam antibiotic of the penicillin class developed by Beecham.
It was patented in 1960 and approved for medical use in 1962.
Medical uses
Oxacillin is a penicillinase-resistant β-lactam. It is similar to methicillin, and has replaced methicillin in clinical use. Other related compounds are nafcillin, cloxacillin, dicloxacillin, and flucloxacillin. Since it is resistant to penicillinase enzymes, such as that produced by Staphylococcus aureus, it is widely used clinically in the US to treat penicillin-resistant Staphylococcus aureus. However, with the introduction and widespread use of both oxacillin and methicillin, antibiotic-resistant strains called methicillin-resistant and oxacillin-resistant Staphylococcus aureus (MRSA/ORSA) have become increasingly prevalent worldwide. MRSA/ORSA can be treated with vancomycin or other new antibiotics.
Contraindications
The use of oxacillin is contraindicated in individuals that have experienced a hypersensitivity reaction to any medication in the penicillin family of antibiotics. Cross-allergenicity has been documented in individuals taking oxacillin that experienced a previous hypersensitivity reaction when given cephalosporins and cephamycins.
Adverse effects
Commonly reported adverse effects associated with the use of oxacillin include skin rash, diarrhea, nausea, vomiting, hematuria, agranulocytosis, eosinophilia, leukopenia, neutropenia, thrombocytopenia, hepatotoxicity, acute interstitial nephritis, and fever. High doses of oxacillin have been reported to cause renal, hepatic, and nervous system toxicity. Common to all members of the penicillin class of drugs, oxacillin may cause acute or delayed hypersensitivity reactions. As an injection, oxacillin may cause injection site reactions, which may be characterized by redness, swelling, and itching.
Pharmacology
Mechanism of Action
Oxacillin, through its β-lactam ring, covalently binds to penicillin-binding proteins, which are enzymes involved in the synthesis of the bacterial cell wall. This binding interaction interferes with the transpeptidation reaction and inhibits the synthesis of peptidoglycan, a prominent component of the cell wall. By decreasing the integrity of the bacterial cell wall, it is thought that oxacillin and other penicillins kill actively growing bacteria through cell autolysis.
Chemistry
As with other members of the penicillin family, the chemical structure of oxacillin features a 6-aminopenicillanic acid nucleus with a substituent attached to the amino group. The 6-aminopenicillanic acid nucleus consists of a thiazolidine ring attached to a β-lactam ring, which is the active moiety responsible for the antibacterial activity of the penicillin family. The substituent present on oxacillin is thought to impart resistance to degradation via bacterial β-lactamases.
History
Oxacillin, a derivative of methicillin, was first synthesized in the early 1960s as part of a research initiative led by Peter Doyle and John Naylor of Beecham, in consort with Bristol-Myers. Members of the isoxazolyl penicillin family, which includes cloxacillin, dicloxacillin, and oxacillin, were synthesized to counter the increasing prevalence of infections caused by penicillin-resistant Staphylococcus aureus. While methicillin could only be administered via injection, the isoxazolyl penicillins, including oxacillin, could be given orally or by injection. Following the synthesis of cloxacillin and oxacillin, Beecham retained the right to commercially develop cloxacillin in the United Kingdom while Bristol-Myers was given the marketing rights for oxacillin in the United States.
Society and Culture
FDA Approval History
April 8, 1971: Oxacillin Sodium Injectable
Applicant: Sandoz
July 27, 1973: Bactocill Capsule
Applicant: GlaxoSmithKline
March 10, 1980: Oxacillin Sodium Capsule
Applicant: Ani Pharms Inc
May 15, 1980: Oxacillin Sodium for Solution
Applicant: TEVA
June 2, 1981: Bactocill for Solution
Applicant: GlaxoSmithKline
December 23, 1986: Oxacillin Sodium Powder
Applicant: Sandoz
September 29, 1988: Oxacillin Sodium Injectable
Applicant: Watson Labs Inc
October 26, 1988: Oxacillin Sodium Injectable
Applicant: Watson Labs Inc
October 26, 1989: Bactocill in Plastic Container Injectable
Applicant: Baxter Healthcare
March 30, 2012: Oxacillin Sodium Injectable
Applicant: Sagent Pharms
January 18, 2013: Oxacillin Sodium Injectable
Applicant: Aurobindo Pharma LTD
August 25, 2014: Oxacillin Sodium Injectable
Applicant: Mylan Labs LTD
December 11, 2015: Oxacillin Sodium Injectable
Applicant: Hospira Inc
July 31, 2017: Oxacillin Sodium Injectable
Applicant: Wockhardt Bio/Ag
Pricing
The average wholesale price (AWP) for oxacillin products is provided as follows. The prices listed below are intended to serve as reference values and do not represent the pricing determined by any single manufacturer or entity.
Bactocill in Dextrose Intravenous
1 g/50 mL: $20.37
2 g/50 mL: $32.48
Oxacillin Sodium Injection
1 g: $17.52
2 g: $33.99
10 g: $138.77
References
ChemBank
Penicillins
Enantiopure drugs
Isoxazoles | Oxacillin | Chemistry | 1,291 |
3,368,365 | https://en.wikipedia.org/wiki/Interstitial%20cell%20of%20Cajal | Interstitial cells of Cajal (ICC) are interstitial cells found in the gastrointestinal tract. There are different types of ICC with different functions. ICC and another type of interstitial cell, known as platelet-derived growth factor receptor alpha (PDGFRα) cells, are electrically coupled to smooth muscle cells via gap junctions, that work together as an SIP functional syncytium. Myenteric interstitial cells of Cajal (ICC-MY) serve as pacemaker cells that generate the bioelectrical events known as slow waves. Slow waves conduct to smooth muscle cells and cause phasic contractions.
Isolated ICC from the myenteric plexus of the mouse small intestine, grown in primary cell culture, illustrate the characteristic morphology of this cell type: a small, often triangular or stellate-shaped cell body with several long processes branching out into secondary and tertiary extensions; these processes often contact smooth muscle cells. ICC show contractile behaviour in both the cell body and the extended processes.
Embryology
These cells are derived from mesoderm, unlike the enteric neurons that arise from neural crest cells.
Function
Intramuscular Interstitial cells of Cajal (ICC-IM) are involved in mediating responses to neurotransmission. All ICC in the gastrointestinal tract express calcium-activated chloride channels encoded by the gene ANO1. These channels are activated by release of calcium in ICC and are important for both the pacemaker activity of ICC and their responses to neurotransmitters. A recent review noted that carbachol increases ICC activity through this channel. ANO1-knockout mice fail to produce slow waves and ANO1 channel inhibitors block slow waves.
ICC are also thought to be present in other types of smooth muscle tissue, but with few exceptions the function of these cells is not well understood and remains an area of active research.
Role in slow wave activity
ICC serve as electrical pacemakers and generate spontaneous electrical slow waves in the gastrointestinal (GI) tract. Electrical slow waves spread from ICC to smooth muscle cells and the resulting depolarization initiates calcium ion entry and contraction. Slow waves organize gut contractions into phasic contractions that are the basis for peristalsis and segmentation.
Frequency of ICC pacemaker cells
The frequency of ICC pacemaker activity differs in different regions of the GI tract:
3 per minute in the stomach
11-12 per minute in the duodenum
8-9 per minute in the ileum
3-4 per minute in the colon
ICC also mediate neural input from enteric motor neurons. Animals lacking ICC have greatly reduced responses to the neurotransmitter acetylcholine, released from excitatory motor neurons, and to the transmitter nitric oxide, released from inhibitory motor neurons. Loss of ICC in disease, therefore, may interrupt normal neural control of gastrointestinal (GI) contractions and lead to functional GI disorders, such as irritable bowel syndrome.
ICC also express mechano-sensitive mechanisms that cause these cells to respond to stretch. Stretching GI muscles can affect the resting potentials of ICC and the frequency of pacemaker activity.
ICC are also critical in the propagation of electrical slow waves. ICC form a network through which slow wave activity can propagate. If this network is broken, then 2 regions of muscle will function independently.
Pathology
ICCs are thought to be the cells from which gastrointestinal stromal tumours (GISTs) arise. Abnormalities in the ICC network are also one cause of chronic intestinal pseudo-obstruction.
Eponym
The interstitial cells of Cajal are named after Santiago Ramón y Cajal, a Spanish pathologist and Nobel laureate.
See also
List of human cell types derived from the germ layers
Telocyte, a similar, and potentially equivalent, cell
References
External links
Digestive system | Interstitial cell of Cajal | Biology | 830 |
41,734 | https://en.wikipedia.org/wiki/Spread%20spectrum | In telecommunications, especially radio communication, spread spectrum are techniques by which a signal (e.g., an electrical, electromagnetic, or acoustic) generated with a particular bandwidth is deliberately spread in the frequency domain over a wider frequency band. Spread-spectrum techniques are used for the establishment of secure communications, increasing resistance to natural interference, noise, and jamming, to prevent detection, to limit power flux density (e.g., in satellite downlinks), and to enable multiple-access communications.
Telecommunications
Spread spectrum generally makes use of a sequential noise-like signal structure to spread the normally narrowband information signal over a relatively wideband (radio) band of frequencies. The receiver correlates the received signals to retrieve the original information signal. Originally there were two motivations: either to resist enemy efforts to jam the communications (anti-jam, or AJ), or to hide the fact that communication was even taking place, sometimes called low probability of intercept (LPI).
Frequency-hopping spread spectrum (FHSS), direct-sequence spread spectrum (DSSS), time-hopping spread spectrum (THSS), chirp spread spectrum (CSS), and combinations of these techniques are forms of spread spectrum. The first two of these techniques employ pseudorandom number sequences—created using pseudorandom number generators—to determine and control the spreading pattern of the signal across the allocated bandwidth. Wireless standard IEEE 802.11 uses either FHSS or DSSS in its radio interface.
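To illustrate the direct-sequence idea, the following minimal sketch (all parameters, the code length, and the jammer model are illustrative assumptions) spreads each data bit over a pseudorandom chip sequence and recovers the bits at the receiver by correlating against the same sequence:

```python
import numpy as np

rng = np.random.default_rng(42)

chips_per_bit = 16
pn = rng.choice([-1, 1], size=chips_per_bit)   # shared pseudorandom chip code

data = np.array([1, -1, 1, 1, -1])             # bipolar data bits
tx = np.repeat(data, chips_per_bit) * np.tile(pn, data.size)   # spread signal

# Channel: wideband noise plus a strong narrowband (tone) jammer.
t = np.arange(tx.size)
rx = tx + 0.5 * rng.normal(size=tx.size) + 2.0 * np.cos(0.2 * t)

# Receiver: correlate each bit period against the PN code (despreading).
decisions = np.sign(rx.reshape(data.size, chips_per_bit) @ pn)
print(decisions)   # typically recovers [ 1. -1.  1.  1. -1.]
```

Despreading concentrates the signal energy back into one value per bit while the narrowband jammer, being uncorrelated with the code, is largely averaged out, which is the mechanism behind the jamming resistance described below.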
Techniques known since the 1940s and used in military communication systems since the 1950s "spread" a radio signal over a frequency range several orders of magnitude wider than the minimum requirement. The core principle of spread spectrum is the use of noise-like carrier waves and, as the name implies, bandwidths much wider than that required for simple point-to-point communication at the same data rate.
Resistance to jamming (interference). Direct sequence (DS) is good at resisting continuous-time narrowband jamming, while frequency hopping (FH) is better at resisting pulse jamming. In DS systems, narrowband jamming affects detection performance about as much as if the amount of jamming power is spread over the whole signal bandwidth, where it will often not be much stronger than background noise. By contrast, in narrowband systems where the signal bandwidth is low, the received signal quality will be severely lowered if the jamming power happens to be concentrated on the signal bandwidth.
Resistance to eavesdropping. The spreading sequence (in DS systems) or the frequency-hopping pattern (in FH systems) is often unknown by anyone for whom the signal is unintended, in which case it obscures the signal and reduces the chance of an adversary making sense of it. Moreover, for a given noise power spectral density (PSD), spread-spectrum systems require the same amount of energy per bit before spreading as narrowband systems and therefore the same amount of power if the bitrate before spreading is the same, but since the signal power is spread over a large bandwidth, the signal PSD is much lower — often significantly lower than the noise PSD — so that the adversary may be unable to determine whether the signal exists at all. However, for mission-critical applications, particularly those employing commercially available radios, spread-spectrum radios do not provide adequate security unless, at a minimum, long nonlinear spreading sequences are used and the messages are encrypted.
Resistance to fading. The high bandwidth occupied by spread-spectrum signals offers some frequency diversity; i.e., it is unlikely that the signal will encounter severe multipath fading over its whole bandwidth. In direct-sequence systems, the signal can be detected by using a rake receiver.
Multiple access capability, known as code-division multiple access (CDMA) or code-division multiplexing (CDM). Multiple users can transmit simultaneously in the same frequency band as long as they use different spreading sequences.
Invention of frequency hopping
The idea of trying to protect and avoid interference in radio transmissions dates back to the beginning of radio wave signaling. In 1899, Guglielmo Marconi experimented with frequency-selective reception in an attempt to minimize interference. The concept of frequency hopping was adopted by the German radio company Telefunken and also described in part of a 1903 US patent by Nikola Tesla. Radio pioneer Jonathan Zenneck's 1908 German book Wireless Telegraphy describes the process and notes that Telefunken was using it previously. It saw limited use by the German military in World War I, was put forward by Polish engineer Leonard Danilewicz in 1929, showed up in a 1930s patent by Willem Broertjes (issued Aug. 2, 1932), and in the top-secret US Army Signal Corps World War II communications system named SIGSALY.
During World War II, Golden Age of Hollywood actress Hedy Lamarr and avant-garde composer George Antheil developed an intended jamming-resistant radio guidance system for use in Allied torpedoes, patenting the device under "Secret Communications System" on August 11, 1942. Their approach was unique in that frequency coordination was done with paper player piano rolls, a novel approach which was never put into practice.
Clock signal generation
Spread-spectrum clock generation (SSCG) is used in some synchronous digital systems, especially those containing microprocessors, to reduce the spectral density of the electromagnetic interference (EMI) that these systems generate. A synchronous digital system is one that is driven by a clock signal and, because of its periodic nature, has an unavoidably narrow frequency spectrum. In fact, a perfect clock signal would have all its energy concentrated at a single frequency (the desired clock frequency) and its harmonics.
Background
Practical synchronous digital systems radiate electromagnetic energy on a number of narrow bands spread on the clock frequency and its harmonics, resulting in a frequency spectrum that, at certain frequencies, can exceed the regulatory limits for electromagnetic interference (e.g. those of the FCC in the United States, JEITA in Japan and the IEC in Europe).
Spread-spectrum clocking avoids this problem by reducing the peak radiated energy in any one band, thereby lowering the measured electromagnetic emissions and helping the device comply with electromagnetic compatibility (EMC) regulations. It has become a popular technique for gaining regulatory approval because it requires only a simple equipment modification. It is especially popular in portable electronic devices because of their fast clock speeds and the increasing integration of high-resolution LCD displays into ever smaller devices. As these devices are designed to be lightweight and inexpensive, traditional passive measures to reduce EMI, such as capacitors or metal shielding, are not viable. Active EMI reduction techniques such as spread-spectrum clocking are needed in these cases.
Method
In PCIe, USB 3.0, and SATA systems, the most common technique is downspreading, via frequency modulation with a lower-frequency source. Spread-spectrum clocking, like other kinds of dynamic frequency change, can also create challenges for designers. Principal among these is clock/data misalignment, or clock skew. A phase-locked loop on the receiving side needs a high enough bandwidth to correctly track a spread-spectrum clock.
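A minimal numerical sketch of downspreading follows (Python with NumPy; the 100 MHz clock, 0.5% spread depth, and 33 kHz triangular modulation rate are ballpark assumptions, not parameters mandated by PCIe, USB 3.0, or SATA). It frequency-modulates a square-wave clock with a triangular profile that only ever lowers the frequency, then compares its spectral peak with that of the unmodulated clock.

```python
import numpy as np

fs = 1.0e9                           # sample rate, Hz
t = np.arange(int(1e-3 * fs)) / fs   # 1 ms of signal
f0 = 100e6                           # nominal clock frequency (assumed)
depth = 0.005                        # 0.5 % downspread depth (assumed)
fmod = 33e3                          # triangular modulation rate (assumed)

# Triangular profile in [0, 1]; downspreading only ever lowers the clock
# frequency, so the timing margins of downstream logic are preserved.
tri = 2 * np.abs(fmod * t - np.floor(fmod * t + 0.5))
f_inst = f0 * (1 - depth * tri)

# Phase is the running integral of the instantaneous frequency.
phase = 2 * np.pi * np.cumsum(f_inst) / fs
plain = np.sign(np.sin(2 * np.pi * f0 * t))
spread = np.sign(np.sin(phase))

def peak_db(x):
    spec = np.abs(np.fft.rfft(x)) / len(x)
    return 20 * np.log10(spec.max())

# The fundamental is smeared over ~500 kHz, so the spectral peak drops
# by several dB even though the total radiated power is unchanged.
print(f"peak reduction: {peak_db(plain) - peak_db(spread):.1f} dB")
```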
Even though SSC compatibility is mandatory on SATA receivers, it is not uncommon to find expander chips having problems dealing with such a clock. Consequently, an ability to disable spread-spectrum clocking in computer systems is considered useful.
Effect
Note that this method does not reduce the total radiated energy, so systems are not necessarily less likely to cause interference. Spreading the energy over a larger bandwidth reduces the readings taken within any narrow measurement bandwidth. Typical measuring receivers used by EMC testing laboratories divide the electromagnetic spectrum into frequency bands approximately 120 kHz wide. If the system under test were to radiate all its energy in a narrow bandwidth, it would register a large peak; distributing the same energy over a larger bandwidth prevents the system from putting enough energy into any one band to exceed the statutory limits (see the sketch below). The usefulness of this method as a means of reducing real-life interference problems is often debated, since spread-spectrum clocking is perceived to hide rather than resolve radiated-energy issues by exploiting loopholes in EMC legislation or certification procedures. The result is that electronic equipment sensitive only to a narrow band experiences much less interference, while equipment with broadband sensitivity, or equipment operating on nearby frequencies (such as a radio receiver tuned to a different station), experiences more.
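The toy calculation below (the 40 dBm total, the occupied bandwidths, and the assumption of a uniform spread are all invented for illustration) shows how a receiver with a roughly 120 kHz resolution bandwidth reads a lower peak as the same total energy is spread more widely.

```python
import math

P_TOTAL_DBM = 40.0   # total power radiated at the fundamental (assumed)
RBW = 120e3          # receiver resolution bandwidth, the ~120 kHz from the text

for spread_bw in (1e3, 500e3, 3e6):   # unspread clock vs. two spread depths
    # Fraction of the energy landing in one measurement bin, assuming a
    # uniform spread (real SSCG profiles are not uniform, so this is rough).
    frac = min(1.0, RBW / spread_bw)
    reading = P_TOTAL_DBM + 10 * math.log10(frac)
    print(f"occupied {spread_bw/1e3:6.0f} kHz -> receiver peak {reading:5.1f} dBm")
```

Total energy is identical in all three cases; only the per-band peak that the receiver registers falls.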
FCC certification testing is often completed with the spread-spectrum function enabled in order to reduce the measured emissions to within acceptable legal limits. However, the spread-spectrum functionality may be disabled by the user in some cases. As an example, in the area of personal computers, some BIOS writers include the ability to disable spread-spectrum clock generation as a user setting, thereby defeating the object of the EMI regulations. This might be considered a loophole, but is generally overlooked as long as spread-spectrum is enabled by default.
See also
Direct-sequence spread spectrum
Electromagnetic compatibility (EMC)
Electromagnetic interference (EMI)
Frequency allocation
Frequency-hopping spread spectrum
George Antheil
HAVE QUICK military frequency-hopping UHF radio voice communication system
Hedy Lamarr
Open spectrum
Orthogonal variable spreading factor (OVSF)
Spread-spectrum time-domain reflectometry
Time-hopping spread spectrum
Ultra-wideband
Notes
Sources
NTIA Manual of Regulations and Procedures for Federal Radio Frequency Management
National Information Systems Security Glossary
History on spread spectrum, as given in "Smart Mobs, The Next Social Revolution", Howard Rheingold
Władysław Kozaczuk, Enigma: How the German Machine Cipher Was Broken, and How It Was Read by the Allies in World War Two, edited and translated by Christopher Kasparek, Frederick, MD, University Publications of America, 1984.
Andrew S. Tanenbaum and David J. Wetherall, Computer Networks, Fifth Edition.
External links
A short history of spread spectrum
CDMA and spread spectrum
Spread Spectrum Scene newsletter
Channel access methods
Multiplexing
Radio resource management
Radio modulation modes
Spectrum (physical sciences) | Spread spectrum | Physics | 2,061 |
52,800,937 | https://en.wikipedia.org/wiki/Modulation%20doping | Modulation doping is a technique for fabricating semiconductors such that the free charge carriers are spatially separated from the donors. Because this eliminates scattering from the donors, modulation-doped semiconductors have very high carrier mobilities.
History
Modulation doping was conceived in Bell Labs in 1977 following a conversation between Horst Störmer and Ray Dingle, and implemented shortly afterwards by Arthur Gossard. In 1982, Störmer and Dan Tsui used a modulation-doped wafer to discover the fractional quantum Hall effect.
Implementation
Modulation-doped semiconductor crystals are commonly grown by epitaxy to allow successive layers of different semiconductor species to be deposited. One common structure uses a layer of AlGaAs deposited over GaAs, with Si n-type donors in the AlGaAs.
Applications
Field effect transistors
Modulation-doped transistors can reach high electrical mobilities and therefore fast operation. A modulation-doped field-effect transistor is known as a MODFET.
Low-temperature electronics
One advantage of modulation doping is that the charge carriers cannot become trapped on the donors even at the lowest temperatures. For this reason, modulation-doped heterostructures allow electronics to operate at cryogenic temperatures.
Quantum computing
Modulation-doped two-dimensional electron gases can be gated to create quantum dots. Electrons trapped in these dots can then be operated as quantum bits.
References
Semiconductor device fabrication | Modulation doping | Materials_science | 287 |
142,615 | https://en.wikipedia.org/wiki/William%20Godwin | William Godwin (3 March 1756 – 7 April 1836) was an English journalist, political philosopher and novelist. He is considered one of the first exponents of utilitarianism and the first modern proponent of anarchism. Godwin is most famous for two books that he published within the space of a year: An Enquiry Concerning Political Justice, an attack on political institutions, and Things as They Are; or, The Adventures of Caleb Williams, an early mystery novel which attacks aristocratic privilege. Based on the success of both, Godwin featured prominently in the radical circles of London in the 1790s. He wrote prolifically in the genres of novels, history and demography throughout his life.
In the conservative reaction to British radicalism, Godwin was attacked, in part because of his marriage to the feminist writer Mary Wollstonecraft in 1797 and his candid biography of her after her death from childbirth. Their daughter, later known as Mary Shelley, would go on to write Frankenstein and marry the poet Percy Bysshe Shelley. With his second wife, Mary Jane Clairmont, Godwin set up The Juvenile Library, allowing the family to write their own works for children (sometimes using noms de plume) and translate and publish many other books, some of enduring significance. Godwin has had considerable influence on British literature and literary culture.
Early life and education
Godwin was born in Wisbech, Isle of Ely, Cambridgeshire, to John and Anne Godwin, becoming the seventh of his parents' thirteen children. Godwin's family on both sides were middle-class and his parents adhered to a strict form of Calvinism. Godwin's mother came from a wealthy family, but the family wealth was squandered through her uncle's frivolity; fortunately for the family, her father was a successful merchant involved in the Baltic Sea trade. Shortly following William's birth, his father John, a Nonconformist minister, moved the family to Debenham in Suffolk and later to Guestwick in Norfolk, which had a radical history as a Roundhead stronghold during the English Civil War. At the local meeting house, John Godwin often found himself sitting in "Cromwell's Chair", which had been a gift to the town from the Lord Protector.
William Godwin came from a long line of English Dissenters, who faced religious discrimination by the British government, and was inspired by his grandfather and father to take up the dissenting tradition and become a minister himself. At eleven years old, he became the sole pupil of Samuel Newton, a hard-line Calvinist and a disciple of Robert Sandeman. Although Newton's strict method of discipline left Godwin with a lasting anti-authoritarianism, Godwin internalized the Sandemanian creed, which emphasised rationalism, egalitarianism and consensus decision-making. Despite Godwin's later renunciation of Christianity, he maintained his Sandemanian roots, which he held responsible for his commitment to rationalism, as well as his stoic personality. Godwin later characterised Newton as, "... a celebrated north country apostle, who, after Calvin damned ninety-nine in a hundred of mankind, has contrived a scheme for damning ninety-nine in a hundred of the followers of Calvin." In 1771, Godwin was finally dismissed by Newton and returned home, but his father died the following year, which prompted his mother to urge him to continue his education.
At seventeen years old, Godwin began higher education at the Dissenting Academy in Hoxton, where he studied under Andrew Kippis, the biographer, and Abraham Rees, who was responsible for the Cyclopaedia, or an Universal Dictionary of Arts and Sciences. A hotspot for classical liberalism, at the Academy, Godwin familiarized himself with John Locke's approach to psychology, Isaac Newton's scientific method and Francis Hutcheson's ethical system, which all informed Godwin's philosophies of determinism and immaterialism. Although Godwin had joined the Academy as a committed Tory, the outbreak of the American Revolution led him to support the Whig opposition and, after reading the works of Jonathan Swift, he became a staunch republican. He soon familiarised himself with the French philosophes, learning of Jean-Jacques Rousseau's belief in the inherent goodness of human nature and opposition to private property, as well as Claude Adrien Helvétius's utilitarianism and Paul-Henri Thiry's materialism.
In 1778, Godwin graduated from the academy and was quickly appointed as a minister in Ware, where he met Joseph Fawcett, one of his main direct influences. By 1780, he had been reassigned to Stowmarket, where he first read Paul-Henri Thiry's System of Nature, adopting his philosophies of determinism and materialism. But after a conflict with other dissenting ministers of Suffolk over the administration of the eucharist, he stepped down and left for London in April 1782, resigning his career as a minister to become a writer.
Early writing
Throughout 1783, Godwin published a series of written works, beginning with an anonymously-published biography of William Pitt the Elder, followed by a couple of pro-Whig political pamphlets. He also briefly attempted to return to ministerial work in Beaconsfield, where he preached that "faith should be subordinated to reason". A few months later, during the opening of a seminary in Epsom, Godwin gave a politically-charged speech in which he denounced state power as "artificial" and exalted the libertarian potential of education, which he believed could bring an end to authoritarian governments. Godwin then worked for a spell as a satirical literary critic, publishing The Herald of Literature, in which he reviewed non-existent works by real authors, imitating their writing styles in lengthy quotations.
His work on the Herald secured him further work as a critic for John Murray's English Review and a commission to translate Simon Fraser's memoirs. In 1784, he published the romantic novels Damon and Delia and Imogen, the latter of which was framed as a translation of a found manuscript from ancient Wales. That same year, he also published Sketches of History, which compiled six of his sermons about the characters of Aaron, Hazael and Jesus. Drawing from John Milton's Paradise Lost, which depicted Satan as a rebel against his creator, Godwin denounced the Christian God as a theocrat and a tyrant that had no right to rule.
As his early works were financially unsuccessful, in 1784 William Godwin hoped that John Collins, a wealthy owner of a sugar plantation in St. Vincent, would fund his writing. He did not succeed, but the close connection between Godwin and members of the Collins family continued for fifty years. John Collins's eldest daughter Harriet de Boinville and William met seventy-two times between 1809 and 1827, and she championed Godwin's An Enquiry Concerning Political Justice and Its Influence on Morals and Happiness (1793) at her salons during that period.
In further attempts to earn money, Godwin started writing for well-paying Whig journals on Grub Street, starting work as a political journalist for the New Annual Register after being introduced to George Robinson by Andrew Kippis. Godwin's work was then picked up by the Political Herald, where he wrote under the pseudonym of "Mucius" in order to attack the Tories. He subsequently reported on the Pitt ministry's colonial rule in Ireland and India; penned a history of the Dutch Revolt and predicted the outbreak of a revolutionary wave in Europe.
After the death of the Political Herald's editor, Godwin turned down Richard Brinsley Sheridan's offer of succeeding to the editorship, out of concern that his editorial independence would be compromised by a direct financial connection to the Whig Party. But it was through Sheridan that Godwin became acquainted with a life-long friend Thomas Holcroft, whose arguments convinced Godwin to finally reject Christianity and embrace atheism. At the same time, Godwin took up a side job as a tutor for the young Thomas Abthorpe Cooper. After a fractious relationship between the two, Godwin eventually became the orphaned boy's adoptive father, which altered his style of pedagogy to one that emphasised "an open and honest relationship between tutor and pupil."
With the outbreak of the French Revolution, Godwin was among the Radicals that enthusiastically welcomed the events as the spiritual successor to Britain's own Glorious Revolution of 1688. As a member of the Revolution Society, Godwin met the political activist Richard Price, whose Discourse on the Love of Our Country espoused a radical form of patriotism that controversially upheld freedom of religion, representative democracy and the right of revolution. Price's Discourse ignited a pamphlet war, beginning with Edmund Burke's publication of his Reflections on the Revolution in France, which defended traditionalist conservatism and opposed revolution. In response to Burke, Thomas Paine published his Rights of Man with the help of Godwin, who declared that "the seeds of revolution it contains are so vigorous in their stamina, that nothing can overpower them."
But Godwin's voice remained largely absent from the Revolution Controversy, as he had started writing a work of political philosophy that developed on his radical principles. With George Robinson's financial support, Godwin quit his work at the New Annual Register and committed himself wholly to his magnum opus, which he hoped would condense the "best and most liberal in the science of politics into a coherent system". After sixteen months' work, while the revolution in France had culminated with the execution of Louis XVI and the outbreak of war, Godwin published his Enquiry Concerning Political Justice in February 1793.
Marriage to Mary Wollstonecraft
Godwin first met Mary Wollstonecraft at the home of their mutual publisher. Joseph Johnson was hosting a dinner for another of his authors, Thomas Paine, and Godwin remarked years later that on that evening he heard too little of Paine and too much of Wollstonecraft; he did not see her again for some years. In the interim, Wollstonecraft went to live in France to witness the Revolution for herself, and had a child, Fanny Imlay, with an American adventurer named Gilbert Imlay. In pursuit of Gilbert Imlay's business affairs, Wollstonecraft travelled to Scandinavia, and soon afterwards published a book based on the voyage. Godwin read it, and later wrote that "If ever there was a book calculated to make a man in love with its author, this appears to me to be the book."
When Godwin and Wollstonecraft were reintroduced in 1796, their respect for each other soon grew into friendship, sexual attraction, and love. Once Wollstonecraft became pregnant, they decided to marry so that their child would be considered legitimate by society. Their marriage revealed the fact that Wollstonecraft had never been married to Imlay, and as a result she and Godwin lost many friends. Godwin received further criticism because he had advocated the abolition of marriage in Political Justice. After their marriage at St. Pancras on 29 March 1797, they moved into two adjoining houses in Somers Town so that they could both still retain their independence; they often communicated by notes delivered by servants.
Mary Wollstonecraft Godwin was born in Somers Town on 30 August 1797, the couple's only child. Godwin had hoped for a son and had been planning on naming the child "William". On 10 September 1797 Wollstonecraft died of complications following the birth. By all accounts, it had been a happy and stable, though brief, relationship. Now Godwin, who had been a bachelor until a few months before, was distraught at the loss of the love of his life. Simultaneously, he became responsible for the care of these two young girls, the new-born Mary and toddler Fanny.
When Mary was three years old, Godwin left his daughters in the care of James Marshall while he travelled to Ireland. The tone of his letters demonstrates how much he cared about them and the stress he placed on giving his two daughters a sense of security. "And now what shall I say for my poor little girls? I hope they have not forgot me. I think of them every day, and should be glad, if the wind was more favourable, to blow them a kiss a-piece from Dublin to the Polygon... but I have seen none that I love so well or think half so good as my own."
In December 1800 his play Antonio, or the Soldier's Return was put on at the Theatre Royal, Drury Lane without success.
Second marriage and book publishing
In 1801, Godwin married his neighbour Mary Jane Clairmont. She brought two of her own children into the household, Charles and Claire. Journalist H.N. Brailsford wrote in 1913, "She was a vulgar and worldly woman, thoroughly feminine, and rather inclined to boast of her total ignorance of philosophy." While Fanny eventually learned to live with Clairmont, Mary's relationship with her stepmother was tense. Mary writes, "As to Mrs Godwin, something very analogous to disgust arises whenever I mention her", "A woman I shudder to think of".
In 1805, the Godwins set up a shop and publishing house called the Juvenile Library, significant in the history of children's literature. Through this, Godwin wrote children's primers on Biblical and classical history, and using the pseudonym Edward Baldwin, he wrote a variety of books for children, including a version of Jack and the Beanstalk, and a biography of the Irish artist William Mulready, who illustrated works for them. They kept alive family ties, publishing the first book by Margaret King (then Lady Mount Cashell), who had been a favoured pupil of Mary Wollstonecraft. They published works never since out of print, such as Charles and Mary Lamb's Tales from Shakespeare. The Juvenile Library also translated European authors. The first English edition of Swiss Family Robinson was translated (from the French, not the German) and edited by them. The business was the family's mainstay for decades.
In 1807 his tragedy Faulkener was performed at the Theatre Royal Drury Lane without more success than his earlier play.
Children
The eldest of Godwin's children was Fanny Imlay (1794–1816), who committed suicide as a young woman. Charles Gaulis Clairmont ended up as Chair of English literature at Vienna University and taught sons of the royal family; news of his sudden death in 1849 distressed Maximilian. Mary Godwin (1797–1851) gained fame as Mary Shelley, author of Frankenstein. Half a year younger than her was Claire Clairmont, Mary Jane's only daughter, to whom she showed favouritism. The youngest, and the only child of the second marriage, was William Godwin the Younger (1803–1832). Godwin sent him first to Charterhouse School and then to various other establishments of a practical bent. Nonetheless, he eventually earned his living by the pen. He died at 29, leaving the manuscript of a novel, which Godwin saw into print. All of Godwin's children who lived into adulthood worked as writers or educators, carrying on his legacy and that of his wives. Only two of them had children who in turn survived: Percy Florence Shelley, and the son and daughter of Charles. Godwin did not welcome the birth of Allegra Byron, but Claire's only child died aged five.
Godwin had high hopes for Mary, giving her a more rigorous intellectual experience than most women of her period, and describing her as "very intelligent". He wished to give his daughter a more "masculine education" and prepared her to be a writer. However, Godwin withdrew his support as Mary became a woman and pursued her relationship with Percy Bysshe Shelley. Mary's first two novels, Frankenstein and Mathilda, may be seen as a reaction to her childhood. Both explore the role of the father in the child's socialisation and the control the father has on the child's future. Shelley's last two novels, Lodore and Falkner, re-evaluate the father-daughter relationship. They were written at a time when Shelley was raising her only surviving child alone and supporting her ageing father. In both novels, the daughter eludes the father's control by giving him the traditional maternal figure he asks for. This relationship gives the daughter control of the father.
Later years and death
Godwin was awarded a sinecure position as Office Keeper and Yeoman Usher of the Receipt of the Exchequer, which came with grace and favour accommodation in New Palace Yard, part of the complex of the Palace of Westminster, i.e. the Houses of Parliament. One of his duties was to oversee the sweeping of the chimneys of these extensive buildings. On 16 October 1834, a fire broke out and most of the Palace burned down. Literary critic Marilyn Butler concluded her review of a 1980 biography of Godwin by comparing him favourably to Guy Fawkes: Godwin was more successful in his opposition to the status quo.
In later years, Godwin came to expect support and consolation from his daughter. Two of the five children he had raised had pre-deceased him, and two more lived abroad. Mary responded to his expectations and she cared for him until he died in 1836.
In 1836, Harriet de Boinville described Godwin's death, in a letter to his daughter Mary, as "the extinction of a mastermind. ... Everything is interesting which relates to such a man, one of the gifted few under whose moral influences society is now vibrating."
Legacy and memorials
Godwin was buried next to Mary Wollstonecraft in the graveyard of St Pancras, the church where they had married in 1797. His second wife outlived him, and eventually was buried there too. The three share a gravestone. In the 1850s, Mary Shelley's only surviving child, Percy Florence Shelley, had the remains of Godwin and Wollstonecraft moved from what had become a run-down area of the capital to the more salubrious surroundings of Bournemouth, to his family tomb at St Peter's Church.
The surviving manuscripts for many of Godwin's best-known works are held in the Forster Collection at the Victoria and Albert Museum. The V&A's manuscripts for Political Justice and Caleb Williams were both digitised in 2017 and are now included in the Shelley-Godwin Archive.
His birthplace, Wisbech, has two memorials to him. A cul-de-sac was named in his honour Godwin Close, and a wall plaque adorns a building adjacent to the Angles Theatre in Alexandra Road.
Works and ideas
Enquiry Concerning Political Justice and Caleb Williams
In 1793, while the French Revolution was in full swing, Godwin published his great work on political science, Enquiry concerning Political Justice, and its Influence on General Virtue and Happiness. The first part of this book was largely a recap of Edmund Burke's A Vindication of Natural Society – a critique of the state. Godwin acknowledged the influence of Burke for this portion. The rest of the book is Godwin's positive vision of how an anarchist (or minarchist) society might work. Political Justice was extremely influential in its time: after the writings of Burke and Paine, Godwin's was the most popular written response to the French Revolution. Godwin's work was seen by many as illuminating a middle way between the fiery extremes of Burke and Paine. Prime Minister William Pitt famously said that there was no need to censor it, because at over £1 it was too costly for the average Briton to buy. However, as was the practice at the time, numerous "corresponding societies" took up Political Justice, either sharing it or having it read to the illiterate members. Eventually, it sold over 4000 copies and brought literary fame to Godwin.
Godwin augmented the influence of Political Justice with the publication of a novel that proved equally popular, Things as They Are; or, The Adventures of Caleb Williams. This tells the story of a servant who finds out a dark secret about Falkland, his aristocratic master, and is forced to flee because of his knowledge. Caleb Williams is essentially the first thriller: Godwin wryly remarked that some readers were consuming in a night what took him over a year to write. Not the least of its merits is a portrait of the justice system of England and Wales at the time and a prescient picture of domestic espionage. His literary method, as he described it in the introduction to the novel, also proved influential: Godwin began with the conclusion of Caleb being chased through Britain, and developed the plot backwards. Dickens and Poe both commented on Godwin's ingenuity in doing this.
Political writing
In response to a treason trial of some of his fellow British Jacobins, among them Thomas Holcroft, Godwin wrote Cursory Strictures on the Charge Delivered by Lord Chief Justice Eyre to the Grand Jury, 2 October 1794 in which he forcefully argued that the prosecution's concept of "constructive treason" allowed a judge to construe any behaviour as treasonous. It paved the way for a major victory for the Jacobins, as they were acquitted.
However, Godwin's own reputation was eventually besmirched after 1798 by the conservative press, in part because he chose to write a candid biography of his late wife, Mary Wollstonecraft, entitled Memoirs of the Author of A Vindication of the Rights of Woman, including accounts of her two suicide attempts and her affair (before her relationship with Godwin) with the American adventurer Gilbert Imlay, which resulted in the birth of Fanny Imlay.
Godwin, stubborn in his convictions, lived practically in secret for 30 years because of his damaged reputation. However, through its influence on writers such as Shelley, who read the work on multiple occasions between 1810 and 1820, and Kropotkin, Political Justice takes its place with Milton's Areopagitica and Rousseau's Émile as a defining anarchist and libertarian text.
Interpretation of political justice
By political justice, the author meant "the adoption of any principle of morality and truth into the practice of a community," and the work was therefore an inquiry into the principles of society, government, and morals. For many years Godwin had been "satisfied that monarchy was a species of government unavoidably corrupt," and from desiring a government of the simplest construction, he gradually came to consider that "government by its very nature counteracts the improvement of original mind," demonstrating anti-statist beliefs that would later be considered anarchist.
Believing in the perfectibility of the human race, that there are no innate principles, and therefore no original propensity to evil, he considered that "our virtues and our vices may be traced to the incidents which make the history of our lives, and if these incidents could be divested of every improper tendency, vice would be extirpated from the world." All control of man by man was more or less intolerable, and the day would come when each man, doing what seems right in his own eyes, would also be doing what is in fact best for the community, because all will be guided by principles of pure reason.
Such optimism was combined with a strong empiricism to support Godwin's belief that the evil actions of men are solely reliant on the corrupting influence of social conditions, and that changing these conditions could remove the evil in man. This is similar to the ideas of his wife, Mary Wollstonecraft, concerning the shortcomings of women as due to discouragement during their upbringing.
Peter Kropotkin remarked of Godwin that when "speaking of property, he stated that the rights of every one 'to every substance capable of contributing to the benefit of a human being' must be regulated by justice alone: the substance must go 'to him who most wants it'. His conclusion was communism."
Debate with Malthus
In 1798, Thomas Robert Malthus wrote An Essay on the Principle of Population in response to Godwin's views on the "perfectibility of society". Malthus wrote that populations are inclined to increase in times of plenty, and that only distress, from causes such as food shortages, disease, or war, serves to stem population growth. Populations in his view are therefore always doomed to grow until distress is felt, at least by the poorer segment of the society. Consequently, poverty was felt to be an inevitable phenomenon of society.

Let us imagine for a moment Mr. Godwin's beautiful system of equality realized in its utmost purity, and see how soon this difficulty might be expected to press under so perfect a form of society.... Let us suppose all the causes of misery and vice in this island removed. War and contention cease. Unwholesome trades and manufactories do not exist. Crowds no longer collect together in great and pestilent cities.... Every house is clean, airy, sufficiently roomy, and in a healthy situation.... And the necessary labours of agriculture are shared amicably among all. The number of persons, and the produce of the island, we suppose to be the same as at present. The spirit of benevolence, guided by impartial justice, will divide this produce among all the members of the society according to their wants.... With these extraordinary encouragements to population, and every cause of depopulation, as we have supposed, removed, the numbers would necessarily increase faster than in any society that has ever yet been known....
Malthus went on to argue that under such ideal conditions, the population could conceivably double every 25 years. However, the food supply could not continue doubling at this rate for even 50 years. The food supply would become inadequate for the growing population, and then:

...the mighty law of self-preservation expels all the softer and more exalted emotions of the soul.... The corn is plucked before it is ripe, or secreted in unfair proportions; and the whole black train of vices that belong to falsehood are immediately generated. Provisions no longer flow in for the support of the mother with a large family. The children are sickly from insufficient food.... No human institutions here existed, to the perverseness of which Mr. Godwin ascribes the original sin of the worst men. No opposition had been produced by them between public and private good. No monopoly had been created of those advantages which reason directs to be left in common. No man had been goaded to the breach of order by unjust laws. Benevolence had established her reign in all hearts: and yet in so short a period as within fifty years, violence, oppression, falsehood, misery, every hateful vice, and every form of distress, which degrade and sadden the present state of society, seem to have been generated by the most imperious circumstances, by laws inherent in the nature of man, and absolutely independent of all human regulations.
In Political Justice Godwin had acknowledged that an increase in the standard of living as he envisioned could cause population pressures, but he saw an obvious solution to avoiding distress: "project a change in the structure of human action, if not of human nature, specifically the eclipsing of the desire for sex by the development of intellectual pleasures". In the 1798 version of his essay, Malthus specifically rejected this possible change in human nature. In the second and subsequent editions, however, he wrote that widespread moral restraint, i.e., postponement of marriage and pre-nuptial celibacy (sexual abstinence), could reduce the tendency of a population to grow until distress was felt.
Godwin also saw new technology as being partly responsible for the future change in human nature into more intellectually developed beings. He reasoned that increasing technological advances would lead to a decrease in the amount of time individuals spent on production and labour, and thereby, to more time spent on developing "their intellectual and moral faculties". Instead of population growing exponentially, Godwin believed that this moral improvement would outrun the growth of population. Godwin pictured a social utopia where society would reach a level of sustainability and engage in "voluntary communism".
In July 1820, Godwin published Of Population: An Enquiry Concerning the Power of Increase in the Numbers of Mankind as a rebuttal to Malthus' essays. Godwin's main argument was against Malthus' notion that population tends to grow exponentially. Godwin believed that for population to double every twenty-five years (as Malthus had asserted had occurred in the United States, due to the expanse of resources available there), every married couple would have to have at least eight children, given the rate of childhood deaths. Godwin himself was one of thirteen children, but he did not observe the majority of couples in his day having eight children. He therefore concluded:

In reality, if I had not taken up the pen with the express purpose of confuting all the errors of Mr Malthus's book, and of endeavouring to introduce other principles, more cheering, more favourable to the best interests of mankind, and better prepared to resist the inroads of vice and misery, I might close my argument here, and lay down the pen with this brief remark, that, when this author shall have produced from any country, the United States of North America not excepted, a register of marriages and births, from which it shall appear that there are on an average eight births to a marriage, then, and not till then, can I have any just reason to admit his doctrine of the geometrical ratio.
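Godwin's arithmetic can be reconstructed in a few lines (a hedged sketch: the 25-year generation length and 50% childhood survival rate below are assumptions chosen to reproduce his figure, not numbers he states in the passage above).

```python
# Back-of-the-envelope version of Godwin's counter-argument.
doubling_time = 25      # years, Malthus's claimed doubling period
generation = 25         # years per generation, assumed equal here
child_survival = 0.5    # fraction of births reaching adulthood, assumed

# Doubling each generation means every couple (2 adults) must be replaced
# by 4 surviving adults, i.e. 4 surviving children per marriage.
growth_per_generation = 2 ** (generation / doubling_time)
surviving_children = 2 * growth_per_generation
births_needed = surviving_children / child_survival
print(births_needed)    # 8.0 births per marriage, matching Godwin's figure
```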
Interest in earthly immortality
In his first edition of Political Justice Godwin included arguments favouring the possibility of "earthly immortality" (what would now be called physical immortality), but later editions of the book omitted this topic. Although the belief in such a possibility is consistent with his philosophy regarding perfectibility and human progress, he probably dropped the subject because of political expedience when he realised that it might discredit his other views.
Works
Novels
Damon and Delia, A Tale (1784)
Imogen: A Pastoral Romance From the Ancient British (1784)
Things as They Are; or, The Adventures of Caleb Williams (1794)
St. Leon (1799)
The Looking Glass: A True History of the Early Years of an Artist (1805)
Fleetwood (1805)
Mandeville (1817)
Cloudesley: A Tale (1830)
Deloraine (1833)
Other fiction
Antonio: A Tragedy In Five Acts (1800) – play
Fables, Ancient And Modern: Adapted For The Use Of Children (1840) – posthumously published
Major non-fiction
Enquiry concerning Political Justice, and its Influence on General Virtue and Happiness (1793)
The Enquirer (London: George Robinson, 1797; rev. 1823)
Memoirs of the Author of A Vindication of the Rights of Woman (1798)
Life of Geoffrey Chaucer (1804)
The Pantheon: Or, Ancient History of the Gods of Greece and Rome (1814)
Lives Of Edward And John Philips, Nephews And Pupils Of Milton (1815)
Life of Lady Jane Grey, and of Lord Guildford Dudley, Her Husband (1824)
History of the Commonwealth (book) (1824–1828)
Thoughts on Man, his Nature, Productions, and Discoveries, Interspersed with some particulars respecting the author (1831)
Lives of the Necromancers (1834)
Transfusion (1835)
Family tree
References
Bibliography
External links
William Godwin's Diary
Detailed notes on people appearing in William Godwin's Diary
William Godwin
Works of William Godwin at eBooks@Adelaide
The Shelley-Godwin Archive
Letters and artefacts associated with Godwin at the Bodleian Library's Shelley's Ghost online exhibition
1756 births
1836 deaths
18th-century atheists
18th-century English male writers
18th-century English non-fiction writers
18th-century English novelists
18th-century English philosophers
19th-century atheists
19th-century English male writers
19th-century English non-fiction writers
19th-century English novelists
19th-century English philosophers
Anarchist theorists
Anarchist writers
Anti-consumerists
Atheist philosophers
British radicals
Burials at St Pancras Old Church
Consequentialists
English anarchists
English atheists
English Calvinist and Reformed ministers
English Dissenters
English former Christians
English libertarians
English male non-fiction writers
English male novelists
English political philosophers
English political writers
English publishers (people)
Enlightenment philosophers
Former Calvinist and Reformed Christians
Glasites
Godwin family
Individualist anarchists
Life extensionists
Materialists
People from Somers Town, London
People from Wisbech
Philosophy writers
Utilitarians
Philosophical anarchists
Burials at St Peter's Church, Bournemouth | William Godwin | Physics | 6,773 |
12,538,238 | https://en.wikipedia.org/wiki/Clarion%20Hotel%20and%20Casino | Clarion Hotel and Casino was located near the Las Vegas Strip in Winchester, Nevada. It included a 12-story hotel with approximately 200 rooms, and a small casino. The property opened as a Royal Inn on April 19, 1970. It was renamed Royal Americana in 1980, and then Paddlewheel in 1983.
Actress Debbie Reynolds purchased the property in 1992, and renamed it a year later as the Debbie Reynolds Hotel. The renovated property included a museum featuring Reynolds' collection of Hollywood memorabilia. The hotel struggled financially, entering bankruptcy in 1997. It was sold a year later to the World Wrestling Federation, which planned to demolish the hotel and build a wrestling themed resort on the land. The project was ultimately canceled, and ownership would change several more times. Following another renovation, the property operated as the Greek Isles from 2001 to 2010, and then under the Clarion brand until its closure on September 1, 2014.
Developer Lorenzo Doumani bought the hotel-casino a month after its closure, and had it demolished for redevelopment. The hotel tower was imploded on February 10, 2015. Four years later, Doumani unveiled plans to build a high-rise hotel, Majestic Las Vegas, on the site. However, the start of construction has been delayed several times as of 2024.
History
Early years (1970–1991)
The property originated as part of the Royal Inns of America chain, with construction beginning on August 1, 1969. The $3 million Royal Inn opened on April 19, 1970. It was built on a site just east of the Las Vegas Strip and down the street from the Las Vegas Convention Center. The 12-story hotel contained 200 rooms, and was considered small by Las Vegas standards.
In 1972, Michael Gaughan and Frank Toti bought out the property's gaming operations, and managed the casino for much of the remaining decade. In 1979, fast food operator (and former automat chain) Horn & Hardart purchased the Royal Inn for $17 million. By late 1980, the property was rebranded as the Royal Americana Hotel, with a New York theme. A $3.5 million renovation increased the room count to 300. Nevertheless, the Royal Americana was experiencing substantial losses, and Horn & Hardart decided to close it in 1982. The casino soon reopened with limited offerings, in order to maintain the property's gaming license.
An investment group, which included two Horn & Hardart executives, took over the Royal Americana at the end of 1982, and spent $5.7 million on remodeling. The property debuted as the Paddlewheel on November 21, 1983. Two adjoining structures, containing 113 rooms, were demolished. The original hotel tower was kept, and its west exterior was updated to feature a mural of a paddle steamer crashing through the building. The Paddlewheel had a child-friendly atmosphere, with arcade games and amusement rides, but shifted to an adult focus in the late 1980s, including a male revue. Horn & Hardart put the Paddlewheel back up for sale in 1990, and closed the casino in October 1991. It had 300 slot machines and four table games.
Debbie Reynolds ownership (1992–1998)
Actress Debbie Reynolds and her husband Richard Hamlett, at his suggestion, bought the shuttered property at auction in 1992, for $2.2 million. Reynolds planned to spend $15 million on renovations, which would include a museum to house her collection of Hollywood memorabilia.
The property reopened in July 1993, as the Debbie Reynolds Hollywood Hotel. Shortly thereafter, Hollywood Casino Corp. filed a trademark infringement lawsuit against the hotel-casino. A settlement was reached by the end of 1993, with "Hollywood" dropped from the name. The property is best remembered under the Debbie Reynolds name, and the adjacent 1,000-foot Mel Avenue was eventually renamed Debbie Reynolds Drive in 1996. A sign from the Debbie Reynolds Hotel would later be acquired by the city's Neon Museum.
Because Reynolds and her husband had no experience in operating a resort, the various amenities were leased out, leaving the couple to focus on live entertainment offerings and the museum. Reynolds herself performed at the property, in a 500-seat theater designed by her son Todd Fisher. The casino, operated by Jackpot Enterprises, included 184 slot machines and two table games.
Reynolds struggled with the financing to complete the project. She took the company public in 1994 to raise money, and the museum finally opened the following year. Rooms in the top three floors of the hotel were sold as timeshares to help raise money, and the property eventually accumulated more than 1,000 unit owners. Reynolds and Hamlett had a troubled marriage, and she eventually paid him $270,000 to buy out his interest. They divorced in 1996.
Fisher said the property was undercapitalized from the time it opened. He blamed early financial problems on mismanagement, and took over operations at the end of 1995. The casino closed in March 1996, after Fisher terminated the agreement with Jackpot as unprofitable. Reynolds could not get a gaming license to operate it in-house because of the company's poor finances.
Reynolds and the hotel both filed for bankruptcy protection in July 1997, and several deals to sell the property failed over the next year. Among the prospective buyers was Westgate Resorts, which planned to add additional timeshare units. Westgate owner David Siegel invested approximately $200,000 to keep the property operational during bankruptcy.
To maintain the site's gaming status, Capado Gaming was brought on to reopen the casino in September 1997, with 25 slot machines. The Debbie Reynolds Hotel was put up for auction in August 1998. Reynolds called it "a sad ending to a lot of hard work and special dreams," saying further, "This represents a long six years of hard work and dedication and love. But you can't look back. That's not the way I want to deal with this."
Later years (1998–2014)
The winning bidder of the 1998 auction, at $10.65 million, was the World Wrestling Federation (WWF). The company planned to level the building and construct a 35-story, wrestling themed hotel and casino with 1,000 rooms. The WWF stripped much of the interior to prepare for demolition, but ultimately decided the site was not big enough. The project's cancellation was also attributed to cost and unfamiliarity with the gaming industry.
As of 2000, the property was operating as the Convention Center Drive Hotel. At the end of the year, the WWF sold it to Chicago-based Mark IV Realty Group for $11.2 million. Mark IV hoped to redevelop the site with 1,000 rooms, but instead remodeled the property with a Greek theme and renamed it the Greek Isles. The renovation project cost $1 million and included a new pool. The casino portion opened on July 20, 2001. It included 100 slot machines and was operated by United Coin. The hotel opened later in 2001, and had 192 rooms. It eventually contracted with Delta Air Lines to house flight crews during layovers.
The property offered various shows during the Greek Isles era. Among these was a Rat Pack tribute show that opened in 2002 and ran for several years. Others included a magic show, a fire-themed production, and a musical tribute to composer Harold Arlen.
In July 2007, the Greek Isles was sold to an investment group, which planned to eventually demolish the hotel-casino and redevelop the land as a mixed-use project. However, a year later, the group defaulted on a $56 million loan that was provided by Canpartners Realty. The property entered bankruptcy in April 2009, and was taken over four months later by Canpartners, which blamed the financial problems on poor management.
In 2010, the property was rebranded as a Clarion hotel, the only Clarion location at that time to include a casino. It had two performance venues, with magician Jan Rouven among its entertainers. In 2012, one of the venues was used as a filming location for Lana Del Rey's short film Ride.
In its final years, the hotel included 202 rooms. The Clarion closed on September 1, 2014, and its inventory was liquidated.
Demolition and redevelopment
A month after its closure, developer Lorenzo Doumani purchased the Clarion from Canpartners for $22.5 million. He announced plans to demolish the hotel-casino for redevelopment as a mixed-use property. The Clarion's hotel tower was demolished by implosion on February 10, 2015, shortly before 3 a.m. It was the first hotel-casino in Las Vegas to be imploded since the New Frontier in 2007. The Clarion implosion did not go as planned; an elevator shaft on the tower's west side was left standing afterward. Debris from the collapsing tower locked the shaft in place, only allowing it to drop slightly. Later in the day, cables were lassoed around the shaft to bring it down.
On the vacant land, Doumani intends to build a non-gaming high-rise hotel known as Majestic Las Vegas. He announced the project in 2019, but it has been delayed several times, and construction has yet to begin as of 2024.
See also
Notes
References
External links
Debbie Reynolds Hotel gallery by Todd Fisher
Clarion Hotel official website, archived via the Wayback Machine
1970 establishments in Nevada
2014 disestablishments in Nevada
Buildings and structures demolished in 2015
Hotels in Winchester, Nevada
Defunct casinos in the Las Vegas Valley
Demolished hotels in Clark County, Nevada
Buildings and structures demolished by controlled implosion
Defunct hotels in the Las Vegas Valley
Hotels established in 1970
Hotel buildings completed in 1970
Casino hotels | Clarion Hotel and Casino | Engineering | 1,974 |
20,092,561 | https://en.wikipedia.org/wiki/Armoured%20cable | In electrical power distribution, armoured cable usually means steel wire armoured cable (SWA) which is a hard-wearing power cable designed for the supply of mains electricity. It is one of a number of armoured electrical cables – which include 11 kV Cable and 33 kV Cable – and is found in underground systems, power networks and cable ducting.
Aluminium can also be used for armouring, and historically iron was used. Armouring is also applied to submarine communications cables.
Construction
The typical construction of an SWA cable can be broken down as follows:
Conductor: plain stranded copper (conductors are classified to indicate their degree of flexibility; Class 2 refers to rigid stranded copper conductors, as stipulated by British Standard BS EN 60228:2005)
Insulation: Cross-linked polyethylene (XLPE) is used in a number of power cables because it has good water resistance and excellent electrical properties. Insulation in cables ensures that conductors and other metal substances do not come into contact with each other.
Bedding: Polyvinyl chloride (PVC) bedding is used to provide a protective boundary between inner and outer layers of the cable.
Armour: Steel wire armour provides mechanical protection, which means the cable can withstand higher stresses, be buried directly and used in external or underground projects. The armouring is normally connected to earth and can sometimes be used as the circuit protective conductor ("earth wire") for the equipment supplied by cable.
Sheath: a black PVC sheath holds all components of the cable together and provides additional protection from external stresses.
The PVC version of SWA cable, described above, meets the requirements of both British Standard BS 5467 and International Electrotechnical Commission standard IEC 60502. It is known as SWA BS 5467 Cable and it has a voltage rating of 600/1000 V.
SWA cable can be referred to more generally as mains cable, armoured cable, power cable and booklet armoured cable. The name power cable, however, applies to a wide range of cables including 6381Y, NYCY, NYY-J and 6491X Cable.
Aluminium wire armoured cable
Steel wire armour is only used on multicore versions of the cable. A multicore cable, as the name suggests, is one that contains a number of separate cores. When a cable has only one core, aluminium wire armour (AWA) is used instead of steel wire, because aluminium is non-magnetic: the current in a single-core cable produces a changing magnetic field that would induce an electric current in steel armour, which could cause overheating. (In a multicore cable, the fields of the cores largely cancel.)
Use of armour for earthing
The use of the armour as the means of providing earthing to the equipment supplied by the cable (a function technically known as the circuit protective conductor or CPC) is a matter of debate within the electrical installation industry. It is sometimes the case that an additional core within the cable is specified as the CPC (for instance, instead of using a two core cable for line and neutral and the armouring as the CPC, a three core cable is used) or an external earth wire is run alongside the cable to serve as the CPC. Primary concerns are the relative conductivity of the armouring compared to the cores (which reduces as the cable size increases) and reliability issues. Recent articles by authoritative sources have analysed the practice in detail and concluded that, for the majority of situations, the armouring is adequate to serve as the CPC under UK wiring regulations.
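As a rough numerical sketch of the conductivity concern (the resistivity figures are textbook approximations and the cross-sections are invented examples, not values from BS 5467 tables), conductance per unit length scales as cross-sectional area divided by resistivity, so the armour-to-core ratio falls as the cable size grows:

```python
# Compare armour vs. core conductance per unit length for a few
# hypothetical cable sizes. All cross-sections here are assumptions.
RHO_CU = 1.72e-8      # ohm*m, copper (textbook value)
RHO_STEEL = 1.38e-7   # ohm*m, galvanised steel wire (approximate)

# (copper core cross-section mm^2, assumed total steel armour mm^2)
examples = [(4, 17), (25, 35), (95, 70)]

for core_mm2, armour_mm2 in examples:
    g_core = (core_mm2 * 1e-6) / RHO_CU       # siemens * metre
    g_armour = (armour_mm2 * 1e-6) / RHO_STEEL
    print(f"{core_mm2:3d} mm2 core: armour conductance = {g_armour / g_core:4.2f} x core")
```

With these invented figures the ratio drops from roughly half the core's conductance at 4 mm² to under a tenth at 95 mm², which is the trend behind the debate described above.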
SWA BS 6724 cable
The construction of an SWA cable depends on the intended use. When the power cable needs to be installed in a public area, for example, a Low Smoke Zero Halogen (LSZH) equivalent, called SWA BS 6724 Cable, must be used. After the King’s Cross fire in London in 1987, it became mandatory to use LSZH sheathing on all London Underground cables, as a number of the fatalities were due to toxic gas and smoke inhalation. As a result, LSZH cables are now recommended for use in highly populated enclosed public areas, because they emit very low levels of halogen gases and little smoke when exposed to fire. SWA Cable BS 6724 – which meets the requirements of British standard BS 6724 – has LSZH bedding and a black LSZH sheath.
Use in telecommunications
Armoured cable is used for submarine communications cables to protect against damage by fishing vessels and wildlife. Early telegraph cables used iron wire armouring, but later switched to steel. The first of these was a cable across the English Channel laid by the Submarine Telegraph Company in 1851. Many more telegraph and, later, telephone cables soon followed, many with multiple cores. Modern cables are fibre-optic rather than copper-cored. The first transatlantic fibre-optic cable was TAT-8 in 1988.
See also
Electrical cable
Electrical wiring
References
Electrical wiring
Power cables | Armoured cable | Physics,Engineering | 996 |
2,393,428 | https://en.wikipedia.org/wiki/International%20Organization%20of%20Legal%20Metrology | The International Organization of Legal Metrology (Organisation Internationale de Métrologie Légale, OIML) is an intergovernmental organisation that was created in 1955 to promote the global harmonisation of the legal metrology procedures that underpin and facilitate international trade.
Such harmonisation ensures that certification of measuring devices in one country is compatible with certification in another, thereby facilitating trade in the measuring devices and in products that rely on them. Such devices include weighing instruments, taxi meters, speedometers, agricultural instruments such as cereal moisture meters, and instruments with health or environmental impact such as vehicle exhaust analysers and devices for measuring the alcohol content of drinks.
Since its establishment, the OIML has developed a number of guidelines to assist its Members, particularly developing nations, to draw up appropriate legislation concerning metrology across all facets of society and guidelines on certification and calibration requirements of new products, particularly where such calibration has a legal impact such as in trade, health care and taxation.
The OIML works closely with other international organisations such as the International Bureau of Weights and Measures (BIPM) and International Organization for Standardization (ISO) to ensure compatibility between each organisation's work. The OIML has no legal authority to impose solutions on its Members, but its Recommendations are often used by Member States as part of their own national legislation.
At the latest count, 64 countries had signed up as Member States and a further 63 as Corresponding (non-voting) Members, including all the G20, EU and BRICS countries. Between them, the OIML Members cover 86 % of the world's population and 96 % of its economy.
The Headquarters of the OIML is located in Paris, France.
Definition of "legal metrology"
The definition of "legal metrology" varies amongst jurisdictions, reflecting the extent to which metrology is bound into the jurisdiction's own legal and regulatory code. The OIML, in their publication International Vocabulary of Terms in Legal Metrology defined "legal metrology" as
In the glossary of their book Metrology - in short Howarth and Redgrave state that "legal metrology"
These two statements are held together by the words "regulatory", "accuracy" and "reliability". The word "regulatory" encompasses the "legal" aspects of the term – the role played by governments, national metrology institutes and standards organisations in creating a framework to ensure confidence in the accuracy and reliability of a measurement. This framework requires that the specified test and conformance operations are carried out, and that the certificates pertaining to these operations are filed in a manner that enables third parties to assess them should the need arise.
The OIML has identified four main activities that fulfil the purposes of legal metrology:
Setting up of legal requirements,
Control/conformity assessment of regulated products and regulated activities,
Supervision of regulated products and of regulated activities,
Providing the necessary infrastructure for correct measurements.
History
The International Organization of Legal Metrology (OIML), an intergovernmental organisation, was established under a diplomatic treaty signed in Paris on 12 October 1955 to promote the global harmonisation of legal metrology procedures that underpin and facilitate international trade. Under French law, its principal body, the International Conference on Legal Metrology, is accorded diplomatic status.
The Convention that set up the OIML listed eight objectives behind its establishment. At the 2011 meeting in Prague of the International Committee of Legal Metrology (CIML), the OIML updated its mission statement and restated its objectives as follows:
"To develop, in cooperation with our stakeholders, standards and related documents for use by legal metrology authorities and industry that when implemented will achieve the mission of the OIML".
"To provide mutual recognition systems which reduce trade barriers and costs in a global market".
"To represent the interests of the legal metrology community within international organisations and forums concerned with metrology, standardisation, testing, certification and accreditation".
"To promote and facilitate the exchange of knowledge and competencies within the legal metrology community worldwide".
"In co-operation with other metrology bodies, to raise awareness of the contribution that a sound legal metrology infrastructure can make to a modern economy".
"To identify areas for the OIML to improve the effectiveness and efficiency of its work".
Structure
The OIML, which has an annual operating budget of about two million euros funded from Member subscriptions, is organised around a three-layer model:
The overall direction of the OIML is vested in the International Conference (Conférence Internationale de Métrologie Légale), which meets every four years. The Conference is attended by delegations from Member States and [non-voting] Corresponding Members of the Organisation.
The management of the OIML is vested in the International Committee (Comité International de Métrologie Légale - CIML). The Committee consists of one member from each Member State. These members normally have active official functions in legal metrology in their country. The Committee elects a non-salaried President for a six-year term of office from amongst its Members. The Committee meets annually under the chairmanship of its President.
Secretarial services, day-to-day running and financial management of the OIML are provided by the BIML (Bureau International de Métrologie Légale). The BIML is the OIML headquarters, located in the 9th Arrondissement of Paris, and is headed by a salaried director who is, ex officio, secretary to both the International Conference and the International Committee.
Senior postholders
CIML Presidents
1956-1962 M. Jacob (Belgium)
1962-1968 J. Stulla-Götz (Austria)
1968-1980 A. van Male (Netherlands)
1980-1994 K. Birkeland (Norway)
1994-2003 G. Faber (Netherlands)
2003-2005 M. Kochsiek (Germany - Acting President)
2005-2011 A. Johnston (Canada)
2011-2017 P. Mason (United Kingdom)
2017-2023 R. Schwartz (Germany)
2023-date B. Mathew (Switzerland)
BIML Directors
1956-1973 M.V.D. Costamagna
1974-2001 B. Athané
2001-2010 J-F. Magaña
2011-2018 S. Patoray
2019-date A. Donnellan
Participation and membership
The OIML has two categories of membership: "Member State" and "Corresponding Member". The Member State category is for countries or economies that are prepared to finance and actively participate in the work of the OIML and which have acceded to the OIML Convention.
The Corresponding Member category is for countries or economies that want to be informed of OIML activities, but cannot, or prefer not to, be a Member State. At the most recent count, a total of 64 states were Member States and 63 were Corresponding Members.
Member States
Corresponding Members
Work
Technical Committees
The technical work of the OIML is carried out by Technical Committees (TC), each committee having responsibility for a different aspect of legal metrology. In some cases a Technical Committee is divided into Subcommittees (SC). Within each TC or SC the actual technical work is carried out by Project Groups led by conveners. TCs, SCs and Project Groups are led by volunteer experts from OIML Member States. At the most recent published count there were 18 Technical Committees and 46 Subcommittees. The Technical Committees are:
TC 1 Terminology
TC 2 Units of measurement
TC 3 Metrological control (5 SCs)
TC 4 Measurement standards and calibration and verification devices
TC 5 General requirements for measuring instruments (2 SCs)
TC 6 Prepackaged products
TC 7 Measuring instruments for length and associated quantities (4 SCs)
TC 8 Measurement of quantities of fluids (5 SCs)
TC 9 Instruments for measuring mass and density (4 SCs)
TC 10 Instruments for measuring pressure, force and associated quantities (5 SCs)
TC 11 Instruments for measuring temperature and associated quantities (3 SCs)
TC 12 Instruments for measuring electrical quantities
TC 13 Measuring instruments for acoustics and vibration
TC 14 Measuring instruments used for optics
TC 15 Measuring instruments for ionizing radiations (2 SCs)
TC 16 Instruments for measuring pollutants (4 SCs)
TC 17 Instruments for physico-chemical measurements (8 SCs)
TC 18 Medical measuring instruments (3 SCs)
Publications
The OIML produces a number of publications, including:
Vocabularies (prefixed by the letter "V") that provide standardised terminology in the field of metrology. The OIML has produced two principal works:
International Vocabulary of Terms in Legal Metrology (VIML) which defines the terms used in legal metrology. The first edition of this work (1978) was the joint effort of seven international organisations - BIPM, IEC, IFCC, ISO, IUPAC, IUPAP and the OIML.
Alphabetical list of terms defined in OIML Recommendations and Documents which defines the technical terms used in the various OIML Recommendations.
In addition, the OIML was a partner in the JCGM, which produced the International vocabulary of metrology - Basic and general concepts and associated terms (VIM), a document published by the BIPM on behalf of the JCGM.
Recommendations (prefixed by the letter "R"), which are model regulations that establish the metrological characteristics required of certain measuring instruments and which specify methods and equipment for checking their conformity. Most Recommendations have a similar structure, covering four main topics: metrological requirements; technical requirements; methods and equipment for testing and verifying conformity to requirements; and the test report format. Recommendations are written in such a manner that they can be adopted "as is" by countries that wish to do so, or countries can select those parts that they wish to include in their own legislation. In all, 104 Recommendations have been published, usually in both English and French. Recommendations may be downloaded free of charge from the OIML website.
International Documents (prefixed by the letter "D"), which are informative in nature and intended to improve the work of the metrological services. To date, 31 OIML Documents have been published in this series. Documents may be downloaded free of charge from the OIML website.
The OIML also publishes Basic Publications, Guides, Seminar Reports, Expert Reports and the OIML Bulletin.
OIML Certification System (OIML-CS)
The OIML-CS is a single Certification System comprising two Schemes: Scheme A and Scheme B. It was launched on 1 January 2018, replacing the OIML Basic Certificate System and the OIML Mutual Acceptance Arrangement (MAA).
The aim of the OIML-CS is to facilitate, accelerate and harmonise the work of national and regional bodies that are responsible for type evaluation and approval of measuring instruments subject to legal metrological control.
The objectives of the OIML-CS are:
a) to promote the global harmonisation, uniform interpretation and implementation of legal metrological requirements for measuring instruments and/or modules,
b) to avoid unnecessary re-testing when obtaining national type evaluations and approvals, and to support the recognition of measuring instruments and/or modules under legal metrological control, while achieving and maintaining confidence in the results in support of facilitating the global trade of individual instruments, and
c) to establish rules and procedures for fostering mutual confidence among participating OIML Member States and Corresponding Members in the results of type evaluations that indicate conformity of measuring instruments and/or modules, under legal metrological control, to the metrological and technical requirements established in the applicable OIML Recommendation(s).
There are three categories of participants:
OIML Issuing Authorities are participants from Member States that issue OIML type evaluation reports and OIML Certificates under the OIML-CS.
Utilizers are participants from OIML Member States that accept and utilise OIML Certificates and/or OIML type evaluation reports issued by OIML Issuing Authorities.
Associates are participants from Corresponding Members that accept and utilise OIML Certificates and/or OIML type evaluation reports issued by OIML Issuing Authorities. Associates do not have voting rights in the Management Committee.
The requirements for the participation of OIML Issuing Authorities and their associated Test Laboratories in Scheme A or Scheme B are the same, but the method of demonstrating compliance is different. OIML Issuing Authorities are required to demonstrate compliance with ISO/IEC 17065 and Test Laboratories are required to demonstrate compliance with ISO/IEC 17025. For participation in Scheme B, it is sufficient to demonstrate compliance on the basis of “self-declaration” with additional supporting evidence. However, for participation in Scheme A, compliance shall be demonstrated by peer evaluation on the basis of accreditation or peer assessment.
Relationships
The work of the OIML overlaps with the work of a number of other international organisations. In order to minimise the impact of this overlap, and also to ensure that the work of the OIML and other organisations can intermesh with each other, the OIML and these organisations have exchanged memoranda of understanding (MoU) with each other. The MoUs in existence are:
On 3 December 2008 the International Bureau of Weights and Measures (BIPM), the United Nations Industrial Development Organization (UNIDO) and the OIML signed a three-way MoU whereby each would apply their own expertise in the best way possible to ensure the better implementation of capacity building activities in standards and conformity, as well as compliance with sanitary and phytosanitary (SPS) measures.
An MoU was signed with the International Organization for Standardization (ISO) on 10 June 1966, which was revised on 9 December 2008 whereby both organisations would cooperate via joint technical committees where applicable. A number of OIML Recommendations that were at variance with certain ISO standards were withdrawn. A process to fast-track OIML Recommendations into ISO standards was also agreed. Joint reports issued by both organisations could be downloaded from the OIML website as with any other OIML document.
An MoU was signed with the International Electrotechnical Commission (IEC) on 13 October 2011 during the 46th CIML Meeting in Prague. Under the MoU, the two organisations agreed to keep each other informed of their activities, and where appropriate, to set up joint technical committees and issue joint recommendations. On 10 October 2018 at the 53rd CIML Meeting, held in Hamburg, Germany, a renewed MoU was signed by Mr. Frans Vreeswijk (General Secretary and CEO of the IEC) and Dr. Roman Schwartz (CIML President). On 14 November 2023 at the IEC Secretariat in Geneva, Switzerland, a renewed MoU between the IEC and the OIML was signed by Mr Philippe Metzger (General Secretary and CEO of the IEC) and Dr Bobjoseph Mathew (CIML President).
An MoU was signed with the International Laboratory Accreditation Cooperation (ILAC) on 12 November 2006, whereby ILAC and the OIML would cooperate in the use of the OIML Mutual Acceptance Arrangement (MAA) tool and in the harmonisation of accreditation by ILAC full members with the peer assessments organised by the BIML. On 28 October 2007, during the ILAC/IAF General Assembly in Sydney, the MoU was extended to include the International Accreditation Forum (IAF). At the same time, the document Guide for the application of ISO/IEC Guide 65 to legal metrology was to be revised to bring it into line with other ISO standards, notably ISO 17021 and ISO 9001.
See also
Measurement Canada
WELMEC
Notes
References
External links
Official site
Organizations based in Paris
Organizations established in 1955
Standards organizations in France
International scientific organizations
Manufacturing
Metrology organizations
1955 establishments in France | International Organization of Legal Metrology | Engineering | 3,238 |
28,646,503 | https://en.wikipedia.org/wiki/Defective%20coloring | In graph theory, a mathematical discipline, coloring refers to an assignment of colours or labels to vertices, edges and faces of a graph. Defective coloring is a variant of proper vertex coloring. In a proper vertex coloring, the vertices are coloured such that no adjacent vertices have the same colour. In defective coloring, on the other hand, the vertices are allowed to have neighbours of the same colour to a certain extent.
History
Defective coloring was introduced nearly simultaneously by Andrews and Jacobson, Harary and Jones, and Cowen, Cowen and Woodall. Surveys of this and related colorings are given by Marietjie Frick. Cowen, Cowen and Woodall focused on graphs embedded on surfaces and gave a complete characterization of all k and d such that every planar graph is (k, d)-colorable. Namely, there does not exist a d such that every planar graph is (1, d)- or (2, d)-colorable; there exist planar graphs which are not (3, 1)-colorable, but every planar graph is (3, 2)-colorable. Together with the (4, 0)-coloring implied by the four color theorem, this settles the defective chromatic number for the plane. Poh and Goddard showed that any planar graph has a special (3,2)-coloring in which each color class is a linear forest, and this can be obtained from a more general result of Woodall.
For general surfaces, it was shown that for each genus g, there exists a k = k(g) such that every graph embedded on a surface of genus g is (4, k)-colorable. This was improved to (3, k)-colorable by Dan Archdeacon.
For general graphs, a result of László Lovász from the 1960s, which has been rediscovered many times, provides an O(ΔE)-time algorithm for defective coloring graphs of maximum degree Δ: the vertex set can be partitioned into k classes such that every vertex has at most ⌊Δ/k⌋ neighbours of its own colour.
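A minimal sketch of the local-improvement idea behind this result follows (an illustration of the standard argument, not Lovász's original presentation): repeatedly move any vertex with more than ⌊Δ/k⌋ same-coloured neighbours into the colour class where it has the fewest neighbours. Each move removes at least one monochromatic edge, so at most |E| moves occur, and scanning a vertex's neighbourhood costs O(Δ), giving the stated O(ΔE) bound.

```java
import java.util.List;

public class DefectivePartition {
    /**
     * Returns a (k, floor(maxDeg/k))-coloring of the graph given by
     * adjacency lists. Each pass moves any over-crowded vertex to the
     * colour class containing fewest of its neighbours; every such move
     * strictly decreases the number of monochromatic edges.
     */
    static int[] defectiveColour(List<List<Integer>> adj, int k, int maxDeg) {
        int n = adj.size();
        int[] colour = new int[n];           // start with every vertex in class 0
        int defect = maxDeg / k;             // target impropriety floor(maxDeg/k)
        boolean changed = true;
        while (changed) {
            changed = false;
            for (int v = 0; v < n; v++) {
                int[] perClass = new int[k]; // neighbours of v in each class
                for (int u : adj.get(v)) perClass[colour[u]]++;
                if (perClass[colour[v]] > defect) {
                    int best = 0;
                    for (int c = 1; c < k; c++)
                        if (perClass[c] < perClass[best]) best = c;
                    colour[v] = best;        // strictly fewer monochromatic edges
                    changed = true;
                }
            }
        }
        return colour;
    }
}
```

By the pigeonhole principle, the least-crowded class always contains at most ⌊Δ/k⌋ of a vertex's neighbours, so the loop can only terminate once every vertex has impropriety at most ⌊Δ/k⌋.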
Definitions and terminology
Defective coloring
A (k, d)-coloring of a graph G is a coloring of its vertices with k colours such that each vertex v has at most d neighbours having the same colour as the vertex v. We consider k to be a positive integer (it is inconsequential to consider the case when k = 0) and d to be a non-negative integer. Hence, (k, 0)-coloring is equivalent to proper vertex coloring.
d-defective chromatic number
The minimum number of colours k required for which G is (k, d)-colourable is called the d-defective chromatic number, written χd(G).
For a graph class G, the defective chromatic number of G is minimum integer k such that for some integer d, every graph in G is (k,d)-colourable. For example, the defective chromatic number of the class of planar graphs equals 3, since every planar graph is (3,2)-colourable and for every integer d there is a planar graph that is not (2,d)-colourable.
Impropriety of a vertex
Let c be a vertex-coloring of a graph G. The impropriety of a vertex v of G with respect to the coloring c is the number of neighbours of v that have the same color as v. If the impropriety of v is 0, then v is said to be properly colored.
Impropriety of a vertex-coloring
Let c be a vertex-coloring of a graph G. The impropriety of c is the maximum of the improprieties of all vertices of G. Hence, the impropriety of a proper vertex coloring is 0.
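To make these definitions concrete, the following sketch computes improprieties and checks whether a colouring is a (k, d)-coloring; the adjacency-list representation and the names used are illustrative choices, not part of the formal definition.

```java
import java.util.List;

public class Impropriety {
    /** Impropriety of vertex v under colouring c: the number of
        neighbours of v that share v's colour. */
    static int impropriety(List<List<Integer>> adj, int[] c, int v) {
        int count = 0;
        for (int u : adj.get(v)) {
            if (c[u] == c[v]) count++;
        }
        return count;
    }

    /** A colouring is a (k, d)-coloring if it uses at most k colours
        (taken here as 0..k-1) and every vertex has impropriety at most d. */
    static boolean isKDColouring(List<List<Integer>> adj, int[] c, int k, int d) {
        for (int v = 0; v < adj.size(); v++) {
            if (c[v] < 0 || c[v] >= k) return false;
            if (impropriety(adj, c, v) > d) return false;
        }
        return true;
    }
}
```

With d = 0 this reduces to the usual check for a proper vertex coloring, matching the remark above that the impropriety of a proper coloring is 0.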
Example
An example of defective colouring of a cycle on five vertices, C5, is as shown in the figure. The first subfigure is an example of proper vertex colouring, or a (k, 0)-coloring. The second subfigure is an example of a (k, 1)-coloring and the third subfigure is an example of a (k, 2)-coloring. Note that χ0(C5) = 3, χ1(C5) = 2 and χ2(C5) = 1.
Properties
It is enough to consider connected graphs, as a graph G is (k, d)-colourable if and only if every connected component of G is (k, d)-colourable.
In graph theoretic terms, each colour class in a proper vertex coloring forms an independent set, while each colour class in a defective coloring forms a subgraph of degree at most d.
If a graph is (k, d)-colourable, then it is (k′, d′)-colourable for each pair (k′, d′) such that k′ ≥ k and d′≥ d.
Some results
Every outerplanar graph is (2,2)-colorable
Proof: Let G be a connected outerplanar graph. Let v be an arbitrary vertex of G. Let Vi be the set of vertices of G that are at distance i from v. Let Gi be G[Vi], the subgraph induced by Vi.
Suppose some Gi contains a vertex of degree 3 or more; then it contains K1,3 as a subgraph. Then we contract all edges of G[V0 ∪ ... ∪ Vi−1] to obtain a new graph G'. It is to be noted that G[V0 ∪ ... ∪ Vi−1] is connected, as every vertex in Vj is adjacent to a vertex in Vj−1. Hence, by contracting all the edges mentioned above, we obtain G' such that the subgraph G[V0 ∪ ... ∪ Vi−1] of G is replaced by a single vertex that is adjacent to every vertex in Vi. Thus G' contains K2,3 as a subgraph. But every subgraph of an outerplanar graph is outerplanar and every graph obtained by contracting edges of an outerplanar graph is outerplanar. This implies that K2,3 is outerplanar, a contradiction. Hence no Gi contains a vertex of degree 3 or more, so each Gi has maximum degree at most 2 and is therefore (1, 2)-colorable.
No vertex of Vi is adjacent to any vertex of Vi−2 or Vi+2; hence the vertices of Vi can be colored blue if i is odd and red if i is even. Hence, we have produced a (2,2)-coloring of G.
Corollary: Every planar graph is (4,2)-colorable.
This follows because if G is planar, then every Gi (defined as above) is outerplanar. Hence every Gi is (2,2)-colourable. Therefore, each vertex of Vi can be colored blue or red if i is even and green or yellow if i is odd, hence producing a (4,2)-coloring of G.
Graphs excluding a complete minor
For every integer t there is an integer N such that every graph with no Kt+1-minor is (t, N)-colourable.
Computational complexity
Defective coloring is computationally hard. It is NP-complete to decide if a given graph G admits a (3,1)-coloring, even in the case where G has maximum vertex-degree 6 or is planar with maximum vertex-degree 7.
Applications
An example of an application of defective colouring is the scheduling problem where vertices represent jobs (say users on a computer system), and edges represent conflicts (needing to access one or more of the same files). Allowing a defect means tolerating some threshold of conflict: each user may find the maximum slowdown incurred for retrieval of data with two conflicting other users on the system acceptable, and with more than two unacceptable.
Notes
References
Graph coloring
NP-complete problems | Defective coloring | Mathematics | 1,470 |
3,983,341 | https://en.wikipedia.org/wiki/AP%20Computer%20Science%20A | Advanced Placement (AP) Computer Science A (also known as AP CompSci, AP CompSci A, APCSA, AP Computer Science Applications, or AP Java) is an AP Computer Science course and examination offered by the College Board to high school students as an opportunity to earn college credit for a college-level computer science course. AP Computer Science A is meant to be the equivalent of a first-semester course in computer science. The AP exam currently tests students on their knowledge of Java.
AP Computer Science AB, which was equivalent to a full-year course, was discontinued following the May 2009 exam administration.
Course
AP Computer Science A emphasizes object-oriented programming methodology, with a focus on problem solving and algorithm development. It also includes the study of data structures and abstraction, though these topics were not covered to the extent that they were covered in AP Computer Science AB. The Microsoft-sponsored program Technology Education and Literacy in Schools (TEALS) aims to increase the number of students taking AP Computer Science classes.
The units of the exam are as follows:
Case studies and labs
Historically, the AP exam used several programs in its free-response section to test students' knowledge of object-oriented programs without requiring them to develop an entire environment. These programs were called Case Studies.
This practice was discontinued as of the 2014–15 school year and replaced with optional labs that teach concepts.
Case studies (discontinued)
Case studies were used in AP Computer Science curriculum starting in 1994.
Large Integer case study (1994-1999)
The Large Integer case study was in use prior to 2000. It was replaced by the Marine Biology case study.
Marine Biology case study (2000-2007)
The Marine Biology Case Study (MBCS) was a program written in C++ until 2003, then in Java, for use with the A and AB examinations. It served as an example of object-oriented programming (OOP) embedded in a more complicated design project than most students had worked with before.
The case study was designed to allow the College Board to quickly test a student's knowledge of object-oriented programming ideas such as inheritance and encapsulation while requiring students to understand how objects such as "the environment", "the fish", and the simulation's control module interact with each other, without having to develop the entire environment independently, which would be quite time-consuming. The case study also gave all students taking the AP Computer Science exams a common experience from which additional test questions could be drawn.
On each of the exams, at least one free-response question was derived from the case study. There were also five multiple-choice questions derived from the case study.
This case study was discontinued after the 2007 exam and was replaced by GridWorld.
GridWorld case study (2008-2014)
GridWorld is a computer program case study written in Java that was used with the AP Computer Science program from 2008 to 2014. It serves as an example of object-oriented programming (OOP). GridWorld succeeded the Marine Biology Simulation Case Study, which was used from 2000–2007. The GridWorld framework was designed and implemented by Cay Horstmann, based on the Marine Biology Simulation Case Study. The narrative was produced by Chris Nevison and Barbara Cloud Wells, Colgate University.
The GridWorld Case Study was used as a substitute for writing a single large program as a culminating project. Due to obvious time restraints during the exam, the GridWorld Case Study was provided by the College Board to students prior to the exam. Students were expected to be familiar with the classes and interfaces (and how they interact) before taking the exam. The case study was divided into five sections, the last of which was only tested on the AB exam. Roughly five multiple-choice questions in Section I were devoted to the GridWorld Case Study, and it was the topic of one free response question in Section II.
GridWorld has been discontinued and replaced with a set of labs for the 2014–2015 school year.
Actors
The GridWorld Case Study employs an Actor class to construct objects in the grid. The Actor class manages the object's color, direction, location, what the object does in the simulation, and how the object interacts with other objects.
Actors are broken down into the classes "Flower", "Rock", "Bug", and "Critter", which inherit from the Actor class and often override certain methods (most notably the act method). Flowers cannot move; when made to act, they simply become darker. Flowers are dropped by Bugs and eaten by Critters. Rocks are also immobile, and are neither dropped nor eaten. Bugs move directly ahead of themselves unless blocked by a rock or another bug, in which case the Bug makes a 45-degree turn and tries again. They drop flowers in each space they vacate, eat flowers that are directly on their space of the grid, and are consumed by Critters. Critters move in a random direction to a space that is not occupied by a Rock or another Critter, and consume Flowers and Bugs.
Extensions
The Case Study also includes several extensions of the above classes. "BoxBug" extends "Bug" and moves in a box shape if its route is not blocked. "ChameleonCritter" extends "Critter" and does not eat other Actors, instead changing its color to match the color of one of its neighbors. "CrabCritter" moves only left or right and eats only Actors in front of it, but otherwise extends the "Critter" class.
Students often create their own extensions of the Actor class. Some common examples of student created extensions are Warden organisms and SimCity-like structures, in which objects of certain types create objects of other types based on their neighbors (much like Conway's Game of Life). Students have even created versions of the games Pac-Man, Fire Emblem, and Tetris.
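As an illustration of the kind of student extension described above, here is a minimal sketch of a custom Actor subclass. It assumes the documented GridWorld API (package info.gridworld.actor, where Bug exposes canMove(), move() and turn()); the FadingBug class itself is hypothetical and not part of the case study.

```java
import info.gridworld.actor.Bug;
import java.awt.Color;

/**
 * A hypothetical student extension: behaves like the standard Bug
 * (step forward if possible, otherwise make a 45-degree turn), but
 * fades toward black a little on every act, much as Flowers darken.
 */
public class FadingBug extends Bug {
    public FadingBug() {
        setColor(Color.RED);
    }

    @Override
    public void act() {
        if (canMove()) {
            move();   // inherited: steps forward, leaving a Flower behind
        } else {
            turn();   // inherited: rotates 45 degrees to the right
        }
        Color c = getColor();
        setColor(new Color(Math.max(c.getRed() - 10, 0),
                           Math.max(c.getGreen() - 10, 0),
                           Math.max(c.getBlue() - 10, 0)));
    }
}
```

Because the GridWorld engine simply calls each actor's act() method on every step, overriding act() is all that is needed to define new behaviour, which is why the case study lent itself to quick multiple-choice and free-response questions.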
Known issues
The version that is available at the College Board website, GridWorld 1.00, contains a bug (not to be confused with the Actor subclass Bug) that causes a SecurityException to be thrown when it is deployed as an applet. This was fixed in the "unofficial code" release on the GridWorld website. In addition, setting the environment to an invalid BoundedGrid causes a NullPointerException.
Labs
Instead of the discontinued case studies, the College Board created three new labs that instructors are invited to use, but they are optional and are not tested on the exam. There are no questions on the specific content of the labs on the AP exam, but there are questions that test the concepts developed in the labs. The three labs are:
The Magpie Lab
The Elevens Lab
The Picture Lab
Exam
History
The AP exam in Computer Science was first offered in 1984.
Before 1999, the AP exam tested students on their knowledge of Pascal. From 1999 to 2003, the exam tested students on their knowledge of C++ instead. Since 2003, the AP Computer Science exam has tested students on their knowledge of computer science through Java.
Format
Prior to 2015, the exam was composed of two sections, consisting of the following times:
Section I: Multiple Choice [1 hour and 15 minutes for 40 multiple-choice questions]
Section II: Free-Response [1 hour and 45 minutes for 4 problems involving extended reasoning]
As of 2015, however, the Multiple Choice section was extended by 15 minutes while the Free-Response section was reduced by 15 minutes, resulting in the following format:
Section I: Multiple Choice [1 hour and 30 minutes for 40 multiple-choice questions]
Section II: Free-Response [1 hour and 30 minutes for 4 problems involving extended reasoning]
Grade distributions
In the 2023 administration, 94,438 students took the exam. The mean score was a 3.21 with a standard deviation of 1.50. The grade distributions since 2003 were:
AP Computer Science AB
Course
The discontinued AP Computer Science AB course included all the topics of AP Computer Science A, as well as a more formal and a more in-depth study of algorithms, data structures, and data abstraction. For example, binary trees were studied in AP Computer Science AB but not in AP Computer Science A. The use of recursive data structures and dynamically allocated structures were fundamental to AP Computer Science AB. Due to low numbers of students taking the AP Computer Science AB exam, it was discontinued after the 2008–2009 year.
Grade distributions for AP Computer Science AB
The AP Computer Science AB Examination was discontinued as of May 2009. The grade distributions from 2003 to 2009 are shown below:
See also
Computer science
Glossary of computer science
References
External links
College Board: AP Computer Science A
Computer science education
Advanced Placement | AP Computer Science A | Technology | 1,764 |
23,742,719 | https://en.wikipedia.org/wiki/C18H16O7 | The molecular formula C18H16O7 (molar mass: 344.31 g/mol, exact mass: 344.0896 u) may refer to:
Ayanin, a flavonol
Cirsilineol, a flavone
Eupatilin, a flavone and a drug
Pachypodol, a flavonol
Santin (molecule), a flavonol
Scillavone A, a homoisoflavone
Usnic acid, a naturally occurring dibenzofuran derivative found in several lichen species | C18H16O7 | Chemistry | 134 |
22,384,158 | https://en.wikipedia.org/wiki/Compendium%20of%20Analytical%20Nomenclature | The Compendium of Analytical Nomenclature is an IUPAC nomenclature book published by the International Union of Pure and Applied Chemistry (IUPAC) containing internationally accepted definitions for terms in analytical chemistry. It has traditionally been published in an orange cover, hence its informal name, the Orange Book.
Color Books
The Orange Book is one of IUPAC's "Color Books", along with the Nomenclature of Organic Chemistry (Blue Book), Nomenclature of Inorganic Chemistry (Red Book), Quantities, Units and Symbols in Physical Chemistry (Green Book), Compendium of Chemical Terminology (Gold Book), Compendium of Polymer Terminology and Nomenclature (Purple Book), Compendium of Terminology and Nomenclature of Properties in Clinical Laboratory Sciences (Silver Book), and Biochemical Nomenclature (White Book).
Editions
There have been four editions of the Orange Book published: the first in 1978, the second in 1987, the third in 1998, and the fourth in 2023.
The third edition is available online.
A Catalan translation has also been published (1987).
References
External links
Official Site
Chemistry books
Chemistry reference works
Chemical nomenclature | Compendium of Analytical Nomenclature | Chemistry | 233 |
1,605,807 | https://en.wikipedia.org/wiki/Stackelberg%20competition | The Stackelberg leadership model is a strategic game in economics in which the leader firm moves first and then the follower firms move sequentially (hence, it is sometimes described as the "leader-follower game"). It is named after the German economist Heinrich Freiherr von Stackelberg who published Marktform und Gleichgewicht [Market Structure and Equilibrium] in 1934, which described the model. In game theory terms, the players of this game are a leader and a follower and they compete on quantity. The Stackelberg leader is sometimes referred to as the Market Leader.
There are some further constraints upon the sustaining of a Stackelberg equilibrium. The leader must know ex ante that the follower observes its action. The follower must have no means of committing to a future non-Stackelberg leader's action and the leader must know this. Indeed, if the 'follower' could commit to a Stackelberg leader action and the 'leader' knew this, the leader's best response would be to play a Stackelberg follower action.
Firms may engage in Stackelberg competition if one has some sort of advantage enabling it to move first. More generally, the leader must have commitment power. Moving observably first is the most obvious means of commitment: once the leader has made its move, it cannot undo it—it is committed to that action. Moving first may be possible if the leader was the incumbent monopoly of the industry and the follower is a new entrant. Holding excess capacity is another means of commitment.
Subgame perfect Nash equilibrium
The Stackelberg model can be solved to find the subgame perfect Nash equilibrium or equilibria (SPNE), i.e. the strategy profile that serves best each player, given the strategies of the other player and that entails every player playing in a Nash equilibrium in every subgame.
In very general terms, let the price function for the (duopoly) industry be P(q1 + q2); price is simply a function of total (industry) output, where the subscript 1 represents the leader and 2 represents the follower. Suppose firm i has the cost structure Ci(qi). The model is solved by backward induction. The leader considers what the best response of the follower is, i.e. how it will respond once it has observed the quantity of the leader. The leader then picks a quantity that maximises its payoff, anticipating the predicted response of the follower. The follower actually observes this and in equilibrium picks the expected quantity as a response.
To calculate the SPNE, the best response functions of the follower must first be calculated (calculation moves 'backwards' because of backward induction).
The profit of firm 2 (the follower) is revenue minus cost. Revenue is the product of price and quantity and cost is given by the firm's cost structure, so profit is:

π2 = P(q1 + q2)·q2 − C2(q2)

The best response is to find the value of q2 that maximises π2 given q1, i.e. given the output of the leader (firm 1), the output q2 that maximises the follower's profit is found. Hence, the maximum of π2 with respect to q2 is to be found. First differentiate π2 with respect to q2:

∂π2/∂q2 = P(q1 + q2) + q2·P′(q1 + q2) − ∂C2(q2)/∂q2

Setting this to zero for maximisation:

P(q1 + q2) + q2·P′(q1 + q2) − ∂C2(q2)/∂q2 = 0

The values of q2 that satisfy this equation are the best responses. Now the best response function of the leader is considered. This function is calculated by considering the follower's output as a function of the leader's output, as just computed.
The profit of firm 1 (the leader) is:

π1 = P(q1 + q2(q1))·q1 − C1(q1),

where q2(q1) is the follower's quantity as a function of the leader's quantity, namely the function calculated above. The best response is to find the value of q1 that maximises π1 given q2(q1), i.e. given the best response function of the follower (firm 2), the output that maximises the leader's profit is found. Hence, the maximum of π1 with respect to q1 is to be found. First, differentiate π1 with respect to q1:

∂π1/∂q1 = P(q1 + q2(q1)) + q1·(1 + q2′(q1))·P′(q1 + q2(q1)) − ∂C1(q1)/∂q1

Setting this to zero for maximisation:

P(q1 + q2(q1)) + q1·(1 + q2′(q1))·P′(q1 + q2(q1)) − ∂C1(q1)/∂q1 = 0
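For reference, the two first-order conditions just derived can be collected in one place (a restatement of the steps above, with P′ denoting the derivative of the price function with respect to total output):

```latex
% Follower, taking q_1 as given:
\frac{\partial \pi_2}{\partial q_2}
  = P(q_1+q_2) + q_2\,P'(q_1+q_2) - C_2'(q_2) = 0
% Leader, anticipating the follower's reaction q_2(q_1):
\frac{\partial \pi_1}{\partial q_1}
  = P\bigl(q_1+q_2(q_1)\bigr)
  + q_1\,\bigl(1+q_2'(q_1)\bigr)\,P'\bigl(q_1+q_2(q_1)\bigr)
  - C_1'(q_1) = 0
```

Solving the first condition for q2 as a function of q1, substituting into the second, and solving for q1 completes the backward induction.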
Examples
The following example is very general. It assumes a generalised linear demand structure

p = a − b·(q1 + q2)

and imposes some restrictions on cost structures for simplicity's sake so the problem can be resolved:

∂²Ci(qi)/∂qi² = 0, i = 1, 2

and

∂C1(q1)/∂q2 = ∂C2(q2)/∂q1 = 0

for ease of computation; that is, each firm's marginal cost is a constant ci and depends only on its own output.
The follower's profit is:

π2 = (a − b·(q1 + q2))·q2 − c2·q2

The maximisation problem resolves to (from the general case):

q2(q1) = (a − b·q1 − c2) / (2b)
Consider the leader's problem:

π1 = (a − b·(q1 + q2(q1)))·q1 − c1·q1

Substituting for q2(q1) from the follower's problem:

π1 = ((a − b·q1 + c2)/2)·q1 − c1·q1

The maximisation problem resolves to (from the general case):

(a + c2 − 2c1)/2 − b·q1 = 0
Now solving for q1 yields q1*, the leader's optimal action:

q1* = (a − 2c1 + c2) / (2b)

This is the leader's best response to the reaction of the follower in equilibrium. The follower's actual quantity can now be found by feeding this into its reaction function calculated earlier:

q2* = (a + 2c1 − 3c2) / (4b)
The Nash equilibria are all (q1*, q2(q1*)). It is clear (if marginal costs are assumed to be zero, i.e. cost is essentially ignored) that the leader has a significant advantage: with c1 = c2 = 0 the leader produces a/(2b) while the follower produces only a/(4b). Intuitively, if the leader were no better off than the follower, it would simply adopt a Cournot competition strategy.
Plugging the follower's quantity q2* back into the leader's best response function will not yield q1*. This is because once the leader has committed to an output and observed the follower's, it always wants to reduce its output ex-post. However, its inability to do so is what allows it to receive higher profits than under Cournot.
Economic analysis
An extensive-form representation is often used to analyze the Stackelberg leader-follower model. Also referred to as a “decision tree”, the model shows the combination of outputs and payoffs both firms have in the Stackelberg game.
The image on the left depicts in extensive form a Stackelberg game. The payoffs are shown on the right. This example is fairly simple: there is a basic cost structure involving only marginal cost (there is no fixed cost), and demand is linear, p = 5000 − q1 − q2. However, it illustrates the leader's advantage.
The follower wants to choose q2 to maximise its payoff (5000 − q1 − q2 − c2)·q2. Taking the first-order derivative and equating it to zero (for maximisation) yields

q2 = (5000 − q1 − c2) / 2

as the payoff-maximising value of q2.
The leader wants to choose q1 to maximise its payoff (5000 − q1 − q2 − c1)·q1. However, in equilibrium, it knows the follower will choose q2 as above. So in fact the leader wants to maximise its payoff (5000 − q1 − (5000 − q1 − c2)/2 − c1)·q1 (by substituting for the follower's best response function). By differentiation, the payoff is maximised at q1 = (5000 − 2c1 + c2)/2. Feeding this into the follower's best response function yields q2 = (5000 + 2c1 − 3c2)/4. Suppose marginal costs were equal for the firms (so the leader has no market advantage other than first move) and in particular c1 = c2 = 1000. The leader would produce 2000 and the follower would produce 1000. This would give the leader a profit (payoff) of two million and the follower a profit of one million. Simply by moving first, the leader has accrued twice the profit of the follower. However, Cournot profits here are 1.78 million apiece (strictly, 16/9 million apiece), so the leader has not gained much, but the follower has lost. However, this is example-specific. There may be cases where a Stackelberg leader has huge gains beyond Cournot profit that approach monopoly profits (for example, if the leader also had a large cost structure advantage, perhaps due to a better production function). There may also be cases where the follower actually enjoys higher profits than the leader, but only because it, say, has much lower costs. This behaviour works consistently in duopoly markets, even if the firms are asymmetric.
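The figures in this example can be checked numerically. The sketch below performs the backward induction by brute force over an integer quantity grid, using the demand intercept 5000 and marginal costs of 1000 from the example above; the unit grid step is an arbitrary choice.

```java
public class StackelbergExample {
    static final double A = 5000, C1 = 1000, C2 = 1000; // intercept and marginal costs

    static double price(double q1, double q2) { return A - q1 - q2; }

    // Follower's best response: maximise (price - C2) * q2 for the given q1.
    static double followerBestResponse(double q1) {
        double bestQ = 0, bestProfit = Double.NEGATIVE_INFINITY;
        for (double q2 = 0; q2 <= A; q2++) {
            double profit = (price(q1, q2) - C2) * q2;
            if (profit > bestProfit) { bestProfit = profit; bestQ = q2; }
        }
        return bestQ;
    }

    public static void main(String[] args) {
        // The leader anticipates the follower's reaction when choosing q1.
        double bestQ1 = 0, bestProfit = Double.NEGATIVE_INFINITY;
        for (double q1 = 0; q1 <= A; q1++) {
            double q2 = followerBestResponse(q1);
            double profit = (price(q1, q2) - C1) * q1;
            if (profit > bestProfit) { bestProfit = profit; bestQ1 = q1; }
        }
        double q2 = followerBestResponse(bestQ1);
        // Expected: leader 2000, follower 1000, profits 2,000,000 and 1,000,000.
        System.out.printf("q1=%.0f q2=%.0f profits %,.0f / %,.0f%n",
                bestQ1, q2,
                (price(bestQ1, q2) - C1) * bestQ1,
                (price(bestQ1, q2) - C2) * q2);
    }
}
```

Running it reproduces the payoffs in the decision tree, and raising C2 above C1 shows the follower's profit falling further behind, consistent with the asymmetric case discussed above.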
Credible and non-credible threats by the follower
If, after the leader had selected its equilibrium quantity, the follower deviated from the equilibrium and chose some non-optimal quantity it would not only hurt itself, but it could also hurt the leader. If the follower chose a much larger quantity than its best response, the market price would lower and the leader's profits would be stung, perhaps below Cournot level profits. In this case, the follower could announce to the leader before the game starts that unless the leader chooses a Cournot equilibrium quantity, the follower will choose a deviant quantity that will hit the leader's profits. After all, the quantity chosen by the leader in equilibrium is only optimal if the follower also plays in equilibrium. The leader is, however, in no danger. Once the leader has chosen its equilibrium quantity, it would be irrational for the follower to deviate because it too would be hurt. Once the leader has chosen, the follower is better off by playing on the equilibrium path. Hence, such a threat by the follower would not be credible.
However, in an (indefinitely) repeated Stackelberg game, the follower might adopt a punishment strategy where it threatens to punish the leader in the next period unless it chooses a non-optimal strategy in the current period. This threat may be credible because it could be rational for the follower to punish in the next period so that the leader chooses Cournot quantities thereafter.
Stackelberg compared with Cournot
The Stackelberg and Cournot models are similar because in both competition is on quantity. However, as seen, the first move gives the leader in Stackelberg a crucial advantage. There is also the important assumption of perfect information in the Stackelberg game: the follower must observe the quantity chosen by the leader, otherwise the game reduces to Cournot. With imperfect information, the threats described above can be credible. If the follower cannot observe the leader's move, it is no longer irrational for the follower to choose, say, a Cournot level of quantity (in fact, that is the equilibrium action). However, for this there must be genuine imperfect information: the follower must actually be unable to observe the leader's move, because it is irrational for the follower not to observe the leader's move if it can. If it can observe, it will, so that it can make the optimal decision. Any threat by the follower claiming that it will not observe even if it can is as non-credible as those above. This is an example of too much information hurting a player. In Cournot competition, it is the simultaneity of the game (the imperfection of knowledge) that results in neither player (ceteris paribus) being at a disadvantage.
Game-theoretic considerations
As mentioned, imperfect information in a leadership game reduces to Cournot competition. However, some Cournot strategy profiles are sustained as Nash equilibria but can be eliminated as incredible threats (as described above) by applying the solution concept of subgame perfection. Indeed, it is the very thing that makes a Cournot strategy profile a Nash equilibrium in a Stackelberg game that prevents it from being subgame perfect.
Consider a Stackelberg game (i.e. one which fulfills the requirements described above for sustaining a Stackelberg equilibrium) in which, for some reason, the leader believes that whatever action it takes, the follower will choose a Cournot quantity (perhaps the leader believes that the follower is irrational). If the leader played a Stackelberg action, (it believes) that the follower will play Cournot. Hence it is non-optimal for the leader to play Stackelberg. In fact, its best response (by the definition of Cournot equilibrium) is to play Cournot quantity. Once it has done this, the best response of the follower is to play Cournot.
Consider the following strategy profiles: the leader plays Cournot; the follower plays Cournot if the leader plays Cournot and the follower plays Stackelberg if the leader plays Stackelberg and if the leader plays something else, the follower plays an arbitrary strategy (hence this actually describes several profiles). This profile is a Nash equilibrium. As argued above, on the equilibrium path play is a best response to a best response. However, playing Cournot would not have been the best response of the leader were it that the follower would play Stackelberg if it (the leader) played Stackelberg. In this case, the best response of the leader would be to play Stackelberg. Hence, what makes this profile (or rather, these profiles) a Nash equilibrium (or rather, Nash equilibria) is the fact that the follower would play non-Stackelberg if the leader were to play Stackelberg.
However, this very fact (that the follower would play non-Stackelberg if the leader were to play Stackelberg) means that this profile is not a Nash equilibrium of the subgame starting when the leader has already played Stackelberg (a subgame off the equilibrium path). If the leader has already played Stackelberg, the best response of the follower is to play Stackelberg (and therefore it is the only action that yields a Nash equilibrium in this subgame). Hence the strategy profile – which is Cournot – is not subgame perfect.
Comparison with other oligopoly models
In comparison with other oligopoly models,
The aggregate Stackelberg output is greater than the aggregate Cournot output, but less than the aggregate Bertrand output.
The Stackelberg price is lower than the Cournot price, but greater than the Bertrand price.
The Stackelberg consumer surplus is greater than the Cournot consumer surplus, but lower than the Bertrand consumer surplus.
The aggregate Stackelberg output is greater than pure monopoly or cartel, but less than the perfectly competitive output.
The Stackelberg price is lower than the pure monopoly or cartel price, but greater than the perfectly competitive price.
Applications
The Stackelberg concept has been extended to dynamic Stackelberg games. With the addition of time as a dimension, phenomena not found in static games were discovered, such as violation of the principle of optimality by the leader.
In recent years, Stackelberg games have been applied in the security domain. In this context, the defender (leader) designs a strategy to protect a resource, such that the resource remains safe irrespective of the strategy adopted by the attacker (follower). Stackelberg differential games are also used to model supply chains and marketing channels. Other applications of Stackelberg games include heterogeneous networks, genetic privacy, robotics, autonomous driving, electrical grids, and integrated energy systems.
See also
Economic theory
Cournot competition
Bertrand competition
Extensive form game
Industrial organization
Mathematical programming with equilibrium constraints
References
H. von Stackelberg, Market Structure and Equilibrium: 1st Edition Translation into English, Bazin, Urch & Hill, Springer 2011, XIV, 134 p.
Fudenberg, D. and Tirole, J. (1993) Game Theory, MIT Press. (see Chapter 3, sect 1)
Gibbons, R. (1992) A primer in game theory, Harvester-Wheatsheaf. (see Chapter 2, section 1B)
Osborne, M.J. and Rubinstein, A. (1994) A Course in Game Theory, MIT Press (see pp. 97–98)
Oligopoly Theory Made Simple, Chapter 6 of Surfing Economics by Huw Dixon.
Eponyms in economics
Game theory
Non-cooperative games
Competition (economics)
Oligopoly | Stackelberg competition | Mathematics | 3,093 |
212,813 | https://en.wikipedia.org/wiki/1984%20%28advertisement%29 | "1984" is an American television commercial that introduced the Apple Macintosh personal computer. It was conceived by Steve Hayden, Brent Thomas, and Lee Clow at Chiat/Day, produced by New York production company Fairbanks Films, and directed by Ridley Scott. The ad was a reference to George Orwell's noted 1949 novel, Nineteen Eighty-Four, which described a dystopian future ruled by a televised "Big Brother". English athlete Anya Major performed as the unnamed heroine and David Graham as Big Brother. In the US, it first aired in 10 local outlets, including Twin Falls, Idaho, where Chiat/Day ran the ad on December 31, 1983, at the last possible break before midnight on KMVT, so that the advertisement qualified for the 1984 Clio Awards. Its second televised airing, and only US national airing, was on January 22, 1984, during a break in the third quarter of the telecast of Super Bowl XVIII by CBS.
In one interpretation of the commercial, "1984" used the unnamed heroine to represent the coming of the Macintosh (indicated by her white tank top with a stylized line drawing of Apple’s Macintosh computer on it) as a means of saving humanity from "conformity" (Big Brother).
Originally a subject of contention within Apple, it has subsequently been called a watershed event and a masterpiece in advertising. In 1995, The Clio Awards added it to its Hall of Fame, and Advertising Age placed it on the top of its list of 50 greatest commercials.
Plot
The commercial opens with a dystopian, industrial setting in blue and grayish tones, showing a line of people marching in unison through a long tunnel monitored by a string of telescreens. This is in sharp contrast to the full-color shots of the nameless runner (Anya Major). She looks like a competitive track and field athlete, wearing an athletic outfit (red athletic shorts, running shoes, a white tank top with a cubist picture of Apple's Macintosh computer, a white sweat band on her left wrist, and a red one on her right), and is carrying a large brass-headed sledgehammer. Rows of marching minions evoke the opening scenes of Metropolis.
Chased by four police officers (presumably agents of the Thought Police) wearing black uniforms, protected by riot gear and helmets with visors covering their faces, and armed with large night sticks, she races towards a large screen showing a Big Brother-like figure (David Graham, also seen on the telescreens earlier) delivering a speech.
The runner, now close to the screen, hurls the hammer towards it, right at the moment Big Brother announces, "we shall prevail!" In a flurry of light and smoke, the screen is destroyed, leaving the audience in shock.
The commercial concludes with a portentous voiceover by actor Edward Grover, accompanied by scrolling black text (in Apple's early signature Garamond typeface); the hazy, whitish-blue aftermath of the cataclysmic event serves as the background. The text reads: "On January 24th, Apple Computer will introduce Macintosh. And you'll see why 1984 won't be like '1984.'"
The screen fades to black as the voiceover ends, and the rainbow Apple logo appears.
Production
Development
The commercial was created by the advertising agency Chiat/Day, of Venice, California, with copy by Steve Hayden, art direction by Brent Thomas, and creative direction by Lee Clow. The commercial "grew out of an abandoned print campaign" with a specific theme:
Ridley Scott (whose dystopian sci-fi film Blade Runner had been released one and a half years earlier) was hired by agency producer Richard O'Neill to direct it. Less than two months after the Super Bowl airing, The New York Times reported that Scott "filmed it in England for about $370,000"; in 2005, writer Ted Friedman said the commercial had a then-"unheard-of production budget of $900,000."
The actors who appeared in the commercial were paid $25 per day. Scott later admitted that he accepted brutal budget constraints because he believed in the ad's concept, outlining how the total cost was less than $250,000 and that he used local skinheads to portray the broken, pale "drones" in the commercial.
Steve Jobs and John Sculley were so enthusiastic about the final product that they "...purchased one and a half minutes of ad time for the Super Bowl, annually the most-watched television program in America. In December 1983 they screened the commercial for the Apple Board of Directors. To Jobs' and Sculley's surprise, the entire board hated the commercial." However, Sculley himself got "cold feet" and asked Chiat/Day to sell off the two commercial spots.
Despite the board's dislike of the film, Steve Wozniak and others at Apple showed copies to friends, and he offered to pay for half of the spot personally if Jobs paid the other half. This turned out to be unnecessary. Of the original ninety seconds booked, Chiat/Day resold thirty seconds to another advertiser, then claimed they could not sell the other 60 seconds, when in fact they did not even try.
Intended message
In his 1983 Apple keynote address, Steve Jobs read a story to the audience before showcasing a preview of the commercial.
In March 1984 Michael Tyler, a communications expert quoted by The New York Times, said "The Apple ad expresses a potential of small computers. This potential may not automatically flow from the company's product. But if enough people held a shared intent, grass-roots electronic bulletin boards (through which computer users share messages) might result in better balancing of political power."
In 2004, Adelia Cellini, writing for Macworld, summarized the message.
Reception and legacy
Art director Brent Thomas said Apple "had wanted something to 'stop America in its tracks, to make people think about computers, to make them think about Macintosh.' With about $3.5 million worth of Macintoshes sold just after the advertisement ran, Thomas judged the effort 'absolutely successful.' 'We also set out to smash the old canard that the computer will enslave us,' he said. 'We did not say the computer will set us free—I have no idea how it will work out. This was strictly a marketing position.'"
The estate of George Orwell and the television rightsholder to the novel Nineteen Eighty-Four considered the commercial to be a copyright infringement and sent a cease-and-desist letter to Apple and Chiat/Day in April 1984.
Awards
1984: Clio Awards
1984: 31st Cannes Lions International Advertising Festival—Grand Prix
1995: Clio Awards—Hall of Fame
1995: Advertising Age—Greatest Commercial
1999: TV Guide—Number One Greatest Commercial of All Time
2003: WFA—Hall of Fame Award (Jubilee Golden Award)
2007: Best Super Bowl Spot (in the game's 40-year history)
It ranked at number 38 in Channel 4's 2000 list of the "100 Greatest TV Ads".
Social impact
Ted Friedman, in his 2005 text, Electric Dreams: Computers in American Culture, notes the impact of the commercial:
Super Bowl viewers were overwhelmed by the startling ad. The ad garnered millions of dollars worth of free publicity, as news programs rebroadcast it that night. It was quickly hailed by many in the advertising industry as a masterwork. Advertising Age named it the 1980s Commercial of the Decade, and it continues to rank high on lists of the most influential commercials of all time [...] '1984' was never broadcast again, adding to its mystique.
The "1984" ad became a signature representation of Apple computers. It was scripted as a thematic element in the 1999 docudrama, Pirates of Silicon Valley, which explores the rise of Apple and Microsoft (the film opens and closes with references to the commercial, including a re-enactment of the heroine running towards the screen of Big Brother and clips of the original commercial).
The commercial was also prominent in the 20th anniversary celebration of the Macintosh in 2004, as Apple reposted a new version of the ad on its website and showed it during Jobs's Keynote Address at Macworld Expo in San Francisco, California. In this updated version, an iPod, complete with signature white earbuds, was digitally added to the heroine. Keynote attendees were given a poster showing the heroine with an iPod as a commemorative gift. The ad has also been cited as the turning point for Super Bowl commercials: such ads had been important and popular before (notably Coca-Cola's "Hey Kid, Catch!" featuring "Mean" Joe Greene during Super Bowl XIV), but after "1984" the Super Bowl became the most expensive, creative and influential advertising showcase on television.
Revisiting the commercial in Harper's Magazine thirty years after it aired, social critic Rebecca Solnit suggested that "1984" did not so much herald a new era of liberation as a new era of oppression, an argument she set out in the December 2014 issue of the magazine.
Media archivist (and early Apple supporter) Marion Stokes recorded the Super Bowl broadcast featuring the legendary ad, which was then featured in the 2019 documentary film Recorder: The Marion Stokes Project.
Parodies
In 2001, the Futurama season 3 episode "Future Stock" parodied the ad as a Planet Express commercial challenging the all-powerful MomCorp. In it, a Planet Express employee throws a delivery package into the telescreen showing Mom; in contrast to the original ad, after the screen is smashed, an annoyed prole turns to the employee and shouts, "Hey, we were watching that!"
In March 2007, the advertisement attracted attention again when Hillary 1984, a video mashup of the original commercial with footage of Hillary Clinton used in place of Big Brother, went viral in the early stages of the campaign for the 2008 Democratic presidential nomination. The video was produced in support of Barack Obama by Phil de Vellis, an employee of Blue State Digital, but was made without the knowledge of either Obama's campaign or his own employer. De Vellis stated that he made the video in one afternoon at home using a Mac and some software. Political commentators including Carla Marinucci and Arianna Huffington, as well as de Vellis himself, suggested that the video demonstrated the way technology had created new opportunities for individuals to make an impact on politics.
The 2008 The Simpsons episode "MyPods and Boomsticks" parodies the ad. In it, Comic Book Guy throws a sledgehammer at a giant screen that displays the CEO "Steve Mobs".
In May 2010, Valve released a short video announcing the release of Half-Life 2 on OS X featuring a recreation of the original commercial, with the people replaced with City 17's citizens, Big Brother with a speech from Wallace Breen, the agents of the Thought Police with Combine Soldiers, and the nameless runner with Alyx Vance. It is Valve's only official Half-Life 2 SFM.
In the 2016 The Simpsons episode "The Last Traction Hero", Lisa Simpson is a bus monitor and fantasizes about being on a big screen controlling the bus children with Bart Simpson as the runner with the hammer.
On August 13, 2020, Apple removed Fortnite from the App Store after Epic Games introduced a direct payment option that circumvented Apple's 30% revenue cut policy, violating terms of service policies. In response, Epic filed a lawsuit against Apple, and created a parody of the "1984" ad called "Nineteen Eighty-Fortnite".
The 2024 Pixar animated film Inside Out 2 contains a loose parody of the ad, in which the character Joy riles up the Mind Workers to rebel against Anxiety; one worker throws a chair at the giant screen Anxiety uses to monitor their work.
See also
Lemmings (advertisement), the follow-up advert
Think Different, an Apple advertising slogan
Get a Mac, television advertising campaign
List of Super Bowl commercials
References
Further reading
External links
Super Bowl commercials
American television commercials
1980s television commercials
Apple Inc. advertising
Films based on Nineteen Eighty-Four
Films directed by Ridley Scott
History of computing
1983 television films
1983 films
1983 short films
1984 in American television | 1984 (advertisement) | Technology | 2,493 |
46,223,763 | https://en.wikipedia.org/wiki/Penicillium%20incoloratum | Penicillium incoloratum is a species of fungus in the genus Penicillium.
References
incoloratum
Fungi described in 1994
Fungus species | Penicillium incoloratum | Biology | 33 |
31,745 | https://en.wikipedia.org/wiki/Udo%20of%20Aachen | Udo of Aachen (c.1200–1270) is a fictional monk, a creation of British technical writer Ray Girvan, who introduced him in an April Fool's hoax article in 1999. According to the article, Udo was an illustrator and theologian who discovered the Mandelbrot set some 700 years before Benoit Mandelbrot.
Udo's works were allegedly discovered by the also-fictional Bob Schipke, a Harvard mathematician, who supposedly saw a picture of the Mandelbrot set in an illumination for a 13th-century carol. Girvan also presented Udo as a mystic and poet whose verse was set to music by Carl Orff as the haunting "O Fortuna" in Carmina Burana. Schipke was further said to have uncovered writings in which Udo described how he arrived at this kind of design while working on a method of determining whether one's soul would reach heaven.
Aspects of the hoax
The poetry of O Fortuna was actually the work of itinerant goliards, found in the German Benedictine monastery of Benediktbeuern Abbey.
The hoax was lent an air of credibility because medieval monks did at times make scientific and mathematical discoveries, only to have them hidden or shelved due to persecution, or simply ignored because publication before the invention of the printing press was difficult at best. Girvan reinforced this suggestion by associating Udo with genuine cases of authors who were considered ahead of their time, whose theories were dismissed as fringe science when proposed but are established as mainstream today.
Another aspect of the deception was that it was very common for pre-20th century mathematicians to spend enormous amounts of time on hand calculations, such as tables of logarithms or trigonometric functions. Calculating the points of a Mandelbrot set is a comparable activity that would seem tedious today but would have been routine for people of the time.
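The arithmetic involved is simple to state: a point c of the complex plane belongs to the Mandelbrot set if the iteration z = z*z + c, starting from z = 0, never escapes to infinity (in practice, if |z| never exceeds 2). The following minimal C++ sketch of that calculation is purely illustrative and has no connection to Girvan's article:

#include <complex>
#include <cstdio>

// The calculation the hoax attributes to Udo: iterate z -> z*z + c
// and test whether the orbit stays bounded.
bool in_mandelbrot(std::complex<double> c, int max_iter)
{
    std::complex<double> z = 0;
    for (int i = 0; i < max_iter; ++i) {
        z = z * z + c;
        if (std::abs(z) > 2.0)   // once |z| exceeds 2 it must escape
            return false;
    }
    return true;                 // presumed bounded after max_iter steps
}

int main()
{
    // Coarse character plot over the region [-2,1] x [-1.25,1.25].
    for (int y = 0; y < 24; ++y) {
        for (int x = 0; x < 64; ++x) {
            std::complex<double> c(-2.0 + 3.0 * x / 63.0,
                                   -1.25 + 2.5 * y / 23.0);
            std::putchar(in_mandelbrot(c, 200) ? '*' : ' ');
        }
        std::putchar('\n');
    }
}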
References
External links
Nonexistent people used in hoaxes
Fictional Christian monks
Fictional mathematicians
April Fools' Day jokes
Fractals
1999 hoaxes | Udo of Aachen | Mathematics | 432 |
22,612,797 | https://en.wikipedia.org/wiki/PEGASUS | PEGASUS is an encryption algorithm used for satellite telemetry, command link and mission data transfers.
According to a budget item justification document for FY 2004–2005, this cryptographic algorithm is used for Global Positioning Systems (GPS), Space-Based Infrared Systems (SBIRS), MILSATCOM, and other Special Project Systems.
References
External links
PEGASUS products
Spaceflight technology
Telecommunications
Cryptographic algorithms | PEGASUS | Astronomy,Technology | 80 |
14,308,141 | https://en.wikipedia.org/wiki/Erythrocyte%20rosetting | Erythrocyte rosetting or E-rosetting is a phenomenon seen through a microscope where red blood cells (erythrocytes) are arranged around a central cell to form a cluster that looks like a flower. The red blood cells surrounding the cell form the petal, while the central cell forms the stigma of the flower shape. This formation occurs due to an immunological reaction between an epitope on the central cell's surface and a receptor or antibody on a red cell. The presence of E-rosetting can be used as a test for T cells although more modern tests such as immunohistochemistry are available. Rosetting is caused by parasites in the genus Plasmodium and is a cause of some malaria-associated symptoms.
Rosetting techniques
Three types of rosette techniques have been developed and used experimentally.
Rosette test for Rh factor
The rosette test is performed on postpartum maternal blood to estimate the volume of fetal-maternal hemorrhage in the case of an Rh-negative mother and an Rh-positive child. This estimate, in turn, determines the required amount of Rho(D) immune globulin to administer. In this test, a sample of maternal blood is incubated with Rho(D) immune globulin, which will bind to any fetal Rh-positive red blood cells, if present. Upon addition of enzyme-treated cDE indicator cells, the presence of Rh-positive fetal blood causes rosetting, which can be seen by light microscopy. The test is recommended for Rh-negative mothers within 72 hours of giving birth to an Rh-positive infant. If the test is positive, a Kleihauer–Betke test should be performed to confirm and quantify the result.
E-rosetting
E-rosetting is used in the identification of T cells, where a T cell's CD2 surface protein binds to a sugar-based LFA-3 homologue on the surface of a sheep red blood cell. Because the LFA-3 homologue is present only on the surface of sheep red blood cells, red blood cells from other species cannot be used in this type of rosetting.
EA-rosetting
Erythrocyte antibody rosetting (EA-rosetting) occurs when an antibody molecule specific for an epitope on another cell is embedded in the membrane of a red blood cell and then reacted against a cell carrying the epitope that the antibody recognizes.
EAC-rosetting
Erythrocyte antibody complement rosetting (EAC-rosetting) occurs when antibody, in the presence of complement, is bound to the surface of a red blood cell. The complement binds to the tail (Fc) region of the antibody. Finally, T cells with a complement receptor are added; these bind to the complement on the antibody, completing the rosette.
References
Blood tests | Erythrocyte rosetting | Chemistry | 603 |
19,990,354 | https://en.wikipedia.org/wiki/Features%20new%20to%20Windows%207 | Some of the new features included in Windows 7 are advancements in touch, speech and handwriting recognition, support for virtual hard disks, support for additional file formats, improved performance on multi-core processors, improved boot performance, and kernel improvements.
Shell and user interface
Windows 7 retains the Windows Aero graphical user interface and visual style introduced in its predecessor, Windows Vista, but many areas have seen enhancements. Unlike Windows Vista, window borders and the taskbar do not turn opaque when a window is maximized while Windows Aero is active; instead, they remain translucent.
Desktop
Themes
Support for themes has been extended in Windows 7. In addition to providing options to customize colors of window chrome and other aspects of the interface including the desktop background, icons, mouse cursors, and sound schemes, the operating system also includes a native desktop slideshow feature. A new theme pack extension has been introduced, .themepack, which is essentially a collection of cabinet files that consist of theme resources including background images, color preferences, desktop icons, mouse cursors, and sound schemes. The new theme extension simplifies sharing of themes and can also display desktop wallpapers via RSS feeds provided by the Windows RSS Platform. Microsoft provides additional themes for free through its website.
The default theme in Windows 7 consists of a single desktop wallpaper named "Harmony" and the default desktop icons, mouse cursors, and sound scheme introduced in Windows Vista; however, none of the desktop backgrounds included with Windows Vista are present in Windows 7. New themes include Architecture, Characters, Landscapes, Nature, and Scenes, and an additional country-specific theme that is determined based on the defined locale when the operating system is installed; although only the theme for a user's home country is displayed within the user interface, the files for all of the other country-specific themes are included in the operating system. All themes included in Windows 7 (excluding the default theme) include six wallpaper images. A number of new sound schemes (each associated with an included theme) have also been introduced: Afternoon, Calligraphy, Characters, Cityscape, Delta, Festival, Garden, Heritage, Landscape, Quirky, Raga, Savana, and Sonata. Themes may introduce their own custom sounds, which can be used with other themes as well.
Desktop Slideshow
Windows 7 introduces a desktop slideshow feature that periodically changes the desktop wallpaper based on a user-defined interval; the change is accompanied by a smooth fade transition with a duration that can be customized via the Windows Registry. The desktop slideshow feature supports local images and images obtained via RSS.
Gadgets
With Windows Vista, Microsoft introduced the Windows Sidebar to host Microsoft Gadgets that displayed details such as feeds and sports scores; the gadgets could optionally be placed on the Windows desktop. With Windows 7, gadgets can still be placed on the Windows desktop, but the Windows Sidebar itself has been removed, and the platform has been renamed as Windows Desktop Gadgets. Gadgets are more closely integrated with Windows Explorer, but the gadgets themselves continue to operate in a single sidebar.exe process (unlike in Windows Vista where gadgets could operate in multiple sidebar.exe processes). New features for gadgets include:
A context menu option on the desktop to access the Gadgets Gallery to add, display, or uninstall gadgets is now available
Gadgets that display details from online sources can now also display content that has been cached
High DPI support
Larger controls designed for touch-based interaction
Rearrangement capabilities automatically arrange gadgets based on their proximity with other gadgets
When gadgets are displayed on the desktop, there is a context menu option to display or hide them; hiding gadgets can result in power savings
Windows 7 also introduces a single new gadget, one for Windows Media Center that displays links to the various sections (e.g., Pictures + Videos) of its interface.
Branding and customization
For original equipment manufacturers and enterprises, Windows 7 natively supports the ability to customize the wallpaper that is displayed during user login. Because the settings to change the wallpaper are available via the Windows Registry, users can also customize this wallpaper. Options to customize the appearance of interface lighting and shadows are also available.
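The mechanism commonly documented for this is an OEMBackground DWORD value under the registry key HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\Background; when the value is set to 1, Windows loads a replacement image (conventionally backgroundDefault.jpg, which must be under 256 KB) from %windir%\system32\oobe\info\backgrounds.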
Windows Explorer
Libraries
Windows Explorer in Windows 7 supports file libraries that aggregate content from various locations – including shared folders on networked systems if the shared folder has been indexed by the host system – and present them in a unified view. The libraries hide the actual location the file is stored in. Searching in a library automatically federates the query to the remote systems, in addition to searching on the local system, so that files on the remote systems are also searched. Unlike search folders, Libraries are backed by a physical location which allows files to be saved in the Libraries. Such files are transparently saved in the backing physical folder. The default save location for a library may be configured by the user, as can the default view layout for each library. Libraries are generally stored in the Libraries special folder, which allows them to be displayed on the Navigation Pane.
By default, a new user account in Windows 7 contains four libraries for different file types: Documents, Music, Pictures, and Videos. They are configured to include the user's profile folders for these respective file types, as well as the computer's corresponding Public folders. The Public folder also contains a hidden Recorded TV library that appears in the Windows Explorer sidepane when TV is set up in Media Center for the first time.
In addition to aggregating multiple storage locations, Libraries enable Arrangement Views and Search Filter Suggestions. Arrangement Views allow users to pivot the view of a library's contents based on metadata. For example, selecting the "By Month" view in the Pictures library will display photos in stacks, where each stack represents a month of photos based on the date they were taken. In the Music library, the "By Artist" view will display stacks of albums from the artists in the collection, and browsing into an artist stack will then display the relevant albums.
Search Filter Suggestions are a new feature of the Windows 7 Explorer's search box. When the user clicks in the search box, a menu appears below it, showing recent searches as well as suggested Advanced Query Syntax filters that the user can type. When one is selected (or typed in manually), the menu updates to show the possible values to filter by for that property; this list is based on the current location and other parts of the query already typed. For example, selecting the "tags" filter or typing "tags:" into the search box will display the list of possible tag values which will return search results.
Arrangement Views and Search Filter Suggestions are database-backed features which require that all locations in the Library be indexed by the Windows Search service. Local disk locations must be indexed by the local indexer, and Windows Explorer will automatically add locations to the indexing scope when they are included in a library. Remote locations can be indexed by the indexer on another Windows 7 machine, on a Windows machine running Windows Search 4 (such as Windows Vista or Windows Home Server), or on another device that implements the MS-WSP remote query protocol.
Federated search
Windows Explorer also supports federating search to external data sources, such as custom databases or web services, that are exposed over the web and described via an OpenSearch definition. The federated location description (called a Search Connector) is provided as an .osdx file. Once installed, the data source becomes queryable directly from Windows Explorer. Windows Explorer features, such as previews and thumbnails, work with the results of a federated search as well.
Miscellaneous Shell enhancements
Windows Explorer has received numerous minor enhancements that improve its overall functionality. The address bar and search box can be resized. The Command Bar features the New Folder command and a visible interface option to enable the Preview Pane (both were previously in the Organize option in Windows Vista). A new Content icon view mode is added, which shows metadata and thumbnails. The List icon view mode provides more space between items than in Windows Vista. Storage capacity indicators for hard disks introduced in Windows Vista are now also shown for removable storage devices. File types for which new iFilters or Property Handlers are installed are reindexed by Windows Search by default.
The Navigation Pane includes a new Favorites location, which serves as the replacement for the Favorite Links functionality of the interface in Windows Vista, and newly created Saved Searches are automatically pinned to this location.
There is a new Share With button on the Command Bar that allows users to share the currently viewed folder or currently selected item with people in a homegroup with either read permissions or with both read and write permissions, or with specific people, which opens the Sharing Wizard introduced in Windows Vista; a new Nobody sharing option prevents the selected folder or item from being shared, and all items that are excluded in this manner feature a new padlock overlay icon.
Previously, adding submenus to Shell context menus or customizing the context menu's behavior for a certain folder was only possible by installing a form of plug-in known as a shell extension. In Windows 7, however, such customizations can be made by editing the Windows Registry or configuration files. Additionally, a new Shell API was introduced, designed to simplify the writing of context-menu shell extensions by software developers.
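As a minimal illustration of the registry-based approach (the verb name, caption, and command line below are illustrative examples, not Windows defaults), a static context-menu verb for folders can be added for the current user with two registry keys; Windows 7's cascading submenus build on keys of this kind through additional values such as MUIVerb and SubCommands:

#define _WIN32_WINNT 0x0601
#include <windows.h>
#pragma comment(lib, "advapi32.lib")

int main(void)
{
    // Caption shown in the folder context menu (illustrative name).
    HKEY verb;
    if (RegCreateKeyExW(HKEY_CURRENT_USER,
            L"Software\\Classes\\Directory\\shell\\OpenCmdHere",
            0, NULL, 0, KEY_SET_VALUE, NULL, &verb, NULL) != ERROR_SUCCESS)
        return 1;
    const wchar_t caption[] = L"Open command window here";
    RegSetValueExW(verb, NULL, 0, REG_SZ,
                   (const BYTE *)caption, sizeof(caption));
    RegCloseKey(verb);

    // Command to run; Explorer substitutes %1 with the clicked folder.
    HKEY cmd;
    if (RegCreateKeyExW(HKEY_CURRENT_USER,
            L"Software\\Classes\\Directory\\shell\\OpenCmdHere\\command",
            0, NULL, 0, KEY_SET_VALUE, NULL, &cmd, NULL) != ERROR_SUCCESS)
        return 1;
    const wchar_t cmdline[] = L"cmd.exe /k cd /d \"%1\"";
    RegSetValueExW(cmd, NULL, 0, REG_SZ,
                   (const BYTE *)cmdline, sizeof(cmdline));
    RegCloseKey(cmd);
    return 0;
}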
Windows 7 includes native support for burning ISO files. The functionality is available when a user selects the Burn disc image option within the context menu of an ISO file (support for disc image verification is also included). In previous versions of Windows, users were required to install third-party software to burn ISO images.
Start menu
The Start button now has a fade-in highlight effect when the user hovers over it with the mouse cursor. The right column of the Start menu is now rendered in the Aero Glass color; in Windows Vista, it was predominantly black regardless of the color in use.
Windows 7's Start menu retains the two-column layout of its predecessors, with several functional changes:
Documents, Music, and Pictures now link to their respective Libraries
Jump Lists are presented in the Start Menu via a guillemet; when the user moves the mouse cursor over the guillemet or presses the arrow key, the right-hand side of the Start menu is widened and replaced with the application's Jump List.
New links include Devices and Printers (a new Device Manager), Downloads, HomeGroup, Recorded TV, and Videos
Search has been updated to display results for Control Panel category keywords, federated searches, HomeGroup locations, Libraries (including network share locations when included in Libraries), and Sticky Notes
Search results now group items in groups of three, and users can click a group to open Windows Explorer to see additional items that match the criteria
The iconographic Shut Down button of Windows Vista has been replaced with a text link to indicate the action that will be taken when the button is clicked; the default action is now configurable through Taskbar and Start Menu Properties.
Group Policy settings for Windows Explorer provide the ability for administrators of an Active Directory domain to add up to five Internet Web sites and five additional "search connectors" to the Search Results view in the Start menu. The links, which appear at the bottom of the pane, allow the search to be executed again on the selected web site or search connector. Microsoft suggests that network administrators could use this feature to enable searching of corporate Intranets or an internal SharePoint server.
Taskbar
The Windows Taskbar has seen its most significant revision since its introduction in Windows 95 and combines the previous Quick Launch functionality with open application window icons. The taskbar is now rendered as an Aero Glass element whose color can be changed via the Personalization Control Panel. It is 10 pixels taller than in Windows Vista to accommodate touch screen input and a new larger default icon size (although a smaller taskbar size is available), as well as to maintain proportion to newer high-resolution monitor modes. Running applications are denoted by a border frame around the icon. Within this border, a color effect (dependent on the predominant color of the icon) that follows the mouse cursor also indicates the opened status of the application. The glass taskbar is more translucent than in Windows Vista. Taskbar buttons show icons by default, not application titles, unless they are set to 'not combine' or 'combine when taskbar is full'; in those modes, only icons are shown when an application is not running. Programs running or pinned on the taskbar can be rearranged. Items in the notification area can also be rearranged.
Pinned applications
The Windows 7 taskbar is more application-oriented than window-oriented, and therefore doesn't show window titles (these are shown when an application icon is clicked or hovered over). Applications can now be pinned to the taskbar allowing the user instant access to the applications they commonly use. There are a few ways to pin applications to the taskbar. Icons can be dragged and dropped onto the taskbar, or the application's icon can be right-clicked to pin it to the taskbar. The Quick Launch toolbar has been removed from the default configuration, but can be manually added back.
Thumbnail previews
Thumbnail previews which were introduced in Windows Vista have been expanded to not only preview the windows opened by the application in a small-sized thumbnail view, but to also interact with them. The user can close any window opened by clicking the X on the corresponding thumbnail preview. The name of the window is also shown in the thumbnail preview. A "peek" at the window is obtained by hovering over the thumbnail preview. Peeking brings up only the window of the thumbnail preview over which the mouse cursor hovers, and turns any other windows on the desktop transparent. This also works for tabs in Internet Explorer: individual tabs may be peeked at in the thumbnail previews. Thumbnail previews integrate Thumbnail Toolbars which can control the application from the thumbnail previews themselves. For example, if Windows Media Player is opened and the mouse cursor is hovering on the application icon, the thumbnail preview will allow the user the ability to Play, Stop, and Play Next/Previous track without having to switch to the Windows Media Player window.
Jump lists
Jump lists are menu options available by right-clicking a taskbar icon or holding the left mouse button and sliding towards the center of the desktop on an icon. Each application has a jump list corresponding to its features: Microsoft Word's displays recently opened documents; Windows Media Player's lists recent tracks and playlists; Windows Explorer's lists frequently opened directories; Internet Explorer's offers recent browsing history and options for opening new tabs or starting InPrivate Browsing; Windows Live Messenger's offers common tasks such as instant messaging, signing off, and changing online status. Third-party software can add custom actions through a dedicated API. Up to 10 menu items may appear on a list, partially customizable by the user. Frequently used files and folders can be pinned by the user so that they are not pushed off the list when other items are opened more frequently.
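The API in question centers on the COM interface ICustomDestinationList. A minimal sketch of its use, which simply republishes the system-maintained "Recent" category for the calling application (error handling abbreviated):

#define _WIN32_WINNT 0x0601
#include <windows.h>
#include <shobjidl.h>   // ICustomDestinationList, IObjectArray
#pragma comment(lib, "ole32.lib")
#pragma comment(lib, "uuid.lib")

int main(void)
{
    CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);

    ICustomDestinationList *list = NULL;
    HRESULT hr = CoCreateInstance(CLSID_DestinationList, NULL,
                                  CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&list));
    if (SUCCEEDED(hr)) {
        UINT minSlots = 0;            // number of items the UI will show
        IObjectArray *removed = NULL; // items the user removed from the
                                      // list; an app must not re-add them
        if (SUCCEEDED(list->BeginList(&minSlots, IID_PPV_ARGS(&removed)))) {
            // Show the system-maintained list of recently used files.
            list->AppendKnownCategory(KDC_RECENT);
            list->CommitList();
            removed->Release();
        }
        list->Release();
    }
    CoUninitialize();
    return 0;
}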
Task progress
A progress bar in a taskbar button allows users to track the progress of a task without switching to the pending window. Task progress is used in Windows Explorer, Internet Explorer and third-party software.
Notification area
The notification area has been redesigned; the standard Volume, Network, Power and Action Center status icons are present, but no other application icons are shown unless the user has chosen them to be shown. A new "Notification Area Icons" control panel has been added which replaces the "Customize Notification Icons" dialog box in the "Taskbar and Start Menu Properties" window first introduced in Windows XP. In addition to being able to configure whether the application icons are shown, the ability to hide each application's notification balloons has been added. The user can then view the notifications at a later time.
A triangle to the left of the visible notification icons displays the hidden notification icons. Unlike Windows Vista and Windows XP, the hidden icons are displayed in a window above the taskbar, instead of on the taskbar. Icons can be dragged between this window and the notification area.
Aero Peek
In previous versions of Windows, the taskbar ended with the notification area on the right-hand side. Windows 7, however, introduces a show desktop button on the far right side of the taskbar which can initiate an Aero Peek feature that makes all open windows translucent when hovered over by a mouse cursor. Clicking this button shows the desktop, and clicking it again brings all windows to focus. The new button replaces the show desktop shortcut located in the Quick Launch toolbar in previous versions of Windows.
On touch-based devices, Aero Peek can be initiated by pressing and holding the show desktop button; touching the button itself shows the desktop. The button also increases in width to accommodate being pressed by a finger.
Window management mouse gestures
Aero Snap
Windows can be dragged to the top of the screen to maximize them and dragged away to restore them. Dragging a window to the left or right of the screen makes it take up half the screen, allowing the user to tile two windows next to each other. Also, resizing the window to the bottom of the screen or its top will extend the window to full height but retain its width. These features can be disabled via the Ease of Access Center if users do not wish the windows to automatically resize.
Aero Shake
Aero Shake allows users to clear up any clutter on their screen by shaking (dragging back and forth) a window of their choice with the mouse. All other windows will minimize, while the window the user shook stays active on the screen. When the window is shaken again, all previously minimized windows are restored, similar to desktop preview.
Keyboard shortcuts
A variety of new keyboard shortcuts have been introduced.
Global keyboard shortcuts:
Win+Space operates as a keyboard shortcut for Aero Peek.
Win+Up maximizes the current window.
Win+Down restores the current window if it is maximized; otherwise it minimizes the current window.
Win+Shift+Up makes the upper and lower edges of the current window nearly touch the upper and lower edges of the Windows desktop environment, respectively.
Win+Shift+Down restores the original size of the current window.
Win+Left snaps the current window to the left half of the screen.
Win+Right snaps the current window to the right half of the screen.
Win+Shift+Left and Win+Shift+Right move the current window to the left or right display.
Win+Plus functions as a zoom in command wherever applicable.
Win+Minus functions as a zoom out command wherever applicable.
Win+Esc turns off zoom once enabled.
Win+Home operates as a keyboard shortcut for Aero Shake.
Win+P opens Connect to a Network Projector, which has been updated from previous versions of Windows, and allows one to dictate where the desktop is displayed: on the main monitor, an external display, or both; or allows one to display two independent desktops on two separate monitors.
Taskbar:
Shift + Click, or Middle click starts a new instance of the application, regardless of whether it's already running.
Ctrl + Shift + Click starts a new instance with Administrator privileges; by default, a User Account Control prompt will be displayed.
Shift + Right-click (or right-clicking the program's thumbnail) shows the titlebar's context menu which, by default, contains "Restore", "Move", "Size", "Maximize", "Minimize" and "Close" commands. If the icon being clicked on is a grouped icon, a specialized context menu with "Restore All", "Minimize All", and "Close All" commands is shown.
Ctrl + Click on a grouped icon cycles between the windows (or tabs) in the group.
Font management
The user interface for font management has been overhauled in Windows 7. As with Windows Vista, the collection of installed fonts is displayed in a Windows Explorer window, but fonts that originate from the same font family appear as icons that are represented as stacks that display font previews within the interface. Windows 7 also introduces the option to hide installed fonts; certain fonts are automatically removed from view based on a user's regional settings. An option to manually hide installed fonts is also available. Hidden fonts remain installed but are not enumerated when an application asks for a list of available fonts, thus reducing the amount of fonts to scroll through within the interface and also reducing memory usage. Windows 7 includes over 40 new fonts, including a new "Gabriola" font.
The dialog box for fonts in Windows 7 has also been updated to display font previews within the interface, which allows users to preview fonts before selecting them. Previous versions of Windows displayed only the name of the font.
The ClearType Text Tuner which was previously available as a Microsoft Powertoy for earlier Windows versions has been integrated into, and updated for Windows 7.
Microsoft later backported Windows 8's emoji features to Windows 7 via an update to the Segoe UI Symbol font.
Devices
There are two major new user interface components for device management in Windows 7, "Devices and Printers" and "Device Stage". Both of these are integrated with Windows Explorer, and together provide a simplified view of what devices are connected to the computer, and what capabilities they support.
Devices and Printers
Devices and Printers is a new Control Panel interface that is directly accessible from the Start menu. Unlike the Device Manager Control Panel applet, which is still present, the icons shown on the Devices and Printers screen are limited to components of the system that a non-expert user will recognize as plug-in devices. For example, an external monitor connected to the system will be displayed as a device, but the internal monitor on a laptop will not. Device-specific features are available through the context menu for each device; an external monitor's context menu, for example, provides a link to the "Display Settings" control panel.
This new Control Panel applet also replaces the "Printers" window in prior versions of Windows; common printer operations such as setting the default printer, installing or removing printers, and configuring properties such as paper size are done through this control panel.
Windows 7 and Server 2008 R2 introduce print driver isolation, which improves the reliability of the print spooler by running printer drivers in a separate process to the spooler service. If a third party print driver fails while isolated, it does not impact other drivers or the print spooler service.
Device Stage
Device Stage provides a centralized location for an externally connected multi-function device to present its functionality to the user. When a device such as a portable music player is connected to the system, the device appears as an icon on the task bar, as well as in Windows Explorer.
Windows 7 ships with high-resolution images of a number of popular devices, and is capable of connecting to the Internet to download images of devices it doesn't recognize. Opening the icon presents a window that displays actions relevant to that device. Screenshots of the technology presented by Microsoft suggest that a mobile phone could offer options for two-way synchronization, configuring ring-tones, copying pictures and videos, managing the device in Windows Media Player, and using Windows Explorer to navigate through the device. Other device status information such as free memory and battery life can also be shown. The actual per-device functionality is defined via XML files that are downloaded when the device is first connected to the computer, or are provided by the manufacturer on an installation disc.
Mobility enhancements
Multi-touch support
Hilton Locke, who worked on the Tablet PC team at Microsoft, reported on December 11, 2007 that Windows 7 will have new touch features on devices supporting multi-touch. An overview and demonstration of the multi-touch capabilities, including a virtual piano program, a mapping and directions program and a touch-aware version of Microsoft Paint, was given at the All Things Digital Conference on May 27, 2008; a video of the multi-touch capabilities was made available on the web later the same day.
Sensors
Windows 7 introduces native support for sensors, including accelerometer sensors, ambient light sensors, and location-based sensors; the operating system also provides a unified driver model for sensor devices. A notable use of this technology in Windows 7 is the operating system's adaptive display brightness feature, which automatically adjusts the brightness of a compatible computer's display based on environmental light conditions and factors. Gadgets developed for Windows 7 can also display location-based information. Applications for certain sensor capabilities can be developed without the requisite hardware.
Because data acquired by some sensors can be considered personally identifiable information, all sensors are disabled by default in Windows 7, and an account in Windows 7 requires administrative permissions to enable a sensor. Sensors also require user consent to share location data.
Power management
Battery notification messages
Unlike previous versions of Windows, Windows 7 is able to report when a laptop battery is in need of a replacement. The operating system works with design capabilities present in modern laptop batteries to report this information.
Hibernation improvements
The powercfg command enables the customization of the hibernation file size. By default, Windows 7 automatically sets the size of the hibernation file to 75% of a computer's total physical memory. The operating system also compresses the contents of memory during the hibernate process to minimize the possibility that the contents exceed the default size of the hibernation file.
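For example, running powercfg /hibernate /size 50 at an elevated command prompt limits the hibernation file to 50% of physical memory.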
Power analysis and reporting
Windows 7 introduces a new /Energy parameter for the powercfg command, which generates an HTML report of a computer's energy efficiency and displays information related to devices or settings.
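For example, powercfg /energy /output report.html /duration 60 observes the system for 60 seconds and writes its findings to the named HTML file (report.html here is an arbitrary file name).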
USB suspension
Windows 7 can individually suspend USB hubs and supports selective suspend for all in-box USB class drivers.
Graphics
DirectX
Direct3D 11 is included with Windows 7. It is a strict superset of Direct3D 10.1, which was introduced in Windows Vista Service Pack 1 and Windows Server 2008.
Direct2D and DirectWrite, new hardware-accelerated vector graphics and font rendering APIs built on top of Direct3D 10, are intended to replace GDI/GDI+ for screen-oriented native-code graphics and text drawing. They can be used from managed applications with the Windows API Code Pack.
Windows Advanced Rasterization Platform (WARP), a software rasterizer component for DirectX that provides all of the capabilities of Direct3D 10.0 and 10.1 in software.
DirectX Video Acceleration-High Definition (DXVA-HD)
Direct3D 11, Direct2D, DirectWrite, DXGI 1.1, WARP and several other components are currently available for Windows Vista SP2 and Windows Server 2008 SP2 by installing the Platform Update for Windows Vista.
Desktop Window Manager
First introduced in Windows Vista, the Desktop Window Manager (DWM) in Windows 7 has been updated to use version 10.1 of Direct3D API, and its performance has been improved significantly.
The Desktop Window Manager still requires at least a Direct3D 9-capable video card (supported through a new device type introduced with the Direct3D 11 runtime).
With a video driver conforming to Windows Display Driver Model v1.1, the DXGI kernel in Windows 7 provides 2D hardware acceleration to APIs such as GDI, Direct2D and DirectWrite (though GDI+ was not updated to use this functionality). This allows DWM to use significantly less system memory, which does not grow with the number of open windows as it did in Windows Vista. Systems equipped with a WDDM 1.0 video card operate in the same fashion as in Windows Vista, using software-only rendering.
The Desktop Window Manager in Windows 7 also adds support for systems using multiple heterogeneous graphics cards from different vendors.
Other changes
Support for color depths of 30 and 48 bits is included, along with the wide color gamut scRGB (which for HDMI 1.3 can be converted and output as xvYCC). The video modes supported in Windows 7 are 16-bit sRGB, 24-bit sRGB, 30-bit sRGB, 30-bit with extended color gamut sRGB, and 48-bit scRGB.
Each user of Windows 7 and Server 2008 R2 has individual DPI settings, rather than the machine having a single setting as in previous versions of Windows. DPI settings can be changed by logging on and off, without needing to restart.
File system
Solid state drives
Over time, several technologies have been incorporated into subsequent versions of Windows to improve the performance of the operating system on traditional hard disk drives (HDD) with rotating platters. Since Solid-state drives (SSD) differ from mechanical HDDs in some key areas (no moving parts, write amplification, limited number of erase cycles allowed for reliable operation), it is beneficial to disable certain optimizations and add others.
Windows 7 incorporates many engineering changes to reduce the frequency of writes and flushes, which benefit SSDs in particular since each write operation wears the flash memory.
Windows 7 also makes use of the TRIM command. If supported by the SSD (not implemented on early devices), this optimizes when erase cycles are performed, reducing the need to erase blocks before each write and increasing write performance.
Several tools and techniques that were implemented in the past to reduce the impact of the rotational latency of traditional HDDs, most notably disk defragmentation, SuperFetch, ReadyBoost, and application launch prefetching, involve reorganizing (rewriting) the data on the platters. Since SSDs have no moving platters, this reorganization has no advantages, and may instead shorten the life of the solid state memory. Therefore, these tools are by default disabled on SSDs in Windows 7, except for some early generation SSDs that might still benefit.
Finally, partitions made with Windows 7's partition-creating tools are created with the SSD's alignment needs in mind, avoiding unwanted systematic write amplification.
Virtual hard disks
The Enterprise and Ultimate editions of Windows 7 incorporate support for the Virtual Hard Disk (VHD) file format. VHD files can be mounted as drives, created, and booted from, in the same way as WIM files. Furthermore, an installed version of Windows 7 can be booted and run from a VHD drive, even on non-virtual hardware, thereby providing a new way to multi boot Windows. Some features such as hibernation and BitLocker are not available when booting from VHD.
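An attached VHD appears to the system as an ordinary disk; for example, diskpart's create vdisk, select vdisk, and attach vdisk commands can create and mount VHD files from an elevated command prompt.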
Disk partitioning
By default, a computer's disk is partitioned into two partitions: one of limited size for booting, BitLocker and running the Windows Recovery Environment and the second with the operating system and user files.
Removable media
Windows 7 has also seen improvements to the Safely Remove Hardware menu, including the ability to eject just one camera card at a time (from a single hub) and to retain the ports for future use without a reboot; the labels of removable media are now also listed, rather than just the drive letter. Windows Explorer now by default shows memory card reader ports in My Computer only if they contain a card.
BitLocker to Go
BitLocker to Go brings encryption support to removable disks such as USB drives. Such devices can be protected by a passphrase or a recovery key, or be automatically unlocked on a particular computer.
Boot performance
According to data gathered from the Microsoft Customer Experience Improvement Program (CEIP), 35% of Vista SP1 installations boot up in 30 seconds or less. The more lengthy boot times on the remainder of the machines are mainly due to some services or programs that are loaded but are not required when the system is first started. Microsoft's Mike Fortin, a distinguished engineer on the Windows team, noted in August 2008 that Microsoft has set aside a team to work solely on the issue, and that team aims to "significantly increase the number of systems that experience very good boot times". They "focused very hard on increasing parallelism of driver initialization". Also, Microsoft aims to "dramatically reduce" the number of system services, along with their demands on processors, storage, and memory.
Kernel and scheduling improvements
User-mode scheduler
The 64-bit versions of Windows 7 and Server 2008 R2 introduce a user-mode scheduling framework. On Microsoft Windows operating systems, scheduling of threads inside a process is handled by the kernel, ntoskrnl.exe. While for most applications this is sufficient, applications with large concurrent threading requirements, such as a database server, can benefit from having a thread scheduler in-process. This is because the kernel no longer needs to be involved in context switches between threads, and it obviates the need for a thread pool mechanism, as threads can be created and destroyed much more quickly when no kernel context switches are required.
Prior to Windows 7, Windows used a one-to-one user-thread to kernel-thread relationship. It was always possible to cobble together a rough many-to-one user scheduler (with user-level timer interrupts), but if a system call blocked on any one of the user threads, it would block the kernel thread and accordingly block all other user threads on the same scheduler. A many-to-one model also could not take full advantage of symmetric multiprocessing.
With Windows 7's user-mode scheduling, a program may configure one or more kernel threads as schedulers supplied by a programming language library (one per logical processor desired) and then create a user-mode thread pool from which these schedulers can draw. The kernel maintains a list of outstanding system calls, which allows the user-mode scheduler to continue running without blocking its kernel thread. This configuration can be used as either many-to-one or many-to-many.
There are several benefits to a user-mode scheduler: context switching in user mode can be faster; user-mode scheduling introduces cooperative multitasking; and having a customizable scheduler gives an application more control over thread execution.
Memory management and CPU parallelism
The memory manager is optimized to mitigate the problem of total memory consumption in the event of excessive cached read operations, which occurred on earlier releases of 64-bit Windows.
Support for up to 256 logical processors
Fewer hardware locks and greater parallelism
Timer coalescing: modern processors and chipsets can switch to very low power usage levels while the CPU is idle. In order to reduce the number of times the CPU enters and exits idle states, Windows 7 introduces the concept of "timer coalescing"; multiple applications or device drivers which perform actions on a regular basis can be set to occur at once, instead of each action being performed on their own schedule. This facility is available in both kernel mode, via the KeSetCoalescableTimer API (which would be used in place of KeSetTimerEx), and in user mode with the SetWaitableTimerEx Windows API call (which replaces SetWaitableTimer).
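A minimal user-mode sketch of the idea (the one-second period and the 500 ms tolerance below are arbitrary choices): by passing a tolerable delay to SetWaitableTimerEx, an application permits the kernel to batch this timer's expirations with other pending timers.

#define _WIN32_WINNT 0x0601
#include <windows.h>
#include <stdio.h>

int main(void)
{
    // Unnamed auto-reset waitable timer.
    HANDLE timer = CreateWaitableTimerW(NULL, FALSE, NULL);
    if (!timer) return 1;

    // First due time: 1 second from now (negative = relative,
    // expressed in 100-nanosecond units).
    LARGE_INTEGER due;
    due.QuadPart = -10000000LL;

    // Period 1000 ms, with a 500 ms tolerable delay: the kernel may
    // postpone each expiration by up to half a second so it can fire
    // this timer together with others and keep the CPU idle longer.
    if (!SetWaitableTimerEx(timer, &due, 1000, NULL, NULL, NULL, 500))
        return 1;

    for (int i = 0; i < 5; i++) {
        WaitForSingleObject(timer, INFINITE);
        printf("tick %d\n", i);
    }
    CloseHandle(timer);
    return 0;
}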
Multimedia
Windows Media Center
Windows Media Center in Windows 7 has retained much of the design and feel of its predecessor, but with a variety of user interface shortcuts and browsing capabilities. Playback of H.264 video both locally and through a Media Center Extender (including the Xbox 360) is supported.
Some notable enhancements in Windows 7 Media Center include a new mini guide, a new scrub bar, the option to color code the guide by show type, and internet content that is more tightly integrated with regular TV via the guide. All Windows 7 versions now support up to four tuners of each type (QAM, ATSC, CableCARD, NTSC, etc.).
When browsing the media library, items that don't have album art are shown in a range of foreground and background color combinations instead of using white text on a blue background. When the left or right remote control buttons are held down to browse the library quickly, a two-letter prefix of the current album name is prominently shown as a visual aid. The Picture Library includes new slideshow capabilities, and individual pictures can be rated.
Also, while browsing a media library, a new column appears at the top named "Shared." This allows users to access shared media libraries on other Media Center PCs from directly within Media Center.
For television support, the Windows Media Center "TV Pack" released by Microsoft in 2008 is incorporated into Windows Media Center. This includes support for CableCARD and North American (ATSC) clear QAM tuners, as well as creating lists of favorite stations.
A gadget for Windows Media Center is also included.
Format support
Windows 7 includes AVI, WAV, AAC/ADTS file media sinks to read the respective formats, an MPEG-4 file source to read MP4, M4A, M4V, MP4V MOV and 3GP container formats and an MPEG-4 file sink to output to MP4 format. Windows 7 also includes a media source to read MPEG transport stream/BDAV MPEG-2 transport stream (M2TS, MTS, M2T and AVCHD) files.
Transcoding (encoding) support is not exposed through any built-in Windows application but codecs are included as Media Foundation Transforms (MFTs). In addition to Windows Media Audio and Windows Media Video encoders and decoders, and ASF file sink and file source introduced in Windows Vista, Windows 7 includes an H.264 encoder with Baseline profile level 3 and Main profile support and an AAC Low Complexity (AAC-LC) profile encoder.
For playback of various media formats, Windows 7 also introduces an H.264 decoder with Baseline, Main, and High profile support up to level 5.1; AAC-LC and HE-AAC v1 (SBR) multichannel and HE-AAC v2 (PS) stereo decoders; MPEG-4 Part 2 Simple Profile and Advanced Simple Profile decoders, which cover popular codec implementations such as DivX, Xvid and Nero Digital; as well as MJPEG and DV MFT decoders for AVI. Windows Media Player 12 uses the built-in Media Foundation codecs to play these formats by default.
Windows 7 also updates the DirectShow filters introduced in Windows Vista for playback of MPEG-2 and Dolby Digital to decode H.264, AAC, HE-AAC v1 and v2 and Dolby Digital Plus (including downmixing to Dolby Digital).
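These sources and decoders are reachable in native code through Media Foundation's Source Reader, which was introduced with Windows 7. A minimal sketch (error handling abbreviated; clip.mp4 is a placeholder file name) that opens a file with the built-in MPEG-4 file source and reads compressed samples from its first video stream:

#define _WIN32_WINNT 0x0601
#include <windows.h>
#include <mfapi.h>
#include <mfidl.h>
#include <mfreadwrite.h>
#pragma comment(lib, "mfplat.lib")
#pragma comment(lib, "mfreadwrite.lib")
#pragma comment(lib, "ole32.lib")

int main(void)
{
    CoInitializeEx(NULL, COINIT_MULTITHREADED);
    MFStartup(MF_VERSION, MFSTARTUP_FULL);

    IMFSourceReader *reader = NULL;
    // The appropriate media source is selected automatically from
    // the file contents.
    if (SUCCEEDED(MFCreateSourceReaderFromURL(L"clip.mp4", NULL, &reader))) {
        for (;;) {
            DWORD stream = 0, flags = 0;
            LONGLONG timestamp = 0;
            IMFSample *sample = NULL;
            if (FAILED(reader->ReadSample(
                    (DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM,
                    0, &stream, &flags, &timestamp, &sample)))
                break;
            if (flags & MF_SOURCE_READERF_ENDOFSTREAM)
                break;
            if (sample) {
                // A real player would pass the sample to a decoder
                // MFT or a media sink here.
                sample->Release();
            }
        }
        reader->Release();
    }
    MFShutdown();
    CoUninitialize();
    return 0;
}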
Security
Action Center, formerly Windows Security Center, now encompasses both security and maintenance. It was called Windows Health Center and Windows Solution Center in earlier builds.
A new user interface for User Account Control has been introduced, which provides the ability to select four different levels of notification; one of these notification settings, Default, is new to Windows 7. Geo-tracking capabilities are also available in Windows 7. The feature is disabled by default; when enabled, the user has only limited control over which applications can track their location.
The Encrypting File System supports elliptic-curve cryptographic algorithms (ECC) in Windows 7. For backward compatibility with previous releases of Windows, Windows 7 supports a mixed-mode operation of ECC and RSA algorithms. EFS self-signed certificates, when using ECC, use a 256-bit key by default. EFS can be configured to use 1K/2K/4K/8K/16K-bit keys when using self-signed RSA certificates, or 256/384/521-bit keys when using ECC certificates.
In Windows Vista, the Protected User-Mode Audio (PUMA) content protection facilities are only available to applications that are running in a Protected Media Path environment. Because only the Media Foundation application programming interface could interact with this environment, a media player application had to be designed to use Media Foundation. In Windows 7, this restriction is lifted. PUMA also incorporates stricter enforcement of "Copy Never" bits when using Serial Copy Management System (SCMS) copy protection over an S/PDIF connection, as well as with High-bandwidth Digital Content Protection (HDCP) over HDMI connections.
Biometrics
Windows 7 includes the new Windows Biometric Framework. This framework consists of a set of components that standardizes the use of fingerprint biometric devices. In prior releases of Microsoft Windows, biometric hardware device manufacturers were required to provide a complete stack of software to support their device, including device drivers, software development kits, and support applications. Microsoft noted in a white paper on the Windows Biometric Framework that the proliferation of these proprietary stacks resulted in compatibility issues, compromised the quality and reliability of the system, and made servicing and maintenance more difficult. By incorporating the core biometric functionality into the operating system, Microsoft aims to bring biometric device support on par with other classes of devices.
A new Control Panel called Biometric Device Control Panel is included which provides an interface for deleting stored biometrics information, troubleshooting, and enabling or disabling the types of logins that are allowed using biometrics. Biometric devices can also be configured using Group Policy settings.
Networking
DirectAccess, a VPN tunnel technology based on IPv6 and IPsec. DirectAccess requires domain-joined machines, Windows Server 2008 R2 on the DirectAccess server, at least Windows Server 2008 domain controllers and a PKI to issue authentication certificates.
BranchCache, a WAN optimization technology.
The Bluetooth stack includes improvements introduced in the Windows Vista Feature Pack for Wireless, namely, Bluetooth 2.1+EDR support and remote wake from S3 or S4 support for self-powered Bluetooth modules.
NDIS 6.20 (Network Driver Interface Specification)
WWAN (Mobile broadband) support (driver model based on NDIS miniport driver for CDMA and GSM device interfaces, Connection Manager support and Mobile Broadband COM and COM Interop API).
Wireless Hosted Network capabilities: The Windows 7 wireless LAN service supports two new functions – Virtual Wi-Fi, which allows a single wireless network adapter to act like two client devices, and a software-based wireless access point (SoftAP), which makes the adapter act as both a wireless hotspot in infrastructure mode and a wireless client at the same time. This feature is not exposed through the GUI; however, the Virtual WiFi Miniport adapter can be installed and enabled for wireless adapters with drivers that support a hosted network by using the command netsh wlan set hostednetwork mode=allow "ssid=<network SSID>" "key=<wlan security key>" keyusage=persistent|temporary at an elevated command prompt. The wireless SoftAP can afterwards be started using the command netsh wlan start hostednetwork. Windows 7 also supports WPA2-PSK/AES security for the hosted network, but DNS resolution for clients requires it to be used with Internet Connection Sharing or a similar feature.
SMB 2.1, which includes minor performance enhancements over SMB2, such as a new opportunistic locking mechanism.
RDP 7.0
Background Intelligent Transfer Service 4.0
HomeGroup
Alongside the workgroup system used by previous versions, Windows 7 adds a new ad hoc home networking system known as HomeGroup. The system uses a password to join computers into the group, and allows users' libraries, along with individual files and folders, to be shared between multiple computers. Only computers running Windows 7 to Windows 10 version 1709 can create or join a HomeGroup; however, users can make files and printers shared in a HomeGroup accessible to Windows XP and Windows Vista through a separate account, dedicated to sharing HomeGroup content, that uses traditional Windows sharing. HomeGroup support was deprecated in Windows 10 and has been removed from Windows 10 version 1803 and later.
HomeGroup as a concept is very similar to a feature slated for Windows Vista, known as Castle, which would have made it possible to have an identification service for all members on the network, without a centralized server.
HomeGroup was created in response to the need for a simple sharing model for inexperienced users who need to share files without wrestling with user accounts, security descriptors, and share permissions. To that end, Microsoft previously created Simple File Sharing mode in Windows XP that, once enabled, caused all connected computers to be authenticated as Guest. Under this model, either a certain file or folder was shared with anyone who connects to the network (even unauthorized parties who are in range of the wireless network) or was not shared at all. In a HomeGroup, however:
Communication between HomeGroup computers is encrypted with a pre-shared password.
A certain file or folder can be shared with the entire HomeGroup (anyone who joins) or a certain person only.
HomeGroup computers can also be a member of a Windows domain or Windows workgroup at the same time and take advantage of those file sharing mechanisms.
Only computers that support HomeGroup (Windows 7 to Windows 10 version 1709) can join the network.
Windows Firewall
Windows 7 adds support for multiple firewall profiles. The Windows Firewall in Windows Vista dynamically changes which network traffic is allowed or blocked based on the location of the computer (based on which network it is connected to). This approach falls short if the computer is connected to more than one network at the same time (as for a computer with both an Ethernet and a wireless interface). In this case, Vista applies the profile that is more secure to all network connections. This is often not desirable; Windows 7 resolves this by being able to apply a separate firewall profile to each network connection.
DNSSEC
Windows 7 and Windows Server 2008 R2 introduce support for Domain Name System Security Extensions (DNSSEC), a set of specifications for securing certain kinds of information provided by the Domain Name System (DNS) as used on Internet Protocol (IP) networks. DNSSEC employs digital signatures to ensure the authenticity of DNS data received from a DNS server, which protect against DNS cache poisoning attacks.
Management features
Windows 7 contains Windows PowerShell 2.0 out-of-the-box, which is also available as a download to install on older platforms:
Windows Troubleshooting Platform
Windows PowerShell Integrated Scripting Environment
PowerShell Remoting
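With PowerShell Remoting, for example, a command such as Invoke-Command -ComputerName <name> -ScriptBlock { Get-Process } runs the script block on the named remote machine over WinRM.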
Other new management features include:
AppLocker (a set of Group Policy settings that evolved from Software Restriction Policies, to restrict which applications can run on a corporate network, including the ability to restrict based on the application's version number or publisher)
Group Policy Preferences (also available as a download for Windows XP and Windows Vista).
The Windows Automation API (also available as a download for Windows XP and Windows Vista).
Upgraded components
Windows 7 includes Internet Explorer 8, .NET Framework 3.5 SP1, Internet Information Services (IIS) 7.5, Windows Installer 5.0 and a standalone XPS Viewer. Paint, Calculator, Resource Monitor, on-screen keyboard, and WordPad have also been updated.
Paint and WordPad feature a Ribbon interface similar to the one introduced in Office 2007, with both sporting several new features. WordPad supports Office Open XML and ODF file formats.
Calculator has been rewritten, with multiline capabilities including Programmer and Statistics modes, unit conversion, and date calculations. Calculator was also given a graphical facelift, the first since Windows 95 in 1995 and Windows NT 4.0 in 1996.
Sticky Notes of Windows XP Tablet PC Edition 2002 and the similar Sticky Notes Gadget introduced in Windows Vista have been replaced with a new Sticky Notes application that supports new Windows 7 taskbar features — a thumbnail preview of a stack representing all minimized notes, and Jump Lists on the taskbar and Start menu to create a New Note — and full-text-based search in the Windows Shell through an IFilter and protocol handler for all notes. Real-time stylus (both pen and touch input) is also supported.
Resource Monitor includes an improved RAM usage display and supports display of TCP/IP ports being listened to, filtering processes using networking, filtering processes with disk activity and listing and searching process handles (e.g. files used by a process) and loaded modules (files required by an executable file, e.g. DLL files).
Microsoft Magnifier, an accessibility utility for low-vision users, has been dramatically improved. Magnifier now supports a full-screen zoom feature, whereas previous Windows versions attached the Magnifier to the top of the screen in a dock layout. The new full-screen feature is enabled by default; however, it requires Windows Aero. If Windows is set to the Windows 7 Basic, Windows Classic, or High Contrast themes, or if Magnifier is set to use a docked window instead of full screen, Magnifier functions as it did in Windows Vista and earlier.
Windows Installer 5.0 supports installing and configuring Windows Services, and provides developers with more control over setting permissions during software installation. Neither of these features will be available for prior versions of Windows; custom actions to support these features will continue to be required for Windows Installer packages that need to implement these features.
Other features
Windows 7 improves the Tablet PC Input Panel to make faster corrections using new gestures, supports text prediction in the soft keyboard and introduces a new Math Input Panel for inputting math into programs that support MathML. It recognizes handwritten math expressions and formulas. Additional language support for handwriting recognition can be gained by installing the respective MUI pack for that language (also called language pack).
Windows 7 introduces a new Problem Steps Recorder tool that enables users to record their interaction with software for analysis and support. The feature can be used to replicate a problem to show support when and where a problem occurred.
As opposed to the mostly blank start-up screen in Windows Vista, Windows 7's start-up screen consists of an animation featuring four colored light balls (one red, one yellow, one green, and one blue). They twirl around for a few seconds and then merge to form a glowing Windows logo. This only occurs on displays with a vertical resolution of 768 pixels or higher, as the animation is rendered at 1024x768. Any screen with a resolution below this displays the same startup screen that Vista used, which can also be forced to be displayed by manually editing BCD settings.
The Starter Edition of Windows 7 can run an unlimited number of applications, compared to only 3 in Windows Vista Starter. Microsoft had initially intended to ship Windows 7 Starter Edition with this limitation, but announced after the release of the Release Candidate that this restriction would not be imposed in the final release.
For developers, Windows 7 includes a new networking API with support for building SOAP-based web services in native code (as opposed to .NET-based WCF web services), new features to shorten application install times, reduced UAC prompts, simplified development of installation packages, and improved globalization support through a new Extended Linguistic Services API.
If an application crashes twice in a row, Windows 7 will automatically attempt to apply a shim. If an application fails to install, a similar self-correcting fix is attempted; if that also fails, a tool launches that asks the user some questions about the application.
Windows 7 includes an optional TIFF IFilter that enables indexing of TIFF documents by reading them with optical character recognition (OCR), thus making their text content searchable. The TIFF IFilter supports the Adobe TIFF Revision 6.0 specification and four compression schemes: LZW, JPEG, CCITT v4, and CCITT v6.
The Windows Console now adheres to the current Windows theme instead of it being shown in the Windows Classic theme.
Games such as Internet Spades, Internet Backgammon and Internet Checkers, which were removed from Windows Vista, were brought back in Windows 7.
Users can disable many more Windows components than was possible in Windows Vista. The components which can now be disabled include Handwriting Recognition, Internet Explorer, Windows DVD Maker, Windows Fax and Scan, Windows Gadget Platform, Windows Media Center, Windows Media Player, Windows Search, and the XPS Viewer (with its services).
Windows XP Mode is a fully functioning copy of 32-bit Windows XP Professional SP3 running in a virtual machine in Windows Virtual PC (as opposed to Hyper-V) running on top of Windows 7. Through the use of the RDP protocol, it allows applications incompatible with Windows 7 to be run on the underlying Windows XP virtual machine, but still to appear to be part of the Windows 7 desktop, thereby sharing the native Start Menu of Windows 7 as well as participating in file type associations. It is not distributed with Windows 7 media, but is offered as a free download to users of the Professional, Enterprise and Ultimate editions from Microsoft's web site. Users of Home Premium who want Windows XP functionality on their systems can download Windows Virtual PC free of charge, but must provide their own licensed copy of Windows XP. XP Mode is intended for consumers rather than enterprises, as it offers no central management capabilities. Microsoft Enterprise Desktop Virtualization (Med-V) is available for the enterprise market.
Windows 7 has native support for Hyper-V virtual machines through the inclusion of VMBus integration drivers.
Adds support for AVCHD cameras and Universal Video Class 1.1.
Supports Protected Broadcast Driver Architecture (PBDA) for TV tuner cards, first implemented in Windows Media Center TV Pack 2008 for Windows Vista.
Multi-function devices and Device Containers: Prior to Windows 7, every device attached to the system was treated as a single functional end-point, known as a devnode, that has a set of capabilities and a "status". While this is appropriate for single-function devices (such as a keyboard or scanner), it does not accurately represent multi-function devices such as a combined printer, fax machine, and scanner, or webcams with a built-in microphone. In Windows 7, the drivers and status information for a multi-function device can be grouped together as a single "Device Container", which is presented to the user in the new "Devices and Printers" Control Panel as a single unit. This capability is provided by a new Plug and Play property, ContainerID, which is a Globally Unique Identifier that is different for every instance of a physical device. The Container ID can be embedded within the device by the manufacturer, or created by Windows and associated with each devnode when it is first connected to the computer. In order to ensure the uniqueness of the generated Container ID, Windows will attempt to use information unique to the device, such as a MAC address or USB serial number. Devices connected to the computer via USB, IEEE 1394 (FireWire), eSATA, PCI Express, Bluetooth, and Windows Rally's PnP-X support can make use of Device Containers.
Windows 7 also contains a new FireWire (IEEE 1394) stack that fully supports IEEE 1394b with S800, S1600 and S3200 data rates.
Windows 7 now offers the ability to join a domain offline.
Service Control Manager in conjunction with the Windows Task Scheduler supports trigger-start services.
See also
References
External links
What's New in Windows 7 for IT Pros (RC)
Windows 7 Support
Windows 7
Windows 7 | Features new to Windows 7 | Technology | 11,363 |
248,117 | https://en.wikipedia.org/wiki/Huge%20cardinal | In mathematics, a cardinal number $\kappa$ is called huge if there exists an elementary embedding $j : V \to M$ from $V$ into a transitive inner model $M$ with critical point $\kappa$ and ${}^{j(\kappa)}M \subseteq M$.
Here, ${}^{\alpha}M$ is the class of all sequences of length $\alpha$ whose elements are in $M$.
Huge cardinals were introduced by Kenneth Kunen (1978).
Variants
In what follows, $j^n$ refers to the $n$-th iterate of the elementary embedding $j$, that is, $j$ composed with itself $n$ times, for a finite ordinal $n$. Also, ${}^{<\alpha}M$ is the class of all sequences of length less than $\alpha$ whose elements are in $M$. Notice that for the "super" versions, $\gamma$ should be less than $j(\kappa)$, not $j^n(\kappa)$.
κ is almost n-huge if and only if there is $j : V \to M$ with critical point $\kappa$ and ${}^{<j^n(\kappa)}M \subseteq M$.
κ is super almost n-huge if and only if for every ordinal $\gamma$ there is $j : V \to M$ with critical point $\kappa$, $\gamma < j(\kappa)$, and ${}^{<j^n(\kappa)}M \subseteq M$.
κ is n-huge if and only if there is $j : V \to M$ with critical point $\kappa$ and ${}^{j^n(\kappa)}M \subseteq M$.
κ is super n-huge if and only if for every ordinal $\gamma$ there is $j : V \to M$ with critical point $\kappa$, $\gamma < j(\kappa)$, and ${}^{j^n(\kappa)}M \subseteq M$.
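For readability, the four variants can be restated side by side in display form (a consolidation of the definitions above, with $j : V \to M$ ranging over elementary embeddings with critical point $\kappa$):

```latex
\begin{align*}
\kappa \text{ is almost } n\text{-huge}
  &\iff \exists j \colon {}^{<j^n(\kappa)}M \subseteq M\\
\kappa \text{ is super almost } n\text{-huge}
  &\iff \forall \gamma\, \exists j \colon \gamma < j(\kappa)
        \text{ and } {}^{<j^n(\kappa)}M \subseteq M\\
\kappa \text{ is } n\text{-huge}
  &\iff \exists j \colon {}^{j^n(\kappa)}M \subseteq M\\
\kappa \text{ is super } n\text{-huge}
  &\iff \forall \gamma\, \exists j \colon \gamma < j(\kappa)
        \text{ and } {}^{j^n(\kappa)}M \subseteq M
\end{align*}
```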
Notice that 0-huge is the same as measurable cardinal; and 1-huge is the same as huge. A cardinal satisfying one of the rank-into-rank axioms is $n$-huge for all finite $n$.
The existence of an almost huge cardinal implies that Vopěnka's principle is consistent; more precisely any almost huge cardinal is also a Vopěnka cardinal.
Kanamori, Reinhardt, and Solovay defined seven large cardinal properties between extendibility and hugeness in strength, named $A_2$ through $A_7$, and a property $A_6^*$. The additional property $A_1$ is equivalent to "$\kappa$ is huge", and $A_3$ is equivalent to "$\kappa$ is $\lambda$-supercompact for all $\lambda < j(\kappa)$". Corazza introduced the property $A_{3.5}$, lying strictly between $A_3$ and $A_4$.
Consistency strength
The cardinals are arranged in order of increasing consistency strength as follows:
almost $n$-huge
super almost $n$-huge
$n$-huge
super $n$-huge
almost $(n+1)$-huge
The consistency of a huge cardinal implies the consistency of a supercompact cardinal; nevertheless, the least huge cardinal is smaller than the least supercompact cardinal (assuming both exist).
ω-huge cardinals
One can try defining an $\omega$-huge cardinal $\kappa$ as one such that there is an elementary embedding $j : V \to M$ from $V$ into a transitive inner model $M$ with critical point $\kappa$ and ${}^{\lambda}M \subseteq M$, where $\lambda$ is the supremum of $j^n(\kappa)$ for positive integers $n$. However Kunen's inconsistency theorem shows that such cardinals are inconsistent in ZFC, though it is still open whether they are consistent in ZF. Instead an $\omega$-huge cardinal is defined as the critical point of an elementary embedding from some rank $V_{\lambda+1}$ to itself. This is closely related to the rank-into-rank axiom I1.
See also
List of large cardinal properties
The Dehornoy order on a braid group was motivated by properties of huge cardinals.
References
Kanamori, Akihiro (2003). The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings (2nd ed.). Springer.
Kunen, Kenneth (1978). "Saturated ideals". Journal of Symbolic Logic. 43 (1): 65–76.
Solovay, Robert M.; Reinhardt, William N.; Kanamori, Akihiro (1978). "Strong axioms of infinity and elementary embeddings". Annals of Mathematical Logic. 13 (1): 73–116. A copy of parts I and II of this article with corrections is available at the author's web page.
Large cardinals | Huge cardinal | Mathematics | 576 |
12,150,918 | https://en.wikipedia.org/wiki/Gliese%20317 | Gliese 317 is a small red dwarf star with two exoplanetary companions in the southern constellation of Pyxis. It is located at a distance of 49.6 light-years from the Sun based on parallax measurements, and is drifting further away with a radial velocity of +87.8 km/s. This star is too faint to be viewed with the naked eye, having an apparent visual magnitude of 11.98 and an absolute magnitude of 11.06.
This is an M-type main-sequence star with a stellar classification of M2.5V. Photometric calibrations and infrared spectroscopic measurements indicate that the star is enriched in heavy elements compared to the Sun. The star is estimated to be roughly five billion years old and has a low activity level for a star of its class. It has 42% of the mass and radius of the Sun and is spinning with a rotation period of 69 days. The star is radiating 2.2% of the Sun's luminosity from its photosphere at an effective temperature of 3,510 K.
Planetary system
In 2007, a jovian planet (designated Gliese 317 b) was announced to orbit the star. The planet orbits at about 95% of the distance between the Earth and the Sun. Despite this, its orbital period is about 1.9 years, due to the lower mass of the central M dwarf. Astrometric measurements of Gliese 317 provided a significant update to the distance, putting the star at 15.3 pc, which is 65% further out than previously assumed. Using mass-luminosity calibrations, the new distance implies the star is significantly more massive, and so are the planet candidates. The same astrometric measurements made it possible to constrain the orbital inclination and put an upper limit on the mass of Gliese 317 b (at the 98% confidence level) of 2.5 Jupiter masses.
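As a plausibility check, Kepler's third law in solar units (P in years, a in AU, M in solar masses) reproduces the quoted period; the 0.24 solar-mass value used below is the discovery-era mass estimate, an assumption for this sketch (the revised figure quoted above is 42% of the Sun's mass):

```python
import math

def orbital_period_years(a_au: float, m_star_solar: float) -> float:
    """Kepler's third law in solar units, neglecting the planet's mass."""
    return math.sqrt(a_au ** 3 / m_star_solar)

print(orbital_period_years(0.95, 0.24))  # ~1.9 years
```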
The second planet in the system was also confirmed with the additional new RV measurements, but the period and orbital parameters of Gliese 317 c were very uncertain (P > 2000 days). A stability analysis of this putative system suggests that the pair of gas giant planets are in a 4:1 mean motion resonance. The second planet, remote from its host star, is a good candidate for direct imaging. Revised elements of this companion were presented in 2020, demonstrating that it is a Jupiter analog.
See also
Gliese 649
Gliese 849
HD 108874
List of exoplanets discovered between 2000–2009 - Gliese 317 b
List of exoplanets discovered in 2020 - Gliese 317 c
References
External links
Extrasolar Planet Interactions by Rory Barnes & Richard Greenberg, Lunar and Planetary Lab, University of Arizona
M-type main-sequence stars
Planetary systems with two confirmed planets
Pyxis
0317 | Gliese 317 | Astronomy | 571 |
51,443,362 | https://en.wikipedia.org/wiki/Data%20augmentation | Data augmentation is a statistical technique which allows maximum likelihood estimation from incomplete data. Data augmentation has important applications in Bayesian analysis, and the technique is widely used in machine learning to reduce overfitting when training machine learning models, achieved by training models on several slightly-modified copies of existing data.
Synthetic oversampling techniques for traditional machine learning
Synthetic Minority Over-sampling Technique (SMOTE) is a method used to address imbalanced datasets in machine learning. In such datasets, the number of samples in different classes varies significantly, leading to biased model performance. For example, in a medical diagnosis dataset with 90 samples representing healthy individuals and only 10 samples representing individuals with a particular disease, traditional algorithms may struggle to accurately classify the minority class. SMOTE rebalances the dataset by generating synthetic samples for the minority class. For instance, if there are 100 samples in the majority class and 10 in the minority class, SMOTE creates synthetic samples by randomly selecting a minority class sample and its nearest neighbors, then generating new samples along the line segments joining the sample to those neighbors. This process increases the representation of the minority class, improving model performance.
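A minimal sketch of this interpolation step, assuming NumPy; it is illustrative only, not the reference SMOTE implementation (for that, see e.g. the imbalanced-learn library):

```python
import numpy as np

def smote_sketch(X_min: np.ndarray, n_new: int, k: int = 5, seed: int = 0) -> np.ndarray:
    """Interpolate n_new synthetic points between minority samples (rows of
    X_min) and their k nearest minority neighbors; requires len(X_min) > k."""
    rng = np.random.default_rng(seed)
    # Pairwise Euclidean distances within the minority class only.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                # a point is not its own neighbor
    neighbors = np.argsort(d, axis=1)[:, :k]   # k nearest neighbors per sample
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))           # random minority sample
        j = rng.choice(neighbors[i])           # one of its nearest neighbors
        lam = rng.random()                     # interpolation factor in [0, 1]
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)
```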
Data augmentation for image classification
When convolutional neural networks grew larger in the mid-1990s, there was a lack of data to use, especially considering that some part of the overall dataset should be spared for later testing. It was proposed to perturb existing data with affine transformations to create new examples with the same labels; these were complemented by so-called elastic distortions in 2003, and the technique was widely used as of the 2010s. Data augmentation can enhance CNN performance and acts as a countermeasure against CNN profiling attacks.
Data augmentation has become fundamental in image classification, enriching training dataset diversity to improve model generalization and performance. The evolution of this practice has introduced a broad spectrum of techniques, including geometric transformations, color space adjustments, and noise injection.
Geometric Transformations
Geometric transformations alter the spatial properties of images to simulate different perspectives, orientations, and scales. Common techniques include the following (a code sketch follows the list):
Rotation: Rotating images by a specified degree to help models recognize objects at various angles.
Flipping: Reflecting images horizontally or vertically to introduce variability in orientation.
Cropping: Removing sections of the image to focus on particular features or simulate closer views.
Translation: Shifting images in different directions to teach models positional invariance.
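A minimal sketch of the four transformations above, assuming Pillow; all ranges and probabilities are illustrative choices, not standard values:

```python
from PIL import Image, ImageOps
import random

def augment_geometric(img: Image.Image) -> Image.Image:
    img = img.rotate(random.uniform(-15, 15))              # small random rotation
    if random.random() < 0.5:
        img = ImageOps.mirror(img)                         # horizontal flip
    w, h = img.size                                        # random crop, resized back
    dx, dy = random.randint(0, w // 10), random.randint(0, h // 10)
    img = img.crop((dx, dy, w - dx, h - dy)).resize((w, h))
    canvas = Image.new(img.mode, (w, h))                   # translation: paste at
    canvas.paste(img, (random.randint(-w // 20, w // 20),  # a random offset on a
                       random.randint(-h // 20, h // 20))) # same-size canvas
    return canvas
```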
Color Space Transformations
Color space transformations modify the color properties of images, addressing variations in lighting, color saturation, and contrast. Techniques include the following (see the sketch after this list):
Brightness Adjustment: Varying the image's brightness to simulate different lighting conditions.
Contrast Adjustment: Changing the contrast to help models recognize objects under various clarity levels.
Saturation Adjustment: Altering saturation to prepare models for images with diverse color intensities.
Color Jittering: Randomly adjusting brightness, contrast, saturation, and hue to introduce color variability.
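A minimal color-jitter sketch using Pillow's ImageEnhance module (which has no hue control, so hue jitter is omitted); the adjustment ranges are illustrative:

```python
from PIL import Image, ImageEnhance
import random

def augment_color(img: Image.Image) -> Image.Image:
    img = ImageEnhance.Brightness(img).enhance(random.uniform(0.8, 1.2))
    img = ImageEnhance.Contrast(img).enhance(random.uniform(0.8, 1.2))
    img = ImageEnhance.Color(img).enhance(random.uniform(0.8, 1.2))  # saturation
    return img
```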
Noise Injection
Injecting noise into images simulates real-world imperfections, teaching models to ignore irrelevant variations. Techniques involve the following (see the sketch after this list):
Gaussian Noise: Adding Gaussian noise mimics sensor noise or graininess.
Salt and Pepper Noise: Introducing black or white pixels at random simulates sensor dust or dead pixels.
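A minimal sketch of both noise types on a float image array in [0, 1]; the noise levels are illustrative:

```python
import numpy as np

def gaussian_noise(img: np.ndarray, sigma: float = 0.05, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def salt_and_pepper(img: np.ndarray, p: float = 0.02, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < p / 2] = 0.0          # pepper: random black pixels
    out[mask > 1 - p / 2] = 1.0      # salt: random white pixels
    return out
```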
Data augmentation for signal processing
Residual or block bootstrap can be used for time series augmentation.
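A minimal moving-block bootstrap sketch (the block length and the use of NumPy are illustrative assumptions); resampling whole blocks preserves the short-range temporal dependence that plain element-wise resampling would destroy:

```python
import numpy as np

def block_bootstrap(x: np.ndarray, block_len: int = 20, seed: int = 0) -> np.ndarray:
    """Resample a series of the same length from random contiguous blocks;
    requires len(x) >= block_len."""
    rng = np.random.default_rng(seed)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    blocks = [x[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:n]
```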
Biological signals
Synthetic data augmentation is of paramount importance for machine learning classification, particularly for biological data, which tend to be high dimensional and scarce. The applications of robotic control and augmentation in disabled and able-bodied subjects still rely mainly on subject-specific analyses. Data scarcity is notable in signal processing problems, such as for Parkinson's disease electromyography signals, which are difficult to source. Zanini et al. noted that it is possible to use a generative adversarial network (in particular, a DCGAN) to perform style transfer in order to generate synthetic electromyographic signals that correspond to those exhibited by sufferers of Parkinson's disease.
The approaches are also important in electroencephalography (brainwaves). Wang et al. explored the idea of using deep convolutional neural networks for EEG-based emotion recognition; their results show that emotion recognition improved when data augmentation was used.
A common approach is to generate synthetic signals by re-arranging components of real data. Lotte proposed a method of "Artificial Trial Generation Based on Analogy" in which three data examples x1, x2, x3 provide the basis, and an artificial example is formed which is to x3 what x2 is to x1. A transformation is applied to x1 to make it more similar to x2; the same transformation is then applied to x3, which generates the artificial example. This approach was shown to improve the performance of a linear discriminant analysis classifier on three different datasets.
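A schematic illustration of the analogy principle only; Lotte's actual transformation estimation is more elaborate, and the simple shift used here is an assumption for the sketch:

```python
import numpy as np

def analogy_trial(x1: np.ndarray, x2: np.ndarray, x3: np.ndarray) -> np.ndarray:
    shift = x2 - x1    # the transformation carrying x1 exactly onto x2
    return x3 + shift  # synthetic trial: is to x3 what x2 is to x1
```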
Current research shows that great impact can be derived from relatively simple techniques. For example, Freer observed that introducing noise into gathered data to form additional data points improved the learning ability of several models which otherwise performed relatively poorly. Tsinganos et al. studied the approaches of magnitude warping, wavelet decomposition, and synthetic surface EMG models (generative approaches) for hand gesture recognition, finding classification performance increases of up to +16% when augmented data was introduced during training. More recently, data augmentation studies have begun to focus on the field of deep learning, more specifically on the ability of generative models to create artificial data which is then introduced during the classification model training process. In 2018, Luo et al. observed that useful EEG signal data could be generated by conditional Wasserstein generative adversarial networks (GANs), which were then introduced to the training set in a classical train-test learning framework. The authors found classification performance was improved when such techniques were introduced.
Mechanical signals
The prediction of mechanical signals based on data augmentation has enabled a new generation of technological innovations, for example in new-energy dispatch, the 5G communication field, and robotics control engineering. In 2022, Yang et al. integrated constraints, optimization and control into a deep network framework based on data augmentation and data pruning with spatio-temporal data correlation, improving the interpretability, safety and controllability of deep learning in real industrial projects through explicit mathematical programming equations and analytical solutions.
See also
Oversampling and undersampling in data analysis
Surrogate data
Generative adversarial network
Variational autoencoder
Data pre-processing
Convolutional neural network
Regularization (mathematics)
Data preparation
Data fusion
References
Machine learning | Data augmentation | Engineering | 1,328 |
25,801,354 | https://en.wikipedia.org/wiki/Masayoshi%20Tomizuka | Masayoshi Tomizuka is a professor in Control Theory in Department of Mechanical Engineering, University of California, Berkeley. He holds the Cheryl and John Neerhout, Jr., Distinguished Professorship Chair. Tomizuka received his B.S. and M.S. degrees in mechanical engineering from Keio University, Tokyo, Japan in 1968 and 1970, and his Ph.D. in mechanical engineering from the Massachusetts Institute of Technology in February 1974. He was elected to the National Academy of Engineering in 2022.
Career
Tomizuka joined the faculty of the Department of Mechanical Engineering at the University of California, Berkeley in 1974. He served as vice chair of mechanical engineering in charge of instruction from December 1989 to December 1991, and as vice chair in charge of graduate studies from July 1995 to December 1996. Since June 11, 2009, he has been executive associate dean for the College of Engineering at UC Berkeley. He also served as program director of the Dynamic Systems and Control Program at the National Science Foundation from Sept. 2002 to Dec. 2004.
Research interests
Tomizuka's current research interests include optimal and adaptive control, digital control, signal processing, motion control, and control problems related to robotics, manufacturing, data storage devices, vehicles and human-machine systems.
Society activities
Tomizuka has been and is an active member of the Dynamic Systems and Control Division (DSCD) of the American Society of Mechanical Engineers (ASME). He served as chairman of the executive committee of the Division (1986–87), Technical Editor of the ASME Journal of Dynamic Systems, Measurement and Control, J-DSMC (1988–93), and editor-in-chief of the IEEE/ASME Transactions on Mechatronics (1997–99). He served as president of the American Automatic Control Council (1998–99). He chairs the IFAC Technical Committee on Mechatronic Systems. He is a Fellow of the ASME, the Institute of Electrical and Electronics Engineers (IEEE) and the Society of Manufacturing Engineers. He is the recipient of the J-DSMC Best Paper Award (1995), the DSCD Outstanding Investigator Award (1996), the Pi Tau Sigma-ASME Charles Russ Richards Memorial Award (1997), the DSCD Leadership Award (2000), the Rufus Oldenburger Medal (2002) and the John R. Ragazzini Award (2006). The Oldenburger Medal was awarded to him for his seminal contributions in the area of adaptive control, preview control and zero-phase control.
References
External links
Prof. Tomizuka's Personal Page
Prof. Tomizuka's Research Lab
Prof. Tomizuka's Curriculum Vitae
Selected Publications of Prof. Tomizuka
Control theorists
American academics of Japanese descent
UC Berkeley College of Engineering faculty
Japanese mechanical engineers
1946 births
Living people
Fellows of the American Society of Mechanical Engineers
Fellows of the IEEE
Engineers from Tokyo
Academics from Tokyo
Japanese emigrants to the United States | Masayoshi Tomizuka | Engineering | 601 |
45,031,327 | https://en.wikipedia.org/wiki/Integrated%20Water%20Flow%20Model | Integrated Water Flow Model (IWFM) is a computer program for simulating water flow through the integrated land surface, surface water and groundwater flow systems. It is a rewrite of the abandoned software IGSM, which was found to have several programming errors. The IWFM programs and source code are freely available. IWFM is written in Fortran, and can be compiled and run on Microsoft Windows, Linux and Unix operating systems. The IWFM source code is released under the GNU General Public License.
Groundwater flow is simulated using the finite element method. Surface water flow can be simulated as a simple one-dimensional flow-through network or with the kinematic wave method. IWFM input data sets incorporate a time stamp, allowing users to run a model for a specified time period without editing the input files.
One of the most useful features of IWFM is the internal calculation of water demands for each land use type. IWFM simulates four land use classes: agricultural, urban, native vegetation, and riparian vegetation. Land use areas are delineated as a time series, with corresponding evapotranspiration rates and water management parameters. Each time step, the land use process applies precipitation, calculates infiltration and runoff, calculates water demands, and determines what portion of the demands are not met by soil moisture. For agricultural and urban land use classes, IWFM then applies surface water and groundwater at specified rates, and optionally adjusts surface water and groundwater to exactly meet water demands. This automatic adjustment feature is especially useful for calculating unmeasured flow components (such as groundwater withdrawals) or for simulating proposed future scenarios such as studying the impacts of potential climate change.
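A schematic sketch of one such land-surface time step; this is not IWFM's Fortran code or API, and all names, units, and the simple bookkeeping are illustrative assumptions:

```python
def land_surface_step(classes, precip, runoff_frac=0.3, adjust=True):
    for c in classes:                          # one dict per land-use class
        infiltration = (1.0 - runoff_frac) * precip * c["area"]
        c["soil_moisture"] += infiltration     # precipitation partitioned
        demand = c["et_rate"] * c["area"]      # ET-driven water demand
        unmet = max(0.0, demand - c["soil_moisture"])
        supplied = c["surface_water"] + c["groundwater"]
        if adjust and supplied < unmet:        # optional auto-adjustment, e.g.
            c["groundwater"] += unmet - supplied   # unmeasured extra pumping
            supplied = unmet
        c["soil_moisture"] = max(0.0, c["soil_moisture"] + supplied - demand)

fields = [{"area": 100.0, "et_rate": 0.004, "soil_moisture": 0.1,
           "surface_water": 0.2, "groundwater": 0.0}]
land_surface_step(fields, precip=0.002)
```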
In IWFM, the land surface, surface water and groundwater flow domains are simulated as separate processes, compiled into individual dynamic link libraries. The processes are linked by water flow terms, maintain conservation of mass and momentum between processes, and are solved simultaneously. This allows each IWFM process to be run independently as a stand-alone model, or to be linked to other programs. This functionality has been used to create a Microsoft Excel Add-in to create workbooks from IWFM output files. The IWFM land surface process has been compiled into a stand-alone program called the IWFM Demand Calculator (IDC). The groundwater process is linked to the Water Resource Integrated Modeling System (WRIMS) modeling system and used in the water resources optimization model CalSim. This feature allows other models to be easily linked with IWFM, to either enhance the capabilities of the target model (for example, by adding groundwater flow to a land surface-surface water model) or to enhance the capabilities of IWFM (for example, linking an economic model to IWFM to dynamically change the crop mix based on the depth to groundwater, as the cost of pumping increases with depth to water).
Notable models developed with IWFM include the California Central Valley Groundwater-Surface Water Simulation Model (C2VSim), a model of the Walla-Walla Basin in Washington and Oregon, USA, a model of the Butte Basin, CA, USA, and several unpublished models. IWFM has also been peer reviewed.
References
Computer programming
Water supply | Integrated Water Flow Model | Chemistry,Technology,Engineering,Environmental_science | 674 |
5,550,192 | https://en.wikipedia.org/wiki/Carlos%20J.%20Finlay%20Prize%20for%20Microbiology | The Carlos J. Finlay Prize is a biennial scientific prize sponsored by the Government of Cuba and awarded since 1980 by the United Nations Educational, Scientific and Cultural Organization (UNESCO) to people or organizations for their outstanding contributions to microbiology (including immunology, molecular biology, genetics, etc.) and its applications. Winners receive a grant of $5,000 USD donated by the Government of Cuba and an Albert Einstein Silver Medal from UNESCO.
The Prize is awarded in odd years (to coincide with UNESCO's General Conference) and is named after Carlos Juan Finlay (1833 – 1915), a Cuban physician and microbiologist widely known for his pioneering discoveries in the field of yellow fever.
Winners
Source: UNESCO
1980 - Roger Y. Stanier (Canada)
1983 - César Milstein, FRS (Argentina, United Kingdom)
1985 - and Ruth Nussenzweig (Brazil)
1987 - Hélio Gelli Pereira (Brazil) and (Sweden)
1989 - Georges Cohen (France) and Walter Fiers (Belgium)
1991 - Margarita Salas and (Spain) and Jean-Marie Ghuysen (Belgium)
1993 - James Michael Lynch (UK), James Tiedje (USA), Johannes Antonie Van Veen (Netherlands)
1995 - Jan Balzarini (Belgium) and Pascale Cossart (France)
1996 - Etienne Pays (Belgium) and Sheikh Riazzudin (Pakistan)
1999 - (Hungary)
2001 - Susana López Charreton and Carlos Arias Ortiz (Mexico)
2003 - Antonio Peña Díaz (Mexico)
2005 - Khatijah Yusoff (Malaysia)
2015 - Yoshihiro Kawaoka (Japan)
2017 - Samir Kumar Saha (Bangladesh) and Shahida Hasnain (Pakistan)
2020 - Kenya Honda (Japan)
2023 - Dilfuza Egamberdieva (Uzbekistan)
See also
List of biology awards
References
Biology awards
UNESCO awards
Awards established in 1980 | Carlos J. Finlay Prize for Microbiology | Technology | 392 |
72,910,457 | https://en.wikipedia.org/wiki/Highway%20dimension | The highway dimension is a graph parameter modelling transportation networks, such as road networks or public transportation networks. It was first formally defined by Abraham et al. based on the observation by Bast et al. that any road network has a sparse set of "transit nodes", such that driving from a point A to a sufficiently far away point B along the shortest route will always pass through one of these transit nodes. It has also been proposed that the highway dimension captures the properties of public transportation networks well (at least according to definitions 1 and 2 below), given that longer routes using buses, trains, or airplanes will typically be serviced by larger transit hubs (stations and airports). This relates to the spoke–hub distribution paradigm in transport topology optimization.
Definitions
Several definitions of the highway dimension exist. Each definition of the highway dimension uses a hitting set of a certain set of shortest paths: given a graph $G=(V,E)$ with edge lengths $\ell$, let $\mathcal{P}$ contain every vertex set $P \subseteq V$ such that $P$ induces a shortest path between some vertex pair of $G$, according to the edge lengths $\ell$. To measure the highway dimension we determine the "sparseness" of a hitting set of a subset of $\mathcal{P}$ in a local area of the graph, for which we define a ball $B_v(r)$ of radius $r$ around a vertex $v$ to be the set of vertices at distance at most $r$ from $v$ in $G$ according to the edge lengths $\ell$. In the context of low highway dimension graphs, the vertices of a hitting set for the shortest paths are called hubs.
Definition 1
The original definition of the highway dimension measures the sparseness of a hub set of shortest paths contained within a ball of radius $4r$: The highway dimension of $G$ is the smallest integer $h$ such that for any radius $r > 0$ and any node $v \in V$ there is a hitting set $H \subseteq B_v(4r)$ of size at most $h$ for all shortest paths $\pi$ of length more than $r$ for which $\pi \subseteq B_v(4r)$. A variant of this definition uses balls of radius $cr$ for some constant $c$. Choosing a constant $c$ greater than 4 implies additional structural properties of graphs of bounded highway dimension, which can be exploited algorithmically.
Definition 2
A subsequent definition of the highway dimension measures the sparseness of a hub set of shortest paths intersecting a ball of radius $2r$: The highway dimension of $G$ is the smallest integer $h$ such that for any radius $r > 0$ and any node $v \in V$ there is a hitting set $H$ of size at most $h$ for all shortest paths $\pi$ of length more than $r$ and at most $2r$ for which $\pi \cap B_v(2r) \neq \emptyset$. This definition is weaker than the first, i.e., every graph of highway dimension $h$ according to definition 1 also has highway dimension at most $h$ according to definition 2, but not vice versa.
Definition 3
For the third definition of the highway dimension we introduce the notion of a "witness path": for a given radius $r > 0$, a shortest path $\pi$ has an $r$-witness path $\pi'$ if $\pi'$ has length more than $r$ and $\pi'$ can be obtained from $\pi$ by adding at most one vertex to either end of $\pi$ (i.e., $\pi'$ has at most 2 vertices more than $\pi$ and these additional vertices are incident to $\pi$). Note that $\pi$ may have length at most $r$, but it is contained in $\pi'$, which has length more than $r$. The highway dimension of $G$ is the smallest integer $h$ such that for any radius $r > 0$ and any node $v \in V$ there is a hitting set $H$ of size at most $h$ for all shortest paths $\pi$ that have an $r$-witness path $\pi'$ with $\pi' \cap B_v(2r) \neq \emptyset$. This definition is stronger than the above, i.e., every graph of highway dimension $h$ according to definition 3 also has highway dimension at most $h$ according to definition 2, but the highway dimension according to definition 3 cannot be bounded in terms of the highway dimension according to definition 2.
Shortest path cover
A notion closely related to the highway dimension is that of a shortest path cover, where the order of the quantifiers in the definition is reversed, i.e., instead of a hub set for each ball, there is one hub set $C$, which is sparse in every ball: Given a radius $r > 0$, an $r$-shortest path cover $C \subseteq V$ of $G$ is a hitting set for all shortest paths in $G$ of length more than $r$ and at most $2r$. The $r$-shortest path cover $C$ is locally $h$-sparse if for any node $v \in V$ the ball $B_v(2r)$ contains at most $h$ vertices of $C$, i.e., $|C \cap B_v(2r)| \leq h$. Every graph of bounded highway dimension (according to any of the above definitions) also has a locally sparse $r$-shortest path cover for every $r > 0$, with sparseness bounded in terms of the highway dimension, but not vice versa. For algorithmic purposes it is often more convenient to work with one hitting set for each radius $r$, which makes shortest path covers an important tool for algorithms on graphs of bounded highway dimension.
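A brute-force sketch of building an r-shortest path cover greedily (this is not an algorithm from the cited papers): enumerate shortest paths of length in (r, 2r] and repeatedly add the vertex lying on the most uncovered paths, which yields a logarithmic-factor approximation of a minimum hitting set. NetworkX and the edge attribute name "length" are assumptions here:

```python
import networkx as nx
from collections import Counter

def greedy_spc(G: nx.Graph, r: float) -> set:
    paths = set()
    for u in G.nodes:  # one shortest path per pair, as found by Dijkstra
        dist, path = nx.single_source_dijkstra(G, u, weight="length")
        for v, d in dist.items():
            if r < d <= 2 * r:
                paths.add(frozenset(path[v]))
    paths = list(paths)
    cover = set()
    while paths:  # greedy hitting set
        counts = Counter(x for p in paths for x in p)
        best = counts.most_common(1)[0][0]   # vertex on most uncovered paths
        cover.add(best)
        paths = [p for p in paths if best not in p]
    return cover
```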
Relation to other graph parameters
The highway dimension combines structural and metric properties of graphs, and is thus incomparable to common structural and metric parameters. In particular, for any graph it is possible to choose edge lengths such that the highway dimension is constant, while at the same time some graphs with very simple structure, such as trees, can have arbitrarily large highway dimension. This implies that the highway dimension parameter is incomparable to structural graph parameters such as treewidth, cliquewidth, or minor-freeness. On the other hand, a star with unit edge lengths has highway dimension $1$ (according to definitions 1 and 2 above) but unbounded doubling dimension, while a grid graph with unit edge lengths has constant doubling dimension but highway dimension $\Omega(\sqrt{n})$. This means that the highway dimension according to definitions 1 and 2 is also incomparable to the doubling dimension. Any graph of bounded highway dimension according to definition 3 above also has bounded doubling dimension.
Computing the highway dimension
Computing the highway dimension of a given graph is NP-hard. Assuming that all shortest paths are unique (which can be done by slightly perturbing the edge lengths), an approximation within a logarithmic factor can be computed in polynomial time, provided that the highway dimension of the graph is small (polylogarithmic in the graph size). It is not known whether computing the highway dimension is fixed-parameter tractable (FPT); however, there are hardness results indicating that this is likely not the case. In particular, these results imply that, under standard complexity assumptions, an FPT algorithm can neither compute the highway dimension bottom-up (from the smallest value to the largest) nor top-down (from the largest value to the smallest).
Algorithms exploiting the highway dimension
Shortest path algorithms
Some heuristics to compute shortest paths, such as the Reach, Contraction Hierarchies, Transit Nodes, and Hub Labelling algorithms, can be formally proven to run faster than other shortest path algorithms (e.g. Dijkstra's algorithm) on graphs of bounded highway dimension according to definition 3 above.
Approximations for NP-hard problems
A crucial property that can be exploited algorithmically for graphs of bounded highway dimension is that vertices that are far from the hubs of a shortest path cover are clustered into so-called towns: Given a radius $r > 0$, an $r$-shortest path cover $C$ of $G$, and a vertex $v$ at distance more than $2r$ from $C$, the set of vertices at distance at most $r$ from $v$ according to the edge lengths is called a town. The set of all vertices not lying in any town is called the sprawl. It can be shown that the diameter of every town is at most $r$, while the distance between a town and any vertex outside it is more than $r$. Furthermore, the distance from any vertex in the sprawl to some hub of $C$ is at most $2r$.
Based on this structure, Feldmann et al. defined the towns decomposition, which recursively decomposes the sprawl into towns for exponentially growing values of $r$. For a graph of bounded highway dimension (according to definition 1 above) this decomposition can be used to find a metric embedding into a graph of bounded treewidth that preserves distances between vertices arbitrarily well. Due to this embedding it is possible to obtain quasi-polynomial time approximation schemes (QPTASs) for various problems such as Travelling Salesman (TSP), Steiner Tree, k-Median, and Facility Location.
For clustering problems such as k-Median, k-Means, and Facility Location, faster polynomial-time approximation schemes (PTASs) are known for graphs of bounded highway dimension according to definition 1 above. For network design problems such as TSP and Steiner Tree it is not known how to obtain a PTAS.
For the k-Center problem, it is not known whether a PTAS exists for graphs of bounded highway dimension; however, it is NP-hard to compute a $(2-\varepsilon)$-approximation on graphs of highway dimension $O(\log^2 n)$, which implies that any $(2-\varepsilon)$-approximation algorithm needs at least double exponential time in the highway dimension, unless P=NP. On the other hand, it was shown that a parameterized $3/2$-approximation algorithm with a runtime of $2^{O(kh\log h)}\cdot n^{O(1)}$ exists for k-Center, where $h$ is the highway dimension according to any of the above definitions. When using definition 1 above, a parameterized approximation scheme (PAS) is known to exist when using $k$ and $h$ as parameters.
For the Capacitated k-Center problem there is no PAS parameterized by $k$ and the highway dimension $h$, unless FPT=W[1]. This is notable, since typically (i.e., for all the problems mentioned above), if there is an approximation scheme for metrics of low doubling dimension, then there is also one for graphs of bounded highway dimension. But for Capacitated k-Center there is a PAS parameterized by $k$ and the doubling dimension.
External links
Video on "Capacitated k-Center in Low Doubling and Highway Dimension" given by Tung Ahn Vu, 2022.
Video on "Algorithms for Hard Problems on Low Highway Dimension Graphs" given by Andreas Emil Feldmann at ICERM, Brown University, Providence, US, May 2019.
Video on "A (1 + ε)-Embedding of Low Highway Dimension Graphs into Bounded Treewidth Graphs" given by Andreas Emil Feldmann at Hausdorff Institut, Bonn, DE, 2015.
Video on "Highway Dimension: From Practice to Theory and Back" given by Andrew Goldberg
References
Graph theory objects | Highway dimension | Mathematics | 1,931 |
18,984,059 | https://en.wikipedia.org/wiki/Somatorelin | Somatorelin is a diagnostic agent for determining growth hormone deficiency. It is a recombinant version of growth hormone-releasing hormone (GHRH).
Somatorelin has been used to study hormone deficiency (particularly growth hormone deficiency), cognitive impairment, sleep disorders, and aging.
See also
List of growth hormone secretagogues
References
Peptides
12,530,220 | https://en.wikipedia.org/wiki/Super%20Optimal%20Broth | Super Optimal Broth (SOB medium) is a nutrient-rich bacterial growth medium used for microbiological culture, generally of Escherichia coli. This nutrient-rich microbial broth contains peptides, amino acids, water soluble vitamins and glucose in a low-salt formulation. It was developed by Douglas Hanahan in 1983 and is an adjusted version of the commonly used LB medium (lysogeny broth). Growth of E. coli in SOB or SOC medium results in higher transformation efficiencies of plasmids.
SOC medium can also be used to regenerate Klebsiella oxytoca strains for improved transformation efficiency.
Super Optimal broth with Catabolite repression (SOC) is SOB with glucose added to the culture medium as the preferred (i.e., rapidly metabolizable) carbon and energy source.
Composition
Figures in parentheses are the masses of reagents required to prepare 1 liter of medium.
SOB
2 % w/v tryptone (tryptic peptides from the casein hydrolysis by trypsin) (20 g)
0.5 % w/v yeast extract (5 g)
8.56 mM NaCl (0.5 g) or 10 mM NaCl (0.584 g)
2.5 mM KCl (0.186 g)
Doubly distilled H2O to 1000 mL
10 mM MgCl2 (anhydrous: 0.952 g; hexahydrate: 2.033 g) and 10 mM MgSO4 (anhydrous:1.204 g; heptahydrate: 2.465 g)
SOC
In addition to the SOB contents, SOC also contains 20 mM glucose (3.603 g).
Alternatively, SOB and SOC can be prepared by adding small amounts of concentrated magnesium chloride and glucose solutions to pre-prepared SOB.
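The gram figures in the recipe follow from mass = molarity × volume × molecular weight; a minimal sketch reproducing them (the molecular weights are standard values):

```python
def grams(mM: float, mw: float, volume_L: float = 1.0) -> float:
    """Reagent mass in g for a target concentration in mM."""
    return mM / 1000.0 * mw * volume_L

print(round(grams(10,  58.44), 3))   # NaCl            -> 0.584 g
print(round(grams(2.5, 74.55), 3))   # KCl             -> 0.186 g
print(round(grams(10,  95.21), 3))   # MgCl2 (anhydr.) -> 0.952 g
print(round(grams(20, 180.16), 3))   # glucose (SOC)   -> 3.603 g
```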
pH adjustment
For maximum effectiveness, SOB/SOC media should have its pH adjusted to 7.0 by adding concentrated sodium hydroxide. The original literature states that the pH of the final medium should be between 6.8 and 7.0.
Sterilization
Finally, the SOB medium should be autoclaved at 121 °C to ensure sterility. The components of SOC medium should not be autoclaved together because at elevated temperature glucose can react with the tryptic peptides (see Maillard reaction), compromising the quality of the preparation. SOB and magnesium and glucose additive solutions can be autoclaved separately and mixed afterwards to the final concentrations. Complete SOC can be filter sterilized through a 0.22 μm filter.
References
Microbiological media
American inventions | Super Optimal Broth | Biology | 549 |
16,288,924 | https://en.wikipedia.org/wiki/Hjelmslev%27s%20theorem | In geometry, Hjelmslev's theorem, named after Johannes Hjelmslev, is the statement that if points P, Q, R... on a line are isometrically mapped to points P´, Q´, R´... of another line in the same plane, then the midpoints of the segments PP´, QQ´, RR´... also lie on a line.
The proof is easy if one assumes the classification of plane isometries. If the given isometry is odd, in which case it is necessarily either a reflection in a line or a glide-reflection (the product of three reflections: in a line and in two perpendiculars to it), then the statement is true of any points in the plane whatsoever: the midpoint of PP´ lies upon the axis of the (glide-)reflection for any P. If the isometry is even, compose it with reflection in the line PQR to obtain an odd isometry with the same effect on P, Q, R... and apply the previous remark.
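In the Euclidean case the statement also follows from a short coordinate computation (this sketch, unlike the proof discussed below, depends on the parallel postulate):

```latex
% Every plane isometry has the form f(x) = Ax + b with A orthogonal, so the
% midpoint map is
\[
  m(x) \;=\; \tfrac{1}{2}\bigl(x + f(x)\bigr)
        \;=\; \tfrac{1}{2}(I + A)\,x + \tfrac{1}{2}\,b ,
\]
% which is affine. Affine maps carry collinear points to collinear points
% (possibly all to a single point, a degenerate line), hence the midpoints of
% PP', QQ', RR', ... are collinear.
```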
The importance of the theorem lies in the fact that it has a different proof that does not presuppose the parallel postulate and is therefore valid in non-Euclidean geometry as well. With its help, the mapping that takes every point P of the plane to the midpoint of the segment P´P´´, where P´ and P´´ are the images of P under a rotation (in either sense) by a given acute angle about a given center, is seen to be a collineation mapping the whole hyperbolic plane in a 1-1 way onto the inside of a disk, thus providing a good intuitive notion of the linear structure of the hyperbolic plane. In fact, this is called the Hjelmslev transformation.
References
.
External links
Hjelmslev's Theorem by Jay Warendorff, the Wolfram Demonstrations Project.
Hjelmslev's Theorem from cut-the-knot
Theorems in plane geometry | Hjelmslev's theorem | Mathematics | 419 |
577,438 | https://en.wikipedia.org/wiki/Minisatellite | In genetics, a minisatellite is a tract of repetitive DNA in which certain DNA motifs (ranging in length from 10–60 base pairs) are typically repeated two to several hundred times. Minisatellites occur at more than 1,000 locations in the human genome and they are notable for their high mutation rate and high diversity in the population. Minisatellites are prominent in the centromeres and telomeres of chromosomes, the latter protecting the chromosomes from damage. The name "satellite" refers to the early observation that centrifugation of genomic DNA in a test tube separates a prominent layer of bulk DNA from accompanying "satellite" layers of repetitive DNA. Minisatellites are small sequences of DNA that do not encode proteins but appear throughout the genome hundreds of times, with many repeated copies lying next to each other.
Minisatellites and their shorter cousins, the microsatellites, together are classified as VNTR (variable number of tandem repeats) DNA. Confusingly, minisatellites are often referred to as VNTRs, and microsatellites are often referred to as short tandem repeats (STRs) or simple sequence repeats (SSRs).
Structure
Minisatellites consist of repetitive, generally GC-rich, motifs that range in length from 10 to over 100 base pairs. These variant repeats are tandemly intermingled. Some minisatellites contain a central sequence (or "core unit") of nucleobases "GGGCAGGANG" (where N can be any base) or more generally consist of sequence motifs of purines (adenine (A) and guanine (G)) and pyrimidines (cytosine (C) and thymine (T)).
Hypervariable minisatellites have core units 9–64 bp long and are found mainly at the centromeric regions.
In humans, 90% of minisatellites are found at the sub-telomeric region of chromosomes. The human telomere sequence itself is a tandem repeat: TTAGGG TTAGGG TTAGGG ...
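A toy scan for such tandem repeats (nothing like the cited Tandem Repeats Finder algorithm; the motif and copy threshold are illustrative):

```python
import re

def tandem_repeats(seq: str, motif: str, min_copies: int = 3):
    """Return (start, copy_count) for runs of motif repeated back-to-back."""
    pattern = re.compile(f"(?:{motif}){{{min_copies},}}")
    return [(m.start(), len(m.group()) // len(motif)) for m in pattern.finditer(seq)]

print(tandem_repeats("CCTTAGGGTTAGGGTTAGGGAA", "TTAGGG"))  # [(2, 3)]
```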
Function
Minisatellites have been implicated as regulators of gene expression (e.g. at levels of transcription, alternative splicing, or imprint control). They are generally non-coding DNA but sometimes are part of possible genes.
Minisatellites also constitute the chromosomal telomeres, which protect the ends of a chromosome from deterioration or from fusion with neighbouring chromosomes.
Mutability
Minisatellites have been associated with chromosome fragile sites and are proximal to a number of recurrent translocation breakpoints.
Some human minisatellites (~1%) have been demonstrated to be hypermutable, with an average mutation rate in the germline above 0.5% and in some cases over 20%, making them the most unstable regions in the human genome known to date. While other genomes (mouse, rat and pig) contain minisatellite-like sequences, none was found to be hypermutable. Since all hypermutable minisatellites contain internal variants, they provide extremely informative systems for analyzing the complex turnover processes that occur at this class of tandem repeat. Minisatellite variant repeat mapping by PCR (MVR-PCR) has been extensively used to chart the interspersion patterns of variant repeats along the array, which provides details on the structure of the alleles before and after mutation.
Studies have revealed distinct mutation processes operating in somatic and germline cells. Somatic instability detected in blood DNA shows simple and rare intra-allelic events two to three orders of magnitude lower than in sperm. In contrast, complex inter-allelic conversion-like events occur in the germline.
Additional analyses of DNA sequences flanking human minisatellites have also revealed an intense and highly localized meiotic crossover hotspot that is centered upstream of the unstable side of minisatellite arrays. Repeat turnover therefore appears to be controlled by recombinational activity in DNA that flanks the repeat array and results in a polarity of mutation. These findings have suggested that minisatellites most probably evolved as bystanders of localized meiotic recombination hotspots in the human genome.
It has been proposed that minisatellite sequences encourage chromosomes to swap DNA. In alternative models, it is the presence of neighbouring double-strand hotspots which is the primary cause of minisatellite repeat copy number variations. Somatic changes are suggested to result from replication difficulties (which might include replication slippage, among other phenomena).
Studies have shown that the evolutionary fate of minisatellites tends towards an equilibrium distribution in the size of alleles, until mutations in the flanking DNA affect the recombinational activity of a minisatellite by suppressing DNA instability. Such an event would ultimately lead to the extinction of a hypermutable minisatellite by meiotic drive.
History
The first human minisatellite was discovered in 1980 by A.R. Wyman and R. White. Discovering their high level of variability, Sir Alec Jeffreys developed DNA fingerprinting based on minisatellites, solving the first immigration case by DNA in 1985, and the first forensic murder case, the Enderby murders in the United Kingdom, in 1986. Minisatellites were subsequently also used as genetic markers in linkage analysis and population studies, but were soon replaced by microsatellite profiling in the 1990s.
The term satellite DNA originates from the observation in the 1960s of a fraction of sheared DNA that showed a distinct buoyant density, detectable as a "satellite peak" in density gradient centrifugation, and that was subsequently identified as large centromeric tandem repeats. When shorter (10–30-bp) tandem repeats were later identified, they came to be known as minisatellites. Finally, with the discovery of tandem iterations of simple sequence motifs, the term microsatellites was coined.
External links
Search tools:
SERF De Novo Genome Analysis and Tandem Repeats Finder
TRF Tandem Repeats Finder
See also
Microsatellite
Tandem repeat
Telomere
References
Repetitive DNA sequences | Minisatellite | Biology | 1,279 |
29,997,263 | https://en.wikipedia.org/wiki/Cerotic%20acid | Cerotic acid, or hexacosanoic acid, is a 26-carbon long-chain saturated fatty acid with the chemical formula CH3(CH2)24COOH (C26H52O2). It is most commonly found in beeswax and carnauba wax. It is a white solid, although impure samples appear yellowish.
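Since a saturated fatty acid with n carbon atoms has the general formula CnH2nO2, the formula and molar mass follow by simple arithmetic (standard atomic masses assumed):

```python
def saturated_fatty_acid(n: int):
    formula = f"C{n}H{2 * n}O2"
    mass = 12.011 * n + 1.008 * 2 * n + 15.999 * 2   # g/mol
    return formula, round(mass, 2)

print(saturated_fatty_acid(26))  # ('C26H52O2', 396.7), cerotic acid
```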
The name is derived from the Latin word cerotus, which in turn was derived from the Ancient Greek word κηρός (keros), meaning beeswax or honeycomb.
Cerotic acid is also a type of very long chain fatty acid that is often associated with the disease adrenoleukodystrophy, which involves the excessive accumulation of unmetabolized fatty acid chains, including cerotic acid, in the peroxisome.
See also
List of saturated fatty acids
Very long chain fatty acids
References
Fatty acids
Alkanoic acids | Cerotic acid | Chemistry | 175 |
912,128 | https://en.wikipedia.org/wiki/Microsome | In cell biology, microsomes are heterogeneous vesicle-like artifacts (~20-200 nm diameter) re-formed from pieces of the endoplasmic reticulum (ER) when eukaryotic cells are broken up in the laboratory; microsomes are not present in healthy, living cells.
Rough (containing ribosomes) and smooth (without ribosomes) microsomes are made from the endoplasmic reticulum through cell disruption. These microsomes have an interior that is equivalent to the lumen of the endoplasmic reticulum. Both forms of microsomes can be purified by a process known as equilibrium density centrifugation. Rough and smooth microsomes differ in their protein composition, and rough microsomes exhibit translation and translocation occurring at the same time, with certain exceptions among proteins in yeast.
Signal Hypothesis
The Signal Hypothesis was postulated by Günter Blobel and David Sabatini in 1971, stating that a unique peptide sequence is encoded by mRNA specific for proteins destined for translocation across the ER membrane. This peptide signal directs the active ribosome to the membrane surface and creates the conditions for transfer of the nascent polypeptide across the membrane. The generalization of the Signal Hypothesis to include signals for every organelle and location within the cell had an impact far beyond illuminating the targeting of secretory proteins, as it introduced the concept of 'topogenic' signals for the first time. Before the Signal Hypothesis, it was almost inconceivable that information encoded in the polypeptide chain could determine the localization of proteins in the cell.
Cell-free Protein Synthesis
This relates to cell-free protein synthesis. In cell-free protein synthesis without microsomes, there is no way for incorporation into microsomes to happen; consequently, when microsomal membranes are added only later, the signal sequence is not removed. With microsomes present, cell-free protein synthesis demonstrates cotranslational transport of the protein into the microsome and therefore removal of the signal sequence, producing a mature protein chain. Studies of cell-free protein synthesis with microsomes stripped of their bound ribosomes have explained certain details about endoplasmic reticulum signal sequences. Normally, a secretory protein only has its signal sequence removed if the microsomes are present during protein synthesis, because only then is the secretory protein incorporated into the microsomes. Protein transport does not happen if microsomes are added late, after the protein synthesis process is complete.
Protein extrusion into a microsome can be demonstrated by several criteria. A protein has been extruded if it is resistant to proteases, is no longer resistant to proteases when detergents are present, or is glycosylated by enzymes residing in the microsomes. Additionally, another sign that a protein has been extruded is that signal peptidase has cleaved off the N-terminal signal peptide inside the microsome, which may make the protein smaller in size.
Pulse-Chase experiments
Microsomes also play a part in pulse-chase experiments. Pulse-chase experiments showed that secreted proteins move across the endoplasmic reticulum membrane when the membranes are purified. It was important to separate the endoplasmic reticulum from the rest of the cell to look into translocation, but this is not possible directly because of how delicate and interconnected it is. This allowed microsomes to come into play, as they have the majority of the biochemical properties of the endoplasmic reticulum. The microsomes are formed by homogenizing the cells, with small closed vesicles carrying ribosomes on the outside being formed from rough endoplasmic reticulum breakdown. When microsomes were treated with protease, it was found that the polypeptide made by the ribosomes ended up in the microsomal lumen. This takes place even though the proteins are made on the cytosolic face of the endoplasmic reticulum membrane.
Other experiments have shown that microsomes have to be introduced before about the first 70 amino acids are translated for the secretory protein to go into the microsomal lumen. At this point, 40 amino acids are sticking out from the ribosome and the 30 amino acids after that are in the ribosomal channel. Cotranslational translocation explains that transport into the endoplasmic reticulum lumen of secretory proteins starts with the protein still bound to the ribosomes and not completely synthesized.
Microsomes can be concentrated and separated from other cellular debris by differential centrifugation. Unbroken cells, nuclei, and mitochondria sediment out at 10,000 g (where g is the Earth's gravitational acceleration), whereas soluble enzymes and fragmented ER, which contains cytochrome P450 (CYP), remain in solution. At 100,000 g, achieved by faster centrifuge rotation, ER sediments out of solution as a pellet but the soluble enzymes remain in the supernatant. In this way, cytochrome P450 in microsomes is concentrated and isolated. Microsomes have a reddish-brown color, due to the presence of the heme. Because of the need for a multi-part protein-system, microsomes are necessary to analyze the metabolic activity of CYPs. These CYPs are highly abundant in livers of rats, mice and humans, but present in all other organs and organisms as well.
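The quoted g-values relate to rotor speed through the standard relative centrifugal force formula RCF = 1.118 × 10⁻⁵ · r · N² (rotor radius r in cm, speed N in rpm); a minimal sketch, where the 8 cm radius is an illustrative assumption:

```python
import math

def rcf(r_cm: float, rpm: float) -> float:
    return 1.118e-5 * r_cm * rpm ** 2

def rpm_for(r_cm: float, target_g: float) -> float:
    return math.sqrt(target_g / (1.118e-5 * r_cm))

print(round(rpm_for(8.0, 10_000)))   # ~10,600 rpm pellets nuclei/mitochondria
print(round(rpm_for(8.0, 100_000)))  # ~33,400 rpm pellets microsomes
```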
To get microsomes containing a specific CYP or for high amounts of active enzyme, microsomes are prepared from Sf9 insect cells or in yeast via heterologous expression. Alternatively expression in Escherichia coli of whole or truncated proteins can also be performed. Therefore, microsomes are a valuable tool for investigating the metabolism of compounds (enzyme inhibition, clearance and metabolite identification) and for examining drug-drug interactions by in vitro-research. Researchers often select microsome lots based on the enzyme activity level of specific CYPs. Some lots are available to study specific populations (for example, lung microsomes from smokers or non-smokers) or divided into classifications to meet target CYP activity levels for inhibition and metabolism studies.
Microsomes are used to mimic the activity of the endoplasmic reticulum in a test tube and conduct experiments that require protein synthesis on a membrane. They provide a way for scientists to figure out how proteins are being made on the ER in a cell by reconstituting the process in a test tube.
Keefer et al. looked into how human liver microsomes and human hepatocytes are used to study metabolic stability and inhibition in in vitro systems. Examining their similarities and differences can shed light on the mechanisms of metabolism, passive permeability, and transporters. It was shown that passive permeability is important in metabolism and enzyme inhibition in human hepatocytes, while P-gp efflux has a smaller role in this same area. Also, liver microsomes are more predictive of in vivo clearance than hepatocytes when they give higher intrinsic clearance than the hepatocytes.
MTP
Iqbal, Jahangir, and Al-Qarni studied the microsomal triglyceride transfer protein (MTP). MTP is an endoplasmic reticulum resident protein that assists in transferring neutral lipids to nascent apolipoprotein B (apoB). MTP is of particular relevance to abetalipoproteinemia patients carrying MTP mutations, because of how it affects the assembly and secretion of apoB-containing lipoproteins; these MTP mutations are linked with the absence of circulating apoB-containing lipoproteins. MTP is also involved in cholesterol ester and cluster of differentiation 1d (CD1d) biosynthesis, and it can transfer sphingolipids to apoB-containing lipoproteins. MTP contributes to the homeostasis of lipids and lipoproteins and is implicated in certain pathophysiological conditions and metabolic diseases.
Wang et al. explored drug metabolism in vitro using human liver microsomes and human liver S9 fractions. The study found significant differences between human liver microsomes and human liver S9 fractions in drug-metabolizing enzyme and transporter protein concentrations. The protein-protein correlations of these drug-metabolizing enzymes and transporters were determined for the two hepatic preparations.
See also
Cytochrome P450
List of biological development disorders
S9 fraction
Cell-free protein synthesis
Pulse-Chase experiments
Differential Centrifugation
Microsomal Triglyceride Transfer Protein
References
External links
Membrane biology | Microsome | Chemistry | 1,846 |
71,369,834 | https://en.wikipedia.org/wiki/Sacrifice%20to%20Heaven | Sacrifice to Heaven () is an Asian religious practice originating in the worship of Shangdi in China. In Ancient Chinese society, nobles of all levels constructed altars for Heaven. At first, only nobles could worship Shangdi but later beliefs changed and everyone could worship Shangdi.
Modern Confucian churches make this practice available to all believers and it continues in China without a monarch.
It has been influential on areas outside of China including Japan, Vietnam, and Korea.
The Jì (祭) in the Chinese name is the same Je as in Jesa.
History
It first originated in the Shang dynasty. During the Zhou dynasty, Sacrifice to Heaven and Feng Shan were privileges enjoyed exclusively by the Son of Heaven due to Shendao teachings.
The rites have been performed at the Temple of Heaven since the Ming dynasty and are still performed today.
Some scholars believe that Qing involvement with the ritual standardized Manchu rituals with the book of Manchu rites, but this is unsupported.
Since the early years of the Republic of China, Kang Youwei's Confucian movement advocated the separation of Religious Confucianism from the state bureaucracy, allowing everyone to Sacrifice to Heaven according to the Christian model.
In the 21st century, it is performed without a monarch. It is sometimes done in locations other than the Temple of Heaven, such as in Fujian in 2015.
In Korea
In Korea, Sacrifice to Heaven is read as Jecheon (Hanja: 祭天). It is also identified with the word yeonggo 영고 (迎鼓) and has a history linked to Korean shamanism, in addition to Chinese influence.
In Buyeo, during the yeonggo festival held in December, prisoners would be released and judgments given; it was used as a political tool, in a manner similar to a jubilee.
These ceremonies were typically characterized by communal and thanksgiving aspects, and in Buyeo the rite was held after the harvest.
Dongye
Mucheon (舞天), a religious ritual and a comprehensive art form of the Dongye, was an event held during the tenth month of the lunar calendar (October) in which offerings were made to the heavens and people climbed high mountains to have fun. According to a commentary called the Touyuan Booklet (兎園策府), included in the Dunhuang manuscripts during the Tang Dynasty in China, Mucheon was a custom in Gojoseon that was held in October.
Goryeo
During the Goryeo Dynasty, there was a Jecheon event called Eight Gwanhoe (팔관회/八關會). It was a successor to Silla's Eight Gwanhoe, an event where sacrifices were made to the spirits of all things and the heavens.
There was also an event called Weonguje (圜丘祭), which came from China. According to the Goryeo History, it was practiced from the time of Goryeo Seongjong, and the Weongudan (圜丘壇) is said to have been built to offer sacrifices to the sky. As a place for sacrifices to the heavens, the Weongudan was repeatedly established and abolished from the Goryeo Dynasty onward.
Joseon
During the early Joseon Dynasty, under Sejo (世祖), a temple was built and Sacrifices to Heaven were held, but the practice was discontinued after seven years. The reasoning was that only the emperor could offer sacrifices to the heavens, and Joseon, which was not an imperial state, had no such authority under little China ideology. Later, after the country was renamed the Korean Empire, the practice was restored and a Hwangudan was built for the purpose.
Japan
The ritual of Sacrifice to Heaven was imported from China to Japan during the Tang Dynasty. The emperor would perform the sacrifice on the winter solstice. According to the book Shoku Nihongi (Japanese: 続日本紀), Emperor Shōmu performed a ritual sacrifice to the heavens during the summer court ceremony (the first day of the New Year, year 725).
The religions of Japan have been heavily influenced by imported beliefs such as Confucianism and Buddhism, which were merged with the country's indigenous religion of Shinto. The Sun Goddess Amaterasu is considered the supreme deity in Japan and is considered the ancestor of both the Emperor and the country. The Emperors were known to build temples and perform sacrifices, leading to the localization of these rituals into the worship of the Sun Goddess at the Ise Shrine.
During the Heian period, Buddhism became deeply ingrained in Japanese society, with the theory of "Honji suijaku" being propagated by the Japanese Royal Family. This theory posited that Buddha was the original deity and that the gods were simply temporary manifestations of the Buddha. According to this theory, the Sun Goddess was seen as an incarnation of Vairocana.
The Shoku Nihongi records that in 698, Emperor Monmu ordered the construction of a temple in the Watarai district of Ise, to worship both gods and Buddha. Over time, the rituals of worshiping the gods took on the characteristics of worshiping the Buddha.
Emperor Kanmu played a pivotal role in centralizing power and establishing the supremacy of the emperor in Japan. In 784, he relocated the capital to Nagaoka-kyō in order to counteract the growing influence of Buddhism in the Nara region and to promote the study of Chinese Confucian texts, such as the Spring and Autumn Annals, among the population. Therefore, he performed a sacrifice to heaven in 785 on the winter solstice to assert his authority.
The modern concept emerged in Japan in the Meiji period with the rise of Western-style Japanese nationalism and its promotion by the Imperial House of Japan. Sacrifice to Heaven is still performed, but it is considered a form of Shinto. Every year, the festival of Niiname-no-Matsuri (新嘗祭) is performed; most Japanese citizens are unaware of its connection to China. The first such festival of an Emperor's reign is called the Daijosai.
Vietnam
In Vietnam, tế thiên or Sacrifice to Heaven was first established with the Đinh dynasty when Đinh Bộ Lĩnh declared himself Emperor. The Đại Việt sử ký toàn thư records an early sacrifice by Lý Anh Tông in 1154.
It is better known in Vietnam by the name Nam Giao.
From the Lý dynasty onwards, the ritual was seen as highly important.
Nam Giao is considered the most important sacrificial ritual of the Nguyễn dynasty and is the only well-documented one.
In the Nguyễn dynasty, the Esplanade of Sacrifice to the Heaven and Earth was built for sacrifices to heaven. It was constructed in 1807, and sacrifices were made there continuously until 1945. The Nam Giao sacrifice ceremony was gradually restored as part of Festival Huế, held every two years from 2002, and continues to this day.
See also
Temple of Heaven
Wufang Shangdi
Ancestor veneration in China
Tian
Son of Heaven
Jesa
Feng Shan
Interactions Between Heaven and Mankind
Unity of Heaven and humanity
Tenno taitei
Shangdi
References
Confucian rites
Relationship between Heaven and Mankind
Ritual
Sacrifice | Sacrifice to Heaven | Biology | 1,459 |
514,261 | https://en.wikipedia.org/wiki/Wrigley%20Rooftops | Wrigley Rooftops is a name for the sixteen rooftops of residential buildings which have bleachers or seating on them to view baseball games or other major events at Wrigley Field. Since 1914 Wrigley roofs have dotted the neighborhood of Wrigleyville around Wrigley Field, where the Chicago Cubs play Major League Baseball. Venues on Waveland Avenue overlook left field, while those along Sheffield Avenue have a view over right field.
The rooftops had always been a gathering place for free views of the game, but until the 1980s, the observers were usually just a few dozen people watching from the flat rooftops, windows and porches of the buildings, with "seating" consisting of a few folding chairs, and with little commercial impact on the team. When the popularity of the Cubs began to rise in the 1980s, formal seating structures began to appear, and building owners began charging admission, much to the displeasure of Cubs management, who saw it as an unreasonable encroachment.
Various methods of combating this phenomenon were discussed. The idea of a "spite fence", as with Shibe Park in Philadelphia, or the Cubs' previous home, West Side Park, was discussed. The idea was not implemented, nor was it fully abandoned. Before Opening Day in 2002, a "wind screen" was temporarily erected on the ballpark's back screen behind the outfield wall, obscuring some of the view from Wrigley roofs.
Prior to 2016, when the majority of the venues were independent of Cubs-affiliated ownership, the Wrigleyville Rooftops Association's members were the 16 rooftop venues. Wrigley Rooftops is the Ricketts family's marketing arm and brand for their rooftop holdings through Greystone Sheffield Holdings and Hickory Street Capital.
History
Soon after Wrigley opened in 1914, the rooftops sprang up around the ball field. In the 1938 World Series, when the Cubs played the Yankees, the Sheffield Baseball Club was the first to charge for admission.
In 2000, real estate investor Donal Barry, through an entity, purchased 1010 W. Waveland (Beyond the Ivy I) and then 1048 W. Waveland (originally Beyond The Ivy III, then Sky Lounge Wrigley Rooftop, now 1048 Sky Lounge). Barry's entity purchased 1038 W. Waveland (Beyond The Ivy II) in 2004.
In 2002, the Cubs organization filed a lawsuit against the different facilities for copyright infringement. Since operators charge admission to use their amenities and sell licenses to view Major League Baseball, the Cubs asserted that the facilities were illegally using a copyrighted game and sued for royalties. In 2004, 11 of the 13 roofs settled with the club out of court, agreeing to pay 17% of gross revenue in exchange for official endorsement. The city also began investigating the structural integrity of the roofs, issuing citations to those in danger of collapse. With the Cubs and the neighbors reaching agreement, many of the facilities began to feature seating structures: some with bleachers, some with chair seats, and even one with a steel-girdered double deck of seats (see photo). The agreement was to last until 2023.
In 2013, principal owner Thomas S. Ricketts sought Commission on Chicago Landmarks permission to build "additional seating, new lighting, four additional LED signs of up to and a video board in right field." Ricketts said Wrigley has "the worst player facilities in Major League Baseball". When the roof owners threatened to sue, he tempered the design to just "a sign in right field and a video board in left field." After the roof owners did not rescind their threat to sue, Ricketts said in May 2014 that he would attempt to proceed with the original plan even if the matter was fought in court. The Wrigleyville Rooftop Association claimed in 2014 that its members had spent $50 million renovating their venues to code after agreeing to revenue-sharing. On January 20, 2015, the roof owners filed a lawsuit in federal court against the Cubs and Ricketts, citing breach of contract.
The Ricketts family, owners of the Cubs, began purchasing the rooftop properties in order to control the marketable sight lines into the stadium and by the end of the 2016 season, owned (or controlled via agreement) 11 of the rooftop locations. This led to a dispute with the Major League Baseball Players Association and other MLB clubs, which argued that these acquisitions made the rooftops' receipts baseball-related revenue for the purposes of revenue sharing.
In 2015, a Ricketts family venture bought the 3643 N. Sheffield Ave. building, 3639 N. Sheffield and 1032 W. Waveland, and a Jerry Lasky-managed entity sold 3617, 3619 and 3637 N. Sheffield to the Ricketts in May 2015. In 2010, Hickory Street Capital, a venture of the Ricketts family, had taken a stake in the Down the Line Rooftop (3621-3625 Sheffield Ave.) along with a right of first refusal. James and Camelia Petrozzini moved their share of Down the Line to a living trust, and the Petrozzinis died in 2014; Hickory Street sued, arguing that the trust transfer violated the right of first refusal, and the Petrozzini Trust agreed to sell Down the Line Rooftop to Hickory Street in December 2016. In all, Ricketts family ventures purchased seven rooftop buildings outright while buying the mortgages on three Sheffield rooftop properties in receivership. The Ricketts family's Greystone Sheffield Holdings bought the three W. Waveland rooftops from Donal Barry on January 13, 2016. Also in January, the Ricketts launched Wrigley Rooftops as their marketing brand and arm, and the family added the Brixen Ivy rooftop at 1044 W. Waveland to its portfolio.
In May 2017, the Cubs and the Ricketts family formed Marquee Sports & Entertainment as a central sales and marketing company for the various Ricketts family sports and entertainment assets: the Cubs, Wrigley Rooftops and Hickory Street Capital. As part of this process, the Cubs agreed to count the rooftops' revenues along with regular Wrigley Field receipts for the purposes of revenue sharing.
Rooftop venues
Hickory Street Capital
Hickory Street Capital is a Wrigleyville development company owned by the Joe Ricketts family.
The Hotel Wrigley project was announced by Hickory Street Capital in 2013 for the northwest corner of Clark and Addison across from Wrigley Field, with a faux-historic baseball theme. In September 2016, the project was rechristened Hotel Zachary, after the 1914 ballpark’s architect Zachary Taylor Davis, with a new design by architecture firm VOA Associates. Starwood Hotels & Resorts was brought on board to manage the hotel, which was expected to open in 2018. On April 10, 2017, the Park at Wrigley outdoor plaza was opened to the public on the same day as the Chicago Cubs' home opener.
Portfolio
Down the Line Rooftop
Gallagher Way Plaza (April 10, 2017) and development
North Building, 1101 W Waveland
American Airlines Conference Center
Chicago Cubs Front Office
Marquee Sports & Entertainment
Motorola World Series Trophy Room
West Building, 3630 N Clark Street
Hotel Zachary (opened March 28, 2018) managed by Starwood Hotels & Resorts
Marquee Sports & Entertainment
Marquee Sports & Entertainment LLC (MSE) is a sales and marketing company for the various Ricketts family sports and entertainment assets: the Cubs, Wrigley Rooftops and Hickory Street Capital. Marquee is headquartered in Gallagher Way's North Building. Hickory Street properties included are the new Park at Wrigley, the American Airlines Conference Center in the new Cubs headquarters adjacent to the ballpark, and Hotel Zachary. The company's name draws on the famous Wrigley Field marquee.
Marquee Sports & Entertainment was formed in May 2017. The Cubs’ existing corporate partnership and sales teams, totaling 30 staff, were transferred to Marquee.
Day-to-day operations are headed by the two co-managing directors: Allen Hermeling, Cubs senior director of corporate partnerships, and Andy Blackburn, Cubs senior director of ticket sales. Also involved are Crane Kenney, Cubs president of business operations, as an MSE officer, and Colin Faulkner, Cubs senior vice president of sales and marketing, as senior vice president of MSE directly over the co-managing directors.
Notes
External links
Wrigley Rooftops, owned by Ricketts family businesses
Rooftops of Wrigley, review site
Chicago Cubs
Wrigley Field
Roofs
Buildings and structures in Chicago | Wrigley Rooftops | Technology,Engineering | 1,902 |
50,564,979 | https://en.wikipedia.org/wiki/Samson%20Shatashvili | Samson Lulievich Shatashvili (Russian: Самсон Лулиевич Шаташвили, born February 1960) is a theoretical and mathematical physicist who has been working at Trinity College Dublin, Ireland, since 2002. He holds the Trinity College Dublin Chair of Natural Philosophy and is the director of the Hamilton Mathematics Institute. He is also affiliated with the Institut des Hautes Études Scientifiques (IHÉS), where he held the Louis Michel Chair from 2003 to 2013 and the Israel Gelfand Chair from 2014 to 2019. Prior to moving to Trinity College, he was a professor of physics at Yale University from 1994.
Background
Shatashvili received his PhD in 1984 at the Steklov Institute of Mathematics in Saint Petersburg under the supervision of Ludwig Faddeev (and Vladimir Korepin). His thesis, on gauge theories, was titled "Modern Problems in Gauge Theories". In 1989 he received his D.Sc. degree (Doctor of Science, the second doctoral degree in Russia), also at the Steklov Institute of Mathematics in Saint Petersburg.
Contributions and awards
Shatashvili has made several discoveries in theoretical and mathematical physics. He is best known for his work with Ludwig Faddeev on quantum anomalies, with Anton Alekseev on geometric methods in two-dimensional conformal field theories, for his work on background-independent open string field theory, with Cumrun Vafa on superstrings and manifolds of exceptional holonomy, with Anton Gerasimov on tachyon condensation, with Andrei Losev, Nikita Nekrasov and Greg Moore on instantons and supersymmetric gauge theories, and for his work with Nikita Nekrasov on quantum integrable systems. In particular, Shatashvili and Nekrasov discovered the gauge/Bethe correspondence. In 1995 he received an Outstanding Junior Investigator Award from the Department of Energy (DOE) and an NSF Career Award, and from 1996 to 2000 he was a Sloan Fellow. Shatashvili is a member of the Royal Irish Academy and the recipient of the 2010 Royal Irish Academy Gold Medal as well as the Ivane Javakhishvili State Medal, Georgia. In 2009 he was a plenary speaker at the International Congress on Mathematical Physics in Prague, and in 2014 he was an invited speaker at the International Congress of Mathematicians in Seoul (speaking on "Gauge theory angle at quantum integrability").
References
External links
Videos of Samson Shatashvili in the AV-Portal of the German National Library of Science and Technology
American mathematicians
Russian mathematicians
Soviet mathematicians
21st-century Irish mathematicians
21st-century mathematicians from Georgia (country)
Mathematical physicists
Theoretical physicists
String theorists
Academics of Trinity College Dublin
Year of birth missing (living people)
Living people
Members of the Royal Irish Academy | Samson Shatashvili | Physics | 585 |
37,642,258 | https://en.wikipedia.org/wiki/Dienone%E2%80%93phenol%20rearrangement | The dienone–phenol rearrangement is a reaction in organic chemistry first reported in 1921 by Karl von Auwers and Karl Ziegler. A common example of the dienone–phenol rearrangement is a 4,4-disubstituted cyclohexadienone converting into a stable 3,4-disubstituted phenol in the presence of acid. A similar rearrangement is possible with a 2,2-disubstituted cyclohexadienone to its corresponding disubstituted phenol. Usually this type of rearrangement is spontaneous unless a dichloromethyl group is present at the 4th position or the process is otherwise blocked.
Reaction mechanism
The reaction mechanism of 4,4-disubstituted cyclohexadienones to 3,4-disubstituted phenol is illustrated here.
The migration tendency of the two different groups (R) present at either the 4,4 or 2,2 position can be determined by comparing the relative stability of the intermediate carbocation formed during rearrangement. Under acid-promoted conditions, some relative migration tendencies are: COOEt > phenyl (or alkyl); phenyl > methyl; vinyl > methyl; methyl > alkoxy; and alkoxy > phenyl. In some cases, such as allyl and benzyl groups, the rearrangement may actually proceed through a Cope rearrangement. Apart from acid catalysis, the dienone–phenol rearrangement is also possible in the presence of base. The dienone–phenol rearrangement has been used in the synthesis of steroids, anthracenes, and phenanthrenes.
References
Chemical reactions
Organic reactions | Dienone–phenol rearrangement | Chemistry | 366 |
69,317,325 | https://en.wikipedia.org/wiki/Lithium%20telluride | Lithium telluride (Li2Te) is an inorganic compound of lithium and tellurium. Along with LiTe3, it is one of the two intermediate solid phases in the lithium–tellurium system. It can be prepared by directly reacting lithium and tellurium in a beryllium oxide crucible at 950 °C.
References
Lithium compounds
Tellurides
Fluorite crystal structure | Lithium telluride | Chemistry | 82 |
40,304,453 | https://en.wikipedia.org/wiki/Cam%20engine | A cam engine is a reciprocating engine where instead of the conventional crankshaft, the pistons deliver their force to a cam that is then caused to rotate. The output work of the engine is driven by this cam.
A variation of the cam engine, the swashplate engine (also the closely related wobble-plate engine), was briefly popular.
Cam engines are generally thought of as internal combustion engines, although they have also been used as hydraulic and pneumatic motors. Hydraulic motors, particularly the swashplate form, are widely and successfully used. Internal combustion engines, though, remain almost unknown.
Historical background
The history of cam engines is connected to the development of engines, especially in the late 19th and early 20th centuries. Engineers and inventors explored different mechanical designs to improve engine performance. One of the earliest recorded cam engine concepts dates back to the 19th century, during the industrial revolution.
In 1862, a French engineer named Alphonse Beau de Rochas, who is credited with the four-stroke engine, also explored using cams in engines. His work laid the foundation for later developments in internal combustion engines. Another notable figure is Felix Wankel, the German engineer known for inventing the Wankel rotary engine. Wankel's work on unconventional engine designs included experiments with cam-based mechanisms, although his rotary engine became more prominent.
In the early 20th century, there were many patents filed for different cam engine designs. These designs were especially important for aviation and industrial applications. During World War I and World War II, there was a lot of interest in alternative engine designs, with various advantages in power-to-weight ratio, durability, and fuel efficiency. However, cam engines never became widely used. This was mainly due to the complexity of their design and durability issues with the cam and follower mechanisms.
Mechanical design
The mechanical design of a cam engine is different from the conventional crankshaft-driven internal combustion engines. The engine's design uses a cam mechanism instead of a crankshaft. This introduces unique challenges and opportunities for optimizing performance.
Cam mechanism
The cam mechanism is the heart of the cam engine: it converts the linear motion of the pistons into rotational motion, the task handled by a crankshaft in conventional engines. A cam is a rotating or sliding component in a mechanical linkage that imparts a desired motion to a follower by direct contact; in a cam engine it is typically a rotating disk or cylinder with a specially shaped profile that interacts with the pistons. The profile is carefully engineered to control the timing and movement of the pistons as they reciprocate within the engine's cylinders. As the cam rotates, its profile pushes against a cam follower riding on the cam surface, moving the follower up and down, and this movement is transmitted to the pistons. The cam's shape thus determines the piston's stroke length, timing, and speed, all of which directly influence the engine's performance characteristics. The cam is connected to a drive mechanism, usually a shaft, that rotates it at a specific speed synchronized with the engine's combustion cycle, ensuring that the pistons are in the correct position to use the energy from the combustion process. Careful design and synchronization of the cam mechanism are crucial for efficient operation; any deviation can lead to performance issues or mechanical failure.
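To make the cam-to-piston relationship concrete, here is a minimal Python sketch of follower displacement and velocity for a simple harmonic cam profile. The lift, lobe count, and cam speed are illustrative assumptions rather than values from any real engine; a two-lobed cam gives the two piston strokes per cam revolution discussed later in the article.

    import math

    LIFT = 0.05    # total follower travel in metres (assumed)
    LOBES = 2      # lobes per cam revolution, i.e. strokes per rotation
    OMEGA = 200.0  # cam angular speed in rad/s (assumed)

    def displacement(theta):
        """Follower position for a simple harmonic cam profile."""
        return 0.5 * LIFT * (1.0 - math.cos(LOBES * theta))

    def velocity(theta):
        """Follower velocity: d(displacement)/dt with theta = OMEGA * t."""
        return 0.5 * LIFT * LOBES * OMEGA * math.sin(LOBES * theta)

    for deg in range(0, 361, 45):
        th = math.radians(deg)
        print(f"{deg:3d} deg: x = {displacement(th)*1000:6.2f} mm, "
              f"v = {velocity(th):7.2f} m/s")

Changing the profile function changes the stroke timing and speed without altering any other part of the mechanism, which is the design freedom described above.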
Types of cam designs
The design of the cam profile is very important for a cam engine. The cam profile directly affects the engine's performance, such as torque, power output, and efficiency. There are several types of cam designs, each with its unique advantages and challenges:
Flat
Design: Flat cams, also called plate cams, have a flat surface. The edge of the surface is contoured. The contour of the edge determines the motion of the cam follower. This, in turn, determines the motion of the piston.
Performance impact: Flat cams are easy to design and manufacture, which made them a popular choice for early cam engine experiments. However, they cannot produce complex motion profiles, which can restrict the engine's efficiency and power output.
Conical
Design: Conical cams have a tapered, cone-shaped surface that interacts with the cam follower. The varying radius of the cone influences the motion of the follower. This allows for creating intricate motion profiles.
Performance: Conical cams can generate highly specialized motion profiles. These profiles can optimize the engine's performance for specific applications. However, the complexity of their design and the precision required in manufacturing can make them challenging to implement.
Barrel
Design: Barrel cams have a barrel-shaped surface, which is a variation of cylindrical cams. The cam follower moves along a track or groove on the curved surface. This converts the rotational motion into linear motion.
Performance: Barrel cams can provide a high degree of control over the piston's motion, similar to cylindrical cams. Their design allows for the creation of motion profiles that can enhance the engine's torque output at specific points in the cycle.
Operation
Operating cycle
Some cam engines are two-stroke engines rather than four-stroke. In a two-stroke engine, the forces on the piston act uniformly downwards throughout the cycle. In a four-stroke engine, these forces reverse cyclically: in the induction phase, the piston is forced upwards against the reduced induction depression. The simple cam mechanism only works with a force in one direction, so in the first Michel engines the cam had two surfaces: a main surface on which the pistons worked when running, and another ring inside this that gave a desmodromic action to constrain the piston position during engine startup.
Usually, only one cam is required, even for multiple cylinders. Most cam engines were thus opposed twin or radial engines. An early version of the Michel engine was a rotary engine, a form of radial engine where the cylinders rotate around a fixed crank.
Advantages
Perfect balance: a crank system is impossible to balance dynamically, because a reciprocating force cannot be attenuated by a rotary reaction or force.
A more ideal combustion dynamic: a PV diagram of the "ideal IC engine" shows that the combustion event should ideally be a more-or-less "constant volume event".
The short dwell time that a crank produces does not provide a more-or-less constant volume in which the combustion event can take place. A crank system reaches significant mechanical advantage at 6° before TDC and maximum advantage at 45° to 50°, which limits the burn window to less than 60° of crank angle. In addition, the quickly descending piston lowers the pressure ahead of the flame front, leaving less time to burn under lower pressure. This dynamic is why, in all crank engines, a significant amount of the fuel is burned not above the piston, where its power can be extracted, but in the catalytic converter, which only produces heat.
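The burn-window argument can be checked with the standard slider-crank relation x(θ) = r·cosθ + √(l² − r²·sin²θ), which gives piston position as a function of crank angle. The dimensions in the following Python sketch are assumed purely for illustration:

    import math

    R = 0.045  # crank radius in metres (assumed)
    L = 0.140  # connecting rod length in metres (assumed)

    def piston_position(theta_deg):
        """Piston pin distance from the crank axis for a slider-crank."""
        th = math.radians(theta_deg)
        return R * math.cos(th) + math.sqrt(L**2 - (R * math.sin(th))**2)

    tdc = piston_position(0.0)
    for deg in (0, 6, 15, 30, 45, 60):
        drop = (tdc - piston_position(deg)) * 1000.0
        print(f"{deg:2d} deg after TDC: piston has dropped {drop:5.2f} mm")

With these numbers the piston moves only a fraction of a millimetre in the first 6 degrees but tens of millimetres by 60 degrees, which is the rapid volume growth attributed above to the crank mechanism.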
A modern cam can be manufactured with computer numerical control (CNC) technology so as to have a delayed mechanical advantage.
Other advantages of modern cam engines include:
Ideal piston dynamics
Lower internal friction
Cleaner exhaust
Lower fuel consumption
Longer life
More power per kilogram
Compact, modular design permits better vehicle design
Fewer parts, cost less to make
After extensive testing by the United States government, the Fairchild Model 447-C radial-cam engine had the distinction of receiving the very first Department of Commerce Approved Type Certificate. At a time when aircraft crank engines had a life of 30 to 50 hours, the Model 447-C was far more robust than any other aircraft engine then in production.
However, in this pre-CNC age it had a very poor cam profile, which meant it shook too severely for the wood propellers and the wood, wire, and cloth airframes of the time.
One advantage is that the bearing surface area can be larger than for a crankshaft. In the early days of bearing material development, the reduced bearing pressure this allowed could give better reliability. A relatively successful swashplate cam engine was developed by the bearing expert George Michell, who also developed the slipper-pad thrust block.
The Michel engine (no relation) began with roller cam followers, but switched during development to plain bearing followers.
Unlike a crankshaft, a cam may easily have more than one throw per rotation. This allows more than one piston stroke per revolution. For aircraft use, this was an alternative to using a propeller speed reduction unit: high engine speed for an improved power-to-weight ratio, combined with a slower propeller speed for an efficient propeller. In practice, the cam engine design weighed less than the combination of a conventional engine and gearbox.
Swashplate and wobble plate engines
The only internal combustion cam engines that have been remotely successful were the swashplate engines. These were almost all axial engines, where the cylinders are arranged parallel to the engine axis, in one or two rings. The purpose of such engines was usually to achieve this axial or "barrel" layout, making an engine with a very compact frontal area. There were plans at one time to use barrel engines as aircraft engines, with their reduced frontal area allowing a smaller fuselage and lower drag.
A similar engine to the swashplate engine is the wobble plate engine, also known as nutator or Z-crank drive. This uses a bearing that purely nutates, rather than also rotating as for the swashplate. The wobble plate is separated from the output shaft by a rotary bearing. Wobble plate engines are thus not cam engines.
Pistonless rotary engines
Most piston-less engines relying on cams, such as the Rand cam engine, use the cam mechanism to control the motion of sealing vanes. Combustion pressure against these vanes causes a vane carrier, separate from the cam, to rotate. In the Rand engine, the camshaft moves the vanes so that they have a varying length exposed and so enclose a combustion chamber of varying volume as the engine rotates. The work done in rotating the engine to cause this expansion is the thermodynamic work done by the engine and what causes the engine to rotate.
References
Bibliography
Two-stroke engines
Piston engine configurations
Axial engines | Cam engine | Technology | 2,133 |
2,527,535 | https://en.wikipedia.org/wiki/Trace%20heating | Electric heat tracing, heat tape or surface heating, is a system used to maintain or raise the temperature of pipes and vessels using heat tracing cables. Trace heating takes the form of an electrical heating element run in physical contact along the length of a pipe. The pipe is usually covered with thermal insulation to reduce heat losses from the pipe. Heat generated by the element then maintains the temperature of the pipe. Trace heating may be used to protect pipes from freezing, to maintain a constant flow temperature in hot water systems, or to maintain process temperatures for piping that must transport substances that solidify at ambient temperatures. Electric trace heating cables are an alternative to steam trace heating where steam is unavailable or unwanted.
Development
Electric trace heating began in the 1930s but initially no dedicated equipment was available. Mineral insulated cables ran at high current densities to produce heat, and control equipment was adapted from other applications. Mineral-insulated resistance heating cable was introduced in the 1950s, and parallel-type heating cables that could be cut to length in the field became available. Self-limiting thermoplastic cables were marketed in 1971.
Control systems for trace heating systems developed from capillary filled-bulb thermostats and contactors in the 1970s to networked computerized controls in the 1990s, in large systems that require centralized control and monitoring.
One paper projected that between 2000 and 2010 trace heating would account for 100 megawatts of connected load, and that trace heating and insulation would account for up to CAD $700 million capital investment in the Alberta oil sands.
International standards applied in the design and installation of electric trace heating systems include IEEE standards 515 and 622, British standard BS 6351, and IEC standard 60208.
Uses
The most common pipe trace heating applications include:
Freeze protection
Temperature maintenance
Snow melting on driveways
Other uses of trace heating cables include:
Ramp and stair snow / ice protection
Gulley and roof snow / ice protection
Underfloor heating
Door / frame interface ice protection
Window de-misting
Anti-condensation
Pond freeze protection
Soil warming
Preventing cavitation
Reducing condensation on windows
Freeze protection
Every pipe or vessel is subject to heat loss when its temperature is greater than ambient temperature. Thermal insulation reduces the rate of heat loss but does not eliminate it. Trace heating maintains the temperature above freezing by balancing the heat lost with heat supplied. Normally, a thermostat energises the trace heating when the measured temperature falls below a set value (usually between 3 °C and 5 °C, often referred to as the 'setpoint') and de-energises it when the measured temperature rises past another set value, usually 2 °C higher than the setpoint.
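The control logic is a simple on/off loop with hysteresis. The following Python sketch uses the setpoint and deadband figures quoted above; the temperature readings and heater state are stand-ins for real hardware:

    SETPOINT = 3.0  # energise below this pipe temperature, deg C
    DEADBAND = 2.0  # de-energise above SETPOINT + DEADBAND

    def next_state(heating_on, pipe_temp_c):
        """One cycle of thermostat logic with hysteresis."""
        if pipe_temp_c < SETPOINT:
            return True                  # too cold: energise the tracing
        if pipe_temp_c > SETPOINT + DEADBAND:
            return False                 # warm enough: de-energise
        return heating_on                # inside the deadband: hold state

    state = False
    for temp in (6.0, 4.5, 3.5, 2.9, 4.0, 5.5):
        state = next_state(state, temp)
        print(f"{temp:4.1f} C -> heater {'ON' if state else 'off'}")

The deadband prevents the contactor from chattering on and off when the temperature hovers near the setpoint.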
Gutter and roof de-icing
Placement of heat trace cable on roofs or in gutters to melt ice during winter months. When used in gutters the cable is not meant to keep the gutters free of ice or snow, but only to provide a free path for the melted water to get off the roof and down the downspout or drain piping.
Temperature maintenance
Hot water service piping can also be traced, so that a circulating system is not needed to provide hot water at outlets. The combination of trace heating and the correct thermal insulation for the operating ambient temperature maintains a thermal balance where the heat output from the trace heating matches the heat loss from the pipe. Self-limiting or regulating heating tapes have been developed and are very successful in this application.
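The thermal balance can be estimated from the standard expression for radial conduction through an insulation layer, Q = 2πk(Tm − Ta)/ln(r2/r1) watts per metre of pipe, where k is the insulation conductivity, Tm the maintained temperature, Ta the ambient temperature, and r1, r2 the inner and outer insulation radii. A minimal Python sketch with illustrative values (all of them assumptions, not data from any design standard):

    import math

    k = 0.04               # insulation conductivity, W/(m K) (assumed)
    r1, r2 = 0.030, 0.055  # insulation inner/outer radii, m (assumed)
    t_maintain = 60.0      # maintained pipe temperature, deg C
    t_ambient = -10.0      # design ambient temperature, deg C

    # Conductive heat loss per metre through the insulation layer; the
    # trace heating output must at least match this figure.
    q_loss = 2 * math.pi * k * (t_maintain - t_ambient) / math.log(r2 / r1)
    print(f"Heat loss: {q_loss:.1f} W/m")

In practice the cable output is then chosen from manufacturer tables with a safety margin, but the balance principle is exactly this equality of loss and supply.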
A similar principle can be applied to process piping carrying fluids which may congeal at low temperatures, for example tars or molten sulfur. High-temperature trace heating elements can prevent blockage of pipes.
Industrial applications for trace heating range from the chemical industry and oil refineries to nuclear power plants and food factories. For example, wax starts to solidify below 70 °C, which is usually far above the temperature of the surrounding air, so a wax pipeline must be provided with an external source of heat to prevent the pipe and the material inside it from cooling down. Trace heating can also be done with steam, but this requires a source of steam and may be inconvenient to install and operate.
In laboratories, researchers working in the field of materials science use trace heating to heat a sample isotropically. They may use trace heating in conjunction with a variac, so as to control the heat energy delivered. This is an effective means of slowly heating an object to measure thermodynamic properties such as thermal expansion.
Anti-cavitation purpose
As heating a thick fluid decreases its viscosity, it reduces losses occurring in a pipe. Therefore, the net positive suction head (pressure difference) available can be raised, decreasing the likelihood of cavitation when pumping. However, care must be taken not to increase the vapour pressure of the fluid too much, as this would have a strong side effect on the available head, possibly outweighing any benefit.
Types
Constant electric power "series"
A series heating cable is made of a run of high-resistance wire, insulated and often enclosed in a protective jacket. It is powered at a specific voltage, and the resistance of the wire creates heat. The downsides of these heaters are that they can overheat and burn out if crossed over themselves, they are provided in specific lengths and cannot be shortened in the field, and a break anywhere along the line will cause the entire cable to fail. The upside is that they are typically inexpensive (if plastic-style heaters) or, as with mineral insulated heating cables, they can be exposed to very high temperatures. Mineral insulated heating cables are good for maintaining high temperatures on process lines or maintaining lower temperatures on lines which can get extremely hot, such as high-temperature steam lines.
Typically, series elements are used for long pipeline process heating, for example long oil pipelines and quayside offload pipes at oil refineries.
Constant wattage
A constant wattage cable is composed of multiple constant electric power zones and is made by wrapping a fine heating element around two insulated parallel bus wires, then on alternating sides of the conductors a notch is made in the insulation. The heating element is then normally soldered to the exposed conductor wire which creates a small heating circuit; this is then repeated along the length of the cable. There is then an inner jacket which separates the bus wires from the grounding braid. In commercial and industrial cables, an additional outer jacket of rubber or Teflon is applied.
The benefit of this system over series elements is that should one small element fail, the rest of the system will continue to operate; on the other hand, a damaged section of cable (usually a 3 ft span) will stay cold and can lead to freeze-ups on that section. This cable can also be cut to length in the field thanks to its parallel circuitry; however, because the circuit only runs to the last complete zone, the cable normally has to be installed slightly beyond the end of the pipework. When installing constant wattage, or any heat tracing cable, it is important not to overlap or touch the cable to itself, as it will be subject to overheating and burnout. Constant wattage cable is always installed with a thermostat to control the power output of the cable, making it a very reliable heating source.
The disadvantage of this cable is that most constant wattage cables have press-on contacts rather than soldered connections to the bus wires, and are therefore more prone to cold circuits caused by loose connections arising from cable manipulation and installation.
Self regulating
Self-regulating heat tracing tapes are cables whose resistance varies with temperature - low resistance for temperatures below the cable set point and high resistance for temperatures above the cable set point. When the cable temperature reaches the set point, the resistance reaches a high point, resulting in no more heat being supplied.
These cables use two parallel bus wires which carry electricity but do not themselves create significant heat. They are encased in a semi-conductive polymer loaded with carbon; as the polymer element heats, it allows less current to flow, so the cable inherently saves power, delivering heat only where and when the system requires it. The cables are manufactured and then irradiated; by varying both the carbon content and the radiation dosage, tapes with different output characteristics can be produced. The benefits of this cable are the ability to cut it to length in the field and that it is more rugged and much more reliable than a constant wattage cable; it cannot overheat itself, so it can be crossed over, although it is bad practice to install tape in this way. Self-regulating and constant wattage heating cables have a specific maximum exposure temperature, which means that if they are subjected to higher temperatures the tape can be damaged beyond repair.
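The self-limiting behaviour can be pictured with a toy positive-temperature-coefficient model in Python. The exponential resistance law and every coefficient below are illustrative assumptions, not manufacturer data; real tapes are characterized empirically:

    import math

    V = 230.0       # supply voltage (assumed)
    R_REF = 2000.0  # element resistance per metre at 10 deg C, ohms (assumed)
    ALPHA = 0.02    # fractional resistance rise per deg C (assumed)

    def power_per_metre(temp_c):
        """Heat output of one metre of cable under a toy PTC law."""
        resistance = R_REF * math.exp(ALPHA * (temp_c - 10.0))
        return V * V / resistance

    for t in (-20, 0, 10, 30, 60):
        print(f"{t:4d} C -> {power_per_metre(t):5.1f} W/m")

The same model also shows why a cold start draws a large inrush current, as noted below: the element resistance is at its lowest when the whole cable is cold.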
Self-limiting tapes are also subject to higher inrush currents on a cold start-up, similar to an induction motor, so a higher-rated contactor is required.
Power supply and control
Trace heat cables may be connected to single-phase or (in groups) to three-phase power supplies. Power is controlled either by a contactor or a solid-state controller. For self-regulating cable, the supply must furnish a large warm-up current if the system is switched on from a cold starting condition. The contactor or controller may include a thermostat if accurate temperature maintenance is required, or may just shut off a freeze-protection system in mild weather.
Electrical heat tracing systems may be required to have earth leakage (ground fault or RCD) devices for personnel and equipment protection. The system design must minimize leakage current to prevent nuisance tripping; this may limit the length of any individual heating circuit.
Control system
Three-phase systems are fed via contactors, similar to a three-phase motor 'direct on line' starter, controlled by a thermostat somewhere in the line. This ensures that the temperature is kept constant and the line neither overheats nor underheats.
If a line freezes because the heating was switched off, it may take some time to thaw out using trace heating. On three-phase systems, thawing is accelerated by using an autotransformer to give a higher voltage, and consequently a higher current, making the trace heating elements somewhat hotter. The boost system is usually on a timer and switches back to 'normal' after a period of time.
References
Further reading
CPSC notice on residential heat tape safety (US)
Electric heating
Piping
Plumbing | Trace heating | Chemistry,Engineering | 2,150 |
15,172,479 | https://en.wikipedia.org/wiki/Smith%27s%20Cloud | Smith's Cloud is a high-velocity cloud of hydrogen gas located in the constellation Aquila at Galactic coordinates l = 39°, b = −13°. The cloud was discovered in 1963 by Gail Bieger, née Smith, who was an astronomy student at Leiden University in the Netherlands.
Properties
Using the National Science Foundation's Robert C. Byrd Green Bank Telescope, radio astronomers have found that Smith's cloud has a mass of at least one million solar masses and measures long by wide in projection. The cloud is between and from Earth and has an angular diameter of 10 to 12 degrees, approximately as wide as the Orion constellation, or about 20 times the diameter of the full moon, although the cloud is not visible to the naked eye.
The cloud is apparently moving towards the disk of the Milky Way at 73 ± 26 kilometers per second. Smith's Cloud is expected to merge with the Milky Way in 27 million years at a point in the Perseus arm. Astronomers believe it will strike the Milky Way disk at a 45° angle, and its impact may produce a burst of star formation or a supershell of neutral hydrogen.
Projecting the cloud's trajectory backwards through time, it is estimated to have passed through the disk of the Milky Way some 70 million years ago. Astronomers have suggested that, to have survived this previous encounter, it must be embedded inside a massive dark matter halo. Surviving that encounter would mean it is likely much more massive than previously thought, and it may be a candidate dark galaxy: in this scenario it would be a failed dwarf galaxy, with the ingredients to form a stellar galaxy but few if any detectable stars. However, chemical abundance measurements from the Hubble Space Telescope argue against this hypothesis; these measurements show that the Smith Cloud has an average metallicity of one half of the solar value, indicating that its gas originates in the Galaxy, not from an extragalactic source. The cloud's orbit and metallicity are both consistent with an origin in the outer disk of the Milky Way. The mechanism by which this gas was released is not known.
References
External links
High-velocity clouds
Dark galaxies
Aquila (constellation) | Smith's Cloud | Physics,Astronomy | 449 |
42,351,905 | https://en.wikipedia.org/wiki/Rossby%20wave%20instability | Rossby Wave Instability (RWI) is a concept related to astrophysical accretion discs. In non-self-gravitating discs, for example around newly forming stars, the instability can be triggered by an axisymmetric bump, at some radius, in the disc surface mass-density. It gives rise to exponentially growing non-axisymmetric perturbations in the vicinity of the bump, consisting of anticyclonic vortices. These vortices are regions of high pressure and consequently act to trap dust particles, which in turn can facilitate planetesimal growth in proto-planetary discs. The Rossby vortices in the discs around stars and black holes may cause the observed quasi-periodic modulations of the disc's thermal emission.
Rossby waves, named after Carl-Gustaf Arvid Rossby, are important in planetary atmospheres and oceans and are also known as planetary waves. These waves have a significant role in the transport of heat from equatorial to polar regions of the Earth. They may have a role in the formation of the long-lived ( yr) Great Red Spot on Jupiter which is an anticyclonic vortex. The Rossby waves have the notable property of having the phase velocity opposite to the direction of motion of the atmosphere or disc in the comoving frame of the fluid.
The theory of the Rossby wave instability in accretion discs was developed by Lovelace et al. and Li et al. for thin Keplerian discs with negligible self-gravity and earlier by Lovelace and Hohlfeld for thin disc galaxies where the self-gravity may or may not be important and where the rotation is in general non-Keplerian.
The Rossby wave instability occurs because of the local wave trapping in a disc. It is related to the Papaloizou and Pringle instability; where the wave is trapped between the inner and outer radii of a disc or torus.
References
Further reading
Astrophysics | Rossby wave instability | Physics,Astronomy | 401 |
14,027,843 | https://en.wikipedia.org/wiki/Advisory%20Committee%20on%20Earthquake%20Hazards%20Reduction | The 2004 re-authorization of National Earthquake Hazards Reduction Program (NEHRP) directed that the Director of the U.S. National Institute of Standards and Technology (NIST) establish the Advisory Committee on Earthquake Hazards Reduction (ACEHR) to assess:
trends and developments in the science and engineering of earthquake hazards reduction;
the effectiveness of NEHRP in performing its statutory activities:
improved design and construction methods and practices;
land use controls and redevelopment;
prediction techniques and early-warning systems;
coordinated emergency preparedness plans; and
public education and involvement programs;
any need to revise NEHRP; and
the management, coordination, implementation, and activities of the NEHRP.
On June 27, 2006, the official Charter of the Advisory Committee on Earthquake Hazards Reduction was established by the U.S. Department of Commerce, parent agency for NIST. The committee is to be widely representative of the stakeholder community. Federal employees may not serve on the committee. As established by the charter, ACEHR will have 11–15 voting members, in addition to having the Chairperson of the United States Geological Survey (USGS) Scientific Earthquake Studies Advisory Committee (SESAC) serve in an ex officio capacity.
References
External links
USGS Earthquake Hazards Program
United States Department of Commerce
National Institute of Standards and Technology
Disaster preparedness in the United States
Earthquake and seismic risk mitigation
Government agencies established in 2006
2006 establishments in the United States | Advisory Committee on Earthquake Hazards Reduction | Engineering | 286 |
45,515,909 | https://en.wikipedia.org/wiki/Penicillium%20fusisporum | Penicillium fusisporum is a fungus species in the family Trichocomaceae. Described as new to science in 2014, it was isolated from plant leaves in China. It is closely related to Penicillium thomii var. flavescens.
See also
List of Penicillium species
References
fusisporum
Fungi described in 2014
Fungi of Asia
Fungus species | Penicillium fusisporum | Biology | 79 |
2,903,350 | https://en.wikipedia.org/wiki/Phi%20Bo%C3%B6tis | Phi Boötis (φ Boötis) is a single, yellow-hued star in the northern constellation of Boötes. It is dimly visible to the naked eye with an apparent visual magnitude of +5.24. Based upon an annual parallax shift of 19.22 mas as seen from the Earth, it is located 170 light years from the Sun. At that distance, the visual magnitude is diminished by an extinction of 0.09 due to interstellar dust. It is moving closer to the Sun with a radial velocity of −10.6 km/s.
The stellar classification of Phi Boötis is , which would suggest it is an evolving G-type star that shows spectral traits of both a subgiant and a giant star. However, Alves (2000) has it listed as a member of the so-called "red clump", indicating that it is an aging giant star that is generating energy through helium fusion at its core. The 'Fe-2' suffix notation in its class means that it displays a significant underabundance of iron in its spectrum. Around three billion years old, Phi Boötis has an estimated 1.43 times the mass of the Sun and 5 times the Sun's radius. It is radiating 17 times the Sun's luminosity from its photosphere at an effective temperature of about 4,945 K.
References
G-type giants
Horizontal-branch stars
Boötes
Bootis, Phi
Durchmusterung objects
Bootis, 54
139641
076534
5823 | Phi Boötis | Astronomy | 315 |
47,381,133 | https://en.wikipedia.org/wiki/MACHO%20176.18833.411 | MACHO 176.18833.411 (OGLE BLG-RRLYR-10353) is an RR Lyrae variable star located in the galactic bulge of our Milky Way Galaxy. However, it is not a galactic bulge star but a galactic halo star, on the part of its elliptical orbit that brings it within the bulge before returning to the outer parts of the galaxy, the halo. The star is currently located about from the Galactic Center. This star has the highest velocity of any known RR Lyrae variable located in the bulge, moving at , only slightly below galactic escape velocity and five times the average velocity of bulge stars. Its nature was discovered as part of the BRAVA-RR survey.
References
RR Lyrae variables
Sagittarius (constellation) | MACHO 176.18833.411 | Astronomy | 166 |
10,700,701 | https://en.wikipedia.org/wiki/Manufacturing%20supermarket | A manufacturing supermarket (or market location) is, for a factory process, what a retail supermarket is for the customer. The customers draw products from the 'shelves' as needed and this can be detected by the supplier who then initiates a replenishment of that item. It was the observation that this 'way of working' could be transferred from retail to manufacturing that is one of the cornerstones of the Toyota Production System (TPS).
History
In the 1950s Toyota sent teams to the United States to learn how they achieved mass-production. However, the Toyota Delegation first got inspiration for their production system at an American Supermarket (a Piggly Wiggly, to be precise). They saw the virtue in the supermarket only reordering and restocking goods once they’d been bought by customers.
In a supermarket (like the TPS) customers (processes) buy what they need when they need it. Since the system is self-service the sales effort (materials management) is reduced. The shelves are refilled as products are sold (parts withdrawn) on the assumption that what has sold will sell again which makes it easy to see how much has been used and to avoid overstocking. The most important feature of a supermarket system is that stocking is triggered by actual demand. In the TPS this signal triggers the 'pull' system of production.
Implementation
Market locations are appropriate where there is a desire to communicate customer pull up the supply chain. The aim of the 'market' is to send single-unit consumption signals back up the supply chain so that a demand leveling effect occurs. Just as a supermarket customer could decide to cater for a party of 300 from the shelves, it is possible to decide to suddenly fill ten trucks and send massively distorting signals up those same pathways. Thus the 'market location' can be used as a sort of isolator between actual demand and how supply would like demand to be, an isolator between batch demand spikes and the upstream supply process.
For example, if the market were positioned at the loading bay, then it will receive 'spikes' of demand whenever a truck comes in to be loaded. Since, in general, one knows in advance when trucks will arrive and what they will require to be loaded onto them, it is possible to spread that demand spike over a chosen period before the truck actually arrives. It is possible to do this by designating a location, say a marked floor area, to be the 'virtual' truck and moving items from the market to the 'virtual' truck smoothly over the chosen period prior to the load onto the actual truck commencing. Smoothly here means that for each item its 'loading' is evenly spread across the period. For regular shipments this period might start the moment the last shipment in that schedule departs the loading bay. This has four key impacts (a small numerical sketch of the leveling calculation follows the list):
Loading movements rise, which is the reason often given for not doing this 'virtual' truck loading;
Demand evenness (Mura) increases which allows stock reductions and exposes new issues to be resolved;
Any last minute searching for items to load is eliminated since before the real truck need to be loaded the 'virtual' truck will have completed its loading;
Any potential shortages that may affect the shipment can be exposed earlier by the 'stockout' in the market location. This is true because the 'virtual' truck loading sequence will be constructed to fit with the supply process tempo.
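A minimal Python sketch of the leveling calculation (the quantities and timings are made-up illustrations, not Toyota figures):

    def level_schedule(items_required, period_minutes, move_minutes=5):
        """Spread a shipment's picking evenly over the period before loading.

        Returns (minute, quantity) instructions so that the demand seen by
        the market is a smooth trickle rather than one large spike.
        """
        slots = period_minutes // move_minutes
        base, extra = divmod(items_required, slots)
        schedule = []
        for i in range(slots):
            qty = base + (1 if i < extra else 0)  # spread the remainder too
            schedule.append((i * move_minutes, qty))
        return schedule

    # Example: stage 47 containers in the 60 minutes before the truck arrives
    for minute, qty in level_schedule(47, 60):
        print(f"t+{minute:02d} min: move {qty} container(s) to the virtual truck")

Each upstream process then sees roughly four containers every five minutes instead of forty-seven at once, which is the smoothing effect described above.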
This logic can, obviously, be applied upstream of any batch process and not just deliveries to another plant. It is a workaround for the fact that the batch process hasn't been made to flow yet. It therefore has some costs but the benefits in terms of reducing the three wastes should outweigh these.
Toyota use this technique and demand it of their suppliers in order to generate focus on the supply issues it uncovers. They then demand the preparation of loads for more frequent 'virtual' trucks than will actually appear in order to raise this pressure (see Frequent deliveries).
At low stocking levels for some items the 'market location' can require Just in Sequence supply rather than Just in Time.
References
Lean manufacturing
Toyota Production System | Manufacturing supermarket | Engineering | 849 |
24,443,253 | https://en.wikipedia.org/wiki/Web%20Services%20for%20Devices | Web Services for Devices or Web Services on Devices (WSD) is a Microsoft API to enable programming connections to web service enabled devices, such as printers, scanners and file shares. Such devices conform to the Devices Profile for Web Services (DPWS). It is an extensible framework that serves as a replacement for older Windows networking functions and a common framework for allowing access to new device APIs.
Operation
The Microsoft Web Services for Devices API (WSDAPI) uses WS-Discovery for device discovery.
Devices that connect to the WSDAPI must implement the DPWS.
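WS-Discovery itself is an open SOAP-over-UDP protocol, so the discovery step can be observed without the Windows API. The Python sketch below is a rough illustration, not a WSDAPI sample: it multicasts an abbreviated Probe message to the standard WS-Discovery address 239.255.255.250 on UDP port 3702 and prints the start of any ProbeMatch replies from DPWS devices on the local network.

    import socket
    import uuid

    # Minimal WS-Discovery (2005) Probe envelope; namespaces per the spec.
    PROBE = f"""<?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"
                   xmlns:wsa="http://schemas.xmlsoap.org/ws/2004/08/addressing"
                   xmlns:wsd="http://schemas.xmlsoap.org/ws/2005/04/discovery">
      <soap:Header>
        <wsa:Action>http://schemas.xmlsoap.org/ws/2005/04/discovery/Probe</wsa:Action>
        <wsa:MessageID>urn:uuid:{uuid.uuid4()}</wsa:MessageID>
        <wsa:To>urn:schemas-xmlsoap-org:ws:2005:04:discovery</wsa:To>
      </soap:Header>
      <soap:Body><wsd:Probe/></soap:Body>
    </soap:Envelope>"""

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3.0)
    sock.sendto(PROBE.encode("utf-8"), ("239.255.255.250", 3702))
    try:
        while True:
            data, addr = sock.recvfrom(65535)  # ProbeMatch replies, if any
            print(addr, data[:120])
    except socket.timeout:
        pass  # discovery window closed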
See also
WS-Discovery
Devices Profile for Web Services
Features new to Windows Vista
References
External links
Web Services on Devices (Windows)
Web Services for Devices (WSD)
The WSD Port Monitor
Device as a Service
Web services | Web Services for Devices | Technology | 180 |
2,468,930 | https://en.wikipedia.org/wiki/Louis%20Monier | Louis Monier (born March 21, 1956) is a cofounder of the defunct Internet search engine AltaVista together with Paul Flaherty and Michael Burrows. After he left AltaVista, he worked at eBay and then at Google. He left Google in August 2007 to join Cuil, a search engine startup. He was Vice President of Products at Cuil. One month after the launch, he left Cuil, citing differences with the CEO. He also was the co-founder and CTO of Qwiki with Doug Imbruce. Qwiki won the TechCrunch Disrupt Award in 2010 and was sold to Yahoo in 2013. In 2014, Yahoo shuttered Qwiki.
Monier received a Ph.D. in Mathematics and Computer Science from the University of Paris XI, France in 1980 and worked at Carnegie Mellon University, Xerox PARC, and DEC's Western Research Laboratory.
Monier was Chief Scientist of Proximic until July 2013, and has since founded a health technology company, Kyron.
References
1956 births
University of Paris alumni
Carnegie Mellon University faculty
Living people
Digital Equipment Corporation people
Scientists at PARC (company)
French emigrants to the United States
French computer scientists
American computer scientists
Google employees | Louis Monier | Technology | 252 |
2,785,376 | https://en.wikipedia.org/wiki/RF%20front%20end | In a radio receiver circuit, the RF front end, short for radio frequency front end, is a generic term for all the circuitry between a receiver's antenna input up to and including the mixer stage. It consists of all the components in the receiver that process the signal at the original incoming radio frequency (RF), before it is converted to a lower intermediate frequency (IF). In microwave and satellite receivers it is often called the low-noise block downconverter (LNB) and is often located at the antenna, so that the signal from the antenna can be transferred to the rest of the receiver at the more easily handled intermediate frequency.
Superheterodyne receiver
For most superheterodyne architectures, the RF front end consists of:
A band-pass filter (BPF) to reduce image response. This removes any signals at the image frequency, which would otherwise interfere with the desired signal. It also prevents strong out-of-band signals from saturating the input stages.
An RF amplifier, often called the low-noise amplifier (LNA). Its primary responsibility is to increase the sensitivity of the receiver by amplifying weak signals without contaminating them with noise, so that they can stay above the noise level in succeeding stages. It must have a very low noise figure (NF). The RF amplifier may not be needed and is often omitted (or switched off) for frequencies below 30 MHz, where the signal-to-noise ratio is defined by atmospheric and human-made noise.
A local oscillator (LO) which generates a radio frequency signal at an offset from the incoming signal, which is mixed with the incoming signal.
The mixer, which mixes the incoming signal with the signal from the local oscillator to convert the signal to the intermediate frequency (IF).
Digital receiver
In digital receivers, particularly those in wireless devices such as cell phones and Wi-Fi receivers, the intermediate frequency is digitized (sampled and converted to a binary digital form), and the rest of the processing – IF filtering and demodulation – is done by digital filters (digital signal processing, DSP), as these are smaller, use less power and can have more selectivity. In this type of receiver the RF front end is defined as everything from the antenna to the analog-to-digital converter (ADC) which digitizes the signal. The general trend is to do as much of the signal processing in digital form as possible, and some receivers digitize the RF signal directly, without down-conversion to an IF, so here the front end is merely an RF filter in the simple receiver path/chain.
References
Radio electronics | RF front end | Engineering | 546 |
56,098 | https://en.wikipedia.org/wiki/Monte%20Carlo%20method | Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. The name comes from the Monte Carlo Casino in Monaco, where the primary developer of the method, mathematician Stanisław Ulam, was inspired by his uncle's gambling habits.
Monte Carlo methods are mainly used in three distinct problem classes: optimization, numerical integration, and generating draws from a probability distribution. They can also be used to model phenomena with significant uncertainty in inputs, such as calculating the risk of a nuclear power plant failure. Monte Carlo methods are often implemented using computer simulations, and they can provide approximate solutions to problems that are otherwise intractable or too complex to analyze mathematically.
Monte Carlo methods are widely used in various fields of science, engineering, and mathematics, such as physics, chemistry, biology, statistics, artificial intelligence, finance, and cryptography. They have also been applied to social sciences, such as sociology, psychology, and political science. Monte Carlo methods have been recognized as one of the most important and influential ideas of the 20th century, and they have enabled many scientific and technological breakthroughs.
Monte Carlo methods also have some limitations and challenges, such as the trade-off between accuracy and computational cost, the curse of dimensionality, the reliability of random number generators, and the verification and validation of the results.
Overview
Monte Carlo methods vary, but tend to follow a particular pattern:
Define a domain of possible inputs
Generate inputs randomly from a probability distribution over the domain
Perform a deterministic computation of the outputs
Aggregate the results
For example, consider a quadrant (circular sector) inscribed in a unit square. Given that the ratio of their areas is π/4, the value of π can be approximated using a Monte Carlo method:
Draw a square, then inscribe a quadrant within it
Uniformly scatter a given number of points over the square
Count the number of points inside the quadrant, i.e. having a distance from the origin of less than 1
The ratio of the inside-count and the total-sample-count is an estimate of the ratio of the two areas, π/4. Multiply the result by 4 to estimate π.
In this procedure the domain of inputs is the square that circumscribes the quadrant. One can generate random inputs by scattering grains over the square then perform a computation on each input (test whether it falls within the quadrant). Aggregating the results yields our final result, the approximation of π.
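A minimal Python sketch of this procedure (the sample count and the function name are illustrative choices, not part of any standard library):

import random

def estimate_pi(n):
    # Scatter n uniform points over the unit square and count how
    # many fall inside the quarter circle of radius 1 (distance from
    # the origin less than 1).
    inside = 0
    for _ in range(n):
        x, y = random.random(), random.random()
        if x * x + y * y < 1.0:
            inside += 1
    # inside/n estimates the area ratio pi/4, so multiply by 4.
    return 4.0 * inside / n

print(estimate_pi(1_000_000))  # typically prints a value near 3.14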
There are two important considerations:
If the points are not uniformly distributed, then the approximation will be poor.
The approximation is generally poor if only a few points are randomly placed in the whole square. On average, the approximation improves as more points are placed.
Uses of Monte Carlo methods require large amounts of random numbers, and their use benefitted greatly from pseudorandom number generators, which are far quicker to use than the tables of random numbers that had been previously used for statistical sampling.
Application
Monte Carlo methods are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches. Monte Carlo methods are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution.
In physics-related problems, Monte Carlo methods are useful for simulating systems with many coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model, interacting particle systems, McKean–Vlasov processes, kinetic models of gases).
Other examples include modeling phenomena with significant uncertainty in inputs such as the calculation of risk in business and, in mathematics, evaluation of multidimensional definite integrals with complicated boundary conditions. In application to systems engineering problems (space, oil exploration, aircraft design, etc.), Monte Carlo–based predictions of failure, cost overruns and schedule overruns are routinely better than human intuition or alternative "soft" methods.
In principle, Monte Carlo methods can be used to solve any problem having a probabilistic interpretation. By the law of large numbers, integrals described by the expected value of some random variable can be approximated by taking the empirical mean (the 'sample mean') of independent samples of the variable. When the probability distribution of the variable is parameterized, mathematicians often use a Markov chain Monte Carlo (MCMC) sampler. The central idea is to design a judicious Markov chain model with a prescribed stationary probability distribution. That is, in the limit, the samples being generated by the MCMC method will be samples from the desired (target) distribution. By the ergodic theorem, the stationary distribution is approximated by the empirical measures of the random states of the MCMC sampler.
In other problems, the objective is generating draws from a sequence of probability distributions satisfying a nonlinear evolution equation. These flows of probability distributions can always be interpreted as the distributions of the random states of a Markov process whose transition probabilities depend on the distributions of the current random states (see McKean–Vlasov processes, nonlinear filtering equation). In other instances, a flow of probability distributions with an increasing level of sampling complexity arises (path-space models with an increasing time horizon, Boltzmann–Gibbs measures associated with decreasing temperature parameters, and many others). These models can also be seen as the evolution of the law of the random states of a nonlinear Markov chain. A natural way to simulate these sophisticated nonlinear Markov processes is to sample multiple copies of the process, replacing in the evolution equation the unknown distributions of the random states by the sampled empirical measures. In contrast with traditional Monte Carlo and MCMC methodologies, these mean-field particle techniques rely on sequential interacting samples. The terminology mean field reflects the fact that each of the samples (particles, individuals, walkers, agents, creatures, or phenotypes) interacts with the empirical measures of the process. When the size of the system tends to infinity, these random empirical measures converge to the deterministic distribution of the random states of the nonlinear Markov chain, so that the statistical interaction between particles vanishes.
Simple Monte Carlo
Suppose one wants to know the expected value μ of a population (and knows that μ exists), but does not have a formula available to compute it. The simple Monte Carlo method gives an estimate for μ by running n simulations and averaging the simulations’ results. It has no restrictions on the probability distribution of the inputs to the simulations, requiring only that the inputs are randomly generated and are independent of each other and that μ exists. A sufficiently large n will produce a value for m that is arbitrarily close to μ with high probability; more formally, for any ε > 0, the probability that |μ – m| ≤ ε approaches one as n grows.
Typically, the algorithm to obtain m is
s = 0;
for i = 1 to n do
run the simulation for the ith time, giving result ri;
s = s + ri;
repeat
m = s / n;
An example
Suppose we want to know how many times we should expect to throw three eight-sided dice for the total of the dice throws to be at least T. We know the expected value exists. The dice throws are randomly distributed and independent of each other. So simple Monte Carlo is applicable:
s = 0;
for i = 1 to n do
throw the three dice until T is met or first exceeded; ri = the number of throws;
s = s + ri;
repeat
m = s / n;
If n is large enough, m will be within ε of μ with high probability, for any ε > 0.
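A direct Python transcription of this pseudocode might look as follows (T and n are parameters chosen by the experimenter; the helper names are hypothetical):

import random

def throws_to_reach(T):
    # One simulation: throw three eight-sided dice repeatedly until
    # the running total first meets or exceeds T; return the number
    # of throws needed.
    total, throws = 0, 0
    while total < T:
        total += sum(random.randint(1, 8) for _ in range(3))
        throws += 1
    return throws

def simple_monte_carlo(T, n):
    # Average the results of n independent simulations, as in the
    # pseudocode above.
    return sum(throws_to_reach(T) for _ in range(n)) / n

print(simple_monte_carlo(100, 10_000))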
Determining a sufficiently large n
General formula
Let ε > 0 be the maximum allowed difference between μ and m. Choose the desired confidence level – the percent chance that, when the Monte Carlo algorithm completes, m is indeed within ε of μ. Let z be the z-score corresponding to that confidence level.
Let s² be the estimated variance, sometimes called the “sample” variance; it is the variance of the results obtained from a relatively small number k of “sample” simulations. Choose a k; Driels and Shin observe that “even for sample sizes an order of magnitude lower than the number required, the calculation of that number is quite stable."
The following algorithm computes s² in one pass while minimizing the possibility that accumulated numerical error produces erroneous results:
s1 = 0;
run the simulation for the first time, producing result r1;
m1 = r1; //mi is the mean of the first i simulations
for i = 2 to k do
run the simulation for the ith time, producing result ri;
δi = ri − mi−1;
mi = mi−1 + (1/i)δi;
si = si−1 + ((i − 1)/i)(δi)²;
repeat
s² = sk/(k − 1);
Note that, when the algorithm completes, mk is the mean of the k results.
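A Python version of this one-pass computation (the simulate argument stands in for whatever simulation is being studied; k ≥ 2 is assumed):

def sample_mean_and_variance(simulate, k):
    # One-pass (Welford-style) computation of the mean and sample
    # variance of k simulation results, as in the pseudocode above.
    m = simulate()          # mean of the first i results
    s = 0.0                 # accumulated sum of squared deviations
    for i in range(2, k + 1):
        r = simulate()
        delta = r - m
        m += delta / i
        s += (i - 1) / i * delta * delta
    return m, s / (k - 1)   # (mean of the k results, sample variance)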
n is sufficiently large when n ≥ s²z²/ε².
If n ≤ k, then mk = m; sufficient sample simulations were done to ensure that mk is within ε of μ. If n > k, then n simulations can be run “from scratch,” or, since k simulations have already been done, one can just run n – k more simulations and add their results into those from the sample simulations:
s = mk * k;
for i = k + 1 to n do
run the simulation for the ith time, giving result ri;
s = s + ri;
m = s / n;
A formula when simulations' results are bounded
An alternate formula can be used in the special case where all simulation results are bounded above and below.
Choose a value for ε that is twice the maximum allowed difference between μ and m. Let 0 < δ < 100 be the desired confidence level, expressed as a percentage. Let every simulation result r1, r2, …ri, … rn be such that a ≤ ri ≤ b for finite a and b. To have confidence of at least δ that |μ – m| < ε/2, use a value for n such that
n ≥ 2(b − a)²ln(2/(1 − δ/100))/ε²
For example, if δ = 99%, then n ≥ 2(b − a)²ln(2/0.01)/ε² ≈ 10.6(b − a)²/ε².
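Both sample-size rules are straightforward to evaluate; a sketch, assuming the z value for the chosen confidence level is taken from a standard normal table (e.g. 1.96 for 95%):

import math

def n_from_variance(s2, z, eps):
    # General formula: n is sufficient when n >= s^2 * z^2 / eps^2.
    return math.ceil(s2 * z * z / (eps * eps))

def n_from_bounds(a, b, delta_pct, eps):
    # Bounded-results formula: gives |mu - m| < eps/2 with confidence
    # delta_pct (in percent) when every result lies in [a, b].
    alpha = 1 - delta_pct / 100
    return math.ceil(2 * (b - a) ** 2 * math.log(2 / alpha) / eps ** 2)

print(n_from_variance(4.0, 1.96, 0.1))
print(n_from_bounds(0, 10, 99, 0.1))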
Computational costs
Despite its conceptual and algorithmic simplicity, the computational cost associated with a Monte Carlo simulation can be staggeringly high. In general the method requires many samples to get a good approximation, which may incur an arbitrarily large total runtime if the processing time of a single sample is high. Although this is a severe limitation in very complex problems, the embarrassingly parallel nature of the algorithm allows this large cost to be reduced (perhaps to a feasible level) through parallel computing strategies in local processors, clusters, cloud computing, GPU, FPGA, etc.
History
Before the Monte Carlo method was developed, simulations tested a previously understood deterministic problem, and statistical sampling was used to estimate uncertainties in the simulations. Monte Carlo simulations invert this approach, solving deterministic problems using probabilistic metaheuristics (see simulated annealing).
An early variant of the Monte Carlo method was devised to solve Buffon's needle problem, in which π can be estimated by dropping needles on a floor made of parallel equidistant strips. In the 1930s, Enrico Fermi first experimented with the Monte Carlo method while studying neutron diffusion, but he did not publish this work.
In the late 1940s, Stanisław Ulam invented the modern version of the Markov Chain Monte Carlo method while he was working on nuclear weapons projects at the Los Alamos National Laboratory. In 1946, nuclear weapons physicists at Los Alamos were investigating neutron diffusion in the core of a nuclear weapon. Despite having most of the necessary data, such as the average distance a neutron would travel in a substance before it collided with an atomic nucleus and how much energy the neutron was likely to give off following a collision, the Los Alamos physicists were unable to solve the problem using conventional, deterministic mathematical methods. Ulam proposed using random experiments. He recounts his inspiration as follows:
Being secret, the work of von Neumann and Ulam required a code name. A colleague of von Neumann and Ulam, Nicholas Metropolis, suggested using the name Monte Carlo, which refers to the Monte Carlo Casino in Monaco where Ulam's uncle would borrow money from relatives to gamble.
Monte Carlo methods were central to the simulations required for further postwar development of nuclear weapons, including the design of the H-bomb, though severely limited by the computational tools at the time. Von Neumann, Nicholas Metropolis and others programmed the ENIAC computer to perform the first fully automated Monte Carlo calculations, of a fission weapon core, in the spring of 1948. In the 1950s Monte Carlo methods were used at Los Alamos for the development of the hydrogen bomb, and became popularized in the fields of physics, physical chemistry, and operations research. The Rand Corporation and the U.S. Air Force were two of the major organizations responsible for funding and disseminating information on Monte Carlo methods during this time, and they began to find a wide application in many different fields.
The theory of more sophisticated mean-field type particle Monte Carlo methods had certainly started by the mid-1960s, with the work of Henry P. McKean Jr. on Markov interpretations of a class of nonlinear parabolic partial differential equations arising in fluid mechanics. An earlier pioneering article by Theodore E. Harris and Herman Kahn, published in 1951, used mean-field genetic-type Monte Carlo methods for estimating particle transmission energies. Mean-field genetic type Monte Carlo methodologies are also used as heuristic natural search algorithms (a.k.a. metaheuristic) in evolutionary computing. The origins of these mean-field computational techniques can be traced to 1950 and 1954 with the work of Alan Turing on genetic type mutation-selection learning machines and the articles by Nils Aall Barricelli at the Institute for Advanced Study in Princeton, New Jersey.
Quantum Monte Carlo, and more specifically diffusion Monte Carlo methods can also be interpreted as a mean-field particle Monte Carlo approximation of Feynman–Kac path integrals. The origins of Quantum Monte Carlo methods are often attributed to Enrico Fermi and Robert Richtmyer who developed in 1948 a mean-field particle interpretation of neutron-chain reactions, but the first heuristic-like and genetic type particle algorithm (a.k.a. Resampled or Reconfiguration Monte Carlo methods) for estimating ground state energies of quantum systems (in reduced matrix models) is due to Jack H. Hetherington in 1984. In molecular chemistry, the use of genetic heuristic-like particle methodologies (a.k.a. pruning and enrichment strategies) can be traced back to 1955 with the seminal work of Marshall N. Rosenbluth and Arianna W. Rosenbluth.
The use of Sequential Monte Carlo in advanced signal processing and Bayesian inference is more recent. It was in 1993, that Gordon et al., published in their seminal work the first application of a Monte Carlo resampling algorithm in Bayesian statistical inference. The authors named their algorithm 'the bootstrap filter', and demonstrated that compared to other filtering methods, their bootstrap algorithm does not require any assumption about that state-space or the noise of the system. Another pioneering article in this field was Genshiro Kitagawa's, on a related "Monte Carlo filter", and the ones by Pierre Del Moral and Himilcon Carvalho, Pierre Del Moral, André Monin and Gérard Salut on particle filters published in the mid-1990s. Particle filters were also developed in signal processing in 1989–1992 by P. Del Moral, J. C. Noyer, G. Rigal, and G. Salut in the LAAS-CNRS in a series of restricted and classified research reports with STCAN (Service Technique des Constructions et Armes Navales), the IT company DIGILOG, and the LAAS-CNRS (the Laboratory for Analysis and Architecture of Systems) on radar/sonar and GPS signal processing problems. These Sequential Monte Carlo methodologies can be interpreted as an acceptance-rejection sampler equipped with an interacting recycling mechanism.
From 1950 to 1996, all the publications on Sequential Monte Carlo methodologies, including the pruning and resample Monte Carlo methods introduced in computational physics and molecular chemistry, present natural and heuristic-like algorithms applied to different situations without a single proof of their consistency, nor a discussion on the bias of the estimates and on genealogical and ancestral tree based algorithms. The mathematical foundations and the first rigorous analysis of these particle algorithms were written by Pierre Del Moral in 1996.
Branching type particle methodologies with varying population sizes were also developed in the end of the 1990s by Dan Crisan, Jessica Gaines and Terry Lyons, and by Dan Crisan, Pierre Del Moral and Terry Lyons. Further developments in this field were described in 1999 to 2001 by P. Del Moral, A. Guionnet and L. Miclo.
Definitions
There is no consensus on how Monte Carlo should be defined. For example, Ripley defines most probabilistic modeling as stochastic simulation, with Monte Carlo being reserved for Monte Carlo integration and Monte Carlo statistical tests. Sawilowsky distinguishes between a simulation, a Monte Carlo method, and a Monte Carlo simulation: a simulation is a fictitious representation of reality, a Monte Carlo method is a technique that can be used to solve a mathematical or statistical problem, and a Monte Carlo simulation uses repeated sampling to obtain the statistical properties of some phenomenon (or behavior).
Here are some examples:
Simulation: Drawing one pseudo-random uniform variable from the interval [0,1] can be used to simulate the tossing of a coin: If the value is less than or equal to 0.50 designate the outcome as heads, but if the value is greater than 0.50 designate the outcome as tails. This is a simulation, but not a Monte Carlo simulation.
Monte Carlo method: Pouring out a box of coins on a table, and then computing the ratio of coins that land heads versus tails is a Monte Carlo method of determining the behavior of repeated coin tosses, but it is not a simulation.
Monte Carlo simulation: Drawing a large number of pseudo-random uniform variables from the interval [0,1], either at one time or repeatedly, and assigning values less than or equal to 0.50 as heads and greater than 0.50 as tails, is a Monte Carlo simulation of the behavior of repeatedly tossing a coin.
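The third case can be written in a few lines of Python (the names and the sample count are illustrative):

import random

def coin_mc_simulation(n):
    # Monte Carlo simulation of n coin tosses: a uniform draw of at
    # most 0.5 counts as heads, anything greater counts as tails.
    heads = sum(1 for _ in range(n) if random.random() <= 0.5)
    return heads / n  # fraction of heads; approaches 0.5 as n grows

print(coin_mc_simulation(100_000))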
Kalos and Whitlock point out that such distinctions are not always easy to maintain. For example, the emission of radiation from atoms is a natural stochastic process. It can be simulated directly, or its average behavior can be described by stochastic equations that can themselves be solved using Monte Carlo methods. "Indeed, the same computer code can be viewed simultaneously as a 'natural simulation' or as a solution of the equations by natural sampling."
Convergence of the Monte Carlo simulation can be checked with the Gelman-Rubin statistic.
Monte Carlo and random numbers
The main idea behind this method is that the results are computed based on repeated random sampling and statistical analysis. Monte Carlo simulation is, in effect, random experimentation, used in cases where the results of such experiments are not well known.
Monte Carlo simulations are typically characterized by many unknown parameters, many of which are difficult to obtain experimentally. Monte Carlo simulation methods do not always require truly random numbers to be useful (although, for some applications such as primality testing, unpredictability is vital). Many of the most useful techniques use deterministic, pseudorandom sequences, making it easy to test and re-run simulations. The only quality usually necessary to make good simulations is for the pseudo-random sequence to appear "random enough" in a certain sense.
What this means depends on the application, but typically they should pass a series of statistical tests. Testing that the numbers are uniformly distributed or follow another desired distribution when a large enough number of elements of the sequence are considered is one of the simplest and most common ones. Weak correlations between successive samples are also often desirable/necessary.
Sawilowsky lists the characteristics of a high-quality Monte Carlo simulation:
the (pseudo-random) number generator has certain characteristics (e.g. a long "period" before the sequence repeats)
the (pseudo-random) number generator produces values that pass tests for randomness
there are enough samples to ensure accurate results
the proper sampling technique is used
the algorithm used is valid for what is being modeled
it simulates the phenomenon in question.
Pseudo-random number sampling algorithms are used to transform uniformly distributed pseudo-random numbers into numbers that are distributed according to a given probability distribution.
Low-discrepancy sequences are often used instead of random sampling from a space as they ensure even coverage and normally have a faster order of convergence than Monte Carlo simulations using random or pseudorandom sequences. Methods based on their use are called quasi-Monte Carlo methods.
In an effort to assess the impact of random number quality on Monte Carlo simulation outcomes, astrophysical researchers tested cryptographically secure pseudorandom numbers generated via Intel's RDRAND instruction set, as compared to those derived from algorithms, like the Mersenne Twister, in Monte Carlo simulations of radio flares from brown dwarfs. No statistically significant difference was found between models generated with typical pseudorandom number generators and RDRAND for trials consisting of the generation of 10⁷ random numbers.
Monte Carlo simulation versus "what if" scenarios
There are ways of using probabilities that are definitely not Monte Carlo simulations – for example, deterministic modeling using single-point estimates. Each uncertain variable within a model is assigned a "best guess" estimate. Scenarios (such as best, worst, or most likely case) for each input variable are chosen and the results recorded.
By contrast, Monte Carlo simulations sample from a probability distribution for each variable to produce hundreds or thousands of possible outcomes. The results are analyzed to get probabilities of different outcomes occurring. For example, a comparison of a spreadsheet cost construction model run using traditional "what if" scenarios, and then running the comparison again with Monte Carlo simulation and triangular probability distributions shows that the Monte Carlo analysis has a narrower range than the "what if" analysis. This is because the "what if" analysis gives equal weight to all scenarios (see quantifying uncertainty in corporate finance), while the Monte Carlo method hardly samples in the very low probability regions. The samples in such regions are called "rare events".
Applications
Monte Carlo methods are especially useful for simulating phenomena with significant uncertainty in inputs and systems with many coupled degrees of freedom. Areas of application include:
Physical sciences
Monte Carlo methods are very important in computational physics, physical chemistry, and related applied fields, and have diverse applications from complicated quantum chromodynamics calculations to designing heat shields and aerodynamic forms as well as in modeling radiation transport for radiation dosimetry calculations. In statistical physics, Monte Carlo molecular modeling is an alternative to computational molecular dynamics, and Monte Carlo methods are used to compute statistical field theories of simple particle and polymer systems. Quantum Monte Carlo methods solve the many-body problem for quantum systems. In radiation materials science, the binary collision approximation for simulating ion implantation is usually based on a Monte Carlo approach to select the next colliding atom. In experimental particle physics, Monte Carlo methods are used for designing detectors, understanding their behavior and comparing experimental data to theory. In astrophysics, they are used in such diverse manners as to model both galaxy evolution and microwave radiation transmission through a rough planetary surface. Monte Carlo methods are also used in the ensemble models that form the basis of modern weather forecasting.
Engineering
Monte Carlo methods are widely used in engineering for sensitivity analysis and quantitative probabilistic analysis in process design. The need arises from the interactive, co-linear and non-linear behavior of typical process simulations. For example,
In microelectronics engineering, Monte Carlo methods are applied to analyze correlated and uncorrelated variations in analog and digital integrated circuits.
In geostatistics and geometallurgy, Monte Carlo methods underpin the design of mineral processing flowsheets and contribute to quantitative risk analysis.
In fluid dynamics, in particular rarefied gas dynamics, where the Boltzmann equation is solved for finite Knudsen number fluid flows using the direct simulation Monte Carlo method in combination with highly efficient computational algorithms.
In autonomous robotics, Monte Carlo localization can determine the position of a robot. It is often applied to stochastic filters such as the Kalman filter or particle filter that forms the heart of the SLAM (simultaneous localization and mapping) algorithm.
In telecommunications, when planning a wireless network, the design must be proven to work for a wide variety of scenarios that depend mainly on the number of users, their locations and the services they want to use. Monte Carlo methods are typically used to generate these users and their states. The network performance is then evaluated and, if results are not satisfactory, the network design goes through an optimization process.
In reliability engineering, Monte Carlo simulation is used to compute system-level response given the component-level response.
In signal processing and Bayesian inference, particle filters and sequential Monte Carlo techniques are a class of mean-field particle methods for sampling and computing the posterior distribution of a signal process given some noisy and partial observations using interacting empirical measures.
Climate change and radiative forcing
The Intergovernmental Panel on Climate Change relies on Monte Carlo methods in probability density function analysis of radiative forcing.
Computational biology
Monte Carlo methods are used in various fields of computational biology, for example for Bayesian inference in phylogeny, or for studying biological systems such as genomes, proteins, or membranes.
The systems can be studied in the coarse-grained or ab initio frameworks depending on the desired accuracy.
Computer simulations allow monitoring of the local environment of a particular molecule to see if some chemical reaction is happening for instance. In cases where it is not feasible to conduct a physical experiment, thought experiments can be conducted (for instance: breaking bonds, introducing impurities at specific sites, changing the local/global structure, or introducing external fields).
Computer graphics
Path tracing, occasionally referred to as Monte Carlo ray tracing, renders a 3D scene by randomly tracing samples of possible light paths. Repeated sampling of any given pixel will eventually cause the average of the samples to converge on the correct solution of the rendering equation, making it one of the most physically accurate 3D graphics rendering methods in existence.
Applied statistics
The standards for Monte Carlo experiments in statistics were set by Sawilowsky. In applied statistics, Monte Carlo methods may be used for at least four purposes:
To compare competing statistics for small samples under realistic data conditions. Although type I error and power properties of statistics can be calculated for data drawn from classical theoretical distributions (e.g., normal curve, Cauchy distribution) for asymptotic conditions (i.e., infinite sample size and infinitesimally small treatment effect), real data often do not have such distributions.
To provide implementations of hypothesis tests that are more efficient than exact tests such as permutation tests (which are often impossible to compute) while being more accurate than critical values for asymptotic distributions.
To provide a random sample from the posterior distribution in Bayesian inference. This sample then approximates and summarizes all the essential features of the posterior.
To provide efficient random estimates of the Hessian matrix of the negative log-likelihood function that may be averaged to form an estimate of the Fisher information matrix.
Monte Carlo methods are also a compromise between approximate randomization and permutation tests. An approximate randomization test is based on a specified subset of all permutations (which entails potentially enormous housekeeping of which permutations have been considered). The Monte Carlo approach is based on a specified number of randomly drawn permutations (exchanging a minor loss in precision if a permutation is drawn twice—or more frequently—for the efficiency of not having to track which permutations have already been selected).
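A sketch of such a Monte Carlo permutation test in Python, using the absolute difference in sample means as the test statistic (the function name and the choice of statistic are illustrative assumptions):

import random

def mc_permutation_test(x, y, trials=10_000):
    # Estimate the p-value of the observed difference in means by
    # repeatedly re-splitting the pooled data at random, instead of
    # enumerating (or tracking) every possible permutation.
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(trials):
        random.shuffle(pooled)
        px, py = pooled[:len(x)], pooled[len(x):]
        if abs(sum(px) / len(px) - sum(py) / len(py)) >= observed:
            hits += 1
    return hits / trials

print(mc_permutation_test([2.1, 2.5, 2.8, 3.0], [1.2, 1.4, 1.9, 2.0]))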
Artificial intelligence for games
Monte Carlo methods have been developed into a technique called Monte-Carlo tree search that is useful for searching for the best move in a game. Possible moves are organized in a search tree and many random simulations are used to estimate the long-term potential of each move. A black box simulator represents the opponent's moves.
The Monte Carlo tree search (MCTS) method has four steps:
Starting at root node of the tree, select optimal child nodes until a leaf node is reached.
Expand the leaf node and choose one of its children.
Play a simulated game starting with that node.
Use the results of that simulated game to update the node and its ancestors.
The net effect, over the course of many simulated games, is that the value of a node representing a move will go up or down, hopefully corresponding to whether or not that node represents a good move.
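The four steps can be made concrete with a toy example. The following self-contained UCT sketch plays the simple game of Nim (players alternately remove 1–3 stones; whoever takes the last stone wins); the game, the constants, and the names are illustrative assumptions, not a published MCTS implementation:

import math, random

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

class Node:
    def __init__(self, stones, parent=None):
        self.stones = stones   # stones left after the move into this node
        self.parent = parent
        self.children = {}     # move -> Node
        self.wins = 0          # wins for the player who just moved
        self.visits = 0

def best_move(stones, iters=20_000):
    root = Node(stones)
    for _ in range(iters):
        node = root
        # 1. Selection: descend while every move has been tried.
        while node.stones > 0 and len(node.children) == len(legal_moves(node.stones)):
            node = max(node.children.values(),
                       key=lambda c: c.wins / c.visits
                       + math.sqrt(2 * math.log(node.visits) / c.visits))
        # 2. Expansion: add one untried child, unless the game is over.
        if node.stones > 0:
            move = random.choice([m for m in legal_moves(node.stones)
                                  if m not in node.children])
            node.children[move] = Node(node.stones - move, parent=node)
            node = node.children[move]
        # 3. Simulation: play random moves to the end of the game.
        stones_left, plies = node.stones, 0
        while stones_left > 0:
            stones_left -= random.choice(legal_moves(stones_left))
            plies += 1
        mover_won = (plies % 2 == 0)  # did the mover take the last stone?
        # 4. Backpropagation: update the node and its ancestors,
        # flipping the winner's perspective at each level.
        while node is not None:
            node.visits += 1
            node.wins += mover_won
            mover_won = not mover_won
            node = node.parent
    return max(root.children, key=lambda m: root.children[m].visits)

print(best_move(10))  # typically 2, leaving the opponent a multiple of 4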
Monte Carlo Tree Search has been used successfully to play games such as Go, Tantrix, Battleship, Havannah, and Arimaa.
Design and visuals
Monte Carlo methods are also efficient in solving coupled integral differential equations of radiation fields and energy transport, and thus these methods have been used in global illumination computations that produce photo-realistic images of virtual 3D models, with applications in video games, architecture, design, computer generated films, and cinematic special effects.
Search and rescue
The US Coast Guard utilizes Monte Carlo methods within its computer modeling software SAROPS in order to calculate the probable locations of vessels during search and rescue operations. Each simulation can generate as many as ten thousand data points that are randomly distributed based upon provided variables. Search patterns are then generated based upon extrapolations of these data in order to optimize the probability of containment (POC) and the probability of detection (POD), which together will equal an overall probability of success (POS). Ultimately this serves as a practical application of probability distribution in order to provide the swiftest and most expedient method of rescue, saving both lives and resources.
Finance and business
Monte Carlo simulation is commonly used to evaluate the risk and uncertainty that would affect the outcome of different decision options. Monte Carlo simulation allows the business risk analyst to incorporate the total effects of uncertainty in variables like sales volume, commodity and labor prices, interest and exchange rates, as well as the effect of distinct risk events like the cancellation of a contract or the change of a tax law.
Monte Carlo methods in finance are often used to evaluate investments in projects at a business unit or corporate level, or other financial valuations. They can be used to model project schedules, where simulations aggregate estimates for worst-case, best-case, and most likely durations for each task to determine outcomes for the overall project. Monte Carlo methods are also used in option pricing, default risk analysis. Additionally, they can be used to estimate the financial impact of medical interventions.
Law
A Monte Carlo approach was used for evaluating the potential value of a proposed program to help female petitioners in Wisconsin be successful in their applications for harassment and domestic abuse restraining orders. It was proposed to help women succeed in their petitions by providing them with greater advocacy thereby potentially reducing the risk of rape and physical assault. However, there were many variables in play that could not be estimated perfectly, including the effectiveness of restraining orders, the success rate of petitioners both with and without advocacy, and many others. The study ran trials that varied these variables to come up with an overall estimate of the success level of the proposed program as a whole.
Library science
A Monte Carlo approach has also been used to simulate the number of book publications by genre in Malaysia. The simulation used previously published national book publication data and book prices by genre in the local market. The results were used to determine which book genres Malaysians favor and to compare book publications between Malaysia and Japan.
Other
Nassim Nicholas Taleb writes about Monte Carlo generators in his 2001 book Fooled by Randomness as a real instance of the reverse Turing test: a human can be declared unintelligent if their writing cannot be told apart from a generated one.
Use in mathematics
In general, the Monte Carlo methods are used in mathematics to solve various problems by generating suitable random numbers (see also Random number generation) and observing that fraction of the numbers that obeys some property or properties. The method is useful for obtaining numerical solutions to problems too complicated to solve analytically. The most common application of the Monte Carlo method is Monte Carlo integration.
Integration
Deterministic numerical integration algorithms work well in a small number of dimensions, but encounter two problems when the functions have many variables. First, the number of function evaluations needed increases rapidly with the number of dimensions. For example, if 10 evaluations provide adequate accuracy in one dimension, then 10¹⁰⁰ points are needed for 100 dimensions—far too many to be computed. This is called the curse of dimensionality. Second, the boundary of a multidimensional region may be very complicated, so it may not be feasible to reduce the problem to an iterated integral. 100 dimensions is by no means unusual, since in many physical problems, a "dimension" is equivalent to a degree of freedom.
Monte Carlo methods provide a way out of this exponential increase in computation time. As long as the function in question is reasonably well-behaved, it can be estimated by randomly selecting points in 100-dimensional space, and taking some kind of average of the function values at these points. By the central limit theorem, this method displays convergence—i.e., quadrupling the number of sampled points halves the error, regardless of the number of dimensions.
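A short Python illustration of this dimension-independence (the integrand and point count are arbitrary choices for the example):

import random

def mc_integrate(f, dim, n):
    # Estimate the integral of f over the unit hypercube [0,1]^dim by
    # averaging f at n uniformly random points; by the central limit
    # theorem the error shrinks as 1/sqrt(n), independent of dim.
    total = 0.0
    for _ in range(n):
        total += f([random.random() for _ in range(dim)])
    return total / n

# The integral of sum(x_i) over [0,1]^100 is exactly 50.
print(mc_integrate(sum, 100, 100_000))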
A refinement of this method, known as importance sampling in statistics, involves sampling the points randomly, but more frequently where the integrand is large. To do this precisely one would have to already know the integral, but one can approximate the integral by an integral of a similar function or use adaptive routines such as stratified sampling, recursive stratified sampling, adaptive umbrella sampling or the VEGAS algorithm.
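As an illustration of the idea, the sketch below estimates ∫₀¹ eˣ dx (exactly e − 1 ≈ 1.71828) twice: once with uniform sampling and once with a proposal density g(x) = (1 + x)/1.5 that roughly follows the shape of the integrand; the proposal is an arbitrary choice made for this example:

import math, random

def plain_mc(n):
    # Uniform sampling over [0, 1].
    return sum(math.exp(random.random()) for _ in range(n)) / n

def importance_mc(n):
    # Sample x from g(x) = (1 + x)/1.5 by inverting its CDF
    # G(x) = (x + x^2/2)/1.5, then weight each sample by f(x)/g(x).
    total = 0.0
    for _ in range(n):
        u = random.random()
        x = math.sqrt(1 + 3 * u) - 1            # x = G^-1(u)
        total += math.exp(x) / ((1 + x) / 1.5)  # f(x) / g(x)
    return total / n

print(plain_mc(100_000), importance_mc(100_000))  # both near 1.71828

The weighted estimator has the same expected value as the plain one but lower variance, because f(x)/g(x) varies less over [0, 1] than f itself.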
A similar approach, the quasi-Monte Carlo method, uses low-discrepancy sequences. These sequences "fill" the area better and sample the most important points more frequently, so quasi-Monte Carlo methods can often converge on the integral more quickly.
Another class of methods for sampling points in a volume is to simulate random walks over it (Markov chain Monte Carlo). Such methods include the Metropolis–Hastings algorithm, Gibbs sampling, Wang and Landau algorithm, and interacting type MCMC methodologies such as the sequential Monte Carlo samplers.
Simulation and optimization
Another powerful and very popular application for random numbers in numerical simulation is in numerical optimization. The problem is to minimize (or maximize) functions of some vector that often has many dimensions. Many problems can be phrased in this way: for example, a computer chess program could be seen as trying to find the set of, say, 10 moves that produces the best evaluation function at the end. In the traveling salesman problem the goal is to minimize distance traveled. There are also applications to engineering design, such as multidisciplinary design optimization. It has been applied with quasi-one-dimensional models to solve particle dynamics problems by efficiently exploring large configuration space. A comprehensive review of many issues related to simulation and optimization can be found in the references.
The traveling salesman problem is what is called a conventional optimization problem. That is, all the facts (distances between each destination point) needed to determine the optimal path to follow are known with certainty, and the goal is to run through the possible travel choices to come up with the one with the lowest total distance. If the goal is instead to minimize the total time needed to reach each destination, rather than the total distance traveled, this goes beyond conventional optimization, since travel time is inherently uncertain (traffic jams, time of day, etc.). As a result, determining the optimal path requires a different kind of simulation: one must first understand the range of potential times it could take to go from one point to another (represented by a probability distribution in this case rather than a specific distance) and then optimize the travel decisions to identify the best path to follow taking that uncertainty into account.
Inverse problems
Probabilistic formulation of inverse problems leads to the definition of a probability distribution in the model space. This probability distribution combines prior information with new information obtained by measuring some observable parameters (data).
As, in the general case, the theory linking data with model parameters is nonlinear, the posterior probability in the model space may not be easy to describe (it may be multimodal, some moments may not be defined, etc.).
When analyzing an inverse problem, obtaining a maximum likelihood model is usually not sufficient, as normally information on the resolution power of the data is desired. In the general case many parameters are modeled, and an inspection of the marginal probability densities of interest may be impractical, or even useless. But it is possible to pseudorandomly generate a large collection of models according to the posterior probability distribution and to analyze and display the models in such a way that information on the relative likelihoods of model properties is conveyed to the spectator. This can be accomplished by means of an efficient Monte Carlo method, even in cases where no explicit formula for the a priori distribution is available.
The best-known importance sampling method, the Metropolis algorithm, can be generalized, and this gives a method that allows analysis of (possibly highly nonlinear) inverse problems with complex a priori information and data with an arbitrary noise distribution.
Philosophy
Popular exposition of the Monte Carlo Method was conducted by McCracken. The method's general philosophy was discussed by Elishakoff and Grüne-Yanoff and Weirich.
See also
Auxiliary-field Monte Carlo
Biology Monte Carlo method
Direct simulation Monte Carlo
Dynamic Monte Carlo method
Ergodicity
Genetic algorithms
Kinetic Monte Carlo
List of software for Monte Carlo molecular modeling
Mean-field particle methods
Monte Carlo method for photon transport
Monte Carlo methods for electron transport
Monte Carlo N-Particle Transport Code
Morris method
Multilevel Monte Carlo method
Quasi-Monte Carlo method
Sobol sequence
Temporal difference learning
References
Citations
Sources
External links
Numerical analysis
Statistical mechanics
Computational physics
Sampling techniques
Statistical approximations
Stochastic simulation
Randomized algorithms
Risk analysis methodologies | Monte Carlo method | Physics,Mathematics | 7,879 |
22,614,255 | https://en.wikipedia.org/wiki/Beyond%20Compare | Beyond Compare is a cross-platform proprietary data comparison utility. The program is able to compare files and multiple types of directories, as well as archives. Beyond Compare can be configured as a difftool and mergetool of version control systems, such as git.
Reception
In an April 2009 review, Beyond Compare received four out of five rating stars from CNET. The reviewers initially found the user interface to be "a little overwhelming," but they "quickly got the hang of it" after using the program for a while. PC World writer Michael Desmond included the program in a 2005 list of utilities for a "Trouble-Free PC" and praised its "watch list" feature. Beyond Compare also was featured in the March 2005 issue of the Windows IT Pro magazine in the "What's Hot" section.
Scott Mitchell, writing for MSDN Magazine, identified the program's comparison rules as its most powerful feature. The customizable rules control which differences between two files should be flagged as such. A set of predefined rules is included for the comparison of common file types, such as C++ source code, XML, and HTML files.
Steve Gibson of GRC described it as "a really cool...very smart Windows-based source comparison tool."
See also
Comparison of file comparison tools
Comparison of FTP client software
References
External links
Scooter Software, maker of Beyond Compare
File comparison tools
Data synchronization
Pascal (programming language) software
Proprietary commercial software for Linux | Beyond Compare | Technology | 310 |
59,742,671 | https://en.wikipedia.org/wiki/DSatur | DSatur is a graph colouring algorithm put forward by Daniel Brélaz in 1979. Similarly to the greedy colouring algorithm, DSatur colours the vertices of a graph one after another, adding a previously unused colour when needed. Once a new vertex has been coloured, the algorithm determines which of the remaining uncoloured vertices has the highest number of colours in its neighbourhood and colours this vertex next. Brélaz defines this number as the degree of saturation of a given vertex. The contraction of the term "degree of saturation" forms the name of the algorithm. DSatur is a heuristic graph colouring algorithm, yet produces exact results for bipartite, cycle, and wheel graphs. DSatur has also been referred to as saturation LF in the literature.
Pseudocode
Let the "degree of saturation" of a vertex be the number of different colours being used by its neighbors. Given a simple, undirected graph compromising a vertex set and edge set , the algorithm assigns colors to all of the vertices using color labels . The algorithm operates as follows:
Let v be the uncolored vertex in G with the highest degree of saturation. In cases of ties, choose the vertex among these with the largest degree in the subgraph induced by the uncolored vertices.
Assign to v the lowest color label not being used by any of its neighbors.
If all vertices have been colored, then end; otherwise return to Step 1.
Step 2 of this algorithm assigns colors to vertices using the same scheme as the greedy colouring algorithm. The main differences between the two approaches arises in Step 1 above, where vertices seen to be the most "constrained" are coloured first.
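A compact Python sketch of these three steps, running in time roughly quadratic in the number of vertices (the representation of the graph as a vertex list and edge pairs is an assumption of the example):

from collections import defaultdict

def dsatur(vertices, edges):
    # Build an adjacency map from the edge list.
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    colour = {}
    uncoloured = set(vertices)
    while uncoloured:
        # Step 1: pick the vertex with the highest saturation,
        # breaking ties by degree in the uncoloured subgraph.
        def priority(v):
            saturation = len({colour[w] for w in adj[v] if w in colour})
            degree = sum(1 for w in adj[v] if w in uncoloured)
            return (saturation, degree)
        v = max(uncoloured, key=priority)
        # Step 2: assign the lowest colour label unused by neighbours.
        used = {colour[w] for w in adj[v] if w in colour}
        c = 1
        while c in used:
            c += 1
        colour[v] = c
        uncoloured.remove(v)
    # Step 3 (termination) is implicit in the while loop.
    return colour

For example, dsatur(range(4), [(0, 1), (1, 2), (2, 3), (3, 0)]) colours a 4-cycle with two colours.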
Example
Consider the graph shown on the right. This is a wheel graph and will therefore be optimally colored by the DSatur algorithm. Executing the algorithm results in the vertices being selected and colored as follows. (In this example, where ties occur in both of DSatur's heuristics, the vertex with lowest lexicographic labelling among these is chosen.)
Vertex (color 1)
Vertex (color 2)
Vertex (color 3)
Vertex (color 2)
Vertex (color 3)
Vertex (color 2)
Vertex (color 3)
This gives the final three-colored solution.
Performance
The worst-case complexity of DSatur is O(n²), where n is the number of vertices in the graph. This is because the process of selecting the next vertex to colour takes O(n) time, and this process is carried out n times. The algorithm can also be implemented using a binary heap to store saturation degrees, operating in O((n + m) log n) time, or using a Fibonacci heap, operating in O(m + n log n) time, where m is the number of edges in the graph. This produces much faster runs with sparse graphs.
DSatur is known to be exact for bipartite graphs, as well as for cycle and wheel graphs. In an empirical comparison by Lewis in 2021, DSatur produced significantly better vertex colourings than the greedy algorithm on random graphs, while in turn producing significantly worse colourings than the recursive largest first algorithm.
References
External links
High-Performance Graph Colouring Algorithms Suite of graph colouring algorithms (implemented in C++) used in the book A Guide to Graph Colouring: Algorithms and Applications (Springer International Publishers, 2021).
C++ implementation of the DSatur Algorithm, presented as part of the article The DSatur Algorithm for Graph Coloring, Geeks for Geeks (2021)
1979 in computing
Graph algorithms
Graph coloring | DSatur | Mathematics | 713 |
8,898,329 | https://en.wikipedia.org/wiki/Frequency%20scaling | In computer architecture, frequency scaling (also known as frequency ramping) is the technique of increasing a processor's frequency so as to enhance the performance of the system containing the processor in question. Frequency ramping was the dominant force in commodity processor performance increases from the mid-1980s until roughly the end of 2004.
The effect of processor frequency on computer speed can be seen by looking at the equation for computer program runtime:
runtime = (instructions/program) × (cycles/instruction) × (time/cycle)
where instructions per program is the total instructions being executed in a given program, cycles per instruction is a program-dependent, architecture-dependent average value, and time per cycle is by definition the inverse of processor frequency. An increase in frequency thus decreases runtime.
However, power consumption in a chip is given by the equation
P = C × V² × F
where P is power consumption, C is the capacitance being switched per clock cycle, V is voltage, and F is the processor frequency (cycles per second). Increases in frequency thus increase the amount of power used in a processor. Increasing processor power consumption led ultimately to Intel's May 2004 cancellation of its Tejas and Jayhawk processors, which is generally cited as the end of frequency scaling as the dominant computer architecture paradigm.
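A numeric illustration of these two equations (the figures are arbitrary examples, not measurements of any real processor):

def runtime(instructions, cycles_per_instruction, frequency_hz):
    # runtime = instructions × (cycles/instruction) × (time/cycle)
    return instructions * cycles_per_instruction / frequency_hz

def dynamic_power(capacitance_f, voltage_v, frequency_hz):
    # P = C × V² × F
    return capacitance_f * voltage_v ** 2 * frequency_hz

# Doubling frequency halves runtime but doubles power, all else fixed:
print(runtime(1e9, 1.5, 2e9), dynamic_power(1e-9, 1.2, 2e9))
print(runtime(1e9, 1.5, 4e9), dynamic_power(1e-9, 1.2, 4e9))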
Moore's Law was still in effect when frequency scaling ended. Despite power issues, transistor densities were still doubling every 18 to 24 months. With the end of frequency scaling, new transistors (which are no longer needed to facilitate frequency scaling) are used to add extra hardware, such as additional cores, to facilitate parallel computing - a technique that is being referred to as parallel scaling.
The end of frequency scaling as the dominant cause of processor performance gains has caused an industry-wide shift to parallel computing in the form of multicore processors.
See also
Dynamic frequency scaling
Overclocking
Underclocking
Voltage scaling
References
Computer architecture
Central processing unit | Frequency scaling | Technology,Engineering | 381 |
30,983 | https://en.wikipedia.org/wiki/Testosterone | Testosterone is the primary male sex hormone and androgen in males. In humans, testosterone plays a key role in the development of male reproductive tissues such as testicles and prostate, as well as promoting secondary sexual characteristics such as increased muscle and bone mass, and the growth of body hair. It is associated with increased aggression, sex drive, dominance, courtship display, and a wide range of behavioral characteristics. In addition, testosterone in both sexes is involved in health and well-being, where it has a significant effect on overall mood, cognition, social and sexual behavior, metabolism and energy output, the cardiovascular system, and in the prevention of osteoporosis. Insufficient levels of testosterone in men may lead to abnormalities including frailty, accumulation of adipose fat tissue within the body, anxiety and depression, sexual performance issues, and bone loss.
Excessive levels of testosterone in men may be associated with hyperandrogenism, higher risk of heart failure, increased mortality in men with prostate cancer, and male pattern baldness.
Testosterone is a steroid hormone from the androstane class containing a ketone and a hydroxyl group at positions three and seventeen respectively. It is biosynthesized in several steps from cholesterol and is converted in the liver to inactive metabolites. It exerts its action through binding to and activation of the androgen receptor. In humans and most other vertebrates, testosterone is secreted primarily by the testicles of males and, to a lesser extent, the ovaries of females. On average, in adult males, levels of testosterone are about seven to eight times as great as in adult females. As the metabolism of testosterone in males is more pronounced, the daily production is about 20 times greater in men. Females are also more sensitive to the hormone.
In addition to its role as a natural hormone, testosterone is used as a medication to treat hypogonadism and breast cancer. Since testosterone levels decrease as men age, testosterone is sometimes used in older men to counteract this deficiency. It is also used illicitly to enhance physique and performance, for instance in athletes. The World Anti-Doping Agency lists it as S1 Anabolic agent substance "prohibited at all times".
Biological effects
Effects on physiological development
In general, androgens such as testosterone promote protein synthesis and thus growth of tissues with androgen receptors. Testosterone can be described as having anabolic and androgenic (virilising) effects, though these categorical descriptions are somewhat arbitrary, as there is a great deal of mutual overlap between them. The relative potency of these effects can depend on various factors and is a topic of ongoing research. Testosterone can either directly exert effects on target tissues or be metabolized by 5α-reductase into dihydrotestosterone (DHT) or aromatized to estradiol (E2). Both testosterone and DHT bind to an androgen receptor; however, DHT has a stronger binding affinity than testosterone and may have more androgenic effect in certain tissues at lower levels.
Anabolic effects include growth of muscle mass and strength, increased bone density and strength, and stimulation of linear growth and bone maturation.
Androgenic effects include maturation of the sex organs, particularly the penis, and the formation of the scrotum in the fetus, and after birth (usually at puberty) a deepening of the voice, growth of facial hair (such as the beard) and axillary (underarm) hair. Many of these fall into the category of male secondary sex characteristics.
Testosterone effects can also be classified by the age of usual occurrence. For postnatal effects in both males and females, these are mostly dependent on the levels and duration of circulating free testosterone.
Before birth
Effects before birth are divided into two categories, classified in relation to the stages of development.
The first period occurs between 4 and 6 weeks of gestation. Examples include genital virilisation such as midline fusion, phallic urethra, scrotal thinning and rugation, and phallic enlargement, although the role of testosterone is far smaller than that of dihydrotestosterone. There is also development of the prostate gland and seminal vesicles.
During the second trimester, androgen level is associated with sex formation. Specifically, testosterone and anti-Müllerian hormone (AMH) promote growth of the Wolffian duct and degeneration of the Müllerian duct, respectively. This period affects the feminization or masculinization of the fetus and can be a better predictor of feminine or masculine behaviours, such as sex-typed behaviour, than an adult's own levels. Prenatal androgens apparently influence interests and engagement in gendered activities and have moderate effects on spatial abilities. Among women with congenital adrenal hyperplasia, male-typical play in childhood correlated with reduced satisfaction with the female gender and reduced heterosexual interest in adulthood.
Early infancy
Early infancy androgen effects are the least understood. In the first weeks of life for male infants, testosterone levels rise. The levels remain in a pubertal range for a few months, but usually reach the barely detectable levels of childhood by 4–7 months of age. The function of this rise in humans is unknown. It has been theorized that brain masculinization is occurring since no significant changes have been identified in other parts of the body. The male brain is masculinized by the aromatization of testosterone into estradiol, which crosses the blood–brain barrier and enters the male brain, whereas female fetuses have α-fetoprotein, which binds the estrogen so that female brains are not affected.
Before puberty
Before puberty, effects of rising androgen levels occur in both boys and girls. These include adult-type body odor, increased oiliness of skin and hair, acne, pubarche (appearance of pubic hair), axillary hair (armpit hair), growth spurt, accelerated bone maturation, and facial hair.
Pubertal
Pubertal effects begin to occur when androgen has been higher than normal adult female levels for months or years. In males, these are usual late pubertal effects, and occur in women after prolonged periods of heightened levels of free testosterone in the blood. The effects include:
Growth of spermatogenic tissue in the testicles, male fertility, enlargement of the penis or clitoris, and increased libido and frequency of erection or clitoral engorgement.
Growth of the jaw, brow, chin, and nose and remodeling of facial bone contours, in conjunction with human growth hormone.
Completion of bone maturation and termination of growth. This occurs indirectly via estradiol metabolites and hence more gradually in men than women.
Increased muscle strength and mass, shoulders become broader and rib cage expands, deepening of voice, growth of the Adam's apple.
Enlargement of sebaceous glands, which might cause acne; subcutaneous fat in the face decreases.
Pubic hair extends to thighs and up toward umbilicus, development of facial hair (sideburns, beard, moustache), loss of scalp hair (androgenetic alopecia), increase in chest hair, periareolar hair, perianal hair, leg hair, armpit hair.
Adult
Testosterone is necessary for normal sperm development. It activates genes in Sertoli cells, which promote differentiation of spermatogonia. It regulates acute hypothalamic–pituitary–adrenal (HPA) axis response under dominance challenge. Androgens including testosterone enhance muscle growth. Testosterone also regulates the population of thromboxane A2 receptors on megakaryocytes and platelets and hence platelet aggregation in humans.
Adult testosterone effects are more clearly demonstrable in males than in females, but are likely important to both sexes. Some of these effects may decline as testosterone levels decrease in the later decades of adult life.
The brain is also affected by this sexual differentiation; the enzyme aromatase converts testosterone into estradiol that is responsible for masculinization of the brain in male mice. In humans, masculinization of the fetal brain appears, by observation of gender preference in patients with congenital disorders of androgen formation or androgen receptor function, to be associated with functional androgen receptors.
There are some differences between a male and female brain that may be due to different testosterone levels, one of them being size: the male human brain is, on average, larger.
Health effects
Testosterone does not appear to increase the risk of developing prostate cancer. In people who have undergone testosterone deprivation therapy, testosterone increases beyond the castrate level have been shown to increase the rate of spread of an existing prostate cancer.
Conflicting results have been obtained concerning the importance of testosterone in maintaining cardiovascular health. Nevertheless, maintaining normal testosterone levels in elderly men has been shown to improve many parameters that are thought to reduce cardiovascular disease risk, such as increased lean body mass, decreased visceral fat mass, decreased total cholesterol, and improved glycemic control.
High androgen levels are associated with menstrual cycle irregularities in both clinical populations and healthy women. There can also be effects such as unusual hair growth, acne, weight gain, infertility, and sometimes even scalp hair loss. These effects are seen largely in women with polycystic ovary syndrome (PCOS). For women with PCOS, hormonal medications such as birth control pills can be used to help lessen the effects of this increased level of testosterone.
Attention, memory, and spatial ability are key cognitive functions affected by testosterone in humans. Preliminary evidence suggests that low testosterone levels may be a risk factor for cognitive decline and possibly for dementia of the Alzheimer's type, a key argument in life extension medicine for the use of testosterone in anti-aging therapies. Much of the literature, however, suggests a curvilinear or even quadratic relationship between spatial performance and circulating testosterone, where both hypo- and hypersecretion (deficient- and excessive-secretion) of circulating androgens have negative effects on cognition.
Immune system and inflammation
Testosterone deficiency is associated with an increased risk of metabolic syndrome, cardiovascular disease and mortality, which are also sequelae of chronic inflammation. Testosterone plasma concentration inversely correlates to multiple biomarkers of inflammation including CRP, interleukin 1 beta, interleukin 6, TNF alpha and endotoxin concentration, as well as leukocyte count. As demonstrated by a meta-analysis, substitution therapy with testosterone results in a significant reduction of inflammatory markers. These effects are mediated by different mechanisms with synergistic action. In androgen-deficient men with concomitant autoimmune thyroiditis, substitution therapy with testosterone leads to a decrease in thyroid autoantibody titres and an increase in thyroid's secretory capacity (SPINA-GT).
Medical use
Testosterone is used as a medication for the treatment of male hypogonadism, gender dysphoria, and certain types of breast cancer. This is known as hormone replacement therapy (HRT) or testosterone replacement therapy (TRT), which maintains serum testosterone levels in the normal range. Decline of testosterone production with age has led to interest in androgen replacement therapy. It is unclear if the use of testosterone for low levels due to aging is beneficial or harmful.
Testosterone is included in the World Health Organization's list of essential medicines, which are the most important medications needed in a basic health system. It is available as a generic medication. It can be administered as a cream or transdermal patch that is applied to the skin, by injection into a muscle, as a tablet that is placed in the cheek, or by ingestion.
Common side effects from testosterone medication include acne, swelling, and breast enlargement in males. Serious side effects may include liver toxicity, heart disease (though a randomized trial found no evidence of major adverse cardiac events compared to placebo in men with low testosterone), and behavioral changes. Women and children who are exposed may develop virilization. It is recommended that individuals with prostate cancer not use the medication. It can cause harm if used during pregnancy or breastfeeding.
2020 guidelines from the American College of Physicians support the discussion of testosterone treatment in adult men with age-related low levels of testosterone who have sexual dysfunction. They recommend yearly evaluation regarding possible improvement and, if none, to discontinue testosterone; physicians should consider intramuscular treatments, rather than transdermal treatments, due to costs and since the effectiveness and harm of either method are similar. Testosterone treatment for reasons other than possible improvement of sexual dysfunction may not be recommended.
No immediate short-term effects on mood or behavior were found from the administration of supraphysiologic doses of testosterone for 10 weeks in 43 healthy men.
Behavioural correlations
Sexual arousal
Testosterone levels follow a circadian rhythm that peaks early each day, regardless of sexual activity.
In women, correlations may exist between positive orgasm experience and testosterone levels. Studies have shown small or inconsistent correlations between testosterone levels and male orgasm experience, as well as sexual assertiveness in both sexes.
Sexual arousal and masturbation in women produce small increases in testosterone concentrations. In men, the plasma levels of various steroids increase significantly after masturbation, and testosterone levels correlate with these increases.
Mammalian studies
Studies conducted in rats have indicated that their degree of sexual arousal is sensitive to reductions in testosterone. When testosterone-deprived rats were given medium levels of testosterone, their sexual behaviours (copulation, partner preference, etc.) resumed, but not when given low amounts of the same hormone. Therefore, these mammals may provide a model for studying clinical populations among humans with sexual arousal deficits such as hypoactive sexual desire disorder.
Every mammalian species examined demonstrated a marked increase in a male's testosterone level upon encountering a female. The reflexive testosterone increase in male mice is related to the male's initial level of sexual arousal.
In non-human primates, it may be that testosterone in puberty stimulates sexual arousal, which allows the primate to increasingly seek out sexual experiences with females and thus creates a sexual preference for females. Some research has also indicated that if testosterone is eliminated in an adult male human or other adult male primate's system, its sexual motivation decreases, but there is no corresponding decrease in ability to engage in sexual activity (mounting, ejaculating, etc.).
In accordance with sperm competition theory, testosterone levels are shown to increase as a response to previously neutral stimuli when conditioned to become sexual in male rats. This reaction engages penile reflexes (such as erection and ejaculation) that aid in sperm competition when more than one male is present in mating encounters, allowing for more production of successful sperm and a higher chance of reproduction.
Males
In men, higher levels of testosterone are associated with periods of sexual activity.
Men who watch a sexually explicit movie have an average increase of 35% in testosterone, peaking at 60–90 minutes after the end of the film, but no increase is seen in men who watch sexually neutral films. Men who watch sexually explicit films also report increased motivation and competitiveness, and decreased exhaustion. A link has also been found between relaxation following sexual arousal and testosterone levels.
Females
Androgens may modulate the physiology of vaginal tissue and contribute to female genital sexual arousal. Women's testosterone levels are higher when measured pre-intercourse versus pre-cuddling, as well as post-intercourse versus post-cuddling. When testosterone is administered, there is a time-lag effect on genital arousal in women. In addition, a continuous increase in vaginal sexual arousal may result in higher genital sensations and sexual appetitive behaviors.
When females have a higher baseline level of testosterone, they have higher increases in sexual arousal levels but smaller increases in testosterone, indicating a ceiling effect on testosterone levels in females. Sexual thoughts also change the level of testosterone but not the level of cortisol in the female body, and hormonal contraceptives may affect the variation in testosterone response to sexual thoughts.
Testosterone may prove to be an effective treatment in female sexual arousal disorders, and is available as a dermal patch. There is no FDA-approved androgen preparation for the treatment of androgen insufficiency; however, it has been used as an off-label use to treat low libido and sexual dysfunction in older women. Testosterone may be a treatment for postmenopausal women as long as they are effectively estrogenized.
Romantic relationships
Falling in love has been linked with decreases in men's testosterone levels while mixed changes are reported for women's testosterone levels. There has been speculation that these changes in testosterone result in the temporary reduction of differences in behavior between the sexes. However, the testosterone changes observed do not seem to be maintained as relationships develop over time.
Men who produce less testosterone are more likely to be in a relationship or married, and men who produce more testosterone are more likely to divorce. Marriage or commitment could cause a decrease in testosterone levels. Single men who have not had relationship experience have lower testosterone levels than single men with experience. It is suggested that these single men with prior experience are in a more competitive state than their non-experienced counterparts. Married men who engage in bond-maintenance activities such as spending the day with their spouse or child have no different testosterone levels compared to times when they do not engage in such activities. Collectively, these results suggest that the presence of competitive activities rather than bond-maintenance activities is more relevant to changes in testosterone levels.
Men who produce more testosterone are more likely to engage in extramarital sex. Testosterone levels do not rely on physical presence of a partner; testosterone levels of men engaging in same-city and long-distance relationships are similar. Physical presence may be required for women who are in relationships for the testosterone–partner interaction, where same-city partnered women have lower testosterone levels than long-distance partnered women.
Fatherhood
Fatherhood decreases testosterone levels in men, suggesting that the emotions and behaviour tied to paternal care decrease testosterone levels. In humans and other species that utilize allomaternal care, paternal investment in offspring is beneficial to said offspring's survival because it allows the two parents to raise multiple children simultaneously. This increases the reproductive fitness of the parents because their offspring are more likely to survive and reproduce. Paternal care increases offspring survival due to increased access to higher quality food and reduced physical and immunological threats. This is particularly beneficial for humans since offspring are dependent on parents for extended periods of time and mothers have relatively short inter-birth intervals.
While the extent of paternal care varies between cultures, higher investment in direct child care has been seen to be correlated with lower average testosterone levels as well as temporary fluctuations. For instance, fluctuation in testosterone levels when a child is in distress has been found to be indicative of fathering styles. If a father's testosterone levels decrease in response to hearing their baby cry, it is an indication of empathizing with the baby. This is associated with increased nurturing behavior and better outcomes for the infant.
Motivation
Testosterone levels play a major role in risk-taking during financial decisions. Higher testosterone levels in men reduce the risk of becoming or staying unemployed. Research has also found that heightened levels of testosterone and cortisol are associated with an increased risk of impulsive and violent criminal behavior. On the other hand, elevated testosterone in men may increase their generosity, primarily to attract a potential mate.
Aggression and criminality
Most studies support a link between adult criminality and testosterone. Nearly all studies of juvenile delinquency and testosterone, however, have found no significant relationship. Most studies have found testosterone to be associated with behaviors or personality traits linked with antisocial behavior and alcoholism. Many studies have also been undertaken on the relationship between testosterone and more general aggressive behavior and feelings; about half of studies have found a relationship and about half no relationship. Studies have found that testosterone facilitates aggression by modulating vasopressin receptors in the hypothalamus.
There are two theories on the role of testosterone in aggression and competition. The first is the challenge hypothesis, which states that testosterone increases during puberty, thereby facilitating reproductive and competitive behavior, including aggression. It is therefore the challenge of competition among males that facilitates aggression and violence. Studies have found a direct correlation between testosterone and dominance, especially among the most violent criminals in prison, who had the highest testosterone levels. The same research found fathers (outside competitive environments) had the lowest testosterone levels compared to other males.
The second theory is similar and is known as the "evolutionary neuroandrogenic (ENA) theory of male aggression". Testosterone and other androgens have evolved to masculinize a brain to be competitive, even to the point of risking harm to the person and others. By doing so, individuals with masculinized brains resulting from prenatal and adult-life testosterone and androgens enhance their resource-acquiring abilities in order to survive, attract and copulate with mates as much as possible. The masculinization of the brain is not just mediated by testosterone levels at the adult stage, but also by testosterone exposure in the womb. Higher prenatal testosterone, indicated by a low digit ratio, as well as higher adult testosterone levels, increased the risk of fouls or aggression among male players in a soccer game. Studies have also found higher prenatal testosterone or lower digit ratio to be correlated with higher aggression.
The rise in testosterone during competition predicted aggression in males, but not in females. Subjects who interacted with handguns and an experimental game showed a rise in testosterone and aggression. Natural selection might have evolved males to be more sensitive to competitive and status-challenge situations, with the interacting roles of testosterone being an essential ingredient for aggressive behaviour in these situations. Testosterone mediates attraction to cruel and violent cues in men by promoting extended viewing of violent stimuli. Testosterone-specific structural brain characteristics can predict aggressive behaviour in individuals.
A study in the Annals of the New York Academy of Sciences found anabolic steroid use (which increases testosterone) to be higher in teenagers, and this was associated with increased violence. Studies have also found administered testosterone to increase verbal aggression and anger in some participants.
A few studies indicate that the testosterone derivative estradiol might play an important role in male aggression. Estradiol is known to correlate with aggression in male mice. Moreover, the conversion of testosterone to estradiol regulates male aggression in sparrows during breeding season. Rats who were given anabolic steroids that increase testosterone were also more physically aggressive to provocation as a result of "threat sensitivity".
The relationship between testosterone and aggression may also function indirectly, as it has been proposed that testosterone does not amplify tendencies towards aggression, but rather amplifies whatever tendencies will allow an individual to maintain social status when challenged. In most animals, aggression is the means of maintaining social status. However, humans have multiple ways of obtaining status. This could explain why some studies find a link between testosterone and pro-social behaviour, if pro-social behaviour is rewarded with social status. Thus the link between testosterone and aggression and violence is due to these being rewarded with social status. The relationship may also be one of a "permissive effect", whereby testosterone does elevate aggression levels, but only in the sense of allowing average aggression levels to be maintained; chemically or physically castrating the individual will reduce aggression levels (though not eliminate them), but the individual needs only a small amount of pre-castration testosterone for aggression levels to return to normal, where they will remain even if additional testosterone is added. Testosterone may also simply exaggerate or amplify existing aggression; for example, chimpanzees who receive testosterone increases become more aggressive towards chimps lower than them in the social hierarchy, but will still be submissive to chimps higher than them. Testosterone thus does not make the chimpanzee indiscriminately aggressive, but instead amplifies his pre-existing aggression towards lower-ranked chimps.
In humans, testosterone appears more to promote status-seeking and social dominance than simply increasing physical aggression. When controlling for the effects of belief in having received testosterone, women who have received testosterone make fairer offers than women who have not received testosterone.
Fairness
Testosterone might encourage fair behavior. For one study, subjects took part in a behavioral experiment in which the distribution of a real amount of money was decided. The rules allowed both fair and unfair offers. The negotiating partner could subsequently accept or decline the offer. The fairer the offer, the less probable a refusal by the negotiating partner. If no agreement was reached, neither party earned anything. Test subjects with an artificially enhanced testosterone level generally made better, fairer offers than those who received placebos, thus reducing the risk of a rejection of their offer to a minimum. Two later studies have empirically confirmed these results. However, men with high testosterone were 27% less generous in an ultimatum game.
Biological activity
Free testosterone
Lipophilic hormones (soluble in lipids but not in water), such as steroid hormones, including testosterone, are transported in water-based blood plasma through specific and non-specific proteins. Specific proteins include sex hormone-binding globulin (SHBG), which binds testosterone, dihydrotestosterone, estradiol, and other sex steroids. Non-specific binding proteins include albumin. The part of the total hormone concentration that is not bound to any carrier protein is the free part; only this free testosterone can bind to the androgen receptor, which means that only it has direct biological activity. While a significant portion of testosterone is bound to SHBG, a further portion is bound to albumin; the binding of testosterone to albumin is weak and can be reversed easily, and as such, both albumin-bound and unbound testosterone are considered to be bioavailable testosterone. This binding plays an important role in regulating the transport, tissue delivery, bioactivity, and metabolism of testosterone. At the tissue level, testosterone dissociates from albumin and quickly diffuses into the tissues. The percentage of testosterone bound to SHBG is lower in men than in women. Both the free fraction and the fraction bound to albumin are available at the tissue level (their sum constitutes the bioavailable testosterone), while SHBG effectively and irreversibly inhibits the action of testosterone. The relationship between sex steroids and SHBG in physiological and pathological conditions is complex, as various factors may influence the levels of plasma SHBG, affecting the bioavailability of testosterone.
Steroid hormone activity
The effects of testosterone in humans and other vertebrates occur by way of multiple mechanisms: by activation of the androgen receptor (directly or as dihydrotestosterone), and by conversion to estradiol and activation of certain estrogen receptors. Androgens such as testosterone have also been found to bind to and activate membrane androgen receptors.
Free testosterone (T) is transported into the cytoplasm of target tissue cells, where it can bind to the androgen receptor, or can be reduced to 5α-dihydrotestosterone (5α-DHT) by the cytoplasmic enzyme 5α-reductase. 5α-DHT binds to the same androgen receptor even more strongly than testosterone, so that its androgenic potency is about 5 times that of T. The T-receptor or DHT-receptor complex undergoes a structural change that allows it to move into the cell nucleus and bind directly to specific nucleotide sequences of the chromosomal DNA. The areas of binding are called hormone response elements (HREs), and influence transcriptional activity of certain genes, producing the androgen effects.
Androgen receptors occur in many different vertebrate body system tissues, and both males and females respond similarly to similar levels. Greatly differing amounts of testosterone prenatally, at puberty, and throughout life account for a share of biological differences between males and females.
The bones and the brain are two important tissues in humans where the primary effect of testosterone is by way of aromatization to estradiol. In the bones, estradiol accelerates ossification of cartilage into bone, leading to closure of the epiphyses and conclusion of growth. In the central nervous system, testosterone is aromatized to estradiol. Estradiol rather than testosterone serves as the most important feedback signal to the hypothalamus (especially affecting LH secretion). In many mammals, prenatal or perinatal "masculinization" of the sexually dimorphic areas of the brain by estradiol derived from testosterone programs later male sexual behavior.
Neurosteroid activity
Testosterone, via its active metabolite 3α-androstanediol, is a potent positive allosteric modulator of the GABAA receptor.
Testosterone has been found to act as an antagonist of the TrkA and p75NTR, receptors for the neurotrophin nerve growth factor (NGF), with high affinity (around 5 nM). In contrast to testosterone, DHEA and DHEA sulfate have been found to act as high-affinity agonists of these receptors.
Testosterone is an antagonist of the sigma-1 receptor (Ki = 1,014 or 201 nM). However, the concentrations of testosterone required for binding the receptor are far above even total circulating concentrations of testosterone in adult males (which range between 10 and 35 nM).
Biochemistry
Biosynthesis
Like other steroid hormones, testosterone is derived from cholesterol. The first step in the biosynthesis involves the oxidative cleavage of the side-chain of cholesterol by cholesterol side-chain cleavage enzyme (P450scc, CYP11A1), a mitochondrial cytochrome P450 oxidase, with the loss of six carbon atoms to give pregnenolone. In the next step, two additional carbon atoms are removed by the CYP17A1 (17α-hydroxylase/17,20-lyase) enzyme in the endoplasmic reticulum to yield a variety of C19 steroids. In addition, the 3β-hydroxyl group is oxidized by 3β-hydroxysteroid dehydrogenase to produce androstenedione. In the final and rate-limiting step, the C17 keto group of androstenedione is reduced by 17β-hydroxysteroid dehydrogenase to yield testosterone.
The largest amounts of testosterone (>95%) are produced by the testes in men, while the adrenal glands account for most of the remainder. Testosterone is also synthesized in far smaller total quantities in women by the adrenal glands, thecal cells of the ovaries, and, during pregnancy, by the placenta. In the testes, testosterone is produced by the Leydig cells. The male generative glands also contain Sertoli cells, which require testosterone for spermatogenesis. Like most hormones, testosterone is supplied to target tissues in the blood where much of it is transported bound to a specific plasma protein, sex hormone-binding globulin (SHBG).
Regulation
In males, testosterone is synthesized primarily in Leydig cells. The number of Leydig cells in turn is regulated by luteinizing hormone (LH) and follicle-stimulating hormone (FSH). In addition, the amount of testosterone produced by existing Leydig cells is under the control of LH, which regulates the expression of 17β-hydroxysteroid dehydrogenase.
The amount of testosterone synthesized is regulated by the hypothalamic–pituitary–testicular axis. When testosterone levels are low, gonadotropin-releasing hormone (GnRH) is released by the hypothalamus, which in turn stimulates the pituitary gland to release FSH and LH. These latter two hormones stimulate the testis to synthesize testosterone. Finally, increasing levels of testosterone through a negative feedback loop act on the hypothalamus and pituitary to inhibit the release of GnRH and FSH/LH, respectively.
Factors affecting testosterone levels may include:
Age: Testosterone levels gradually reduce as men age. This effect is sometimes referred to as andropause or late-onset hypogonadism.
Exercise: Resistance training increases testosterone levels acutely; however, in older men, that increase can be avoided by protein ingestion. Endurance training in men may lead to lower testosterone levels.
Nutrients: Vitamin A deficiency may lead to sub-optimal plasma testosterone levels. The secosteroid vitamin D at doses of 400–1000 IU/d (10–25 μg/d) raises testosterone levels. Zinc deficiency lowers testosterone levels, but over-supplementation has no effect on serum testosterone. There is limited evidence that low-fat diets may reduce total and free testosterone levels in men.
Weight loss: Reduction in weight may result in an increase in testosterone levels. Fat cells synthesize the enzyme aromatase, which converts testosterone, the male sex hormone, into estradiol, the female sex hormone. However, no clear association between body mass index and testosterone levels has been found.
Sleep: REM sleep increases nocturnal testosterone levels.
Behavior: Dominance challenges can, in some cases, stimulate increased testosterone release in men.
Foods: Natural or man-made antiandrogens including spearmint tea reduce testosterone levels. Licorice can decrease the production of testosterone and this effect is greater in females.
Distribution
The plasma protein binding of testosterone is 98.0 to 98.5%, with 1.5 to 2.0% free or unbound. It is bound 65% to sex hormone-binding globulin (SHBG) and 33% bound weakly to albumin.
Metabolism
Both testosterone and 5α-DHT are metabolized mainly in the liver. Approximately 50% of testosterone is metabolized via conjugation into testosterone glucuronide and to a lesser extent testosterone sulfate by glucuronosyltransferases and sulfotransferases, respectively. An additional 40% of testosterone is metabolized in equal proportions into the 17-ketosteroids androsterone and etiocholanolone via the combined actions of 5α- and 5β-reductases, 3α-hydroxysteroid dehydrogenase, and 17β-HSD, in that order. Androsterone and etiocholanolone are then glucuronidated and to a lesser extent sulfated similarly to testosterone. The conjugates of testosterone and its hepatic metabolites are released from the liver into circulation and excreted in the urine and bile. Only a small fraction (2%) of testosterone is excreted unchanged in the urine.
In the hepatic 17-ketosteroid pathway of testosterone metabolism, testosterone is converted in the liver by 5α-reductase and 5β-reductase into 5α-DHT and the inactive 5β-DHT, respectively. Then, 5α-DHT and 5β-DHT are converted by 3α-HSD into 3α-androstanediol and 3α-etiocholanediol, respectively. Subsequently, 3α-androstanediol and 3α-etiocholanediol are converted by 17β-HSD into androsterone and etiocholanolone, which is followed by their conjugation and excretion. 3β-Androstanediol and 3β-etiocholanediol can also be formed in this pathway when 5α-DHT and 5β-DHT are acted upon by 3β-HSD instead of 3α-HSD, respectively, and they can then be transformed into epiandrosterone and epietiocholanolone, respectively. A small portion of approximately 3% of testosterone is reversibly converted in the liver into androstenedione by 17β-HSD.
In addition to conjugation and the 17-ketosteroid pathway, testosterone can also be hydroxylated and oxidized in the liver by cytochrome P450 enzymes, including CYP3A4, CYP3A5, CYP2C9, CYP2C19, and CYP2D6. 6β-Hydroxylation and to a lesser extent 16β-hydroxylation are the major transformations. The 6β-hydroxylation of testosterone is catalyzed mainly by CYP3A4 and to a lesser extent CYP3A5 and is responsible for 75 to 80% of cytochrome P450-mediated testosterone metabolism. In addition to 6β- and 16β-hydroxytestosterone, 1β-, 2α/β-, 11β-, and 15β-hydroxytestosterone are also formed as minor metabolites. Certain cytochrome P450 enzymes such as CYP2C9 and CYP2C19 can also oxidize testosterone at the C17 position to form androstenedione.
Two of the immediate metabolites of testosterone, 5α-DHT and estradiol, are biologically important and can be formed both in the liver and in extrahepatic tissues. Approximately 5 to 7% of testosterone is converted by 5α-reductase into 5α-DHT, with circulating levels of 5α-DHT about 10% of those of testosterone, and approximately 0.3% of testosterone is converted into estradiol by aromatase. 5α-Reductase is highly expressed in the male reproductive organs (including the prostate gland, seminal vesicles, and epididymides), skin, hair follicles, and brain and aromatase is highly expressed in adipose tissue, bone, and the brain. As much as 90% of testosterone is converted into 5α-DHT in so-called androgenic tissues with high 5α-reductase expression, and due to the several-fold greater potency of 5α-DHT as an AR agonist relative to testosterone, it has been estimated that the effects of testosterone are potentiated 2- to 3-fold in such tissues.
Levels
Total levels of testosterone in the body have been reported as 264 to 916 ng/dL (nanograms per deciliter) in non-obese European and American men age 19 to 39 years, while mean testosterone levels in adult men have been reported as 630 ng/dL. Although commonly used as a reference range, some physicians have disputed the use of this range to determine hypogonadism. Several professional medical groups have recommended that 350 ng/dL generally be considered the minimum normal level, which is consistent with previous findings. Levels of testosterone in men decline with age. In women, mean levels of total testosterone have been reported to be 32.6 ng/dL. In women with hyperandrogenism, mean levels of total testosterone have been reported to be 62.1 ng/dL.
Measurement
In measurements of testosterone in blood samples, different assay techniques can yield different results. Immunofluorescence assays exhibit considerable variability in quantifying testosterone concentrations in blood samples due to the cross-reaction of structurally similar steroids, leading to overestimated results. In contrast, the liquid chromatography/tandem mass spectrometry method is more desirable: it offers superior specificity and precision, making it a more suitable choice for this application.
Testosterone's bioavailable concentration is commonly determined using the Vermeulen calculation or more precisely using the modified Vermeulen method, which considers the dimeric form of sex hormone-binding globulin.
Both methods use chemical equilibrium to derive the concentration of bioavailable testosterone: in circulation, testosterone has two major binding partners, albumin (weakly bound) and sex hormone-binding globulin (strongly bound).
Distribution
Testosterone has been detected at variably higher and lower levels among men of various nations and from various backgrounds; explanations for the causes of this have been relatively diverse.
People from nations of the Eurasian Steppe and Central Asia, such as Mongolia, Kyrgyzstan and Uzbekistan, have consistently been detected to have had significantly elevated levels of testosterone, while people from Central European and Baltic nations such as the Czech Republic, Slovakia, Latvia and Estonia have been found to have had significantly decreased levels of testosterone.
The region with the highest-ever tested levels of testosterone is Chita, Russia; the people group with the highest-ever tested levels of testosterone is the Yakuts.
History and production
A testicular action was linked to circulating blood fractions – now understood to be a family of androgenic hormones – in the early work on castration and testicular transplantation in fowl by Arnold Adolph Berthold (1803–1861). Research on the action of testosterone received a brief boost in 1889, when the Harvard professor Charles-Édouard Brown-Séquard (1817–1894), then in Paris, self-injected subcutaneously a "rejuvenating elixir" consisting of an extract of dog and guinea pig testicle. He reported in The Lancet that his vigor and feeling of well-being were markedly restored but the effects were transient, and Brown-Séquard's hopes for the compound were dashed. Suffering the ridicule of his colleagues, he abandoned his work on the mechanisms and effects of androgens in human beings.
In 1927, the University of Chicago's Professor of Physiologic Chemistry, Fred C. Koch, established easy access to a large source of bovine testicles – the Chicago stockyards – and recruited students willing to endure the tedious work of extracting their isolates. In that year, Koch and his student, Lemuel McGee, derived 20 mg of a substance from a supply of 40 pounds of bovine testicles that, when administered to castrated roosters, pigs and rats, re-masculinized them. The group of Ernst Laqueur at the University of Amsterdam purified testosterone from bovine testicles in a similar manner in 1934, but the isolation of the hormone from animal tissues in amounts permitting serious study in humans was not feasible until three European pharmaceutical giants – Schering (Berlin, Germany), Organon (Oss, Netherlands) and Ciba – began full-scale steroid research and development programs in the 1930s.
The Organon group in the Netherlands were the first to isolate the hormone, identified in a May 1935 paper "On Crystalline Male Hormone from Testicles (Testosterone)". They named the hormone testosterone, from the stems of testicle and sterol, and the suffix of ketone. The structure was worked out by Schering's Adolf Butenandt, at the Chemisches Institut of Technical University in Gdańsk.
The chemical synthesis of testosterone from cholesterol was achieved in August that year by Butenandt and Hanisch. Only a week later, the Ciba group in Zurich, Leopold Ruzicka (1887–1976) and A. Wettstein, published their synthesis of testosterone. These independent partial syntheses of testosterone from a cholesterol base earned both Butenandt and Ruzicka the joint 1939 Nobel Prize in Chemistry. Testosterone was identified as 17β-hydroxyandrost-4-en-3-one (C19H28O2), a solid polycyclic alcohol with a hydroxyl group at the 17th carbon atom. This also made it obvious that additional modifications on the synthesized testosterone could be made, i.e., esterification and alkylation.
The partial synthesis in the 1930s of abundant, potent testosterone esters permitted the characterization of the hormone's effects, so that Kochakian and Murlin (1936) were able to show that testosterone raised nitrogen retention (a mechanism central to anabolism) in the dog, after which Allan Kenyon's group was able to demonstrate both anabolic and androgenic effects of testosterone propionate in eunuchoidal men, boys, and women. The period of the early 1930s to the 1950s has been called "The Golden Age of Steroid Chemistry", and work during this period progressed quickly.
Like other androsteroids, testosterone is manufactured industrially from microbial fermentation of plant sterols (e.g., from soybean oil). In the early 2000s, the steroid market amounted to around one million tonnes and was worth $10 billion, making it the second-largest biopharmaceutical market behind antibiotics.
Other species
Testosterone is observed in most vertebrates. Testosterone and the classical nuclear androgen receptor first appeared in gnathostomes (jawed vertebrates). Agnathans (jawless vertebrates) such as lampreys do not produce testosterone but instead use androstenedione as a male sex hormone. Fish make a slightly different form called 11-ketotestosterone. Its counterpart in insects is ecdysone. The presence of these ubiquitous steroids in a wide range of animals suggests that sex hormones have an ancient evolutionary history.
See also
List of androgens/anabolic steroids
List of human hormones
References
Further reading
Cyclopentanols
Anabolic–androgenic steroids
Androstanes
Estrogens
GABAA receptor positive allosteric modulators
Hormones of the testis
Hormones of the ovary
Hormones of the hypothalamus-pituitary-gonad axis
Hormones of the suprarenal cortex
Enones
Neuroendocrinology
Human hormones
Sex hormones | Testosterone | Biology | 9,505 |
28,621,577 | https://en.wikipedia.org/wiki/T%20pad | The T pad is a specific type of attenuator circuit in electronics whereby the topology of the circuit is formed in the shape of the letter "T".
Attenuators are used in electronics to reduce the level of a signal. They are also referred to as pads due to their effect of padding down a signal by analogy with acoustics. Attenuators have a flat frequency response attenuating all frequencies equally in the band they are intended to operate. The attenuator has the opposite task of an amplifier. The topology of an attenuator circuit will usually follow one of the simple filter sections. However, there is no need for more complex circuitry, as there is with filters, due to the simplicity of the frequency response required.
Circuits are required to be balanced or unbalanced depending on the geometry of the transmission lines they are to be used with. For radio frequency applications, the format is often unbalanced, such as coaxial. For audio and telecommunications, balanced circuits are usually required, such as with the twisted pair format. The T pad is intrinsically an unbalanced circuit. However, it can be converted to a balanced circuit by placing half the series resistances in the return path. Such a circuit is called an H-section, or else an I section because the circuit is formed in the shape of a serifed letter "I".
Terminology
An attenuator is a form of a two-port network with a generator connected to one port and a load connected to the other. In all of the circuits given below it is assumed that the generator and load impedances are purely resistive (though not necessarily equal) and that the attenuator circuit is required to perfectly match to these. The symbols used for these impedances are:
Z_S – the impedance of the generator
Z_L – the impedance of the load
Popular values of impedance are 600 Ω in telecommunications and audio, 75 Ω for video and dipole antennae, and 50 Ω for RF.
The voltage transfer function, A, is,
\[ A = \frac{v_o}{v_i} \]
where v_i and v_o are the voltages at the input and output ports respectively.
While the inverse of this is the loss, L, of the attenuator,
\[ L = \frac{1}{A} = \frac{v_i}{v_o} \]
The value of attenuation is normally marked on the attenuator as its loss, L_dB, in decibels (dB). The relationship with L is;
\[ L_\mathrm{dB} = 20 \log_{10} L \]
Popular values of attenuator are 3dB, 6dB, 10dB, 20dB and 40dB.
However, it is often more convenient to express the loss in nepers,
\[ \gamma = \ln L \]
where \gamma is the attenuation in nepers (one neper is approximately 8.7 dB).
Impedance and loss
The values of resistance of the attenuator's elements can be calculated using image parameter theory. The starting point here is the image impedances of the L section in figure 2, taken here to consist of a series impedance Z and a shunt admittance Y. The image impedance of the input is,
\[ z_I = \sqrt{\frac{Z}{Y}}\sqrt{1 + ZY} \]
and the image admittance of the output is,
\[ y_I = \sqrt{\frac{Y}{Z}}\sqrt{1 + ZY} \]
The loss of the L section when terminated in its image impedances is,
\[ L_\mathrm{L} = \sqrt{z_I\,y_I}\;e^{\gamma_\mathrm{L}} \]
where the image parameter transmission function, \gamma_\mathrm{L}, is given by,
\[ e^{\gamma_\mathrm{L}} = \sqrt{ZY} + \sqrt{1 + ZY} \]
The loss of this L section in the reverse direction is given by,
\[ L_\mathrm{L}' = \frac{e^{\gamma_\mathrm{L}}}{\sqrt{z_I\,y_I}} \]
For an attenuator, Z and Y are simple resistors and \gamma becomes the image parameter attenuation (that is, the attenuation when terminated with the image impedances) in nepers. A T pad can be viewed as being two L sections back-to-back as shown in figure 3. Most commonly, the generator and load impedances are equal so that Z_S = Z_L = Z_0 and a symmetrical T pad is used. In this case, the impedance matching terms inside the square roots all cancel and,
\[ L = e^{2\gamma_\mathrm{L}} \]
Substituting Z and Y for the corresponding resistors (each L section consists of the series resistor R_1 and a shunt resistor of 2R_2, the two half-section shunt resistors being in parallel in the complete T pad),
\[ Z_0 = \sqrt{R_1\,(R_1 + 2R_2)} \]
\[ e^{\gamma_\mathrm{L}} = \sqrt{\frac{R_1}{2R_2}} + \sqrt{1 + \frac{R_1}{2R_2}} \]
These equations can easily be extended to non-symmetrical cases.
Resistor values
The equations above find the impedance and loss for an attenuator with given resistor values. The usual requirement in a design is the other way around – the resistor values for a given impedance and loss are needed. These can be found by transposing and substituting the last two equations above;
\[ R_1 = Z_0 \tanh\frac{\gamma}{2} = Z_0\,\frac{L - 1}{L + 1} \]
\[ R_2 = \frac{Z_0}{\sinh\gamma} = Z_0\,\frac{2L}{L^2 - 1} \]
where \gamma = 2\gamma_\mathrm{L} is the total attenuation in nepers and L = e^{\gamma} is the voltage loss ratio.
See also
Π pad
L pad
References
Matthaei, Young, Jones, Microwave Filters, Impedance-Matching Networks, and Coupling Structures, pp. 41–45, McGraw-Hill, 1964.
Redifon Radio Diary, 1970, pp. 49–60, William Collins Sons & Co, 1969.
Analog circuits
Electronic design
Resistive components | T pad | Physics,Engineering | 906 |
25,682,462 | https://en.wikipedia.org/wiki/International%20Journal%20of%20Nanoscience | The International Journal of Nanoscience is an interdisciplinary peer-reviewed scientific journal published by World Scientific. It covers research in nanometer scale science and technology, with articles ranging from the "basic science of nanoscale physics and chemistry to applications in nanodevices, quantum engineering and quantum computing".
Abstracting and indexing
This journal is indexed in the following databases:
Chemical Abstracts Service
CSA Aerospace Sciences Abstracts
Compendex
Inspec
Scopus
References
External links
Academic journals established in 2002
World Scientific academic journals
English-language journals
Nanotechnology journals
Bimonthly journals | International Journal of Nanoscience | Materials_science | 116 |
561,885 | https://en.wikipedia.org/wiki/Armillaria%20mellea | Armillaria mellea, commonly known as honey fungus, is an edible basidiomycete fungus in the genus Armillaria. It is a plant pathogen and part of a cryptic species complex of closely related and morphologically similar species. It causes Armillaria root rot in many plant species and produces mushrooms around the base of trees it has infected. The symptoms of infection appear in the crowns of infected trees as discoloured foliage, reduced growth, dieback of the branches and death. The mushrooms are edible but some people may be intolerant to them. This species is capable of producing light via bioluminescence in its mycelium.
Armillaria mellea is widely distributed in temperate regions of the Northern Hemisphere. The fruit body or mushroom, commonly known as stump mushroom, stumpie, honey mushroom, pipinky or pinky, grows typically on hardwoods but may be found around and on other living and dead wood or in open areas.
Taxonomy
The species was originally named Agaricus melleus by Danish-Norwegian botanist Martin Vahl in 1790; it was transferred to the genus Armillaria in 1871 by Paul Kummer. Numerous subtaxa have been described.
Similar species
Armillaria mellea once included a range of species with similar features that have since been reclassified; the reassigned subtaxa are mostly variety-level entries from the 19th century.
Description
Each basidiocarp has a smooth cap, convex at first but becoming flattened with age, often with a central raised umbo, later becoming somewhat dish-shaped. The margins of the cap are often arched at maturity and the surface is sticky when wet. Though typically honey-coloured, this fungus is rather variable in appearance and sometimes has a few dark, hairy scales near the centre somewhat radially arranged. The gills are white at first, sometimes becoming pinkish-yellow or discoloured with age, broad and fairly distant, attached to the stipe at right angles or slightly decurrent. The stipe is of variable length. It is fibrillose and of a firm spongy consistency at first but later becomes hollow. It is cylindrical and tapers to a point at its base where it is fused to the stipes of other mushrooms in the clump. It is whitish at the upper end and brownish-yellow below, often with a very dark-coloured base. There is a broad persistent skin-like ring attached to the upper part of the stipe. This has a velvety margin and yellowish fluff underneath and extends outwards as a white partial veil protecting the gills when young. The flesh of the cap is whitish and has a sweetish odour and flavour with a tinge of bitterness. Under the microscope, the spores are approximately elliptical, 7–9 by 6–7 μm, inamyloid with prominent apiculi (short, pointed projections) at the base. The spore print is white. The basidia (spore-producing structures) lack basal clamps.
The main part of the fungus is underground where a mat of mycelial threads may extend for great distances. They are bundled together in rhizomorphs that are black in this species. The fungal body is not bioluminescent but its mycelia are luminous when in active growth.
Pathogenesis
Armillaria mellea infects new hosts through rhizomorphs and basidiospores. It is rare for basidiospores to successfully infect new hosts; they often colonize woody debris instead. Rhizomorphs, however, can grow up to ten feet long in order to find a new host.
Distribution and habitat
Armillaria mellea is widespread in northern temperate zones. It has been found in North America, Europe and northern Asia, and has been introduced to South Africa. The fungus grows parasitically on a large number of broadleaf trees. It fruits in dense clusters at the base of trunks or stumps.
It has been reported in almost every state within the continental United States.
Ecology
Armillaria mellea prefers moist soil and lower soil temperatures, but it can also withstand extreme temperatures, such as those of forest fires, due to the protection of the soil. It is found in many kinds of landscapes, including gardens, parks, vineyards, tree production areas, and natural landscapes.
Armillaria mellea is typically associated with hardwood trees and conifers, including those in orchards, planted forests and vineyards, as well as a few herbaceous plants. There are few signs, and the ones that do exist are often hard to find. The most prominent sign is honey-coloured mushrooms at the base of the infected plant. Additional signs include white, fan-shaped mycelia and black rhizomorphs with diameters between 1/32nd of an inch and 1/8th of an inch. These usually are not as noticeable because they occur beneath the bark and in the soil, respectively. The symptoms are much more numerous, including slower growth, dieback of branches, yellowing foliage, rotted wood at the base and/or roots, external cankers, cracking bark, a bleeding stem, leaf wilting, defoliation, and rapid death. Leaf wilting, defoliation, and dieback occur after the destruction of the cambium.
It is one of the most common causes of death in trees and shrubs in both natural and human-cultivated habitats, and causes steady and substantial losses.
Disease cycle
Armillaria mellea infects both through basidiospores and through penetration of host species by rhizomorphs, which grow through the soil each year to find new, living tissue to infect. However, infection of living host tissue through basidiospores is quite rare. Two basidiospores must germinate and fuse to be viable and produce mycelium. In the late summer and autumn, Armillaria mellea produces mushrooms with notched gills, a ring near the cap base, and a white to golden color. They do not always appear, but when they do they can be found on both living and dead trees near the ground. These mushrooms produce and release the sexually created basidiospores, which are dispersed by the wind. This is the only spore-bearing phase. The fungus overwinters as either rhizomorphs or vegetative mycelium. Infected wood is weakened through decay in the roots and tree base after destruction of the vascular cambium and underlying wood.
Trees become infected by A. mellea when rhizomorphs growing through the soil encounter uninfected roots. Alternatively, when infected roots come into contact with uninfected ones the fungal mycelium may grow across. The rhizomorphs invade the trunk, growing between the bark and the wood and causing wood decay, growth reduction and mortality. Trees that are already under stress are more likely to be attacked but healthy trees may also be parasitized. The foliage becomes sparse and discoloured, twig growth slows down and branches may die back. When they are attacked, the Douglas-fir, western larch and some other conifers often produce an extra large crop of cones shortly before dying. Coniferous trees also tend to ooze resin from infected areas whereas broad-leaved trees sometimes develop sunken cankers. A growth of fruiting bodies near the base of the trunk confirms the suspicion of Armillaria root rot.
In 1893, the American mycologist Charles Horton Peck reported finding Armillaria fruiting bodies that were "aborted", in a similar way to specimens of Entoloma abortivum. It was not until 1974 that Roy Watling showed that the aborted specimens included cells of both Armillaria mellea and Entoloma abortivum. He thought that the Armillaria was parasitizing the Entoloma, a plausible hypothesis given its pathogenic behaviour. However, a 2001 study by Czederpiltz, Volk and Burdsall showed that the Entoloma was in fact the microparasite. The whitish-grey malformed fruit bodies known as carpophoroids were the result of E. abortivum hyphae penetrating the Armillaria and disrupting its normal development.
Rhizomorphs are initiated from the mycelium as multicellular apices and develop into multicellular vegetative organs that exclude soil from the interior of their tissues. The rhizomorphs spread far greater distances through the ground than the mycelium, and, like the mycelium, are luminous when in active growth. Through its rhizomorphs, A. mellea is parasitic on woody plants of many species, especially shrubs, hardwood and evergreen trees. In one example, A. mellea spreading by rhizomorphs from an initially infected tree killed 600 trees in a prune orchard in 6 years; each infected tree was immediately adjacent to an already infected one, the fungus having spread by rhizomorphs through the tree roots and soil.
Management
No fungicides or management practices are known to kill A. mellea after infection without damaging the infected plant, though possible treatments are still being studied. There are, however, practices that can extend the life of the plant and prevent further spreading. The best way to extend the plant's life is to improve the host condition through supplemental watering and fertilization. To prevent further spread, regulate irrigation to avoid water stress, keep the root collar dry, control defoliating pathogens, remove stumps, fertilize adequately, avoid physical root damage and soil compaction, and do not plant trees that are especially susceptible to the disease in places where Armillaria mellea has been recorded. There is also some evidence that biological control using the fungus genus Trichoderma may help. Trichoderma is a predator of A. mellea and is often found in woodchips. Therefore, chipping or grinding dead and infected roots will give Trichoderma its preferred habitat and help it proliferate. Solarization will also create an ideal habitat, as dry soil and higher soil temperatures are preferable for Trichoderma but poor conditions for A. mellea.
Edibility
Armillaria mellea mushrooms are considered good edibles, though not preferred by some, and the tough stalks are usually excluded. They are best collected when young and thoroughly cooked. Some individuals have reported "allergic" reactions that result in stomach upsets. Some authors suggest not collecting mushrooms from the wood of various trees, including hemlock, buckeye, eucalyptus, and locust. They may have been used medicinally by indigenous peoples as a laxative.
The mushrooms have a taste that has been described as slightly sweet and nutty, with a texture ranging from chewy to crunchy, depending on the method of preparation. Parboiling mushrooms before consuming removes the bitter taste present in some specimens, and may reduce the amount of gastrointestinal irritants. According to one guide, they must be cooked before eating. Drying the mushrooms preserves and intensifies their flavour, although reconstituted mushrooms tend to be tough to eat. The mushrooms can also be pickled and roasted.
Chemistry
Several bioactive compounds have been isolated and identified from the fruit bodies. The triterpenes 3β-hydroxyglutin-5-ene, friedelane-2α,3β-diol, and friedelin were reported in 2011. Indole compounds include tryptamine, and serotonin.
The fungus produces cytotoxic compounds known as melleolides. Melleolides are made from orsellinic acid and protoilludane sesquiterpene alcohols via esterification. A polyketide synthase gene, termed ArmB, was identified in the genome of the fungus, which was found expressed during melleolide production. The gene shares c. 42% similarity with the orsellinic acid synthase gene (OrsA) in Aspergillus nidulans. Characterization of the gene proved it to catalyze orsillinic acid in vitro. It is a non-reducing iterative type-1 polyketide synthase. Co-incubation of free orsellinic acid with alcohols and ArmB showed cross-coupling activity. Therefore, the enzyme has transesterification activity. Also, there are other auxiliary factors suspected to control substrate specificity. Additionally, halogen modifications have been observed. Overexpression of annotated halogenases (termed ArmH1-5) and characterization of the subsequent enzymes revealed in all five enzymes the chlorination of mellolide F. In vitro reactions of free standing substrates showed that the enzymes do not require auxiliary carrier proteins for substrate delivery.
See also
Forest pathology
List of Armillaria species
List of bioluminescent fungi
References
Bioluminescent fungi
mellea
Edible fungi
Fungi described in 1790
Fungi of Africa
Fungi of Asia
Fungi of Europe
Fungi of North America
Parasitic fungi
Fungal grape diseases
Fungal tree pathogens and diseases
Taxa named by Martin Vahl
Fungus species | Armillaria mellea | Biology | 2,783 |
12,821,985 | https://en.wikipedia.org/wiki/Xi%20Zezong | Xi Zezong (June 6, 1927, Yuanqu, Shanxi – December 27, 2008, Beijing) was a Chinese astronomer, historian, and translator. He was a member of the Chinese Academy of Sciences, and an awardee of the Astronomy Prize.
He identified a possible reference to one of the Galilean moons of Jupiter in the fragmentary ancient works of the 4th-century BC Chinese astronomer Gan De, who may have observed either Ganymede or Callisto in the summer of 365 BC.
Honors
Asteroid 85472 Xizezong, discovered by the Beijing Schmidt CCD Asteroid Program in 1997, was named in his honor. The official naming citation was published by the Minor Planet Center on April 2, 2007.
References
External links
85472 Xizezong, JPL Small-Body Database Browser
1927 births
2008 deaths
20th-century Chinese translators
21st-century Chinese translators
20th-century Chinese astronomers
Historians from Shanxi
Historians of astronomy
Members of the Chinese Academy of Sciences
People from Yuncheng
21st-century Chinese historians
21st-century Chinese science writers
Scientists from Shanxi | Xi Zezong | Astronomy | 217 |
40,639,288 | https://en.wikipedia.org/wiki/DeepOcean | DeepOcean is an Oslo, Norway - based company which provides subsea services to the global offshore industries such as Inspection Maintenance and Repair (IMR), Subsea Construction, Cable Lay, and Subsea Trenching. Its 1,100 employees project manage and operate a fleet of Vessels, ROV's and subsea Trenchers.
DeepOcean operates mostly in the Oil & Gas and Offshore Renewables industries globally, with offices located around the world.
History
DeepOcean Group Holding (DeepOcean) was established in May 2011.
DeepOcean offers the following three main service lines: (i) Inspection, Maintenance and Repair (IMR) and subsea construction, (ii) Seabed Intervention (incl. trenching), and (iii) Cable Installation, servicing the Global Offshore Energy industry from inception to decommissioning.
DO 1 UK Ltd., formerly known as CTC Marine Projects Ltd., was established in 1993. Its initial core business was the provision of fibre optic cable lay and seabed intervention solutions for the global telecommunication market. Later, the company diversified and added cable lay and trenching services for the oil & gas, offshore renewables and interconnectors industries.
DeepOcean AS was established in 1999. It was founded on the provision of high-quality equipment and subsea services, combined with a team of highly experienced personnel with knowledge of deepwater operations. DeepOcean now has the track record and experience to take on deepwater assignments anywhere in the world.
In late 2016, funds advised by Triton became the largest shareholder of DeepOcean. The Triton funds invest in and support the positive development of medium-sized businesses headquartered in Northern Europe, Italy and Spain.
In April 2022, it was announced that DeepOcean had acquired the Norwegian engineering and technology company, Installit AS and its subsidiaries.
Offices worldwide
Organisation
DeepOcean has a Supervisory Board of Directors as well as an Executive Management Team who oversee the daily management of DeepOcean's activities.
Board of Directors:
Jo Lunder, Chairman of the Board
Terje Askvig, Board member
Kristian Diesen, Board member
Marc van der Plas, Board member
Mike Winkel, Board member
Colette Cohen, Board member
Executive Management:
Øyvind Mikaelsen, CEO
Frode Garlid, CFO
Ottar K Mæland, COO
Stephane Abergel, CCO
References
Energy engineering and contractor companies
Engineering companies of Norway | DeepOcean | Engineering | 502 |
894,198 | https://en.wikipedia.org/wiki/Curtain%20wall%20%28architecture%29 | A curtain wall is an exterior covering of a building in which the outer walls are non-structural, instead serving to protect the interior of the building from the elements. Because the curtain wall façade carries no structural load beyond its own dead load weight, it can be made of lightweight materials. The wall transfers lateral wind loads upon it to the main building structure through connections at floors or columns of the building.
Curtain walls may be designed as "systems" integrating frame, wall panel, and weatherproofing materials. Steel frames have largely given way to aluminum extrusions. Glass is typically used for infill because it can reduce construction costs, provide an architecturally pleasing look, and allow natural light to penetrate deeper within the building. However, glass also makes the effects of light on visual comfort and solar heat gain in a building more difficult to control. Other common infills include stone veneer, metal panels, louvres, and operable windows or vents.
Unlike storefront systems, curtain wall systems are designed to span multiple floors, taking into consideration building sway and movement and design requirements such as thermal expansion and contraction; seismic requirements; water diversion; and thermal efficiency for cost-effective heating, cooling, and interior lighting.
History
Historically, buildings were constructed of timber, masonry, or a combination of both. Their exterior walls were load-bearing, supporting much or all of the load of the entire structure. The nature of the materials resulted in inherent limits to a building's height and the maximum size of window openings.
The development and widespread use of structural steel and later reinforced concrete allowed relatively small columns to support large loads. The exterior walls could be non-load bearing, and thus much lighter and more open than load-bearing walls of the past. This gave way to increased use of glass as an exterior façade, and the modern-day curtain wall was born.
Post-and-beam and balloon framed timber structures effectively had an early version of curtain walls, for their frames supported loads that allowed the walls themselves to serve other functions, such as keeping weather out and allowing light in. When iron began to be used extensively in buildings in late 18th-century Britain, such as at Ditherington Flax Mill, and later when buildings of wrought iron and glass such as The Crystal Palace were built, the building blocks of structural understanding were laid for the development of curtain walls.
Oriel Chambers (1864) and 16 Cook Street (1866), both built in Liverpool, England, by local architect and civil engineer Peter Ellis, are characterised by their extensive use of glass in their facades. Toward the courtyards they boasted metal-framed glass curtain walls, which makes them two of the world's first buildings to include this architectural feature. Oriel Chambers is listed in the Guinness Book of Records as the earliest such building. The extensive glass walls allowed light to penetrate further into the building, utilizing more floor space and reducing lighting costs. Oriel Chambers is set over five floors without an elevator, which had only recently been invented and was not yet widespread. The Statue of Liberty (1886) features a thin, non-load-bearing copper skin. Extensive use of glass became necessary in large factory buildings to allow light for manufacture, sometimes making it seem as if they had all-glass facades.
An early example of an all-steel curtain wall used in the classical style is the department store on Leipziger Straße, Berlin, built in 1901 (since demolished).
Some of the first curtain walls were made with steel mullions, and the polished plate glass was attached to the mullions with asbestos- or fiberglass-modified glazing compound. Eventually silicone sealants or glazing tape were substituted for the glazing compound. Some designs included an outer cap to hold the glass in place and to protect the integrity of the seals. The landmarks of curtain wall design as it came to dominate construction were the very different systems used by the United Nations Headquarters and the Lever House completed in 1952.
Ludwig Mies van der Rohe's curtain wall is one of the most important aspects of his architectural design. Mies first began prototyping the curtain wall in his high-rise residential building designs along Chicago's lakeshore, achieving the look of a curtain wall at 860-880 Lake Shore Drive Apartments. He finally perfected the curtain wall at 900–910 Lake Shore Drive, where the curtain is an autonomous aluminum and glass skin. After 900–910, Mies's curtain wall appeared on all of his subsequent high-rise building designs, including the Seagram Building in New York.
The widespread use of aluminium extrusions for mullions began during the 1970s. Aluminum alloys offer the unique advantage of being able to be easily extruded into nearly any shape required for design and aesthetic purposes. Today, the design complexity and shapes available are nearly limitless. Custom shapes can be designed and manufactured with relative ease. The Omni San Diego Hotel curtain wall in California, designed by architectural firm Hornberger and Worstel and developed by JMI Realty, is an example of a unitized curtain-wall system with integrated sunshades.
Systems and principles
Stick systems
The vast majority of ground-floor curtain walls are installed as long pieces (referred to as sticks) between floors vertically and between vertical members horizontally. Framing members may be fabricated in a shop, but installation and glazing are typically performed at the jobsite.
Ladder systems
Very similar to a stick system, a ladder system has mullions which can be split and then either snapped or screwed together, consisting of a half box and plate. This allows sections of curtain wall to be fabricated in a shop, effectively reducing the time spent installing the system onsite. The drawbacks of using such a system are reduced structural performance and visible joint lines down the length of each mullion.
Unitized systems
Unitized curtain walls entail factory fabrication and assembly of panels and may include factory glazing. These completed units are installed on the building structure to form the building enclosure. Unitized curtain wall has the advantages of speed, lower field installation costs, and quality control within an interior climate-controlled environment. The economic benefits are typically realized on large projects or in areas of high field labor rates.
Rainscreen principle
A common feature in curtain wall technology, the rainscreen principle theorizes that equilibrium of air pressure between the outside and inside of the "rainscreen" prevents water penetration into the building. For example, the glass is captured between an inner and an outer gasket in a space called the glazing rebate. The glazing rebate is ventilated to the exterior so that the pressure on the inner and outer sides of the outer gasket is the same. When the pressure is equal across this gasket, water cannot be drawn through joints or defects in the gasket.
Design concerns
A curtain wall system must be designed to handle all loads imposed on it as well as keep air and water from penetrating the building envelope.
Loads
The loads imposed on the curtain wall are transferred to the building structure through the anchors which attach the mullions to the building.
Dead load
Dead load is defined as the weight of structural elements and the permanent features on the structure. In the case of curtain walls, this load is made up of the weight of the mullions, anchors and other structural components of the curtain wall, as well as the weight of the infill material. Additional dead loads imposed on the curtain wall may include sunshades or signage attached to the curtain wall.
Wind load
Wind load is a normal force acting on the building as the result of wind blowing on the building. Wind pressure is resisted by the curtain wall system since it envelops and protects the building. Wind loads vary greatly throughout the world, with the largest wind loads being near the coast in hurricane-prone regions. For each project location, building codes specify the required design wind loads. Often, a wind tunnel study is performed on large or unusually-shaped buildings. A scale model of the building and the surrounding vicinity is built and placed in a wind tunnel to determine the wind pressures acting on the structure in question. These studies take into account vortex shedding around corners and the effects of surrounding topography and buildings.
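As background (standard fluid dynamics, not a value taken from this article), the base pressure that moving air exerts on a surface follows the stagnation-pressure relation

$q = \tfrac{1}{2}\rho v^{2}$

where $\rho$ is the air density and $v$ the wind speed. At sea-level density $\rho \approx 1.225$ kg/m³, a 50 m/s wind gives $q \approx 1.5$ kPa (about 32 psf); code-specified design pressures then layer gust, exposure, and shape factors on top of such a base value.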
Seismic load
Seismic loads in a curtain wall system are limited to the interstory drift induced on the building during an earthquake. In most situations, the curtain wall is able to naturally withstand seismic and wind induced building sway because of the space provided between the glazing infill and the mullion. In tests, standard curtain wall systems are typically able to withstand up to of relative floor movement without glass breakage or water leakage.
Snow load
Snow loads and live loads are not typically an issue in curtain walls, since curtain walls are designed to be vertical or slightly inclined. If the slope of a wall exceeds 20 degrees or so, these loads may need to be considered.
Thermal load
Thermal loads are induced in a curtain wall system because aluminum has a relatively high coefficient of thermal expansion. This means that over the span of a couple of floors, the curtain wall will expand and contract some distance, relative to its length and the temperature differential. This expansion and contraction is accounted for by cutting horizontal mullions slightly short and allowing a space between the horizontal and vertical mullions. In unitized curtain wall, a gap is left between units, which is sealed from air and water penetration by gaskets. Vertically, anchors carrying wind load only (not dead load) are slotted to account for movement. Incidentally, this slot also accounts for live load deflection and creep in the floor slabs of the building structure.
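As a minimal sketch of the movement a mullion joint must absorb (all values below are illustrative assumptions, including a typical expansion coefficient for aluminum alloys of roughly 23 × 10⁻⁶ per °C):

# Sketch: linear thermal expansion of an aluminum mullion, delta_L = alpha * L * delta_T.
# All values below are illustrative assumptions, not project data.
alpha = 23e-6         # 1/degC, approximate coefficient for aluminum alloys
length_m = 7.0        # mullion run spanning two floors of ~3.5 m each (assumed)
delta_t = 50.0        # exterior temperature swing in degC (assumed)
delta_l_mm = alpha * length_m * delta_t * 1000.0
print(f"Expansion to accommodate: {delta_l_mm:.1f} mm")  # about 8 mm

A joint gap and slotted anchors sized for movement of this order are what the short-cut horizontal mullions and gasketed unit joints described above provide.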
Blast load
Accidental explosions and terrorist threats have brought increased concern for the fragility of a curtain wall system in relation to blast loads. The bombing of the Alfred P. Murrah Federal Building in Oklahoma City, Oklahoma, spawned much of the current research and mandates regarding building response to blast loads. Currently, all new federal buildings in the U.S. and all U.S. embassies built on foreign soil must have some provision for resistance to bomb blasts.
Since the curtain wall is at the exterior of the building, it becomes the first line of defense in a bomb attack. As such, blast resistant curtain walls are designed to withstand such forces without compromising the interior of the building to protect its occupants. Since blast loads are very high loads with short durations, the curtain wall response should be analyzed in a dynamic load analysis, with full-scale mock-up testing performed prior to design completion and installation.
Blast resistant glazing consists of laminated glass, which is meant to break but not separate from the mullions. Similar technology is used in hurricane-prone areas for impact protection from wind-borne debris.
Air infiltration
Air infiltration is the air which passes through the curtain wall from the exterior to the interior of the building. The air is infiltrated through the gaskets, through imperfect joinery between the horizontal and vertical mullions, through weep holes, and through imperfect sealing. The American Architectural Manufacturers Association (AAMA) is an industry trade group in the U.S. that has developed voluntary specifications regarding acceptable levels of air infiltration through a curtain wall.
Water penetration
Water penetration is defined as water passing from the exterior of the building to the interior of the curtain wall system. Sometimes, depending on the building specifications, a small amount of controlled water on the interior is deemed acceptable. Controlled water penetration is defined as water that penetrates beyond the innermost vertical plane of the test specimen but has a designed means of drainage back to the exterior. AAMA Voluntary Specifications allow for controlled water penetration, while the underlying ASTM E1105 test method would define such water penetration as a failure. To test the ability of a curtain wall to withstand water penetration in the field, an ASTM E1105 water spray rack system is placed on the exterior side of the test specimen, and a positive air pressure difference is applied to the system. This set-up simulates a wind-driven rain event on the curtain wall to check the field performance of the product and of the installation. Field quality control and assurance checks for water penetration have become the norm as builders and installers apply such quality programs to help reduce the number of water damage litigation suits against their work.
Deflection
One of the disadvantages of using aluminum for mullions is that its modulus of elasticity is about one-third that of steel. This translates to three times more deflection in an aluminum mullion compared to a similar steel section under a given load. Building specifications set deflection limits for perpendicular (wind-induced) and in-plane (dead load-induced) deflections. These deflection limits are not imposed due to strength capacities of the mullions. Rather, they are designed to limit deflection of the glass (which may break under excessive deflection), and to ensure that the glass does not come out of its pocket in the mullion. Deflection limits are also necessary to control movement at the interior of the curtain wall. Building construction may be such that there is a wall located near the mullion, and excessive deflection can cause the mullion to contact the wall and cause damage. Also, if deflection of a wall is quite noticeable, public perception may raise undue concern that the wall is not strong enough.
Deflection limits are typically expressed as the distance between anchor points divided by a constant number. A deflection limit of L/175 is common in curtain wall specifications, based on experience with deflection limits that are unlikely to cause damage to the glass held by the mullion. Say that a given curtain wall is anchored at 12-foot (144 in) floor heights. The allowable deflection would then be 144/175 = 0.823 inches, which means the wall is allowed to deflect inward or outward a maximum of 0.823 inches at the maximum wind pressure. However, some panels require stricter movement restrictions, particularly those that prohibit a torque-like motion.
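A minimal sketch of the worked example above, using the L/175 ratio and 12-foot span from the text (the function name is our own):

# Sketch: allowable wind-load deflection under a span-ratio limit such as L/175.
def allowable_deflection(span_in: float, ratio: float = 175.0) -> float:
    """Return the allowable deflection, in inches, for an anchor-to-anchor span in inches."""
    return span_in / ratio

print(f"{allowable_deflection(12 * 12):.3f} in")  # 144 / 175 -> 0.823 in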
Deflection in mullions is controlled by different shapes and depths of curtain wall members. The depth of a given curtain wall system is usually controlled by the area moment of inertia required to keep deflection limits under the specification. Another way to limit deflections in a given section is to add steel reinforcement to the inside tube of the mullion. Since steel deflects at one-third the rate of aluminum, the steel will resist much of the load at a lower cost or smaller depth.
Deflection in curtain wall mullions also differs from deflection of the building structure, whether concrete, steel, or timber. Curtain wall anchors must be designed to allow differential movement between the building structure and the curtain wall.
Strength
Strength (or maximum usable stress) available to a particular material is not related to its material stiffness (the material property governing deflection); it is a separate criterion in curtain wall design and analysis. This often affects the selection of materials and sizes for design of the system. The allowable bending strength for certain aluminum alloys, such as those typically used in curtain wall framing, approaches the allowable bending strength of steel alloys used in building construction.
Thermal criteria
Relative to other building components, aluminum has a high heat transfer coefficient, meaning that aluminum is a very good conductor of heat. This translates into high heat loss through aluminum (or steel) curtain wall mullions. There are several ways to compensate for this heat loss, the most common way being the addition of thermal breaks. These are barriers between exterior metal and interior metal, usually made of polyvinyl chloride (PVC). These breaks provide a significant decrease in the thermal conductivity of the curtain wall. However, since the thermal break interrupts the aluminum mullion, the overall moment of inertia of the mullion is reduced and must be accounted for in the structural analysis and deflection analysis of the system.
Thermal conductivity of the curtain wall system is important because of heat loss through the wall, which affects the heating and cooling costs of the building. On a poorly performing curtain wall, condensation may form on the interior of the mullions. This could cause damage to adjacent interior trim and walls.
Rigid insulation is provided in spandrel areas to provide a higher R-value at these locations.
Thermally-broken mullions with double- or triple-glazed IGUs are often referred to as "high-performance" curtain walls. While these curtain wall systems are more energy-efficient than older, single-glazed versions, they are still significantly less efficient than opaque (solid) wall construction. For example, nearly all curtain wall systems, thermally-broken or otherwise, have a U-value of 0.2 or higher, which is equivalent to an R-value of 5 or lower.
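The two figures above are reciprocals of one another. Assuming the U.S. customary units these values imply (Btu/(h·ft²·°F) for U, and its inverse for R):

$R = 1/U$, so $U = 0.2 \Rightarrow R = 1/0.2 = 5$.

A lower U-value (equivalently, a higher R-value) means less heat flow through the assembly.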
Infills
Infill refers to the large panels that are inserted into the curtain wall between mullions. Infills are typically glass but may be made up of nearly any exterior building element. Some common infills include metal panels, louvers, and photovoltaic panels. Infills are also referred to as spandrels or spandrel panels.
Glass
Float glass is by far the most common curtain wall glazing type. It can be manufactured in an almost infinite combination of color, thickness, and opacity. For commercial construction, the two most common configurations are 1/4 inch monolithic and 1 inch insulating glass. 1/4 inch glass is typically used only in spandrel areas, while insulating glass is used for the rest of the building (sometimes spandrel glass is specified as insulating glass as well). The 1 inch insulating glass is typically made up of two 1/4-inch lites of glass with a 1/2-inch airspace. The air inside is usually atmospheric air, but some inert gases, such as argon or krypton, may be used in order to offer better thermal transmittance values. In Europe, triple-pane insulating glass infill is now common. In Scandinavia, the first curtain walls with quadruple-pane insulating glass have been built.
Larger thicknesses are typically employed for buildings or areas with higher thermal, relative humidity, or sound transmission requirements, such as laboratory areas or recording studios. In residential construction, thicknesses commonly used are monolithic and insulating glass.
Glass may be used which is transparent, translucent, or opaque, or in varying degrees thereof. Transparent glass usually refers to vision glass in a curtain wall. Spandrel or vision glass may also contain translucent glass, which could be for security or aesthetic purposes. Opaque glass is used in areas to hide a column or spandrel beam or shear wall behind the curtain wall. Another method of hiding spandrel areas is through shadow box construction (providing a dark enclosed space behind the transparent or translucent glass). Shadow box construction creates a perception of depth behind the glass that is sometimes desired.
Stone veneer
Thin blocks of stone can be inset within a curtain wall system. The type of stone used is limited only by the strength of the stone and the ability to manufacture it in the proper shape and size. Common stone types used are: calcium silicate, granite, marble, travertine, limestone, and engineered stone. To reduce weight and improve strength, the natural stone may be attached to an aluminum honeycomb backing.
Panels
Metal panels can take various forms including stainless steel, aluminum plate; aluminum composite panels consisting of two thin aluminum sheets sandwiching a thin plastic interlayer; copper wall cladding, and panels consisting of metal sheets bonded to rigid insulation, with or without an inner metal sheet to create a sandwich panel. Other opaque panel materials include fiber-reinforced plastic (FRP) and terracotta. Terracotta curtain wall panels were first used in Europe, but only a few manufacturers produce high quality modern terracotta curtain wall panels.
Louvers
A louver is provided in an area where mechanical equipment located inside the building requires ventilation or fresh air to operate. They can also serve as a means of allowing outside air to filter into the building to take advantage of favorable climatic conditions and minimize the usage of energy-consuming HVAC systems. Curtain wall systems can be adapted to accept most types of louver systems to maintain the same architectural sightlines and style while providing desired functionality.
Windows and vents
Most curtain wall glazing is fixed, meaning that there is no access to the exterior of the building except through doors. However, windows or vents can be glazed into the curtain wall system as well, to provide required ventilation or operable windows. Nearly any window type can be made to fit into a curtain wall system.
Fire safety
Firestopping at the perimeter slab edge, which is a gap between the floor and the curtain wall, is essential to slow the passage of fire and combustion gases between floors. Spandrel areas must have non-combustible insulation at the interior face of the curtain wall. Some building codes require the mullion to be wrapped in heat-retarding insulation near the ceiling to prevent the mullions from melting and spreading the fire to the floor above. The firestop at the perimeter slab edge is considered a continuation of the fire-resistance rating of the floor slab. The curtain wall itself, however, is not ordinarily required to have a rating. This causes a quandary as compartmentalization (fire protection) is typically based upon closed compartments to avoid fire and smoke migrations beyond each engaged compartment. A curtain wall by its very nature prevents the completion of the compartment (or envelope). The use of fire sprinklers has been shown to mitigate this matter. As such, unless the building is sprinklered, fire may still travel up the curtain wall, if the glass on the exposed floor is shattered from heat, causing flames to lick up the outside of the building.
Falling glass can endanger pedestrians, firefighters and firehoses below. An example of this is the 1988 First Interstate Tower fire in Los Angeles, California. The fire leapfrogged up the tower by shattering the glass and then consuming the aluminum framing holding the glass. Aluminum's melting temperature is 660 °C, whereas building fires can reach 1,100 °C. The melting point of aluminum is typically reached within minutes of the start of a fire.
Fireman knock-out glazing panels are often required for venting and emergency access from the exterior. Knock-out panels are generally fully tempered glass to allow full fracturing of the panel into small pieces and relatively safe removal from the opening.
Maintenance and repair
Curtain walls and perimeter sealants require maintenance to maximize service life. Perimeter sealants, properly designed and installed, have a typical service life of 10 to 15 years. Removal and replacement of perimeter sealants require meticulous surface preparation and proper detailing.
Aluminum frames are generally painted or anodized. Care must be taken when cleaning areas around anodized material as some cleaning agents will destroy the finish. Factory applied fluoropolymer thermoset coatings have good resistance to environmental degradation and require only periodic cleaning. Recoating with an air-dry fluoropolymer coating is possible but requires special surface preparation and is not as durable as the baked-on original coating. Anodized aluminum frames cannot be "re-anodized" in place but can be cleaned and protected by proprietary clear coatings to improve appearance and durability.
Stainless steel curtain walls require no coatings, and embossed, as opposed to abrasively finished, surfaces maintain their original appearance indefinitely without cleaning or other maintenance. Some specially textured matte stainless steel surface finishes are hydrophobic and resist airborne and rain-borne pollutants. This has been valuable in the American Southwest and in the Mideast for avoiding dust, as well as avoiding soot and smoke staining in polluted urban areas.
See also
Mullion wall
Insulated glazing
Quadruple glazing
Copper in architecture
References
External links
European Commission's portal for efficient Curtain Walling
EN 13830: Curtain Walling - Product Standard
EN 13119: Curtain Walling - Terminology
Understanding Curtain Wall & Window Wall differences
Types of wall
Building engineering
Construction
Architectural elements | Curtain wall (architecture) | Technology,Engineering | 4,973 |
51,184,343 | https://en.wikipedia.org/wiki/Knowledge%20inertia | Knowledge inertia (KI) is a concept in knowledge management. The term initially proposed by Shu-hsien Liao comprises a two dimensional model which incorporates experience inertia and learning inertia. Later, another dimension—the dimension of thinking inertia—has been added based on the theoretical exploration of the existing concepts of experience inertia and learning inertia.
One of the central problems in knowledge management related to organizational learning is dealing with "inertia". Individuals may also exhibit a natural tendency toward inertia when facing problems during the utilization of knowledge. Inertia in technical jargon means inactivity or torpor. Inertia in the organizational learning context may be understood as a slowdown in organizational learning-related activities. In fact, there are many other kinds of organizational inertia, e.g., innovation inertia, workforce inertia, productivity inertia, decision inertia, and emotional inertia, each of which has a different meaning in its own context.
Definition
Knowledge inertia (KI) may be defined as a problem-solving strategy that relies on old, redundant, stagnant knowledge and past experience without recourse to new knowledge and experience. Inertia is a concept in physics used to explain the state of an object remaining stationary or in uniform motion. Organizational theorists adopted this concept of inertia and applied it to different contexts, which resulted in the emergence of diverse concepts, such as, for example, organizational inertia, consumer inertia, outsourcing inertia, and cognitive inertia. Some organization theorists have adopted the definition proposed by Liao (2002) to extend its use in organizational learning studies. Not every instance of knowledge inertia results in a gloomy or negative outcome: one study suggested that knowledge inertia could positively affect a firm's product innovation.
The concept
Knowledge inertia stems from the use of routine problem-solving procedures that rely on redundant, stagnant knowledge and past experience without any recourse to new knowledge and thinking processes. Different methodologies exist for diverse types of knowledge and can be applied to manage knowledge efficiently. Since KI is a component of knowledge management, it is essential to consider the circulation of various knowledge types in avoiding inertia. The theory of KI studies the extent to which an organization's problem-solving ability is inhibited. Numerous factors can act as enablers or inhibitors of the problem-solving abilities of an individual or an organization. Overcoming knowledge inertia in problem solving may therefore require inputs from all these diverse knowledge types, as well as learning, new thinking, and fresh experience. The emergence of new ideas to supplement existing knowledge, and the assimilation of the same, can help avoid the use of stagnant, outdated information when attempting to solve problems.
See also
Cognitive inertia
Neurathian bootstrap
Psychological inertia
References
Cognitive psychology
Heuristics
Knowledge management
Problem solving | Knowledge inertia | Biology | 643 |
25,510,720 | https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20July%2013%2C%202075 | An annular solar eclipse will occur at the Moon's ascending node of orbit on Saturday, July 13, 2075, with a magnitude of 0.9467. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. An annular solar eclipse occurs when the Moon's apparent diameter is smaller than the Sun's, blocking most of the Sun's light and causing the Sun to look like an annulus (ring). An annular eclipse appears as a partial eclipse over a region of the Earth thousands of kilometres wide. Occurring about 1.4 days after apogee (on July 11, 2075, at 20:20 UTC), the Moon's apparent diameter will be smaller.
The path of annularity will be visible from parts of eastern Spain, southern France, Monaco, Italy, San Marino, Austria, Slovenia, Croatia, northwestern Bosnia and Herzegovina, Hungary, Slovakia, southwestern Czech Republic, extreme northwestern Romania, southeastern Poland, Ukraine, Belarus, and Russia. A partial solar eclipse will also be visible for parts of Europe, North Africa, Greenland, northern Canada, Alaska, and Asia.
The annular eclipse will cross Europe and Russia. Eight European capitals will observe the annular eclipse: Monaco, San Marino, Ljubljana, Zagreb, Vienna, Bratislava, Budapest and Moscow. For Moscow it will be the first central eclipse since 1887. Other large European cities (non-capitals) in which the annular eclipse will be seen include Barcelona, Marseille, Genoa, Graz, Kraków, Lviv, Nizhny Novgorod and Kirov.
Eclipse details
Shown below are two tables displaying details about this particular solar eclipse. The first table outlines the times at which the Moon's penumbra or umbra attains specific parameters, and the second table describes various other parameters pertaining to this eclipse.
Eclipse season
This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight.
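As a minimal sketch of where the 173-day spacing comes from (the eclipse-year length below is standard astronomical background, not a figure from this article):

# Sketch: eclipse seasons recur roughly every half eclipse (draconic) year,
# the interval between successive passages of the Sun through the same lunar node.
eclipse_year_days = 346.62             # approximate length of the eclipse year
print(eclipse_year_days / 2)           # ~173.3 days between eclipse seasons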
Related eclipses
Eclipses in 2075
A penumbral lunar eclipse on January 2.
A total solar eclipse on January 16.
A partial lunar eclipse on June 28.
An annular solar eclipse on July 13.
A partial lunar eclipse on December 22.
Metonic
Preceded by: Solar eclipse of September 23, 2071
Followed by: Solar eclipse of May 1, 2079
Tzolkinex
Preceded by: Solar eclipse of May 31, 2068
Followed by: Solar eclipse of August 24, 2082
Half-Saros
Preceded by: Lunar eclipse of July 7, 2066
Followed by: Lunar eclipse of July 17, 2084
Tritos
Preceded by: Solar eclipse of August 12, 2064
Followed by: Solar eclipse of June 11, 2086
Solar Saros 147
Preceded by: Solar eclipse of July 1, 2057
Followed by: Solar eclipse of July 23, 2093
Inex
Preceded by: Solar eclipse of August 2, 2046
Followed by: Solar eclipse of June 22, 2104
Triad
Preceded by: Solar eclipse of September 11, 1988
Followed by: Solar eclipse of May 14, 2162
Solar eclipses of 2073–2076
Saros 147
Metonic series
Tritos series
Inex series
References
External links
2075 in science
2075 7 13 | Solar eclipse of July 13, 2075 | Astronomy | 745 |
21,865,544 | https://en.wikipedia.org/wiki/Digital%20Electronic%20Message%20Service | The Digital Electronic Message Service (DEMS) is a two-way wireless radio service
for passing of message and facsimile data using the 10.6 and 24 GHz band. As of 1997, Associated Communications was expected to use the band to create a network in 31 U.S. cities.
In October 2005, the FCC moved part of the DEMS service from the 18/19 GHz band to 24 GHz.
References
Radio communications | Digital Electronic Message Service | Engineering | 90 |
16,084,455 | https://en.wikipedia.org/wiki/Feferman%E2%80%93Sch%C3%BCtte%20ordinal | In mathematics, the Feferman–Schütte ordinal (Γ0) is a large countable ordinal.
It is the proof-theoretic ordinal of several mathematical theories, such as arithmetical transfinite recursion.
It is named after Solomon Feferman and Kurt Schütte, the former of whom suggested the name Γ0.
There is no standard notation for ordinals beyond the Feferman–Schütte ordinal. There are several ways of representing the Feferman–Schütte ordinal itself, some of which use ordinal collapsing functions.
Definition
The Feferman–Schütte ordinal can be defined as the smallest ordinal that cannot be obtained by starting with 0 and using the operations of ordinal addition and the Veblen functions φα(β). That is, it is the smallest α such that φα(0) = α.
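For readers unfamiliar with the Veblen functions, a standard presentation (a sketch of the usual definitions, included as background rather than taken from this article) is:

\begin{align*}
\varphi_0(\beta) &= \omega^{\beta},\\
\varphi_{\alpha+1}(\beta) &= \text{the } \beta\text{-th fixed point of } \varphi_{\alpha},\\
\varphi_{\lambda}(\beta) &= \text{the } \beta\text{-th common fixed point of all } \varphi_{\gamma},\ \gamma < \lambda, \text{ for limit } \lambda,\\
\Gamma_0 &= \text{the least } \alpha \text{ such that } \varphi_{\alpha}(0) = \alpha.
\end{align*}

Under these definitions $\varphi_1(\beta) = \varepsilon_{\beta}$, so $\Gamma_0$ lies far beyond $\varepsilon_0$ in the hierarchy.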
Properties
This ordinal is sometimes said to be the first impredicative ordinal, though this is controversial, partly because there is no generally accepted precise definition of "predicative". Sometimes an ordinal is said to be predicative if it is less than Γ0.
Any recursive path ordering whose function symbols are well-founded with order type less than that of Γ0 itself has order type less than Γ0.
References
Proof theory
Ordinal numbers | Feferman–Schütte ordinal | Mathematics | 301 |
25,872,126 | https://en.wikipedia.org/wiki/Apollo%2013%3A%20Mission%20Control | Apollo 13: Mission Control is an interactive theatre show about NASA's failed Apollo 13 mission.
Premiere and touring history
The show premiered in October 2008 at BATS Theatre in Wellington, New Zealand and has toured Hamilton, Nelson and Auckland in New Zealand. It returned to Wellington for a season at the New Zealand International Arts Festival in 2010 and then embarked on an Australian tour beginning at the Sydney Opera House. Subsequent tours to Australia have included the Powerhouse Theatre in Brisbane and the State Theatre Centre in Perth.
The show made its North American debut on December 21, 2012, at the Tacoma Dome Exhibition Hall in Tacoma, Washington, United States. The following month, it toured to the Spokane Convention Center in Spokane, Washington, and the Milton Rhodes Center for the Arts in Winston-Salem, North Carolina. After 45 shows in the United States, including an extended run in Winston-Salem due to sellouts, the production returned to Wellington in March 2013.
The experience
The show tells the story from the point of view of Mission Control. Audience members are seated behind working computer consoles and are allowed to flick the switches, use the working telephones, interact with the actors and hear the three astronauts through headphones. The astronauts perform their part of the show in a command module in another room in the theatre. Each night they are joined by an audience member as the "guest astronaut" and their performances are displayed on two large screens at the front of the stage and on smaller TV monitors at the consoles. An actor playing newscaster Walter Cronkite broadcasts live news updates throughout the show, interviewing audience members and astronaut James Lovell's wife Marilyn.
Chapman Tripp Theatre Awards
The Chapman Tripp Theatre Awards were annual awards for Wellington theatre sponsored by the law firm Chapman Tripp; they have since been renamed the Ngā Whakarākei O Whātaitai / Wellington Theatre Awards.
The Weta Award for Best Set Design of the Year (Nominated)
The Montana Award for Most Original Play of the Year (Won)
Western Audio Engineering Best Sound Design of the Year (Won)
Gail Cown Management Award for Best Actor of the Year (Nominated)
References
External links
Official page of the APOLLO 13 production
NZ International Arts Show Review
SALIENT MAGAZINE, Victoria University Review
Spaceflight
Theatre in New Zealand
New Zealand plays | Apollo 13: Mission Control | Astronomy | 456 |
31,325,710 | https://en.wikipedia.org/wiki/Genpatsu-shinsai | , meaning nuclear power plant earthquake disaster (from the two words Genpatsu – nuclear power plant – and Shinsai – earthquake disaster) is a term which was coined by Japanese seismologist Professor Katsuhiko Ishibashi in 1997. It describes a domino effect scenario in which a major earthquake causes a severe accident at a nuclear power plant near a major population centre, resulting in an uncontrollable release of radiation in which the radiation levels make damage control and rescue impossible, and earthquake damage severely impedes the evacuation of the population. Ishibashi envisages that such an event would have a global impact and a 'fatal' effect on Japan, seriously affecting future generations.
In Japan, Ishibashi believes that a number of nuclear power stations could be involved in such a scenario, but that the Hamaoka Nuclear Power Plant, located near the expected centre of the anticipated Tōkai earthquakes, is the most likely candidate. He is also concerned that a similar scenario could take place elsewhere in the world, and as a result believes that the matter should be a global concern.
See also
Nuclear power in Japan
Fukushima Daiichi nuclear disaster
2011 Japanese nuclear accidents
Nuclear power
Nuclear power debate
Lists of nuclear disasters and radioactive incidents
Seismicity in Japan
International Nuclear Event Scale
External links
Katsuhiko Ishibashi: "Why worry? Japan's nuclear plants at grave risk from quake damage" The Asia-Pacific Journal: Japan Focus (August 11, 2007)
Michael Reilly: "Insight: Where not to build nuclear power stations" (preview only) New Scientist (July 28, 2007).
References
Nuclear power in Japan
Nuclear power
Nuclear safety and security
Nuclear accidents and incidents
Earthquakes in Japan
Japanese words and phrases
Fukushima Daiichi nuclear disaster | Genpatsu-shinsai | Physics,Chemistry | 354 |