id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
591,568 | https://en.wikipedia.org/wiki/Trigonometric%20polynomial | In the mathematical subfields of numerical analysis and mathematical analysis, a trigonometric polynomial is a finite linear combination of functions sin(nx) and cos(nx) with n taking on the values of one or more natural numbers. The coefficients may be taken as real numbers, for real-valued functions. For complex coefficients, there is no difference between such a function and a finite Fourier series.
Trigonometric polynomials are widely used, for example in trigonometric interpolation applied to the interpolation of periodic functions. They are used also in the discrete Fourier transform.
The term trigonometric polynomial for the real-valued case can be seen as using the analogy: the functions sin(nx) and cos(nx) are similar to the monomial basis for polynomials. In the complex case the trigonometric polynomials are spanned by the positive and negative powers of $e^{ix}$, i.e., Laurent polynomials in $z$ under the change of variables $z = e^{ix}$.
Definition
Any function T of the form
$$T(x) = a_0 + \sum_{n=1}^{N} a_n \cos(nx) + \sum_{n=1}^{N} b_n \sin(nx), \qquad x \in \mathbb{R},$$
with coefficients $a_n, b_n \in \mathbb{C}$ and at least one of the highest-degree coefficients $a_N$ and $b_N$ non-zero, is called a complex trigonometric polynomial of degree N. Using Euler's formula the polynomial can be rewritten as
$$T(x) = \sum_{n=-N}^{N} c_n e^{inx}$$
with $c_n \in \mathbb{C}$.
Analogously, letting coefficients $a_n, b_n \in \mathbb{R}$, with at least one of $a_N$ and $b_N$ non-zero or, equivalently, $c_n \in \mathbb{C}$ and $c_{-n} = \overline{c_n}$ for all $0 \le n \le N$, then
$$T(x) = a_0 + \sum_{n=1}^{N} a_n \cos(nx) + \sum_{n=1}^{N} b_n \sin(nx)$$
is called a real trigonometric polynomial of degree N.
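As a quick numerical sanity check of the two equivalent forms above, here is a minimal Python sketch; the coefficient values are arbitrary, and the conversion $c_0 = a_0$, $c_{\pm n} = (a_n \mp i b_n)/2$ follows from Euler's formula:

```python
import numpy as np

# Arbitrary real coefficients for an illustrative degree-2 trigonometric polynomial
a = [1.0, 0.5, -0.25]   # a_0, a_1, a_2
b = [0.0, 2.0, 0.75]    # b_0 (unused), b_1, b_2
N = 2

x = np.linspace(0, 2 * np.pi, 7)

# sin/cos form: T(x) = a_0 + sum a_n cos(nx) + sum b_n sin(nx)
T_real = a[0] + sum(a[n] * np.cos(n * x) + b[n] * np.sin(n * x)
                    for n in range(1, N + 1))

# Exponential form: c_0 = a_0, c_{+n} = (a_n - i b_n)/2, c_{-n} = (a_n + i b_n)/2
c = {0: complex(a[0])}
for n in range(1, N + 1):
    c[n] = (a[n] - 1j * b[n]) / 2
    c[-n] = (a[n] + 1j * b[n]) / 2
T_exp = sum(c[n] * np.exp(1j * n * x) for n in range(-N, N + 1))

assert np.allclose(T_real, T_exp.real) and np.allclose(T_exp.imag, 0)
```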
Properties
A trigonometric polynomial can be considered a periodic function on the real line, with period some divisor of $2\pi$, or as a function on the unit circle.
Trigonometric polynomials are dense in the space of continuous functions on the unit circle, with the uniform norm; this is a special case of the Stone–Weierstrass theorem. More concretely, for every continuous function $f$ and every $\varepsilon > 0$ there exists a trigonometric polynomial $T$ such that $|f(z) - T(z)| < \varepsilon$ for all $z$ on the circle. Fejér's theorem states that the arithmetic means of the partial sums of the Fourier series of $f$ converge uniformly to $f$ provided $f$ is continuous on the circle; these partial sums can be used to approximate $f$.
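The following hedged sketch illustrates Fejér's theorem numerically: it approximates the (arbitrarily chosen) continuous function f(x) = |sin x| by arithmetic means of the partial Fourier sums, computing coefficients by numerical integration rather than in closed form; the helper names are mine, not from any source.

```python
import numpy as np

f = lambda t: np.abs(np.sin(t))  # an arbitrary continuous 2π-periodic test function
x = np.linspace(0, 2 * np.pi, 2048, endpoint=False)
fx = f(x)

def fourier_coeff(k):
    """Numerical c_k = (1/2π) ∫ f(x) e^{-ikx} dx via the mean over a uniform grid."""
    return np.mean(fx * np.exp(-1j * k * x))

def partial_sum(n):
    """n-th symmetric partial sum S_n of the Fourier series, on the grid."""
    return sum(fourier_coeff(k) * np.exp(1j * k * x) for k in range(-n, n + 1)).real

def fejer_mean(N):
    """Arithmetic mean of S_0, ..., S_{N-1} (the Fejér mean)."""
    return np.mean([partial_sum(n) for n in range(N)], axis=0)

for N in (4, 16, 64):
    print(N, np.max(np.abs(fejer_mean(N) - fx)))  # sup-norm error decreases with N
```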
A trigonometric polynomial of degree $N$ has a maximum of $2N$ roots in a real interval $[a, a + 2\pi)$ unless it is the zero function.
Fejér–Riesz theorem
The Fejér–Riesz theorem states that every positive real trigonometric polynomial
$$t(x) = \sum_{n=-N}^{N} c_n e^{inx},$$
satisfying $t(x) > 0$ for all $x \in \mathbb{R}$,
can be represented as the square of the modulus of another (usually complex) trigonometric polynomial $q(x)$ such that:
$$t(x) = |q(x)|^2.$$
Or, equivalently, every Laurent polynomial
$$w(z) = \sum_{n=-N}^{N} w_n z^n$$
with $w_n \in \mathbb{C}$ that satisfies $w(e^{i\theta}) \ge 0$ for all $\theta \in \mathbb{R}$ can be written as:
$$w(e^{i\theta}) = |p(e^{i\theta})|^2$$
for some polynomial $p$.
Notes
References
See also
Trigonometric series
Quasi-polynomial
Exponential polynomial
Approximation theory
Fourier analysis
Polynomials
Trigonometry | Trigonometric polynomial | [
"Mathematics"
] | 539 | [
"Approximation theory",
"Polynomials",
"Mathematical relations",
"Approximations",
"Algebra"
] |
591,587 | https://en.wikipedia.org/wiki/Hurewicz%20theorem | In mathematics, the Hurewicz theorem is a basic result of algebraic topology, connecting homotopy theory with homology theory via a map known as the Hurewicz homomorphism. The theorem is named after Witold Hurewicz, and generalizes earlier results of Henri Poincaré.
Statement of the theorems
The Hurewicz theorems are a key link between homotopy groups and homology groups.
Absolute version
For any path-connected space X and positive integer n there exists a group homomorphism
$$h_* \colon \pi_n(X) \to H_n(X),$$
called the Hurewicz homomorphism, from the n-th homotopy group to the n-th homology group (with integer coefficients). It is given in the following way: choose a canonical generator $u_n \in H_n(S^n)$; then a homotopy class of maps $f \in \pi_n(X)$ is taken to $f_*(u_n) \in H_n(X)$.
The Hurewicz theorem states cases in which the Hurewicz homomorphism is an isomorphism.
For $n \ge 2$, if X is $(n-1)$-connected (that is: $\pi_i(X) = 0$ for all $i < n$), then $\tilde{H}_i(X) = 0$ for all $i < n$, and the Hurewicz map $h_* \colon \pi_n(X) \to H_n(X)$ is an isomorphism. This implies, in particular, that the homological connectivity equals the homotopical connectivity when the latter is at least 1. In addition, the Hurewicz map $h_* \colon \pi_{n+1}(X) \to H_{n+1}(X)$ is an epimorphism in this case.
For $n = 1$, the Hurewicz homomorphism induces an isomorphism $\tilde{h}_* \colon \pi_1(X) / [\pi_1(X), \pi_1(X)] \to H_1(X)$ between the abelianization of the first homotopy group (the fundamental group) and the first homology group.
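Two standard illustrations of the absolute statements above, added here for concreteness (both are classical consequences, not part of the original text):

```latex
% For the n-sphere S^n with n >= 2, which is (n-1)-connected, the Hurewicz map gives
\pi_n(S^n) \;\xrightarrow{\ \cong\ }\; H_n(S^n) \cong \mathbb{Z}.
% For a closed orientable surface \Sigma_g of genus g, the n = 1 case gives
H_1(\Sigma_g) \;\cong\; \pi_1(\Sigma_g)^{\mathrm{ab}} \;\cong\; \mathbb{Z}^{2g},
% even though \pi_1(\Sigma_g) itself is non-abelian for g >= 2.
```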
Relative version
For any pair of spaces $(X, A)$ and integer $k > 1$ there exists a homomorphism
$$h_* \colon \pi_k(X, A) \to H_k(X, A)$$
from relative homotopy groups to relative homology groups. The Relative Hurewicz Theorem states that if both $X$ and $A$ are connected and the pair $(X, A)$ is $(n-1)$-connected, then $H_k(X, A) = 0$ for $k < n$ and $H_n(X, A)$ is obtained from $\pi_n(X, A)$ by factoring out the action of $\pi_1(A)$. This can be proved, for example, by induction, proving in turn the absolute version and the Homotopy Addition Lemma.
This relative Hurewicz theorem has also been reformulated as a statement about the morphism
$$\pi_n(X, A) \to \pi_n(X \cup CA),$$
where $CA$ denotes the cone of $A$. This statement is a special case of a homotopical excision theorem, involving induced modules for $n > 2$ (crossed modules if $n = 2$), which itself is deduced from a higher homotopy van Kampen theorem for relative homotopy groups, whose proof requires the development of techniques of a cubical higher homotopy groupoid of a filtered space.
Triadic version
For any triad of spaces $(X; A, B)$ (i.e., a space X and subspaces A, B) and integer $k > 2$ there exists a homomorphism
$$h_* \colon \pi_k(X; A, B) \to H_k(X; A, B)$$
from triad homotopy groups to triad homology groups. Note that
$$H_k(X; A, B) \cong H_k(X \cup CA \cup CB).$$
The Triadic Hurewicz Theorem states that if X, A, B, and $A \cap B$ are connected, the pairs $(A, A \cap B)$ and $(B, A \cap B)$ are $(p-1)$-connected and $(q-1)$-connected, respectively, and the triad $(X; A, B)$ is $(p+q-2)$-connected, then $H_k(X; A, B) = 0$ for $k < p+q-2$ and $H_{p+q-2}(X; A, B)$ is obtained from $\pi_{p+q-2}(X; A, B)$ by factoring out the action of $\pi_1(A \cap B)$ and the generalised Whitehead products. The proof of this theorem uses a higher homotopy van Kampen type theorem for triadic homotopy groups, which requires a notion of the fundamental $\mathrm{cat}^n$-group of an n-cube of spaces.
Simplicial set version
The Hurewicz theorem for topological spaces can also be stated for n-connected simplicial sets satisfying the Kan condition.
Rational Hurewicz theorem
Rational Hurewicz theorem: Let X be a simply connected topological space with $\pi_i(X) \otimes \mathbb{Q} = 0$ for $i \le r$. Then the Hurewicz map
$$h \otimes \mathbb{Q} \colon \pi_i(X) \otimes \mathbb{Q} \to H_i(X; \mathbb{Q})$$
induces an isomorphism for $1 \le i \le 2r$ and a surjection for $i = 2r + 1$.
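A worked instance of the rational statement, added for illustration: for $X = S^3$ one may take $r = 2$, and the conclusion can be checked against the known low-degree homotopy groups of $S^3$.

```latex
% X = S^3: \pi_1 = \pi_2 = 0, so \pi_i(S^3) \otimes \mathbb{Q} = 0 for i \le r = 2.
% Isomorphisms in the range 1 \le i \le 2r = 4:
\pi_3(S^3) \otimes \mathbb{Q} \cong \mathbb{Q} \cong H_3(S^3;\mathbb{Q}), \qquad
\pi_4(S^3) \otimes \mathbb{Q} = \mathbb{Z}/2 \otimes \mathbb{Q} = 0 = H_4(S^3;\mathbb{Q}).
% Surjection at i = 2r+1 = 5: \pi_5(S^3) \otimes \mathbb{Q} = 0 \twoheadrightarrow H_5(S^3;\mathbb{Q}) = 0.
```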
Notes
References
Theorems in homotopy theory
Homology theory
Theorems in algebraic topology | Hurewicz theorem | [
"Mathematics"
] | 705 | [
"Theorems in algebraic topology",
"Theorems in topology"
] |
591,638 | https://en.wikipedia.org/wiki/Dihydrotestosterone | Dihydrotestosterone (DHT, 5α-dihydrotestosterone, 5α-DHT, androstanolone or stanolone) is an endogenous androgen sex steroid and hormone primarily involved in the growth and repair of the prostate and the penis, as well as the production of sebum and body hair composition.
The enzyme 5α-reductase catalyzes the formation of DHT from testosterone in certain tissues including the prostate gland, seminal vesicles, epididymides, skin, hair follicles, liver, and brain. This enzyme mediates reduction of the C4-5 double bond of testosterone. DHT may also be synthesized from progesterone and 17α-hydroxyprogesterone via the androgen backdoor pathway in the absence of testosterone. Relative to testosterone, DHT is considerably more potent as an agonist of the androgen receptor (AR).
In addition to its role as a natural hormone, DHT has been used as a medication, for instance in the treatment of low testosterone levels in men; for information on DHT as a medication, see the androstanolone article.
Biological function
DHT is biologically important for sexual differentiation of the male genitalia during embryogenesis, maturation of the penis and scrotum at puberty, growth of facial, body, and pubic hair, and development and maintenance of the prostate gland and seminal vesicles. It is produced from the less potent testosterone by the enzyme 5α-reductase in select tissues, and is the primary androgen in the genitals, prostate gland, seminal vesicles, skin, and hair follicles.
DHT signals act mainly in an intracrine and paracrine manner in the tissues in which it is produced, playing only a minor role, if any, as a circulating endocrine hormone. Circulating levels of DHT are one-tenth and one-twentieth those of testosterone in terms of total and free concentrations, respectively, whereas local DHT levels may be up to 10 times those of testosterone in tissues with high 5α-reductase expression such as the prostate gland. In addition, unlike testosterone, DHT is inactivated by 3α-hydroxysteroid dehydrogenase (3α-HSD) into the very weak androgen 3α-androstanediol in various tissues such as muscle, adipose, and liver among others, and in relation to this, DHT has been reported to be a very poor anabolic agent when administered exogenously as a medication.
In addition to normal biological functions, DHT also plays an important causative role in a number of androgen-dependent conditions including hair conditions like hirsutism (excessive facial/body hair growth) and pattern hair loss (androgenic alopecia or pattern baldness) and prostate diseases such as benign prostatic hyperplasia (BPH) and prostate cancer. 5α-Reductase inhibitors, which prevent DHT synthesis, are effective in the prevention and treatment of these conditions. Androgen deprivation is a therapeutic approach to prostate cancer that can be implemented by castration to eliminate gonadal testosterone as a precursor to DHT, but metastatic tumors may then develop into castration-resistant prostate cancer (CRPC). Although castration results in a 90–95% decrease in serum testosterone, prostatic DHT is decreased by only 50%, supporting the notion that the prostate expresses the enzymes necessary (including 5α-reductase) to produce DHT without testicular testosterone, which underlines the importance of 5α-reductase inhibitors.
DHT may play a role in skeletal muscle amino acid transporter recruitment and function.
Metabolites of DHT have been found to act as neurosteroids with their own AR-independent biological activity. 3α-Androstanediol is a potent positive allosteric modulator of the GABAA receptor, while 3β-androstanediol is a potent and selective agonist of the estrogen receptor (ER) subtype ERβ. These metabolites may play important roles in the central effects of DHT and by extension testosterone, including their antidepressant, anxiolytic, rewarding/hedonic, anti-stress, and pro-cognitive effects.
5α-Reductase 2 deficiency
Much of the biological role of DHT has been elucidated in studies of individuals with congenital 5α-reductase type 2 deficiency, an intersex condition caused by a loss-of-function mutation in the gene encoding 5α-reductase type 2, the major enzyme responsible for the production of DHT in the body. It is characterized by a defective and non-functional 5α-reductase type 2 enzyme and a partial but majority loss of DHT production in the body. In the condition, circulating testosterone levels are within or slightly above the normal male range, but DHT levels are low (around 30% of normal), and the ratio of circulating testosterone to DHT is greatly elevated (at about 3.5 to 5 times higher than normal).
Genetic males (46,XY) with 5α-reductase type 2 deficiency are born with undervirilization including pseudohermaphroditism (ambiguous genitalia), pseudovaginal perineoscrotal hypospadias, and usually undescended testes. Their external genitalia are female-like, with micropenis (a small, clitoris-like phallus), a partially unfused, labia-like scrotum, and a blind-ending, shallow vaginal pouch. Due to their lack of conspicuous male genitalia, genetic males with the condition are typically raised as girls. At the time of puberty however, they develop striking phenotypically masculine secondary sexual characteristics including partial virilization of the genitals (enlargement of the phallus into a near-functional penis and descent of the testes), voice deepening, typical male musculoskeletal development, and no menstruation, breast development, or other signs of feminization that occur during female puberty. In addition, normal libido and spontaneous erections develop, they usually show a sexual preference for females, and almost all develop a male gender identity.
Nonetheless, males with 5α-reductase type 2 deficiency exhibit signs of continued undervirilization in a number of domains. Facial hair was absent or sparse in a relatively large group of Dominican males with the condition, known as the Güevedoces. However, more facial hair has been observed in patients with the disorder from other parts of the world, although facial hair was still reduced relative to that of other men in the same communities. The divergent findings may reflect racial differences in androgen-dependent hair growth. A female pattern of androgenic hair growth, with terminal hair largely restricted to the axillae and lower pubic triangle, is observed in males with the condition. No temporal recession of the hairline or androgenic alopecia (pattern hair loss or baldness) has been observed in any of the cases of 5α-reductase type 2 deficiency that have been reported, whereas this is normally seen to some degree in almost all Caucasian males in their teenage years. Individuals with 5α-reductase type 2 deficiency were initially reported to have no incidence of acne, but subsequent research indicated normal sebum secretion and acne incidence.
In genetic males with 5α-reductase type 2 deficiency, the prostate gland is rudimentary or absent, and if present, remains small, underdeveloped, and unpalpable throughout life. In addition, neither BPH nor prostate cancer have been reported in these individuals. Genetic males with the condition generally show oligozoospermia due to undescended testes, but spermatogenesis is reported to be normal in those with testes that have descended, and there are case instances of men with the condition successfully fathering children.
Unlike males, genetic females with 5α-reductase type 2 deficiency are phenotypically normal. However, similarly to genetic males with the condition, they show reduced body hair growth, including an absence of hair on the arms and legs, slightly decreased axillary hair, and moderately decreased pubic hair. On the other hand, sebum production is normal. This is in accordance with the fact that sebum secretion appears to be entirely under the control of 5α-reductase type 1.
5α-Reductase inhibitors
5α-Reductase inhibitors like finasteride and dutasteride inhibit 5α-reductase type 2 and/or other isoforms and are able to decrease circulating DHT levels by 65 to 98% depending on the 5α-reductase inhibitor in question. As such, similarly to the case of 5α-reductase type 2 deficiency, they provide useful insights in the elucidation of the biological functions of DHT. 5α-Reductase inhibitors were developed and are used primarily for the treatment of BPH. The drugs are able to significantly reduce the size of the prostate gland and to alleviate symptoms of the condition. Long-term treatment with 5α-reductase inhibitors is also able to significantly reduce the overall risk of prostate cancer, although a simultaneous small increase in the risk of certain high-grade tumors has been observed. In addition to prostate diseases, 5α-reductase inhibitors have subsequently been developed and introduced for the treatment of pattern hair loss in men. They are able to prevent further progression of hair loss in most men with the condition and to produce some recovery of hair in about two-thirds of men. 5α-Reductase inhibitors seem to be less effective for pattern hair loss in women on the other hand, although they do still show some effectiveness. Aside from pattern hair loss, the drugs are also useful in the treatment of hirsutism and can greatly reduce facial and body hair growth in women with the condition.
5α-Reductase inhibitors are overall well tolerated and show a low incidence of adverse effects. Sexual dysfunction, including erectile dysfunction, loss of libido, and reduced ejaculate volume, may occur in 3.4 to 15.8% of men treated with finasteride or dutasteride. A small increase in the risk of affective symptoms including depression, anxiety, and self-harm may be seen. However, risk varies considerably within the affected group, with some patients reporting strong, persistent effects. Both the sexual dysfunction and affective symptoms may be due partially or fully to prevention of the synthesis of neurosteroids like allopregnanolone, rather than necessarily to inhibition of DHT production. A small risk of gynecomastia has been associated with 5α-reductase inhibitors (1.2–3.5%). Based on reports of 5α-reductase type 2 deficiency in males and the effectiveness of 5α-reductase inhibitors for hirsutism in women, reduced body and/or facial hair growth is a likely potential side effect of these drugs in men. There are far fewer studies evaluating the side effects of 5α-reductase inhibitors in women. However, due to the known role of DHT in male sexual differentiation, 5α-reductase inhibitors may cause birth defects such as ambiguous genitalia in the male fetuses of pregnant women. As such, they are not used in women during pregnancy.
MK-386 is a selective 5α-reductase type 1 inhibitor which was never marketed. Whereas 5α-reductase type 2 inhibitors achieve much higher reductions in circulating DHT production, MK-386 decreases circulating DHT levels by 20 to 30%. Conversely, it was found to decrease sebum DHT levels by 55% in men versus a modest reduction of only 15% for finasteride. However, MK-386 failed to show significant effectiveness in a subsequent clinical study for the treatment of acne.
Biological activity
DHT is a potent agonist of the AR, and is in fact the most potent known endogenous ligand of the receptor. It has an affinity (Kd) of 0.25 to 0.5 nM for the human AR, which is about 2- to 3-fold higher than that of testosterone (Kd = 0.4 to 1.0 nM) and 15–30 times higher than that of adrenal androgens. In addition, the dissociation rate of DHT from the AR is 5-fold slower than that of testosterone. The EC50 of DHT for activation of the AR is 0.13 nM, which is about 5-fold stronger than that of testosterone (EC50 = 0.66 nM). In bioassays, DHT has been found to be 2.5- to 10-fold more potent than testosterone.
The elimination half-life of DHT in the body (53 minutes) is longer than that of testosterone (34 minutes), and this may account for some of the difference in their potency. A study of transdermal (patches) DHT and testosterone treatment reported terminal half-lives of 2.83 hours and 1.29 hours, respectively.
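To make the half-life comparison concrete, here is a minimal sketch applying the standard first-order decay relation N(t) = N(0) · 2^(−t/t½); the two-hour time point is an arbitrary illustrative choice, and the function name is mine:

```python
def fraction_remaining(t_min: float, half_life_min: float) -> float:
    """Fraction of an initial amount remaining after t_min under first-order decay."""
    return 0.5 ** (t_min / half_life_min)

# Circulating elimination half-lives quoted above, in minutes
for hormone, t_half in (("DHT", 53.0), ("testosterone", 34.0)):
    print(hormone, round(fraction_remaining(120.0, t_half), 3))
# ~0.208 of DHT vs ~0.087 of testosterone remains after two hours,
# consistent with DHT persisting somewhat longer in circulation.
```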
Unlike other androgens such as testosterone, DHT cannot be converted by the enzyme aromatase into an estrogen like estradiol. Therefore, it is frequently used in research settings to distinguish between the effects of testosterone caused by binding to the AR and those caused by testosterone's conversion to estradiol and subsequent binding to and activation of ERs. Although DHT cannot be aromatized, it is still transformed into metabolites with significant ER affinity and activity. These are 3α-androstanediol and 3β-androstanediol, which are predominant agonists of the ERβ.
Biochemistry
Biosynthesis
DHT is synthesized irreversibly from testosterone by the enzyme 5α-reductase. This occurs in various tissues including the genitals (penis, scrotum, clitoris, labia majora), prostate gland, skin, hair follicles, liver, and brain. Around 5 to 7% of testosterone undergoes 5α-reduction into DHT, and approximately 200 to 300 μg of DHT is synthesized in the body per day. Most DHT is produced in peripheral tissues like the skin and liver, whereas most circulating DHT originates specifically from the liver. The testes and prostate gland contribute relatively little to concentrations of DHT in circulation.
There are two major isoforms of 5α-reductase, SRD5A1 (type 1) and SRD5A2 (type 2), with the latter being the more biologically important isoenzyme. There is also a third 5α-reductase, SRD5A3. SRD5A2 is most highly expressed in the genitals, prostate gland, epididymides, seminal vesicles, genital skin, facial and chest hair follicles, and liver, while lower expression is observed in certain brain areas, non-genital skin/hair follicles, testes, and kidneys. SRD5A1 is most highly expressed in non-genital skin/hair follicles, the liver, and certain brain areas, while lower levels are present in the prostate, epididymides, seminal vesicles, genital skin, testes, adrenal glands, and kidneys. In the skin, 5α-reductase is expressed in sebaceous glands, sweat glands, epidermal cells, and hair follicles. Both isoenzymes are expressed in scalp hair follicles, although SRD5A2 predominates in these cells. The SRD5A2 subtype is the almost exclusive isoform expressed in the prostate gland.
Backdoor pathway
DHT under certain normal and pathological conditions can additionally be produced via a route that does not involve testosterone as an intermediate but instead goes through other intermediates. This route is called the "backdoor pathway".
The pathway can start from 17α-hydroxyprogesterone or from progesterone and can be outlined as follows (depending on the initial substrate):
17α-hydroxyprogesterone → 5α-pregnan-17α-ol-3,20-dione → 5α-pregnane-3α,17α-diol-20-one → androsterone → 5α-androstane-3α,17β-diol → DHT.
progesterone → 5α-dihydroprogesterone → allopregnanolone → 5α-pregnane-3α,17α-diol-20-one → androsterone → 5α-androstane-3α,17β-diol → DHT.
This pathway is not always considered in the clinical evaluation of patients with hyperandrogenism, for instance that due to rare disorders of sex development like 21-hydroxylase deficiency. Ignoring this pathway in such instances may lead to diagnostic pitfalls and confusion when the conventional androgen biosynthetic pathway cannot fully explain the observed consequences.
As with the conventional pathway of DHT synthesis, the backdoor pathway similarly requires 5α-reductase. Whereas 5α-reduction is the last transformation in the classical androgen pathway, it is the first step in the backdoor pathway.
Distribution
The plasma protein binding of DHT is more than 99%. In men, approximately 0.88% of DHT is unbound and hence free, while in premenopausal women, about 0.47–0.48% is unbound. In men, DHT is bound 49.7% to sex hormone-binding globulin (SHBG), 39.2% to albumin, and 0.22% to corticosteroid-binding globulin (CBG), while in premenopausal women, DHT is bound 78.1–78.4% to SHBG, 21.0–21.3% to albumin, and 0.12% to CBG. In late pregnancy, only 0.07% of DHT is unbound in women; 97.8% is bound to SHBG while 2.15% is bound to albumin and 0.04% is bound to CBG. DHT has higher affinity for SHBG than does testosterone, estradiol, or any other steroid hormone.
Metabolism
DHT is inactivated in the liver and extrahepatic tissues like the skin into 3α-androstanediol and 3β-androstanediol by the enzymes 3α-hydroxysteroid dehydrogenase and 3β-hydroxysteroid dehydrogenase, respectively. These metabolites are in turn converted, respectively, into androsterone and epiandrosterone, then conjugated (via glucuronidation and/or sulfation), released into circulation, and excreted in urine.
Unlike testosterone, DHT cannot be aromatized into an estrogen like estradiol, and for this reason, has no propensity for estrogenic effects.
Excretion
DHT is excreted in the urine as metabolites, such as conjugates of 3α-androstanediol and androsterone.
Levels
Ranges for circulating total DHT levels tested with HPLC–MS/MS and reported by LabCorp are as follows:
Men: 30–85 ng/dL
Women: 4–22 ng/dL
Prepubertal children: <3 ng/dL
Pubertal boys: 3–65 ng/dL (mean at Tanner stage 5: 43 ng/dL)
Pubertal girls: 3–19 ng/dL (mean at Tanner stage 5: 9 ng/dL)
Ranges for circulating free DHT levels tested with HPLC–MS/MS and equilibrium dialysis and reported by LabCorp are as follows:
<18 years of age: not established
Adult males: 2.30–11.60 pg/mL (0.54–2.58% free)
Adult females: 0.09–1.02 pg/mL (<1.27% free)
Other studies and labs assessing circulating total DHT levels with LC–MS/MS have reported ranges of 11–95 ng/dL (0.38–3.27 nmol/L) in adult men, 14–77 ng/dL (0.47–2.65 nmol/L) for healthy adult men (age 18–59 years), 23–102 ng/dL (0.8–3.5 nmol/L) for community-dwelling adult men (age <65 years), and 14–92 ng/dL (0.49–3.2 nmol/L) for healthy older men (age 71–87 years). In the case of women, mean circulating DHT levels have been found to be about 9 ng/dL (0.3 nmol/L) in premenopausal women and 3 ng/dL (0.1 nmol/L) in postmenopausal women. There was no variation in DHT levels across the menstrual cycle in premenopausal women, in contrast to testosterone (which shows a peak at mid-cycle). With immunoassay-based techniques, testosterone levels in premenopausal women have been found to be about 40 ng/dL (1.4 nmol/L) and DHT levels about 10 ng/dL (0.34 nmol/L). With radioimmunoassays, the ranges for testosterone and DHT levels in women have been found to be 20 to 70 ng/dL and 5 to 30 ng/dL, respectively.
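The paired ng/dL and nmol/L figures above are related by the molar mass of DHT (about 290.4 g/mol). A minimal conversion sketch (the function name is mine), which reproduces the quoted pairs up to rounding:

```python
DHT_MOLAR_MASS = 290.44  # g/mol, equivalently ng/nmol

def ng_dl_to_nmol_l(ng_dl: float) -> float:
    """Convert a DHT concentration from ng/dL to nmol/L (1 dL = 0.1 L)."""
    return ng_dl * 10.0 / DHT_MOLAR_MASS  # ng/dL -> ng/L -> nmol/L

# e.g. 11–95 ng/dL -> ~0.38–3.27 nmol/L, matching the range quoted above
for ng in (11, 95, 14, 77):
    print(ng, "ng/dL =", round(ng_dl_to_nmol_l(ng), 2), "nmol/L")
```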
Levels of total testosterone, free testosterone, and free DHT, but not total DHT, all measured with LC–MS/MS, are higher in women with polycystic ovary syndrome (PCOS) than in women without this condition.
Circulating DHT levels in eugonadal men are about 7- to 10-fold lower than those of testosterone, and plasma levels of testosterone and DHT are highly correlated (correlation coefficient of 0.7). In contrast to the circulation however, levels of DHT in the prostate gland are approximately 5- to 10-fold higher than those of testosterone. This is due to a more than 90% conversion of testosterone into DHT in the prostate via locally expressed 5α-reductase. Because of this, and because DHT is much more potent as an androgen receptor agonist than testosterone, DHT is the major androgen in the prostate gland.
Medical use
DHT is available in pharmaceutical formulations for medical use as an androgen or anabolic–androgenic steroid (AAS). It is used mainly in the treatment of male hypogonadism. When used as a medication, dihydrotestosterone is referred to as androstanolone or as stanolone, and is sold under brand names such as Andractim among others. The availability of pharmaceutical DHT is limited; it is not available in the United States or Canada, but is available in certain European countries. The available formulations of DHT include buccal or sublingual tablets, topical gels, and, as esters in oil, injectables like androstanolone propionate and androstanolone valerate.
Performance enhancement
DHT has been used as a performance-enhancing drug, specifically as an alternative to testosterone, because its use was once undetectable by standard drug tests.
Chemistry
DHT, also known as 5α-androstan-17β-ol-3-one, is a naturally occurring androstane steroid with a ketone group at the C3 position and a hydroxyl group at the C17β position. It is the derivative of testosterone in which the double bond between the C4 and C5 positions has been reduced or hydrogenated.
History
DHT was first synthesized by Adolf Butenandt and his colleagues in 1935. It was prepared via hydrogenation of testosterone, which had been discovered earlier that year. DHT was introduced for medical use as an AAS in 1953, and was noted to be more potent than testosterone but with reduced androgenicity. It was not elucidated to be an endogenous substance until 1956, when it was shown to be formed from testosterone in rat liver homogenates. In addition, the biological importance of DHT was not realized until the early 1960s, when it was found to be produced by 5α-reductase from circulating testosterone in target tissues like the prostate gland and seminal vesicles and was found to be more potent than testosterone in bioassays. The biological functions of DHT in humans became much more clearly defined upon the discovery and characterization of 5α-reductase type 2 deficiency in 1974. DHT was the last major sex hormone, the others being testosterone, estradiol, and progesterone, to be discovered, and is unique in that it is the only major sex hormone that functions principally as an intracrine and paracrine hormone rather than as an endocrine hormone.
DHT was one of the original "underground" drugs used to evade drug testing in sport, as DHT does not alter the ratio of testosterone to epitestosterone in an athlete's urinary steroid profile, a measurement that was once the basis of drug tests used to detect steroid use. However, DHT use can still be detected by other means which are now universal in athletic drug tests, such as metabolite analysis.
In 2004, Richard Auchus, in a review published in Trends in Endocrinology and Metabolism coined the term "backdoor pathway" as a metabolic route to DHT that: 1) bypasses conventional intermediates androstenedione and testosterone; 2) involves 5α-reduction of 21-carbon (C21) pregnanes to 19-carbon (C19) androstanes; and 3) involves the 3α-oxidation of 5α-androstane-3α,17β-diol to DHT. This newly discovered pathway explained how DHT is produced under certain normal and pathological conditions in humans when the classical androgen pathway (via testosterone) cannot fully explain the observed consequences. This review was based on earlier works (published in 2000–2004) by Shaw et al., Wilson et al., and Mahendroo et al., who studied DHT biosynthesis in tammar wallaby pouch young and mice.
In 2011, Chang et al. demonstrated that yet another metabolic pathway to DHT was dominant and possibly essential in castration-resistant prostate cancer (CRPC). This pathway can be outlined as androstenedione → 5α-androstane-3,17-dione → DHT. While this pathway was described as the "5α-dione pathway" in a 2012 review, the existence of such a pathway in the prostate was hypothesized in a 2008 review by Luu-The et al.
References
5α-Reduced steroid metabolites
Anabolic–androgenic steroids
Androstanes
Animal reproductive system
Cyclopentanols
GABAA receptor positive allosteric modulators
Hormones of the hypothalamus-pituitary-gonad axis
Hormones of the testis
Human hormones
Ketones
Selective ERβ agonists
Sex hormones
Testosterone | Dihydrotestosterone | [
"Chemistry",
"Biology"
] | 5,811 | [
"Behavior",
"Sex hormones",
"Ketones",
"Functional groups",
"Sexuality"
] |
591,668 | https://en.wikipedia.org/wiki/Saponin | Saponins (Latin "sapon", soap + "-in", one of) are bitter-tasting, usually toxic plant-derived secondary metabolites. They are organic chemicals and have a foamy quality when agitated in water and a high molecular weight. They are present in a wide range of plant species throughout the bark, leaves, stems, roots and flowers but particularly in soapwort (genus Saponaria), a flowering plant, the soapbark tree (Quillaja saponaria), common corn-cockle (Agrostemma githago L.), baby's breath (Gypsophila spp.) and soybeans (Glycine max L.). They are used in soaps, medicines (e.g. drug adjuvants), fire extinguishers, dietary supplements, steroid synthesis, and in carbonated beverages (for example, being responsible for maintaining the head on root beer). Saponins are both water and fat soluble, which gives them their useful soap properties. Some examples of these chemicals are glycyrrhizin (licorice flavoring) and quillaia (alt. quillaja), a bark extract used in beverages.
Classification based on chemical structure
Structurally, they are glycosides with at least one glycosidic linkage between a sugar chain (glycone) and another non-sugar organic molecule (aglycone).
Steroid glycosides
Steroid glycosides are saponins with 27 carbon atoms. They are modified triterpenoids whose aglycone is a steroid; these compounds typically consist of a steroid aglycone attached to one or more sugar molecules and can have various biological activities. They are known for their significant cytotoxic, neurotrophic and antibacterial properties, and may also be used for the partial synthesis of sex hormones or steroids.
Triterpene glycosides
Triterpene glycosides are natural glycosides, present in various plants, herbs and sea cucumbers, that possess 30 carbon atoms. These compounds consist of a triterpene aglycone attached to one or more sugar molecules. Triterpene glycosides exhibit a wide range of biological activities and pharmacological properties, making them valuable in traditional medicine and modern drug discovery.
Uses
The saponins are a subclass of terpenoids, the largest class of plant extracts. The amphipathic nature of saponins gives them activity as surfactants with potential ability to interact with cell membrane components, such as cholesterol and phospholipids, possibly making saponins useful for development of cosmetics and drugs. Saponins have also been used as adjuvants in development of vaccines, such as Quil A, an extract from the bark of Quillaja saponaria. This makes them of interest for possible use in subunit vaccines and vaccines directed against intracellular pathogens. In their use as adjuvants for manufacturing vaccines, toxicity associated with sterol complexation remains a concern.
Quillaja is toxic when consumed in large amounts, involving possible liver damage, gastric pain, diarrhea, or other adverse effects. The NOAEL of saponins is around 300 mg/kg in rodents, so a dose of 3 mg/kg should be safe with a safety factor (see Therapeutic index) of 100.
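A trivial sketch of the safety-factor arithmetic described above; the 70 kg body weight is an assumption added purely for illustration:

```python
NOAEL_MG_PER_KG = 300.0  # rodent no-observed-adverse-effect level, per the text
SAFETY_FACTOR = 100.0    # conventional safety factor (see Therapeutic index)

safe_dose_mg_per_kg = NOAEL_MG_PER_KG / SAFETY_FACTOR  # = 3 mg/kg, as in the text
assumed_body_weight_kg = 70.0                          # hypothetical adult weight
print(safe_dose_mg_per_kg * assumed_body_weight_kg, "mg")  # ~210 mg for a 70 kg adult
```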
Saponins are used for their effects on ammonia emissions in animal feeding. In the United States, researchers are exploring the use of saponins derived from plants to control invasive worm species, including the jumping worm.
Decoction
The principal historical use of these plants was boiling down to make soap. Saponaria officinalis is most suited for this procedure, but other related species also work. The greatest concentration of saponin occurs during flowering, with the most saponin found in the woody stems and roots, but the leaves also contain some.
Biological sources
Saponins have historically been plant-derived, but they have also been isolated from marine organisms such as sea cucumber. They derive their name from the soapwort plant (genus Saponaria, family Caryophyllaceae), the root of which was used historically as a soap. In other representatives of this family, e.g. Agrostemma githago, Gypsophila spp., and Dianthus sp., saponins are also present in large quantities. Saponins are also found in the botanical family Sapindaceae, including its defining genus Sapindus (soapberry or soapnut) and the horse chestnut, and in the closely related families Aceraceae (maples) and Hippocastanaceae. It is also found heavily in Gynostemma pentaphyllum (Cucurbitaceae) in a form called gypenosides, and ginseng or red ginseng (Panax, Araliaceae) in a form called ginsenosides. Saponins are also found in the unripe fruit of Manilkara zapota (also known as sapodillas), resulting in highly astringent properties. Nerium oleander (Apocynaceae), also known as white oleander, is a source of the potent cardiac toxin oleandrin. Within these families, this class of chemical compounds is found in various parts of the plant: leaves, stems, roots, bulbs, blossom and fruit. Commercial formulations of plant-derived saponins, e.g., from the soap bark tree, Quillaja saponaria, and those from other sources are available via controlled manufacturing processes, which make them of use as chemical and biomedical reagents. Soyasaponins are a group of structurally complex oleanane-type triterpenoid saponins that comprise a soyasapogenol (aglycone) and oligosaccharide moieties, biosynthesized in soybean tissues. Soyasaponins have been associated with plant–microbe interactions via root exudates and with abiotic stresses such as nutritional deficiency.
Role in plant ecology and impact on animal foraging
In plants, saponins may serve as anti-feedants, and to protect the plant against microbes and fungi. Some plant saponins (e.g., from oat and spinach) may enhance nutrient absorption and aid in animal digestion. However, saponins are often bitter to taste, and so can reduce plant palatability (e.g., in livestock feeds), or even imbue them with life-threatening animal toxicity. Some saponins are toxic to cold-blooded organisms and insects at particular concentrations. Further research is needed to define the roles of these natural products in their host organisms, which have been described as "poorly understood" to date.
Ethnobotany
Most saponins, which readily dissolve in water, are poisonous to fish. Therefore, in ethnobotany, they are known for their use by indigenous people in obtaining aquatic food sources. Since prehistoric times, cultures throughout the world have used fish-killing plants, typically containing saponins, for fishing.
Although prohibited by law, fish-poison plants are still widely used by indigenous tribes in Guyana.
On the Indian subcontinent, the Gondi people use poison-plant extracts in fishing.
In the 16th century, the saponin-rich plant Agrostemma githago was used to treat ulcers, fistulas, and hemorrhages.
Many of California's Native American tribes traditionally used soaproot (genus Chlorogalum), and/or the root of various yucca species, which contain saponin, as a fish poison. They would pulverize the roots, mix with water to generate a foam, then put the suds into a stream. This would kill or incapacitate the fish, which could be gathered easily from the surface of the water. Among the tribes using this technique were the Lassik, the Luiseño, and the Mattole.
Chemical structure
The vast heterogeneity of structures underlying this class of compounds makes generalizations difficult; saponins are a subclass of terpenoids, the oxygenated derivatives of terpene hydrocarbons. Terpenes in turn are formally made up of five-carbon isoprene units (the alternative steroid base is a terpene skeleton missing a few carbon atoms). Derivatives are formed by substituting other groups for some of the hydrogen atoms of the base structure. In the case of most saponins, one of these substituents is a sugar, so the compound is a glycoside of the base molecule.
More specifically, the lipophilic base structure of a saponin can be a triterpene, a steroid (such as spirostanol or furostanol) or a steroidal alkaloid (in which nitrogen atoms replace one or more carbon atoms). Alternatively, the base structure may be an acyclic carbon chain rather than the ring structure typical of steroids. One or two (rarely three) hydrophilic monosaccharide (simple sugar) units bind to the base structure via their hydroxyl (OH) groups. In some cases other substituents are present, such as carbon chains bearing hydroxyl or carboxyl groups. Such chain structures may be 1–11 carbon atoms long, but are usually 2–5 carbons long; the carbon chains themselves may be branched or unbranched.
The most commonly encountered sugars are monosaccharides like glucose and galactose, though a wide variety of sugars occurs naturally. Other kinds of molecules such as organic acids may also attach to the base, by forming esters via their carboxyl (COOH) groups. Of particular note among these are sugar acids such as glucuronic acid and galacturonic acid, which are oxidized forms of glucose and galactose.
See also
Cardenolide
Cardiac glycoside
Phytochemical
References
Saponaceous plants
Wood extracts | Saponin | [
"Chemistry"
] | 2,123 | [
"Biomolecules by chemical classification",
"Natural products",
"Saponins"
] |
591,703 | https://en.wikipedia.org/wiki/Szemer%C3%A9di%27s%20theorem | In arithmetic combinatorics, Szemerédi's theorem is a result concerning arithmetic progressions in subsets of the integers. In 1936, Erdős and Turán conjectured that every set of integers A with positive natural density contains a k-term arithmetic progression for every k. Endre Szemerédi proved the conjecture in 1975.
Statement
A subset A of the natural numbers is said to have positive upper density if
$$\limsup_{n \to \infty} \frac{|A \cap \{1, 2, \dots, n\}|}{n} > 0.$$
Szemerédi's theorem asserts that a subset of the natural numbers with positive upper density contains an arithmetic progression of length k for all positive integers k.
An often-used equivalent finitary version of the theorem states that for every positive integer k and real number $\delta \in (0, 1]$, there exists a positive integer
$$N = N(k, \delta)$$
such that every subset of {1, 2, ..., N} of size at least $\delta N$ contains an arithmetic progression of length k.
Another formulation uses the function rk(N), the size of the largest subset of {1, 2, ..., N} without an arithmetic progression of length k. Szemerédi's theorem is equivalent to the asymptotic bound
$$r_k(N) = o(N).$$
That is, rk(N) grows less than linearly with N.
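A brute-force sketch of $r_k(N)$ for very small N (the search is exponential in N, so this is purely illustrative; the function names are mine):

```python
from itertools import combinations

def has_ap(s: frozenset, k: int) -> bool:
    """Return True if the set s contains a k-term arithmetic progression."""
    for start in s:
        for d in range(1, max(s)):
            if all(start + i * d in s for i in range(k)):
                return True
    return False

def r(k: int, n: int) -> int:
    """Size of the largest subset of {1, ..., n} with no k-term AP (brute force)."""
    for size in range(n, 0, -1):
        if any(not has_ap(frozenset(c), k)
               for c in combinations(range(1, n + 1), size)):
            return size
    return 0

print([r(3, n) for n in range(1, 9)])  # [1, 2, 2, 3, 4, 4, 4, 4]
```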
History
Van der Waerden's theorem, a precursor of Szemerédi's theorem, was proved in 1927.
The cases k = 1 and k = 2 of Szemerédi's theorem are trivial. The case k = 3, known as Roth's theorem, was established in 1953 by Klaus Roth via an adaptation of the Hardy–Littlewood circle method. Szemerédi next proved the case k = 4 through combinatorics. Using an approach similar to the one he used for the case k = 3, Roth gave a second proof for k = 4 in 1972.
The general case was settled in 1975, also by Szemerédi, who developed an ingenious and complicated extension of his previous combinatorial argument for k = 4 (called "a masterpiece of combinatorial reasoning" by Erdős). Several other proofs are now known, the most important being those by Hillel Furstenberg in 1977, using ergodic theory, and by Timothy Gowers in 2001, using both Fourier analysis and combinatorics while also introducing what is now called the Gowers norm. Terence Tao has called the various proofs of Szemerédi's theorem a "Rosetta stone" for connecting disparate fields of mathematics.
Quantitative bounds
It is an open problem to determine the exact growth rate of rk(N). The best known general bounds are
$$\frac{CN}{\exp\left(n\, 2^{(n-1)/2} \sqrt[n]{\log N}\right)} \;\le\; r_k(N) \;\le\; \frac{N}{(\log \log N)^{2^{-2^{k+9}}}},$$
where $n = \lceil \log k \rceil$. The lower bound is due to O'Bryant building on the work of Behrend, Rankin, and Elkin. The upper bound is due to Gowers.
For small k, there are tighter bounds than the general case. When k = 3, Bourgain, Heath-Brown, Szemerédi, Sanders, and Bloom established progressively smaller upper bounds, and Bloom and Sisask then proved the first bound that broke the so-called "logarithmic barrier". The current best bounds are
$$C N \cdot \frac{(\log N)^{1/4}}{2^{2\sqrt{2}\sqrt{\log_2 N}}} \;\le\; r_3(N) \;\le\; N e^{-c(\log N)^{1/9}}, \quad \text{for some constants } C, c > 0,$$
respectively due to O'Bryant, and Bloom and Sisask (the latter built upon the breakthrough result of Kelley and Meka, who obtained the same upper bound, with "1/9" replaced by "1/12").
For k = 4, Green and Tao proved that
$$r_4(N) \le C \frac{N}{(\log N)^{c}}$$
for some constants $C, c > 0$.
For k = 5 in 2023, and for k ≥ 5 in 2024, Leng, Sah and Sawhney proved in preprints that
$$r_k(N) \ll N \exp\left(-(\log \log N)^{c_k}\right)$$
for some constant $c_k > 0$.
Extensions and generalizations
A multidimensional generalization of Szemerédi's theorem was first proven by Hillel Furstenberg and Yitzhak Katznelson using ergodic theory. Timothy Gowers, Vojtěch Rödl and Jozef Skokan with Brendan Nagle, Rödl, and Mathias Schacht, and Terence Tao provided combinatorial proofs.
Alexander Leibman and Vitaly Bergelson generalized Szemerédi's theorem to polynomial progressions: If $A \subset \mathbb{N}$ is a set with positive upper density and $p_1(n), \dots, p_k(n)$ are integer-valued polynomials such that $p_i(0) = 0$, then there are infinitely many pairs of integers $u, n$ such that $u + p_i(n) \in A$ for all $1 \le i \le k$. Leibman and Bergelson's result also holds in a multidimensional setting.
The finitary version of Szemerédi's theorem can be generalized to finite additive groups, including vector spaces over finite fields. The finite field analog can be used as a model for understanding the theorem in the natural numbers. The problem of obtaining bounds in the k = 3 case of Szemerédi's theorem in the vector space $\mathbb{F}_3^n$ is known as the cap set problem.
The Green–Tao theorem asserts the prime numbers contain arbitrarily long arithmetic progressions. It is not implied by Szemerédi's theorem because the primes have density 0 in the natural numbers. As part of their proof, Ben Green and Tao introduced a "relative" Szemerédi theorem which applies to subsets of the integers (even those with 0 density) satisfying certain pseudorandomness conditions. A more general relative Szemerédi theorem has since been given by David Conlon, Jacob Fox, and Yufei Zhao.
The Erdős conjecture on arithmetic progressions would imply both Szemerédi's theorem and the Green–Tao theorem.
See also
Problems involving arithmetic progressions
Ergodic Ramsey theory
Arithmetic combinatorics
Szemerédi regularity lemma
Van der Waerden's theorem
Notes
Further reading
External links
PlanetMath source for initial version of this page
Announcement by Ben Green and Terence Tao – the preprint is available at math.NT/0404188
Discussion of Szemerédi's theorem (part 1 of 5)
Ben Green and Terence Tao: Szemerédi's theorem on Scholarpedia
Additive combinatorics
Ramsey theory
Theorems in combinatorics
Theorems in number theory | Szemerédi's theorem | [
"Mathematics"
] | 1,216 | [
"Mathematical theorems",
"Theorems in combinatorics",
"Additive combinatorics",
"Combinatorics",
"Theorems in discrete mathematics",
"Theorems in number theory",
"Mathematical problems",
"Ramsey theory",
"Number theory"
] |
591,740 | https://en.wikipedia.org/wiki/719%20Albert | 719 Albert, provisional designation , is a stony asteroid, approximately 2.5 kilometers in diameter, classified as a near-Earth object of the Amor group of asteroids. It was discovered by Austrian astronomer Johann Palisa at the Vienna Observatory on 3 October 1911, and subsequently a lost minor planet for 89 years. The asteroid was named in memory of Albert Salomon Anselm von Rothschild, an Austrian philanthropist and banker. Albert was the second Amor asteroid discovered, the first being 433 Eros.
Orbit and classification
Albert orbits the Sun at a distance of 1.2–4.1 AU once every 4 years and 3 months (1,567 days). Its orbit has an eccentricity of 0.55 and an inclination of 12° with respect to the ecliptic. The asteroid's first observation is a precovery taken in September 1911 at Heidelberg Observatory, two weeks prior to its discovery at Vienna. The body's observation arc begins the night following its official discovery observation. Albert is also a Mars-crossing asteroid.
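The quoted orbital elements can be sanity-checked from the perihelion and aphelion distances alone, using Kepler's third law in its two-body form (a sketch; the small discrepancy comes from the rounded 1.2 and 4.1 AU figures):

```python
q, Q = 1.2, 4.1          # perihelion and aphelion distances in AU, per the text
a = (q + Q) / 2          # semi-major axis: 2.65 AU
e = (Q - q) / (Q + q)    # eccentricity: ~0.547, matching the quoted 0.55

period_years = a ** 1.5  # Kepler's third law: P[yr] = a[AU]^(3/2)
period_days = period_years * 365.25

print(round(e, 3), round(period_years, 2), round(period_days))
# 0.547 4.31 1576 -- close to the quoted 4 years 3 months (1,567 days)
```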
Close approaches
The asteroid has a minimum orbital intersection distance with Earth of approximately 0.20 AU, which translates into 79.1 lunar distances. On 8 September 1911, shortly before its discovery, it made its closest approach to Earth. After another close encounter in 1941, Albert will not approach Earth to a similar distance until 2078.
Discovery
Discovered in 1911 by Johann Palisa, Albert was named after one of the Imperial Observatory in Vienna's major benefactors, Albert Salomon von Rothschild, who had died some months before. Due to inaccuracies in the asteroid's computed orbit it was subsequently lost and not recovered until 2000 by Jeffrey Larsen using data from the Spacewatch asteroid survey project. Prior to being recovered in 2000, Albert was the last "lost asteroid" among those assigned numbers (69230 Hermes was not numbered until 2003). The second-last "lost" numbered asteroid, 878 Mildred, had been recovered in 1991.
When it was rediscovered, Albert was mistakenly thought to be a new asteroid and was given the provisional designation 2000 JW8. Upon further investigation, however, it was noticed that its orbital plane matched up nicely with that of the last remaining "lost" asteroid, and it was properly identified. Using the new observational data, the period was determined to be about 4.28 years instead of the 4.1 years calculated in 1911; this discrepancy was the primary reason the asteroid was lost.
Physical properties
In the SMASS classification, Albert is a common stony S-type asteroid. Others also characterized it as a stony asteroid, while a study using Sloan photometry considers it to be an X-type asteroid.
Most of what is known about 719 Albert comes from observations taken after its rediscovery. In 2001 it passed near the Earth, allowing for a series of observations at differing phase angles. During this pass its rotational period was calculated at 5.802 hours and a measured absolute magnitude of 15.43 together with an assumed albedo of 0.12 gave a diameter of 2.8 km. Another group led by R. P. Binzel measured an absolute magnitude of 15.8; they, however, used an assumed albedo of 0.15 leading to a calculated diameter of 2.4 km.
The Collaborative Asteroid Lightcurve Link assumes a standard albedo for stony asteroids of 0.20 and calculates a diameter of 2.36 kilometers based on an absolute magnitude of 15.5.
Notes
References
External links
Asteroid Lightcurve Database (LCDB), query form (info )
(719) Albert at EARN data base
Dictionary of Minor Planet Names, Google books
Asteroids and comets rotation curves, CdR – Observatoire de Genève, Raoul Behrend
Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center
Discoveries by Johann Palisa
Named minor planets
Recovered astronomical objects | 719 Albert | [
"Astronomy"
] | 792 | [
"Recovered astronomical objects",
"Astronomical objects"
] |
591,756 | https://en.wikipedia.org/wiki/Xanthophyll | Xanthophylls (originally phylloxanthins) are yellow pigments that occur widely in nature and form one of two major divisions of the carotenoid group; the other division is formed by the carotenes. The name is from Greek: (), meaning "yellow", and (), meaning "leaf"), due to their formation of the yellow band seen in early chromatography of leaf pigments.
Molecular structure
As both are carotenoids, xanthophylls and carotenes are similar in structure, but xanthophylls contain oxygen atoms while carotenes are purely hydrocarbons, which do not contain oxygen. Their content of oxygen causes xanthophylls to be more polar (in molecular structure) than carotenes, and causes their separation from carotenes in many types of chromatography. (Carotenes are usually more orange in color than xanthophylls.)
Xanthophylls present their oxygen either as hydroxyl groups or as oxygen atoms acting as a bridge between two carbon atoms (epoxides).
Occurrence
Like other carotenoids, xanthophylls are found in highest quantity in the leaves of most green plants, where they act to modulate light energy and perhaps serve as a non-photochemical quenching agent to deal with triplet chlorophyll (an excited form of chlorophyll), which is overproduced at high light levels in photosynthesis. The xanthophylls found in the bodies of animals including humans, and in dietary animal products, are ultimately derived from plant sources in the diet. For example, the yellow color of chicken egg yolks, fat, and skin comes from ingested xanthophylls—primarily lutein, which is added to chicken feed for this purpose.
The yellow color of the macula lutea (literally, yellow spot) in the retina of the human eye results from the presence of lutein and zeaxanthin. Again, both these specific xanthophylls require a source in the human diet to be present in the human eye. They protect the eye from high-energy blue and ultraviolet light, which they absorb; but xanthophylls do not function in the mechanism of sight itself as they cannot be converted to retinal (also called retinaldehyde or vitamin A aldehyde). Their physical arrangement in the macula lutea is believed to be the cause of Haidinger's brush, an entoptic phenomenon that enables perception of polarized light.
Example compounds
The group of xanthophylls includes (among many other compounds) lutein, zeaxanthin, neoxanthin, violaxanthin, flavoxanthin, and α- and β-cryptoxanthin. The latter compound is the only known xanthophyll to contain a beta-ionone ring, and thus β-cryptoxanthin is the only xanthophyll that is known to possess pro-vitamin A activity for mammals. Even then, it is a vitamin only for plant-eating mammals that possess the enzyme to make retinal from carotenoids that contain beta-ionone (some carnivores lack this enzyme). In species other than mammals, certain xanthophylls may be converted to hydroxylated retinal-analogues that function directly in vision. For example, with the exception of certain flies, most insects use the xanthophyll derived R-isomer of 3-hydroxyretinal for visual activities, which means that β-cryptoxanthin and other xanthophylls (such as lutein and zeaxanthin) may function as forms of visual "vitamin A" for them, while carotenes (such as beta carotene) do not.
Xanthophyll cycle
The xanthophyll cycle involves the enzymatic removal of epoxy groups from xanthophylls (e.g. violaxanthin, antheraxanthin, diadinoxanthin) to create so-called de-epoxidised xanthophylls (e.g. diatoxanthin, zeaxanthin). These enzymatic cycles were found to play a key role in stimulating energy dissipation within light-harvesting antenna proteins by non-photochemical quenching, a mechanism to reduce the amount of energy that reaches the photosynthetic reaction centers. Non-photochemical quenching is one of the main ways of protecting against photoinhibition.
In higher plants, there are three carotenoid pigments that are active in the xanthophyll cycle: violaxanthin, antheraxanthin, and zeaxanthin. During light stress, violaxanthin is converted, i.e. reduced, to zeaxanthin via the intermediate antheraxanthin, which plays a direct photoprotective role acting as a lipid-protective anti-oxidant and by stimulating non-photochemical quenching within light-harvesting proteins. This conversion of violaxanthin to zeaxanthin is done by the enzyme violaxanthin de-epoxidase (EC 1.23.5.1), while the reverse reaction, i.e. oxidation, is performed by zeaxanthin epoxidase (EC 1.14.15.21).
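A toy data-structure sketch of the two directions of the higher-plant cycle just described (illustrative modeling only; the function and dictionary names are mine):

```python
# High light: violaxanthin de-epoxidase removes epoxy groups stepwise.
DE_EPOXIDATION = {"violaxanthin": "antheraxanthin", "antheraxanthin": "zeaxanthin"}
# Low light: zeaxanthin epoxidase runs the reverse reaction.
EPOXIDATION = {product: substrate for substrate, product in DE_EPOXIDATION.items()}

def run_cycle(pigment: str, high_light: bool, steps: int = 2) -> str:
    """Follow the cycle from a starting pigment under a fixed light condition."""
    table = DE_EPOXIDATION if high_light else EPOXIDATION
    for _ in range(steps):
        pigment = table.get(pigment, pigment)  # end states map to themselves
    return pigment

print(run_cycle("violaxanthin", high_light=True))   # -> zeaxanthin
print(run_cycle("zeaxanthin", high_light=False))    # -> violaxanthin
```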
In diatoms and dinoflagellates, the xanthophyll cycle consists of the pigment diadinoxanthin, which is transformed into diatoxanthin (diatoms) or dinoxanthin (dinoflagellates) under high-light conditions.
Wright et al. (Feb 2011) found that, "The increase in zeaxanthin appears to surpass the decrease in violaxanthin in spinach" and commented that the discrepancy could be explained by "a synthesis of zeaxanthin from beta-carotene", however they noted further study is required to explore this hypothesis.
Food sources
Xanthophylls are found in all young leaves and in etiolated leaves. Examples of other rich sources include papaya, peaches, prunes, and squash, which contain lutein diesters.
Kale contains about 18 mg lutein and zeaxanthin per 100 g, spinach about 11 mg/100 g, parsley about 6 mg/100 g, peas about 3 mg/100 g, squash about 2 mg/100 g, and pistachios about 1 mg/100 g.
References
Demmig-Adams, B & W. W. Adams, 2006. Photoprotection in an ecological context: the remarkable complexity of thermal energy dissipation, New Phytologist, 172: 11–21.
External links
Carotenoids
Fatty alcohols | Xanthophyll | [
"Biology"
] | 1,456 | [
"Biomarkers",
"Carotenoids"
] |
591,768 | https://en.wikipedia.org/wiki/Index%20of%20information%20theory%20articles | This is a list of information theory topics.
A Mathematical Theory of Communication
algorithmic information theory
arithmetic coding
channel capacity
Communication Theory of Secrecy Systems
conditional entropy
conditional quantum entropy
confusion and diffusion
cross-entropy
data compression
entropic uncertainty (Hirschman uncertainty)
entropy encoding
entropy (information theory)
Fisher information
Hick's law
Huffman coding
information bottleneck method
information theoretic security
information theory
joint entropy
Kullback–Leibler divergence
lossless compression
negentropy
noisy-channel coding theorem (Shannon's theorem)
principle of maximum entropy
quantum information science
range encoding
redundancy (information theory)
Rényi entropy
self-information
Shannon–Hartley theorem
Information theory
Information theory topics | Index of information theory articles | [
"Mathematics",
"Technology",
"Engineering"
] | 139 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory"
] |
591,828 | https://en.wikipedia.org/wiki/Channel%209%20%28Microsoft%29 | Channel 9 was a Microsoft website for hosting videos and podcasts that Microsoft employees create.
Launched in 2004 when Microsoft's corporate reputation was at a low, Channel 9 was the company's first blog. It was named after the United Airlines audio channel that lets airplane passengers listen to the cockpit's conversations unhindered; the site published conversations among Microsoft developers, rather than its chairman Bill Gates, who had historically been the "face" of Microsoft. This made it an inexpensive alternative to Microsoft's Professional Developers Conference, then the main public platform where customers and outside developers could speak to Microsoft employees without the intervention of the company's PR department. The Channel 9 team produced interviews with Bill Gates, Erik Meijer, and Mark Russinovich.
On November 5, 2021, it was announced that Microsoft would merge Channel 9 into Microsoft Learn. The move was completed on December 1, effectively rendering the original site defunct. However, past videos from the former site can still be viewed on Microsoft Learn.
In its later years, however, Channel 9 was not a community website and did not host any content made by the community. That had not always been the case: the site once hosted discussion forums, as well as a wiki based on Microsoft's own FlexWiki. The wiki had been used to provide ad hoc feedback to Microsoft teams, such as the Internet Explorer team.
See also
LinkedIn Learning
References
External links
(Redirect)
(Archived)
(new site for hosting content)
Microsoft websites
Computing websites | Channel 9 (Microsoft) | [
"Technology"
] | 303 | [
"Computing websites"
] |
591,839 | https://en.wikipedia.org/wiki/Flammulina%20filiformis | Flammulina filiformis, commonly called enoki mushroom, is a species of edible agaric (gilled mushroom) in the family Physalacriaceae. It is widely cultivated in East Asia, and well known for its role in Japanese and Chinese cuisine. Until recently, the species was considered to be conspecific with the European Flammulina velutipes, but DNA sequencing has shown that the two are distinct.
Etymology
In Japanese, the mushroom is known as enoki-take or enoki-dake, both meaning "hackberry mushroom". This is because it is often found growing at the base of hackberry (enoki) trees.
In Mandarin Chinese, the mushroom is called jīnzhēngū ( "gold needle mushroom") or jīngū (金菇 "gold mushroom").
In Korean, it is called paengi beoseot (팽이버섯) which means "mushroom planted near catalpa". In Vietnamese it is known as nấm kim châm. In India it is called futu.
Description
Basidiocarps are agaricoid and grow in clusters. Individual fruit bodies are up to tall, the cap convex at first, becoming flat when expanded, up to across. The cap surface is smooth, viscid when damp, ochraceous yellow to yellow-brown. The lamellae (gills) are cream to yellowish white. The stipe (stem) is smooth, pale yellow at the apex, yellow-brown to dark brown towards the base, and lacking a ring. The spore print is white, the spores (under a microscope) smooth, inamyloid, ellipsoid to cylindrical, c. 5 to 7 by 3 to 3.5 μm.
There is a significant difference in appearance between wild and cultivated basidiocarps. Cultivated enokitake are not exposed to light, resulting in white or pallid fruit bodies with long stipes and small caps.
Taxonomy
Flammulina filiformis was originally described from China in 2015 as a variety of F. velutipes, based on internal transcribed spacer sequences. Further molecular research using a combination of different sequences has shown that F. filiformis and F. velutipes are distinct and should be recognized as separate species.
Distribution and habitat
The fungus is found on dead wood of Betula platyphylla, Broussonetia papyrifera, Dipentodon sinicus, Neolitsea sp., Salix spp, and other broad-leaved trees. It grows naturally in China, Korea, and Japan.
Nutritional profile
Enoki mushrooms are 88% water, 8% carbohydrates, 3% protein, and contain negligible fat (table). In a 100-gram reference serving, enoki mushrooms provide of food energy and are an excellent source (20% or more of the Daily Value) of the B vitamins, thiamine, niacin, and pantothenic acid, while supplying moderate amounts of riboflavin, folate, and phosphorus (table).
Potential health benefits
The nutritional value of F. filiformis has long been recognised, making it an object of interest in current research. F. filiformis is a rich source of carbohydrates, proteins and unsaturated fatty acids, as well as several noteworthy micronutrients and dietary fiber.
While its nutritional value and culinary applications are well established, recent studies have begun exploring its potential medicinal properties in greater depth. Several bioactive molecules from various chemical classes have been isolated from F. filiformis extracts, showing promising potential for future applications as nutraceuticals or dietary supplements. Moreover, bioactive polysaccharides derived from F. filiformis have been shown to exhibit a broad spectrum of bioactivities, including anticancer, immunomodulatory, and anti-neurodegenerative effects. However, the precise mechanisms underlying these actions remain unclear and warrant further investigation.
F. filiformis thus holds promise as both a functional food and a nutraceutical, and may serve as a source of bioactive compounds for therapeutic and pharmaceutical purposes.
Uses
F. filiformis has been cultivated in China since 800 AD. Commercial production in China was estimated at 1.57 million tonnes per annum in 2010, with Japan producing an additional 140,000 tonnes per annum. The fungus can be cultivated on a range of simple, lignocellulosic substrates including sawdust, wheat straw, and paddy straw. Enokitake are typically grown in the dark, producing pallid fruitbodies having long and narrow stipes with undeveloped caps. Exposure to light results in more normal, short-stiped, colored fruitbodies.
As food
The mushroom is widely eaten in East Asia. Cultivated F. filiformis is sold both fresh and canned. The fungus has a crisp texture and can be refrigerated for approximately one week. It is a common ingredient for soups, especially in East Asian cuisine, but can be used for salads and other dishes.
Improved storage
F. filiformis extract can be added to whipped cream. This has been observed to slow the development of ice crystals, maintaining the quality of whipped cream for longer during frozen storage.
Nutritionally improved meat products
F. filiformis are an object of interest in current research for their potential to enhance food products and animal feed by using the stem waste.
Studies indicate that the addition of F. filiformis stem waste powder to meat products can improve nutritional quality by increasing dietary fiber and ash content. This ingredient also enhances tenderness, inhibits lipid and protein oxidation, and extends shelf life, without negatively impacting the texture or flavor of the meat products.
Feed additive for livestock
Natural feed additives are becoming more important in livestock farming. Following this trend, F. filiformis has been examined for properties that improve livestock health and production efficiency. Studies show that using Enoki mushroom residue as a feed additive offers several benefits for livestock: it enhances antioxidant enzyme activity and improves animal digestibility, hormone levels, and immunity.
The addition of mushroom residue in the livestock diet can reduce the feed cost and feed conversion ratio and enhance the meat quality, providing consumers with healthier and higher-quality meat products.
Cultivation and harvest
F. filiformis is commonly cultivated at large scale in factory-style facilities. With modern mechanized processes, over 300,000 tons of F. filiformis can be harvested each year this way.
Indoor cultivation
F. filiformis thrive in a warm, moist environment during the incubation phase, with substrate temperatures ranging from 18 to 25°C (64 to 77°F). F. filiformis need significantly cooler conditions to trigger fruiting. Pinning is triggered at temperatures ranging between 7 to 10°C (45 to 50°F), and the optimal temperature range for fruiting is 10 to 16°C (50 to 61°F). As with most fungi, F. filiformis also demand elevated humidity levels—95 to 100% during pinning and 85 to 95% during fruiting.
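The ranges above lend themselves to a simple monitoring check. A minimal sketch, assuming the phase names and thresholds quoted in the text; how the sensor readings are collected is outside its scope:

```python
# Minimal range check for the growing conditions quoted above. Phase names
# and thresholds mirror the text; humidity during incubation is not
# specified there, so it is left unconstrained.

RANGES = {  # phase: ((temp_min_C, temp_max_C), (rh_min_pct, rh_max_pct))
    "incubation": ((18, 25), (None, None)),
    "pinning": ((7, 10), (95, 100)),
    "fruiting": ((10, 16), (85, 95)),
}

def in_range(value, bounds):
    lo, hi = bounds
    return (lo is None or value >= lo) and (hi is None or value <= hi)

def check(phase: str, temp_c: float, rh_pct: float) -> list:
    """Return a list of out-of-range problems for the given phase."""
    temp_bounds, rh_bounds = RANGES[phase]
    problems = []
    if not in_range(temp_c, temp_bounds):
        problems.append(f"temperature {temp_c} C outside {temp_bounds}")
    if not in_range(rh_pct, rh_bounds):
        problems.append(f"humidity {rh_pct}% outside {rh_bounds}")
    return problems

print(check("fruiting", temp_c=18, rh_pct=90))
# ['temperature 18 C outside (10, 16)']
```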
The ideal size to harvest enoki mushrooms is generally about 2–4 inches in length. At that stage, the cap of F. filiformis should still be tightly closed and the stem should be long and sturdy. Home growers can use a sharp knife or scissors to snip off the mushroom cluster at the base of the stem where it meets the growing medium. It is important to remove both the mushrooms and any remaining mycelium (the white, thread-like structures) from the growing medium during harvest; this helps prevent decay, which could negatively impact future mushroom growth.
Post-harvest handling
F. filiformis have thin, delicate stems that need to be handled with care to prevent damage. First, gently brush off any dirt or substrate with a soft brush or a damp cloth; avoid rinsing the mushrooms with water, as they can absorb moisture, compromising both their texture and flavor. Once cleaned, separate the clusters into individual stems for easier cooking and better presentation.
Storage
F. filiformis should be kept at temperatures between 7-10°C (44.6-50°F) for optimal freshness. For brief storage (fewer than 7 days), a temperature interval of 1-2°C (34-36°F) with 90-98% relative humidity is advised.
Proneness to Listeria
F. filiformis have the potential to be contaminated with Listeria monocytogenes, which is why disease control centers recommend cooking the mushrooms before consumption.
The Singapore Food Agency advises people to do the following to ensure food safety when consuming F. filiformis:
Enoki mushrooms should never be eaten raw
Instead, make sure to cook the mushrooms properly before eating them
If there are cooking directions at hand, make sure to follow them
Enoki mushrooms should be stored at cold temperatures to slow the growth of microbes, even if the packaging has not yet been opened
Uncooked enoki mushrooms should be stored separately to avoid cross-contamination
See also
Medicinal mushrooms
Shiitake
References
External links
Chinese edible mushrooms
Edible fungi
Fungi described in 2015
Fungi in cultivation
Fungi of Asia
Fungus species
Japanese cuisine terms
Medicinal fungi
Physalacriaceae | Flammulina filiformis | [
"Biology"
] | 2,000 | [
"Fungi",
"Fungus species"
] |
591,874 | https://en.wikipedia.org/wiki/Local%20multipoint%20distribution%20service | Local multipoint distribution service (LMDS) is a broadband wireless access technology originally designed for digital television transmission (DTV). It was conceived as a fixed wireless, point-to-multipoint technology for utilization in the last mile.
LMDS commonly operates on microwave frequencies across the 26 GHz and 29 GHz bands. In the United States, frequencies from 31.0 through 31.3 GHz are also considered LMDS frequencies.
Throughput capacity and reliable distance of the link depend on common radio link constraints and the modulation method used, either phase-shift keying or amplitude modulation. Distance is typically limited to about due to rain fade attenuation constraints. Deployment links of up to from the base station are possible in some circumstances, such as point-to-point systems that can reach slightly farther distances due to increased antenna gain.
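To see roughly why such links are distance-limited, free-space path loss can be combined with a flat rain-attenuation term. This is only a sketch: the 28 GHz carrier and the 10 dB/km rain figure are illustrative assumptions, and real LMDS planning uses ITU-R rain statistics rather than a constant per-kilometre loss:

```python
import math

def fspl_db(freq_ghz: float, dist_km: float) -> float:
    """Free-space path loss in dB (frequency in GHz, distance in km)."""
    return 92.45 + 20 * math.log10(freq_ghz) + 20 * math.log10(dist_km)

def link_loss_db(freq_ghz: float, dist_km: float, rain_db_per_km: float = 0.0) -> float:
    # Rain fade modeled as a flat per-km excess loss; real designs use
    # ITU-R rain attenuation statistics, which this sketch does not implement.
    return fspl_db(freq_ghz, dist_km) + rain_db_per_km * dist_km

for d_km in (1, 3, 5):
    print(d_km, "km:", round(link_loss_db(28.0, d_km, rain_db_per_km=10.0), 1), "dB")
```

The rain term grows linearly with distance while free-space loss grows only logarithmically, which is why heavy rain, rather than geometry, usually sets the practical cell radius at these frequencies.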
History and outlook
United States
There was interest in LMDS in the late 1990s and it became known in some circles as "wireless cable" for its potential to compete with cable companies for provision of broadband television to the home. The Federal Communications Commission auctioned spectrum for LMDS in 1998 and 1999.
Despite its early potential and the hype that surrounded the technology, LMDS was slow to find commercial traction. Many equipment and technology vendors simply abandoned their LMDS product portfolios.
Industry observers believe that the window for LMDS has closed, with newer technologies replacing it. Major telecommunications companies have been aggressive about deploying alternative technologies such as IPTV and fiber to the premises, also called "fiber optics". Moreover, LMDS has been surpassed in both technological and commercial potential by the LTE, WiMAX and 5G NR standards.
Europe and worldwide
Although some operators use LMDS to provide access services, LMDS is more commonly used for high-capacity backhaul for interconnection of networks such as GSM, UMTS, LTE and Wi-Fi.
See also
Multichannel multipoint distribution service
References
Radio technology
Telecommunication services
IEEE 802
Wireless networking
Metropolitan area networks
Networking standards
IEEE standards | Local multipoint distribution service | [
"Technology",
"Engineering"
] | 418 | [
"Information and communications technology",
"Telecommunications engineering",
"Computer standards",
"Wireless networking",
"Computer networks engineering",
"Radio technology",
"Networking standards",
"IEEE standards"
] |
591,931 | https://en.wikipedia.org/wiki/Dependency%20ratio | The dependency ratio is an age-population ratio of those typically not in the labor force (the dependent part ages 0 to 14 and 65+) and those typically in the labor force (the productive part ages 15 to 64). It is used to measure the pressure on the productive population.
Consideration of the dependency ratio is essential for governments, economists, bankers, business, industry, universities and all other major economic segments which can benefit from understanding the impacts of changes in population structure. A low dependency ratio means that there are sufficient people working who can support the dependent population.
A lower ratio could allow for better pensions and better health care for citizens, while a higher ratio indicates more financial stress on working people and possible political instability. While the strategies of increasing fertility and of allowing immigration, especially of younger working-age people, have been formulas for lowering dependency ratios, future job reductions through automation may reduce the effectiveness of those strategies.
Formula
In published international statistics, the dependent part usually includes those under the age of 15 and over the age of 64. The productive part makes up the population in between, ages 15–64. It is normally expressed as a percentage:

(total) dependency ratio = ((number of people aged 0–14) + (number of people aged 65 and over)) / (number of people aged 15–64) × 100
As the ratio increases there may be an increased burden on the productive part of the population to maintain the upbringing and pensions of the economically dependent. This results in direct impacts on financial expenditures on things like social security, as well as many indirect consequences.
The (total) dependency ratio can be decomposed into the child dependency ratio and the aged dependency ratio:

child dependency ratio = (number of people aged 0–14) / (number of people aged 15–64) × 100

aged dependency ratio = (number of people aged 65 and over) / (number of people aged 15–64) × 100
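These definitions translate directly into code. A minimal Python sketch, using made-up population counts purely for illustration:

```python
def dependency_ratios(pop_0_14: float, pop_15_64: float, pop_65_plus: float) -> dict:
    """Total, child, and aged dependency ratios, as percentages."""
    child = 100.0 * pop_0_14 / pop_15_64
    aged = 100.0 * pop_65_plus / pop_15_64
    return {"child": child, "aged": aged, "total": child + aged}

# Illustrative (made-up) population counts, in millions:
print(dependency_ratios(pop_0_14=30, pop_15_64=100, pop_65_plus=25))
# {'child': 30.0, 'aged': 25.0, 'total': 55.0}
```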
Total dependency ratio by regions
Projections
Below is a table constructed from data provided by the UN Population Division. It shows a historical ratio for the regions shown for the period 1950–2010; columns to the right show projections of the ratio. Each number in the table shows the total number of dependents (people aged 0–14 plus people aged over 65) per hundred people in the workforce (people aged 15–64). The number can also be expressed as a percent: the total dependency ratio for the world in 1950 was 64.8% of the workforce.
As of 2010, Japan and Europe had high aged dependency ratios (those over 65 as a percentage of the workforce) compared to other parts of the world. In Europe in 2010, there were approximately four working-age adults (15–64) for every adult aged 65 and older; by 2050 this is expected to fall to two working-age adults per older adult, i.e. the aged dependency ratio rises from 25% to 50%. An aging population is caused by a decline in fertility and longer life expectancy. The average life expectancy of males and females is expected to increase from 79 years in 1990 to 82 years in 2025. Dependency among Japanese residents aged 65 and older is expected to increase, which will have a major impact on Japan's economy.
Inverse dependency ratio
The inverse of the dependency ratio, the inverse dependency ratio can be interpreted as how many independent workers have to provide for one dependent person (pension & expenditure on children).
Old-age dependency ratio
A high dependency ratio can cause serious problems for a country if a large proportion of a government's expenditure is on health, social security & education, which are most used by the youngest and the oldest in a population. The fewer people of working age, the fewer the people who can support schools, retirement pensions, disability pensions and other assistances to the youngest and oldest members of a population, often considered the most vulnerable members of society. The ratio of old (usually retired) to young working people is called old age dependency ratio (OADR) or just dependency ratio.
The old-age dependency ratio ignores the fact that those aged 65+ are not necessarily dependent (an increasing proportion of them are working; see also retirement age) and that many of those of 'working age' are actually not working. Alternatives have been developed, such as the 'economic dependency ratio', but they still ignore factors such as increases in productivity and in working hours. Worries about the increasing (demographic) dependency ratio should thus be treated with caution.
Labor force dependency ratio
The labor force dependency ratio (LFDR) is a more specific metric than the old age dependency ratio because it measures the ratio of the older retired population to the employed population at all ages (or the ratio of the inactive population to the active population at all ages).
Productivity weighted labor force dependency ratio
While OADRs or LFDRs provide reasonable measures of dependency, they do not account for the fact that middle-aged and educated workers are usually the most productive. Hence the productivity weighted labor force dependency ratio (PWLFDR) may be a better metric for determining dependency. The PWLFDR is the ratio of the inactive population (all ages) to the active population (all ages), weighted by productivity for education level. Interestingly, while OADRs or LFDRs can change substantially, the PWLFDR is predicted to remain relatively constant in countries like China for the next couple of decades. PWLFDR assessments recommend investing in education, life-long learning and child health to maintain social stability even as populations age.
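A sketch of the PWLFDR idea follows; the population figures and the productivity weights by education level are illustrative assumptions (published PWLFDR studies estimate such weights from wage or output data):

```python
def pwlfdr(active_by_level: dict, inactive_total: float, weights: dict) -> float:
    """Inactive population divided by the productivity-weighted active population."""
    weighted_active = sum(
        weights[level] * count for level, count in active_by_level.items()
    )
    return inactive_total / weighted_active

# Made-up figures, in millions of people:
active = {"primary": 40.0, "secondary": 35.0, "tertiary": 25.0}
weights = {"primary": 0.8, "secondary": 1.0, "tertiary": 1.5}
print(round(pwlfdr(active, inactive_total=60.0, weights=weights), 3))
# 60 / (32 + 35 + 37.5) = 0.574
```

Under this weighting, expanding tertiary education raises the denominator even if the worker headcount is unchanged, which is why PWLFDR projections can stay flat while the unweighted ratios deteriorate.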
Migrant labor dependency ratio
Migrant labor dependency ratio (MLDR) is used to describe the extent to which the domestic population is dependent upon migrant labor.
Impact on savings and housing markets
High dependency ratios can lead to long-term economic changes in saving rates, investment rates, housing markets, and consumption patterns. Typically, workers increase their savings as they approach retirement age, but as the retired population grows and fertility rates fall, aggregate savings eventually decrease while long-term interest rates increase. As saving rates decrease, investment rates fall, which hampers economic growth because there is less funding for investment projects. There is also a correlation between the labor force and housing markets: when a country has a high age-dependency ratio, investment in housing markets decreases because the labor force is shrinking.
Solutions
Low dependency ratios promote economic growth, while high dependency ratios hinder it because of the large number of dependents who pay little to no tax. One way for a country to decrease its dependency ratio is to promote the immigration of younger people: the working-age population grows when more young adults migrate into the country, stimulating economic growth.
The increased involvement of women in the work force has enlarged the working-age population, which improves a country's dependency ratio; encouraging women to work therefore helps decrease it. At the same time, because more women are pursuing higher education, they are less likely to have children, causing fertility rates to decrease as well.
Using productivity weighted labor force dependency ratio (PWLFDR) suggests that even an aging or decreasing population can maintain a stable support for the dependent (primarily ageing) population by increasing its productivity. A consequence from PWLFDR assessments is the recommendation to invest in education and life-long learning, child health, and to support disabled workers.
Demographic transition model
The age-dependency ratio can indicate which stage of the Demographic Transition Model a country is in, rising and falling as a country moves through the stages. During stages 1 and 2, the dependency ratio is high because significantly high crude birth rates put pressure on the smaller working-age population to support the young. In stage 3, the dependency ratio starts to decrease as fertility and mortality rates fall, leaving a much larger proportion of working-age adults relative to the young and elderly.
In stages 4 and 5, the dependency ratio starts to increase once again as the working-age population retires. Because fertility rates caused the younger population to decrease, once they grow up and start working, there will be more pressure for them to take care of the previous working-age population that just retired since there will be more young and elderly people than working-age adults during that time period.
The population structure of a country is an important factor in determining its economic status. Japan is a prime example of an aging population: roughly one in four residents is aged 65 or older, which strains the economy because there are not enough working-age people to support all of the elderly. Rwanda, by contrast, struggles with a very young population (also known as a "youth bulge"). Both countries face high dependency ratios even though they are at opposite ends of the Demographic Transition Model.
Criticism
The dependency ratio has been criticized for ignoring that many older adults are employed, and many younger adults are not, and obscuring other trends such as improving health for older people that might make older people less economically dependent. For this reason, the Office of the United Nations High Commissioner for Human Rights has characterized the metric as ageist, and recommends avoiding its use. Alternative metrics, such as the economic dependency ratio (defined as the number of unemployed and retired people divided by the number of workers) do address this oversimplification, but ignore the effects of productivity and work hours.
See also
List of countries by dependency ratio
Demographic economics
Economic collapse
Employment-to-population ratio
Generational accounting
Pensions crisis
Sub-replacement fertility
Societal collapse
Case studies:
Ageing of Europe
Ageing of Japan
References
External links
Old Age dependency Ratios in Europe
Definition and Forecasts for Dependency Ratios
The Risk Pool, Malcolm Gladwell, The New Yorker, 8/23/2006
Demographic economic problems
Macroeconomic problems
Retirement
Ageing
Government budgets
Ratios | Dependency ratio | [
"Mathematics"
] | 1,955 | [
"Arithmetic",
"Ratios"
] |
591,972 | https://en.wikipedia.org/wiki/Paris%20Convention%20for%20the%20Protection%20of%20Industrial%20Property | The Paris Convention for the Protection of Industrial Property, signed in Paris, France, on 20 March 1883, is one of the first intellectual property treaties. It established a Union for the protection of industrial property. The convention is still in force as of 2024. The substantive provisions of the Convention fall into three main categories: national treatment, priority right and common rules.
Contents
National treatment
According to Articles 2 and 3 of this treaty, juristic and natural persons who are either nationals of or domiciled in a state party to the Convention shall, as regards the protection of industrial property, enjoy in all the other countries of the Union the advantages that their respective laws grant to nationals.
In other words, when an applicant files an application for a patent or a trademark in a foreign country member of the Union, the application receives the same treatment as if it came from a national of that foreign country. Furthermore, if the intellectual property right is granted (e.g. if the applicant becomes the owner of a patent or of a registered trademark), the owner benefits from the same protection and the same legal remedies against any infringement as a national owner of this right.
Priority right
The "Convention priority right", also called "Paris Convention priority right" or "Union priority right", was also established by Article 4 of the Paris Convention, and is generally regarded as one of the cornerstones of the Paris Convention. It provides that an applicant from one contracting State shall be able to use its first filing date (in one of the contracting States) as the effective filing date in another contracting State, provided that the applicant, or the applicant's successor in title, files a subsequent application within 6 months (for industrial designs and trademarks) or 12 months (for patents and utility models) from the first filing.
Temporary protection for goods shown at some international exhibitions
Article 11(1) of the Paris Convention requires that the Countries of the Union "grant temporary protection to patentable inventions, utility models, industrial designs, and trademarks, in respect of goods exhibited at official or officially recognized international exhibitions held in the territory of any of them".
If a patent or trademark registration is applied for during the temporary period of protection, the priority date of the application may be counted "from the date of introduction of the goods into the exhibition" rather than from the date of filing of the application, if the temporary protection referred to in Article 11(1) has been implemented in such a manner in national law. There are, however, other means for the Countries of the Union to implement in their national law the temporary protection provided for in Article 11 of the Paris Convention.
Mutual independence of patents and trademarks in the different Countries of the Union
According to Articles 4bis and 6 (for patents and trademarks respectively), for foreigners, the application for a patent or the registration of a trademark shall be determined by the member state in accordance with their national law and not by the decision of the country of origin or any other countries. Patent applications and trademark registrations are independent among contracting countries.
History
After a diplomatic conference in Paris in 1880, the convention was signed on 20 March 1883 by 11 countries: Belgium, Brazil, France, Guatemala, Italy, the Netherlands, Portugal, El Salvador, Kingdom of Serbia, Spain and Switzerland. Guatemala, El Salvador and Serbia denounced and reapplied the convention via accession.
The Treaty was revised at Brussels, Belgium, on 14 December 1900, at Washington, United States, on 2 June 1911, at The Hague, Netherlands, on 6 November 1925, at London, on 2 June 1934, at Lisbon, Portugal, on 31 October 1958, and at Stockholm, Sweden, on 14 July 1967. It was amended on 28 September 1979.
Contracting parties
As of 14 December 2024, the convention has 180 contracting member countries.
Administration
The Paris Convention is administered by the World Intellectual Property Organization (WIPO) based in Geneva, Switzerland.
See also
Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPs)
Convention Establishing the World Intellectual Property Organization (WIPO Convention)
US provisional patent application
Substantive Patent Law Treaty (SPLT)
References
Further reading
Schuyler, William E. "Paris Convention for the Protection of Industrial Property-A View of the Proposed Revisions." NCJ Int'l L. & Com. Reg. 8 (1982): 155+. online
External links
Paris Convention at the World Intellectual Property Organization (WIPO)
Intellectual property treaties
Patent law treaties
Trademark legislation
World Intellectual Property Organization treaties
1883 in France
1883 treaties
Industrial design
Treaties entered into force in 1884
Treaties of Albania
Treaties of Algeria
Treaties of Andorra
Treaties of Antigua and Barbuda
Treaties of Angola
Treaties of Argentina
Treaties of Armenia
Treaties of Australia
Treaties of Austria-Hungary
Treaties of Azerbaijan
Treaties of the Bahamas
Treaties of Bahrain
Treaties of Bangladesh
Treaties of Barbados
Treaties of Belarus
Treaties of Belgium
Treaties of Belize
Treaties of the Republic of Dahomey
Treaties of Bhutan
Treaties of Bolivia
Treaties of Bosnia and Herzegovina
Treaties of Botswana
Treaties of the Empire of Brazil
Treaties of the Kingdom of Bulgaria
Treaties of Brunei
Treaties of Burkina Faso
Treaties of Burundi
Treaties of Cambodia
Treaties of Cameroon
Treaties of Canada
Treaties of the Central African Republic
Treaties of Chad
Treaties of Chile
Treaties of the People's Republic of China
Treaties of Colombia
Treaties of the Comoros
Treaties of Zaire
Treaties of the Republic of the Congo
Treaties of Costa Rica
Treaties of Ivory Coast
Treaties of Croatia
Treaties of Cuba
Treaties of Cyprus
Treaties of Czechoslovakia
Treaties of the Czech Republic
Treaties of Denmark
Treaties of Djibouti
Treaties of Dominica
Treaties of the Dominican Republic
Treaties of Ecuador
Treaties of the Kingdom of Egypt
Treaties of El Salvador
Treaties of Equatorial Guinea
Treaties of Estonia
Treaties of Finland
Treaties of the French Third Republic
Treaties of Gabon
Treaties of the Gambia
Treaties of Georgia (country)
Treaties of the German Empire
Treaties of Ghana
Treaties of the Kingdom of Greece
Treaties of Grenada
Treaties of Guatemala
Treaties of Guinea
Treaties of Guinea-Bissau
Treaties of Haiti
Treaties of Honduras
Treaties extended to British Hong Kong
Treaties of Iceland
Treaties of India
Treaties of Indonesia
Treaties of Pahlavi Iran
Treaties of Ba'athist Iraq
Treaties of the Irish Free State
Treaties of Israel
Treaties of the Kingdom of Italy (1861–1946)
Treaties of Jamaica
Treaties of the Empire of Japan
Treaties of Jordan
Treaties of Kazakhstan
Treaties of Kenya
Treaties of Kiribati
Treaties of North Korea
Treaties of South Korea
Treaties of Kuwait
Treaties of Kyrgyzstan
Treaties of Laos
Treaties of Latvia
Treaties of Lebanon
Treaties of Lesotho
Treaties of Liberia
Treaties of the Libyan Arab Republic
Treaties of Liechtenstein
Treaties of Lithuania
Treaties of Luxembourg
Treaties extended to Macau
Treaties of North Macedonia
Treaties of Madagascar
Treaties of Malawi
Treaties of Malaysia
Treaties of Mali
Treaties of Malta
Treaties of Mauritania
Treaties of Mauritius
Treaties of Mexico
Treaties of Moldova
Treaties of Monaco
Treaties of Montenegro
Treaties of Morocco
Treaties of Mozambique
Treaties of Namibia
Treaties of Nepal
Treaties of the Netherlands
Treaties of New Zealand
Treaties of Niger
Treaties of Nigeria
Treaties of Norway
Treaties of Oman
Treaties of Pakistan
Treaties of Panama
Treaties of Papua New Guinea
Treaties of Paraguay
Treaties of Peru
Treaties of the Philippines
Treaties of the Second Polish Republic
Treaties of the Kingdom of Portugal
Treaties of Qatar
Treaties of the Kingdom of Romania
Treaties of the Soviet Union
Treaties of Rwanda
Treaties of Saint Kitts and Nevis
Treaties of Saint Lucia
Treaties of Saint Vincent and the Grenadines
Treaties of San Marino
Treaties of São Tomé and Príncipe
Treaties of Saudi Arabia
Treaties of Senegal
Treaties of Serbia and Montenegro
Treaties of Seychelles
Treaties of Sierra Leone
Treaties of Singapore
Treaties of Slovakia
Treaties of Slovenia
Treaties of the Union of South Africa
Treaties of the Spanish Empire
Treaties of the Dominion of Ceylon
Treaties of the Democratic Republic of the Sudan
Treaties of Suriname
Treaties of Eswatini
Treaties of Sweden
Treaties of Switzerland
Treaties of Syria
Treaties of Tajikistan
Treaties of Tanzania
Treaties of Thailand
Treaties of Togo
Treaties of Tonga
Treaties of Trinidad and Tobago
Treaties of Tunisia
Treaties of Turkey
Treaties of Turkmenistan
Treaties of Uganda
Treaties of Ukraine
Treaties of the United Arab Emirates
Treaties of the United Kingdom (1801–1922)
Treaties of the United States
Treaties of Uruguay
Treaties of Uzbekistan
Treaties of the Holy See
Treaties of Venezuela
Treaties of North Vietnam
Treaties of Zambia
Treaties of Zimbabwe
Treaties of Yugoslavia
Treaties of the Orange Free State
Treaties of the Colony of Queensland
Treaties extended to the Netherlands Antilles
Treaties extended to Aruba
Treaties extended to Curaçao and Dependencies
Treaties extended to the Dutch East Indies
Treaties extended to Surinam (Dutch colony)
Treaties extended to the Isle of Man
Treaties extended to American Samoa
Treaties extended to Baker Island
Treaties extended to Guam
Treaties extended to Howland Island
Treaties extended to Jarvis Island
Treaties extended to Johnston Atoll
Treaties extended to Midway Atoll
Treaties extended to Navassa Island
Treaties extended to the Trust Territory of the Pacific Islands
Treaties extended to Palmyra Atoll
Treaties extended to Puerto Rico
Treaties extended to the United States Virgin Islands
Treaties extended to Wake Island
Treaties extended to Tanganyika (territory)
Treaties extended to the Crown Colony of Singapore
Treaties extended to Mandatory Palestine
Treaties extended to the Crown Colony of Trinidad and Tobago
Treaties extended to British Ceylon
Treaties extended to the Panama Canal Zone
Treaties extended to Portuguese Macau
Treaties extended to Australia
Treaties extended to New Zealand
Treaties extended to South Sakhalin
Treaties extended to dependent territories in Asia
Treaties extended to dependent territories of Japan
Treaties extended to Formosa
Treaties of Afghanistan
1883 | Paris Convention for the Protection of Industrial Property | [
"Engineering"
] | 1,850 | [
"Industrial design",
"Design engineering",
"Design"
] |
592,063 | https://en.wikipedia.org/wiki/Eads%20Bridge | The Eads Bridge is a combined road and railway bridge over the Mississippi River connecting the cities of St. Louis, Missouri, and East St. Louis, Illinois. It is located on the St. Louis riverfront between Laclede's Landing to the north, and the grounds of the Gateway Arch to the south. The bridge is named for its designer and builder, James Buchanan Eads. Work on the bridge began in 1867, and it was completed in 1874. The Eads Bridge was the first bridge across the Mississippi south of the Missouri River. Earlier bridges were located north of the Missouri, where the Mississippi is narrower. None of the earlier bridges survived, which means that the Eads Bridge is also the oldest bridge on the river.
To accommodate the massive size and strength of the Mississippi River, the Eads Bridge required a number of engineering feats. It pioneered the large-scale use of steel as a structural material, leading the shift from wrought-iron as the default material for large structures. Its foundations, more than 100 feet below water level, were the deepest underwater constructions at the time. They were installed using pneumatic caissons, a pioneering application of caisson technology in the United States and, at the time, by far the largest caissons ever built. Its 520-foot center arch was the longest rigid span ever built at the time. The arches were built suspended from temporary wooden towers, sometimes cited as the first use of the "cantilever principle" for a large bridge. These engineering principles were used for later bridges, including the Brooklyn Bridge, which began construction in 1870.
The Eads Bridge became a famous image of the city of St. Louis, superseded only by the Gateway Arch, completed in 1965. The highway deck was closed to automobiles from 1991 to 2003, but has been restored and now carries both vehicular and pedestrian traffic. It connects Washington Avenue in St. Louis with Riverpark Drive and East Broadway in East St. Louis. The former railroad deck now carries the St. Louis MetroLink light rail system, connecting Missouri and Illinois stations.
The bridge is listed on the National Register of Historic Places as a National Historic Landmark. As of April 2014, it carries about 8,100 vehicles daily, down 3,000 since the Stan Musial Veterans Memorial Bridge opened in February 2014.
History
The Eads Bridge was built by the Illinois and St. Louis Bridge Company. A subcontractor was the Keystone Bridge Company, founded in 1865 by Andrew Carnegie, which erected the steel superstructure.
The growth of railroads since the Civil War had depressed the river shipping trade, and Chicago was fast gaining ground as the center of commerce in the West. The bridge was envisioned to restore St. Louis' eminence as a center of commerce by connecting railroad and vehicle transportation across the river. Although he had no experience in building bridges, James Eads was chosen as chief engineer.
In an attempt to secure their future, steamboat interests successfully lobbied to place restrictions on bridge construction, requiring spans and heights previously unheard of. This was ostensibly to maintain sufficient operating room for steamboats beneath the bridge's base for the then foreseeable future. The unproclaimed purpose was to require a bridge so grand and lofty that it was impossible to erect according to conventional building techniques. The steamboat parties planned to prevent any structure from being built, in order to ensure continued dependence on river traffic to sustain commerce in the region.
Such a bridge required a radical design solution. The Mississippi River's strong current was almost and the builders had to battle ice floes in the winter. The ribbed arch had been a known construction technique for centuries. The triple span, tubular metallic arch construction was supported by two shore abutments and two mid-river piers. Four pairs of arches per span (upper and lower) were set apart, supporting an upper deck for vehicular traffic and a lower deck for rail traffic.
Construction involved varied and competing design elements and pressures. State and federal charters precluded suspension or draw bridges, or wood construction. There were constraints on span size and the height above the water line. The location required reconciling differences in height between the low Illinois floodplain on the east bank of the river and the high Missouri cliff on the west bank. The bedrock could only be reached by deep drilling, as it was below water level on the Illinois side and below on the Missouri side.
These pressures resulted in a bridge noted for its innovative precision, accuracy of construction, and quality control. This was the first use of structural alloy steel in a major building construction, through the use of cast chromium steel components, even though, as 1988 tests showed, the amount of chromium was too low to influence the strength, and the steel in general would not be considered suitable for any structural application in modern times. The completed bridge also relied on significant, and unknown, amounts of wrought iron. Eads argued that the great compressive strength of steel was ideal for use in the upright arch design. His decision resulted from a curious combination of chance and necessity, due to the insufficient strength of alternative material choices.
The particular physical difficulties of the site stimulated interesting solutions to construction problems. The deep caissons used for pier and abutment construction signaled a new chapter in civil engineering. Piers were sunk almost below the river's surface. Unable to construct falsework to erect the arches, because they would obstruct river traffic, Eads's engineers devised a cantilevered rigging system to close the arches.
Masonry piers were built to heights of almost , about the height of a ten-story building. About of that span was driven through the sandy riverbed until it hit bedrock. Eads implemented a building method that he had observed in Europe, whereby masonry was set atop a metal chamber filled with compressed air. Stone was added to the chamber, which caused the caisson to sink. Workers dove into the caisson to shovel sand into a pump that shot it out into the air so the masonry could be sunk into the riverbed. Numerous workers who operated in the Eads Bridge caissons, still among the deepest ever sunk, suffered from "caisson disease" (also known as "the bends" or decompression sickness). Fifteen workers died, two other workers were permanently disabled, and 77 were severely afflicted.
The Eads Bridge was recognized as an innovative and exciting achievement. Eads secured 47 patents during his lifetime, many of which were taken out for parts of the bridge's structure and devices for its construction. President Ulysses S. Grant dedicated the bridge on July 4, 1874, and General William T. Sherman drove the gold spike completing construction. After completion, 14 locomotives crossed the bridge to prove its stability.
On June 14, 1874, John Robinson led a "test elephant" on a stroll across the new Eads Bridge to prove that it was safe. A big crowd cheered as the elephant from a traveling circus lumbered toward Illinois. Popular belief held that elephants had instincts that would make them avoid setting foot on unsafe structures. Two weeks later, Eads sent 14 locomotives back and forth across the bridge at one time. The opening day celebration on July 4, 1874, featured a parade that stretched for through the streets of St. Louis.
The cost of building the bridge was nearly $10 million ($ million with inflation).
The Eads Bridge was undercapitalized during construction and burdened with debt. Because of its historic focus on the Mississippi and river trade, St. Louis lacked adequate rail terminal facilities, and the bridge was poorly planned to coordinate rail access. Although an engineering and aesthetic success, the bridge operation went bankrupt within a year of opening. The railroads boycotted the bridge, resulting in a loss of tolls, and the bridge was later sold at auction for 20 cents on the dollar. This sale caused the National Bank of the State of Missouri to fold, which was the largest bank failure in the United States at that time. Eads did not suffer financial consequences; many involved with financing the bridge were indicted, but Eads was not.
Granite for the bridge came from the Iron County, Missouri, quarry of B. Gratz Brown, Missouri Governor and U.S. Senator, who had helped secure federal financing for the bridge.
In April 1875, after the failure of the Illinois and St. Louis Bridge Company, the bridge was sold at public auction, for $2 million, to a newly incorporated St. Louis Bridge Company controlled by the old company's creditors. This group was bought out two years later by the Terminal Railroad Association of St. Louis (TRRA). The TRRA owned the bridge until 1989, when it transferred the bridge to the Bi-State Regional Transportation Authority and the City of St. Louis for incorporation into St. Louis' MetroLink light rail system. In exchange for the Eads Bridge, the TRRA acquired the MacArthur Bridge, previously owned by the City of St. Louis.
In 1949, the bridge's strength was tested with electromagnetic strain gauges. It was determined that Eads' original estimation of an allowable load of could be raised to . According to Carol Ferring Shepley, a professional writer who has written a biography of the bridge's designer, Eads Bridge is still considered one of the greatest bridges ever built.
The Eads Bridge had long hosted only passenger trains on its rail deck. In the late 20th century, however, passenger traffic had declined because of individual automobile use, and the railroad industry was restructuring. By the 1970s, the Terminal Railroad Association had abandoned its Eads trackage. The bridge had lost all remaining passenger rail traffic to the MacArthur Bridge during the early years of Amtrak; the dimensions of modern passenger diesels were incompatible with both the bridge and the adjoining tunnel linking the Union Station trackage with Eads.
MetroLink service over the bridge began in 1993. The bridge was closed to automobile traffic between 1991 and 2003, when the city of St. Louis, Missouri, completed a project to restore the highway deck.
In 1998, the Naval Facilities Engineering Service Center investigated the effects of the ramming of the bridge by the towboat Anne Holly on April 4 of that year. The ramming resulted in the near breakaway of the SS Admiral, a riverboat casino. Implementing several recommended changes reduced the odds of this happening in the future.
In 2012, the Bi-State Development Agency/Metro (BSDA/Metro) started the Eads Bridge Rehabilitation project to extend the life of the bridge to at least the year 2091. The restorations included replacing 1.2 million pounds of struts, bracing, and other support steel dating to the 1880s; removing all paint and corrosion from the superstructure; re-painting the superstructure with a rust-inhibiting coating; repairing damaged structure; rebuilding concrete supports; restoring the brick archways; and upgrading the MetroLink's rails. The total cost was $48 million, with $27 million coming from the American Recovery and Reinvestment Act of 2009. While expected to start in 2009, work did not begin until 2012 due to labor disputes and higher-than-expected cost estimates. Workers completed the project in 2016.
Tunnel
City fathers wanted a wagon bridge to the heart of town to highlight the best features of St. Louis. Economics required that it be a railroad bridge, but there was no space for railroads in the heart of downtown. Hence, a tunnel was authorized to connect the bridge to the Missouri Pacific Railroad to the south (and later to the new Union Station).
Eads worked out the specifications for the tunnel. It was to be a "cut and cover" tunnel 4,000 ft long, 30 ft below street level. The company advertised for bids in the Missouri Republican on August 31, 1872, and the contract was awarded to William Skrainka and Company. Construction began in October. A series of problems arose, including quicksand and springs on the planned route; several workers were injured and at least one was killed.
On November 29, the city council passed an ordinance changing the tunnel route to Eighth Street and transferring the right to build to the newly formed St. Louis Tunnel Railroad Company.
In April, Skrainka and Co. decided the project was too difficult. They agreed to complete construction south of Market St. The work north of Market was assigned to James Andrews, the stonemason overseeing construction of the bridge piers.
The Eads Bridge was ready to be opened after seven years of construction on July 4, 1874. The celebration included a fifteen-car train filled with 500 dignitaries pulled by three locomotives that departed from the St. Louis, Vandalia, and Terre Haute Railroad station in East St. Louis. Locomotives were provided by the Illinois Central Railroad and the Vandalia line (a Pennsylvania Railroad subsidiary). The route crossed the Eads Bridge and traveled through the tunnel to Mill Creek Valley and then returned.
Locomotive smoke is a concern in tunnels, especially passenger tunnels. The specially designed coke-burning "smoke-consuming engines" from the Baldwin Locomotive Works had yet to be ordered, and news reports told of passengers coughing and gasping for breath. Construction of the tunnel was not yet complete: only one of the two tracks was available, and ventilation had not yet been arranged.
A photograph of the St. Louis Bridge Company's coke-burning engine appears on page 38 of Brown's Baldwin Locomotive Works.
The St. Louis Bridge Company almost certainly had a transfer station in East St. Louis to switch trains entering St. Louis from Illinois between steam locomotives and the coke-burning engine used in this tunnel, as the Eads Bridge's railroad deck connects directly to the tunnel. This would have been analogous to the later (1910–1937), well-known Manhattan Transfer station in New Jersey, where rail passengers switched between the electric trains used in the New York Tunnel Extension tunnels under the Hudson River (North River Tunnels) and through New York City (the historic Penn Station and East River Tunnels) and the steam trains then used on the Pennsylvania Railroad main line (now part of Amtrak's electrified Northeast Corridor, along with the tunnels and present-day Penn Station). In St. Louis, by contrast, engines were apparently switched on the train itself.
In 1875, the bridge and tunnel companies declared bankruptcy. In 1881, Jay Gould got control of the bridge and tunnel companies by threatening to build a competing bridge four miles north of St. Louis. In 1889, Gould was instrumental in the creation of the Terminal Railroad Association of St. Louis. He died in 1892, but this led to the construction of Union Station in 1894.
The Eads Bridge and its tunnel are now used by Metrolink, the St. Louis light rail system.
Recognition
At the 1893 Columbian Exposition, Missouri exhibited a model of the bridge made of sugar cane.
In 1898 the bridge was featured on the $2 Trans-Mississippi Issue of postage stamps. One hundred years later the design was reprinted in a commemorative souvenir sheet.
The bridge was designated as a National Historic Landmark in 1964, in recognition of its innovations in design, materials, construction methods, and importance in the history of large-scale engineering projects.
During the bridge's construction, The New York Times called it "The World's Eighth Wonder". On its 100th anniversary, the Times' architectural critic, Ada Louise Huxtable, described it as "among the most beautiful works of man."
See also
Chain of Rocks Bridge
Martin Luther King Bridge
McKinley Bridge
Merchants Bridge
Poplar Street Bridge
Stan Musial Veterans Memorial Bridge
List of bridges documented by the Historic American Engineering Record in Illinois
List of bridges documented by the Historic American Engineering Record in Missouri
List of crossings of the Upper Mississippi River
List of National Historic Landmarks in Missouri
List of National Historic Landmarks in Illinois
National Register of Historic Places listings in St. Clair County, Illinois
National Register of Historic Places listings in Downtown and Downtown West St. Louis
List of bridges on the National Register of Historic Places in Illinois
List of bridges on the National Register of Historic Places in Missouri
References
Bibliography
Further reading
External links
National Historic Landmark Designation - Statement of Significance
Eads Bridge - the History and Heritage of Civil Engineering webpage (American Society of Civil Engineers)
Eads Bridge at corellcreek
Bridge Pros: Eads Bridge
Bridge info at Historic Bridges of the United States.
maps.google.com zoomed in, hybrid mode
High resolution panoramic image of an Eads Bridge span
Picture, circa 1980
The Men Who Built America - Film Documentary that covers the importance of the bridge and development of the steel industry that made its construction possible.
Engineering Illustrated London : Office for Advertisements and Publication, July 14, 1871
Andrew Carnegie
Open-spandrel deck arch bridges in the United States
Bridges completed in 1874
Railroad bridges in Illinois
Railroad bridges in Missouri
Road-rail bridges in the United States
Bridges in St. Louis
Bridges over the Mississippi River
East St. Louis, Illinois
National Historic Landmarks in Missouri
National Historic Landmarks in Illinois
MetroLink (St. Louis)
MetroLink (St. Louis) infrastructure
Historic Civil Engineering Landmarks
National Register of Historic Places in St. Clair County, Illinois
Landmarks of St. Louis
Bridges in St. Clair County, Illinois
Historic American Buildings Survey in Missouri
Historic American Engineering Record in Missouri
Railroad bridges on the National Register of Historic Places in Illinois
Railroad-related National Historic Landmarks
Former toll bridges in Illinois
Former toll bridges in Missouri
Light rail bridges
Road bridges on the National Register of Historic Places in Illinois
Road bridges on the National Register of Historic Places in Missouri
Railroad bridges on the National Register of Historic Places in Missouri
Steel bridges in the United States
Downtown St. Louis
Interstate vehicle bridges in the United States
Buildings and structures in St. Louis
1874 establishments in Missouri | Eads Bridge | [
"Engineering"
] | 3,581 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
592,136 | https://en.wikipedia.org/wiki/Palladian%20architecture | Palladian architecture is a European architectural style derived from the work of the Venetian architect Andrea Palladio (1508–1580). What is today recognised as Palladian architecture evolved from his concepts of symmetry, perspective and the principles of formal classical architecture from ancient Greek and Roman traditions. In the 17th and 18th centuries, Palladio's interpretation of this classical architecture developed into the style known as Palladianism.
Palladianism emerged in England in the early 17th century, led by Inigo Jones, whose Queen's House at Greenwich has been described as the first English Palladian building. Its development faltered at the onset of the English Civil War. After the Stuart Restoration, the architectural landscape was dominated by the more flamboyant English Baroque. Palladianism returned to fashion after a reaction against the Baroque in the early 18th century, fuelled by the publication of a number of architectural books, including Palladio's own I quattro libri dell'architettura (The Four Books of Architecture) and Colen Campbell's Vitruvius Britannicus. Campbell's book included illustrations of Wanstead House, a building he designed on the outskirts of London and one of the largest and most influential of the early neo-Palladian houses. The movement's resurgence was championed by Richard Boyle, 3rd Earl of Burlington, whose buildings for himself, such as Chiswick House and Burlington House, became celebrated. Burlington sponsored the career of the artist, architect and landscaper William Kent, and their joint creation, Holkham Hall in Norfolk, has been described as "the most splendid Palladian house in England". By the middle of the century Palladianism had become almost the national architectural style, epitomised by Kent's Horse Guards at the centre of the nation's capital.
The Palladian style was also widely used throughout Europe, often in response to English influences. In Prussia the critic and courtier Francesco Algarotti corresponded with Burlington about his efforts to persuade Frederick the Great of the merits of the style, while Knobelsdorff's opera house in Berlin on the Unter den Linden, begun in 1741, was based on Campbell's Wanstead House. Later in the century, when the style was losing favour in Europe, Palladianism had a surge in popularity throughout the British colonies in North America. Thomas Jefferson sought out Palladian examples, which themselves drew on buildings from the time of the Roman Republic, to develop a new architectural style for the American Republic. Examples include the Hammond–Harwood House in Maryland and Jefferson's own house, Monticello, in Virginia. The Palladian style was also adopted in other British colonies, including those in the Indian subcontinent.
In the 19th century, Palladianism was overtaken in popularity by Neoclassical architecture in both Europe and in North America. By the middle of that century, both were challenged and then superseded by the Gothic Revival in the English-speaking world, whose champions such as Augustus Pugin, remembering the origins of Palladianism in ancient temples, deemed the style too pagan for true Christian worship. In the 20th and 21st centuries, Palladianism has continued to evolve as an architectural style; its pediments, symmetry and proportions are evident in the design of many modern buildings, while its inspirer is regularly cited as having been among the world's most influential architects.
Palladio's architecture
Andrea Palladio was born in Padua in 1508, the son of a stonemason. He was inspired by Roman buildings, the writings of Vitruvius (80 BC), and his immediate predecessors Donato Bramante and Raphael. Palladio aspired to an architectural style that used symmetry and proportion to emulate the grandeur of classical buildings. His surviving buildings are in Venice, the Veneto region, and Vicenza, and include villas and churches such as the Basilica del Redentore in Venice. Palladio's architectural treatises follow the approach defined by Vitruvius and his 15th-century disciple Leon Battista Alberti, who adhered to principles of classical Roman architecture based on mathematical proportions rather than the ornamental style of the Renaissance. Palladio recorded and publicised his work in the 1570 four-volume illustrated study, I quattro libri dell'architettura (The Four Books of Architecture).
Palladio's villas are designed to fit with their setting. If on a hill, such as Villa Almerico Capra Valmarana (Villa Capra, or La Rotonda), façades were of equal value so that occupants could enjoy views in all directions. Porticos were built on all sides to enable the residents to appreciate the countryside while remaining protected from the sun. Palladio sometimes used a loggia as an alternative to the portico. This is most simply described as a recessed portico, or an internal single storey room with pierced walls that are open to the elements. Occasionally a loggia would be placed at second floor level over the top of another loggia, creating what was known as a double loggia. Loggias were sometimes given significance in a façade by being surmounted by a pediment. Villa Godi's focal point is a loggia rather than a portico, with loggias terminating each end of the main building.
Palladio would often model his villa elevations on Roman temple façades. The temple influence, often in a cruciform design, later became a trademark of his work. Palladian villas are usually built with three floors: a rusticated basement or ground floor, containing the service and minor rooms; above this, the piano nobile (noble level), accessed through a portico reached by a flight of external steps, containing the principal reception and bedrooms; and lastly a low mezzanine floor with secondary bedrooms and accommodation. The proportions of each room (for example, height and width) within the villa were calculated on simple mathematical ratios like 3:4 and 4:5. The arrangement of the different rooms within the house, and the external façades, were similarly determined. Earlier architects had used these formulas for balancing a single symmetrical façade; however, Palladio's designs related to the entire structure. Palladio set out his views in I quattro libri dell'architettura: "beauty will result from the form and correspondence of the whole, with respect to the several parts, of the parts with regard to each other, and of these again to the whole; that the structure may appear an entire and complete body, wherein each member agrees with the other, and all necessary to compose what you intend to form."
Palladio considered the dual purpose of his villas as the centres of farming estates and weekend retreats. These symmetrical temple-like houses often have equally symmetrical, but low, wings, or barchessas, sweeping away from them to accommodate horses, farm animals, and agricultural stores. The wings, sometimes detached and connected to the villa by colonnades, were designed not only to be functional but also to complement and accentuate the villa. Palladio did not intend them to be part of the main house, but the development of the wings to become integral parts of the main building – undertaken by Palladio's followers in the 18th century – became one of the defining characteristics of Palladianism.
Venetian and Palladian windows
Palladian, Serlian, or Venetian windows are a trademark of Palladio's early career. There are two different versions of the motif: the simpler one is called a Venetian window, and the more elaborate a Palladian window or "Palladian motif", although this distinction is not always observed.
The Venetian window has three parts: a central high round-arched opening, and two smaller rectangular openings to the sides. The side windows are topped by lintels and supported by columns. This is derived from the ancient Roman triumphal arch, and was first used outside Venice by Donato Bramante and later mentioned by Sebastiano Serlio (1475–1554) in his seven-volume architectural book Tutte l'opere d'architettura et prospetiva (All the Works of Architecture and Perspective) expounding the ideals of Vitruvius and Roman architecture. It can be used in series, but is often only used once in a façade, as at New Wardour Castle, or once at each end, as on the inner façade of Burlington House (true Palladian windows).
Palladio's elaboration of this, normally used in a series, places a larger or giant order in between each window, and doubles the small columns supporting the side lintels, placing the second column behind rather than beside the first. This was introduced in the Biblioteca Marciana in Venice by Jacopo Sansovino (1537), and heavily adopted by Palladio in the Basilica Palladiana in Vicenza, where it is used on both storeys; this feature was less often copied. The openings in this elaboration are not strictly windows, as they enclose a loggia. Pilasters might replace columns, as in other contexts. Sir John Summerson suggests that the omission of the doubled columns may be allowed, but the term "Palladian motif" should be confined to cases where the larger order is present.
Palladio used these elements extensively, for example in very simple form in his entrance to Villa Forni Cerato. It is perhaps this extensive use of the motif in the Veneto that has given the window its alternative name of the Venetian window. Whatever the name or the origin, this form of window has become one of the most enduring features of Palladio's work seen in the later architectural styles evolved from Palladianism. According to James Lees-Milne, its first appearance in Britain was in the remodelled wings of Burlington House, London, where the immediate source was in the English court architect Inigo Jones's designs for Whitehall Palace rather than drawn from Palladio himself. Lees-Milne describes the Burlington window as "the earliest example of the revived Venetian window in England".
A variant, in which the motif is enclosed within a relieving blind arch that unifies the motif, is not Palladian, though Richard Boyle seems to have assumed it was so, in using a drawing in his possession showing three such features in a plain wall. Modern scholarship attributes the drawing to Vincenzo Scamozzi. Burlington employed the motif in 1721 for an elevation of Tottenham Park in Savernake Forest for his brother-in-law Lord Bruce (since remodelled). William Kent used it in his designs for the Houses of Parliament, and it appears in his executed designs for the north front of Holkham Hall. Another example is Claydon House, in Buckinghamshire; the remaining fragment is one wing of what was intended to be one of two flanking wings to a vast Palladian house. The scheme was never completed and parts of what was built have since been demolished.
Early Palladianism
During the 17th century, many architects studying in Italy learned of Palladio's work, and on returning home adopted his style, leading to its widespread use across Europe and North America. Isolated forms of Palladianism throughout the world were brought about in this way, although the style did not reach the zenith of its popularity until the 18th century. An early reaction to the excesses of Baroque architecture in Venice manifested itself as a return to Palladian principles. The earliest neo-Palladians there were the exact contemporaries Domenico Rossi (1657–1737) and Andrea Tirali (1657–1737). Their biographer, Tommaso Temanza, proved to be the movement's most able proponent; in his writings, Palladio's visual inheritance became increasingly codified and moved towards neoclassicism.
The most influential follower of Palladio was Inigo Jones, who travelled throughout Italy with the art collector Earl of Arundel in 1613–1614, annotating his copy of Palladio's treatise. The "Palladianism" of Jones and his contemporaries and later followers was a style largely of façades, with the mathematical formulae dictating layout not strictly applied. A handful of country houses in England built between 1640 and 1680 are in this style. These follow the success of Jones's Palladian designs for the Queen's House at Greenwich, the first English Palladian house, and the Banqueting House at Whitehall, the uncompleted royal palace in London of Charles I.
Palladian designs advocated by Jones were too closely associated with the court of Charles I to survive the turmoil of the English Civil War. Following the Stuart restoration, Jones's Palladianism was eclipsed by the Baroque designs of such architects as William Talman, Sir John Vanbrugh, Nicholas Hawksmoor, and Jones's pupil John Webb.
Neo-Palladianism
English Palladian architecture
The Baroque style proved highly popular in continental Europe, but was often viewed with suspicion in England, where it was considered "theatrical, exuberant and Catholic." It was superseded in Britain in the first quarter of the 18th century when four books highlighted the simplicity and purity of classical architecture. These were:
Vitruvius Britannicus (The British Architect), published by Colen Campbell in 1715 (of which supplemental volumes appeared through the century);
I quattro libri dell'architettura (The Four Books of Architecture), by Palladio himself, translated by Giacomo Leoni and published from 1715 onwards;
De re aedificatoria (On the Art of Building), by Leon Battista Alberti, translated by Giacomo Leoni and published in 1726; and
The Designs of Inigo Jones... with Some Additional Designs, published by William Kent in two volumes in 1727. A further volume, Some Designs of Mr. Inigo Jones and Mr. William Kent was published in 1744 by the architect John Vardy, an associate of Kent.
The most favoured among patrons was the four-volume Vitruvius Britannicus by Campbell. The series contains architectural prints of British buildings inspired by the great architects from Vitruvius to Palladio; at first mainly those of Inigo Jones, but the later volumes contained drawings and plans by Campbell and other 18th-century architects. These four books greatly contributed to Palladian architecture becoming established in 18th-century Britain. Campbell and Kent became the most fashionable and sought-after architects of the era. Campbell had placed his 1715 designs for the colossal Wanstead House near the front of Vitruvius Britannicus, immediately following the engravings of buildings by Jones and Webb, "as an exemplar of what new architecture should be". On the strength of the book, Campbell was chosen as the architect for Henry Hoare I's Stourhead house. Hoare's brother-in-law, William Benson, had designed Wilbury House, the earliest 18th-century Palladian house in Wiltshire, which Campbell had also illustrated in Vitruvius Britannicus.
At the forefront of the new school of design was the "architect earl", Richard Boyle, 3rd Earl of Burlington, according to Dan Cruikshank the "man responsible for this curious elevation of Palladianism to the rank of a quasi-religion". In 1729 he and Kent designed Chiswick House. This house was a reinterpretation of Palladio's Villa Capra, but purified of 16th century elements and ornament. This severe lack of ornamentation was to be a feature of English Palladianism.
In 1734 Kent and Burlington designed Holkham Hall in Norfolk. James Stevens Curl considers it "the most splendid Palladian house in England". The main block of the house followed Palladio's dictates, but his low, often detached, wings of farm buildings were elevated in significance. Kent attached them to the design, banished the farm animals, and elevated the wings to almost the same importance as the house itself. It was the development of the flanking wings that was to cause English Palladianism to evolve from being a pastiche of Palladio's original work. Wings were frequently adorned with porticos and pediments, often resembling, as at the much later Kedleston Hall, small country houses in their own right.
Architectural styles evolve and change to suit the requirements of each individual client. When in 1746 the Duke of Bedford decided to rebuild Woburn Abbey, he chose the fashionable Palladian style, and selected the architect Henry Flitcroft, a protégé of Burlington. Flitcroft's designs, while Palladian in nature, had to comply with the Duke's determination that the plan and footprint of the earlier house, originally a Cistercian monastery, be retained. The central block is small, with only three bays, while the temple-like portico is merely suggested, and is closed. Two great flanking wings containing a vast suite of state rooms replace the walls or colonnades which should have connected to the farm buildings; the farm buildings terminating the structure are elevated in height to match the central block and given Palladian windows, to ensure they are seen as of Palladian design. This development of the style was to be repeated in many houses and town halls in Britain over the following hundred years. Often the terminating blocks would have blind porticos and pilasters of their own, competing for attention with, or complementing, the central block. This was all very far removed from the designs of Palladio two hundred years earlier. Falling from favour during the Victorian era, the approach was revived by Sir Aston Webb for his refacing of Buckingham Palace in 1913.
The villa tradition continued throughout the late 18th century, particularly in the suburbs around London. Sir William Chambers built many examples, such as Parkstead House. But the grander English Palladian houses were no longer the small but exquisite weekend retreats that their Italian counterparts were intended as. They had become "power houses", in Sir John Summerson's words, the symbolic centres of the triumph and dominance of the Whig Oligarchy who ruled Britain unchallenged for some fifty years after the death of Queen Anne. Summerson thought Kent's Horse Guards on Whitehall epitomised "the establishment of Palladianism as the official style of Great Britain". As the style peaked, thoughts of mathematical proportion were swept away. Rather than square houses with supporting wings, these buildings had the length of the façade as their major consideration: long houses often only one room deep were deliberately deceitful in giving a false impression of size.
Irish Palladian architecture
During the Palladian revival period in Ireland, even modest mansions were cast in a neo-Palladian mould. Irish Palladian architecture subtly differs from the English style. While adhering, as in other countries, to the basic ideals of Palladio, it is often truer to them. In Ireland, Palladianism became political; both the original and the present Irish parliaments in Dublin occupy Palladian buildings.
The Irish architect Sir Edward Lovett Pearce (1699–1733) became a leading advocate. He was a cousin of Sir John Vanbrugh, and originally one of his pupils. He rejected the Baroque style, and spent three years studying architecture in France and Italy before returning to Ireland. His most important Palladian work is the former Irish Houses of Parliament in Dublin. Christine Casey, in her 2005 volume Dublin, in the Pevsner Buildings of Ireland series, considers the building, "arguably the most accomplished public set-piece of the Palladian style in [Britain]". Pearce was a prolific architect who went on to design the southern façade of Drumcondra House in 1725 and Summerhill House in 1731, which was completed after his death by Richard Cassels. Pearce also oversaw the building of Castletown House near Dublin, designed by the Italian architect Alessandro Galilei (1691–1737). It is perhaps the only Palladian house in Ireland built with Palladio's mathematical ratios, and one of a number of Irish mansions which inspired the design of the White House in Washington, D.C.
Other examples include Russborough, designed by Richard Cassels, who also designed the Palladian Rotunda Hospital in Dublin and Florence Court in County Fermanagh. Irish Palladian country houses often feature robust Rococo plasterwork – an Irish specialty which was frequently executed by the Lafranchini brothers and far more flamboyant than the interiors of their contemporaries in England. In the 20th century, during and following the Irish War of Independence and the subsequent civil war, large numbers of Irish country houses, including some fine Palladian examples such as Woodstock House, were abandoned to ruin or destroyed.
North American Palladian architecture
Palladio's influence in North America is evident almost from its first architect-designed buildings. The Irish philosopher George Berkeley, who may be America's first recorded Palladian, bought a large farmhouse in Middletown, Rhode Island, in the late 1720s, and added a Palladian doorcase derived from Kent's Designs of Inigo Jones (1727), which he may have brought with him from London. Palladio's work was included in the library of a thousand volumes amassed for Yale College. Peter Harrison's 1749 designs for the Redwood Library in Newport, Rhode Island, borrow directly from Palladio's I quattro libri dell'architettura, while his plan for the Newport Brick Market, conceived a decade later, is also Palladian.
Two colonial period houses that can be definitively attributed to designs from I quattro libri dell'architettura are the Hammond-Harwood House (1774) in Annapolis, Maryland, and Thomas Jefferson's first Monticello (1770). Hammond-Harwood was designed by the architect William Buckland in 1773–1774 for the wealthy farmer Matthias Hammond of Anne Arundel County, Maryland. The design source is the Villa Pisani, and that for the first Monticello, the Villa Cornaro at Piombino Dese. Both are taken from Book II, Chapter XIV of I quattro libri dell'architettura. Jefferson later made substantial alterations to Monticello, known as the second Monticello (1802–1809), making the Hammond-Harwood House the only remaining house in North America modelled directly on a Palladian design.
Jefferson referred to I quattro libri dell'architettura as his bible. Although a statesman, his passion was architecture, and he developed an intense appreciation of Palladio's architectural concepts; his designs for the James Barbour Barboursville estate, the Virginia State Capitol, and the University of Virginia campus were all based on illustrations from Palladio's book. Realising the political significance of ancient Roman architecture to the fledgling American Republic, Jefferson designed his civic buildings, such as The Rotunda, in the Palladian style, echoing in his buildings for the new republic examples from the old.
In Virginia and the Carolinas, the Palladian style is found in numerous plantation houses, such as Stratford Hall, Westover Plantation and Drayton Hall. Westover's north and south entrances, made of imported English Portland stone, were patterned after a plate in William Salmon's Palladio Londinensis (1734). The distinctive feature of Drayton Hall, its two-storey portico, was derived from Palladio, as was Mount Airy, in Richmond County, Virginia, built in 1758–1762. A particular feature of American Palladianism was the re-emergence of the great portico which, as in Italy, fulfilled the need of protection from the sun; the portico in various forms and size became a dominant feature of American colonial architecture. In the north European countries the portico had become a mere symbol, often closed, or merely hinted at in the design by pilasters, and sometimes in very late examples of English Palladianism adapted to become a porte-cochère; in America, the Palladian portico regained its full glory.
The White House in Washington, D.C., was inspired by Irish Palladianism. Its architect James Hoban, who built the executive mansion between 1792 and 1800, was born in Callan, County Kilkenny, in 1762, the son of tenant farmers on the estate of Desart Court, a Palladian house designed by Pearce. He studied architecture in Dublin, where Leinster House was one of the finest Palladian buildings of the time. Both Cassels's Leinster House and James Wyatt's Castle Coole have been cited as Hoban's inspirations for the White House, but the more neoclassical design of that building, particularly of the south façade, which closely resembles Wyatt's 1790 design for Castle Coole, suggests that Coole is perhaps the more direct progenitor. The architectural historian Gervase Jackson-Stops describes Castle Coole as "a culmination of the Palladian traditions, yet strictly neoclassical in its chaste ornament and noble austerity", while Alistair Rowan, in his 1979 volume, North West Ulster, of the Buildings of Ireland series, suggests that, at Coole, Wyatt designed a building "more massy, more masculine and more totally liberated from Palladian practice than anything he had done before."
Because of its later development, Palladian architecture in Canada is rarer. In her 1984 study, Palladian Style in Canadian Architecture, Nathalie Clerk notes its particular impact on public architecture, as opposed to the private houses in the United States. One example of historical note is the Nova Scotia Legislature building, completed in 1819. Another example is Government House in St. John's, Newfoundland.
Palladianism elsewhere
The rise of neo-Palladianism in England contributed to its adoption in Prussia. Count Francesco Algarotti wrote to Lord Burlington to inform him that he was recommending to Frederick the Great the adoption in his own country of the architectural style Burlington had introduced in England. By 1741, Georg Wenzeslaus von Knobelsdorff had already begun construction of the Berlin Opera House on the Unter den Linden, based on Campbell's Wanstead House.
Palladianism was particularly adopted in areas under British colonial rule. Examples can be seen in the Indian subcontinent; the Raj Bhavan, Kolkata (formerly Government House) was modelled on Kedleston Hall, while the architectural historian Pilar Maria Guerrieri identifies its influences in Lutyens' Delhi. In South Africa, Federico Freschi notes the "Tuscan colonnades and Palladian windows" of Herbert Baker's Union Buildings.
Legacy
By the 1770s, British architects such as Robert Adam and William Chambers were in high demand, but were now drawing on a wide variety of classical sources, including from ancient Greece, so much so that their forms of architecture became defined as neoclassical rather than Palladian. In Europe, the Palladian revival ended by the close of the 18th century. In the 19th century, proponents of the Gothic Revival such as Augustus Pugin, remembering the origins of Palladianism in ancient temples, considered it pagan, and unsuited to Anglican and Anglo-Catholic worship. In North America, Palladianism lingered a little longer; Thomas Jefferson's floor plans and elevations owe a great deal to Palladio's I quattro libri dell'architettura.
The term Palladian is often misused in modern discourse and tends to be used to describe buildings with any classical pretensions. There was a revival of a more serious Palladian approach in the 20th century when Colin Rowe, an influential architectural theorist, published his essay, The Mathematics of the Ideal Villa, (1947), in which he drew links between the compositional "rules" in Palladio's villas and Le Corbusier's villas at Poissy and Garches. Suzanne Walters' article The Two Faces of Modernism suggests a continuing influence of Palladio's ideas on architects of the 20th century. In the 21st century Palladio's name regularly appears among the world's most influential architects. In England, Raymond Erith (1904–1973) drew on Palladian inspirations, and was followed in this by his pupil, subsequently partner, Quinlan Terry. Their work, and that of others, led the architectural historian John Martin Robinson to suggest that "the Quattro Libri continues as the fountainhead of at least one strand in the English country house tradition."
See also
City of Vicenza and the Palladian Villas of the Veneto
New Classical architecture
Giacomo Quarenghi
Riviera del Brenta
Notes, references and sources
Notes
References
Sources
External links
Center for Palladian Studies in America
Inigo Jones document collection at Worcester College, Oxford
International centre for the study of the architecture of Andrea Palladio (CISA)
Thomas Jefferson's architecture
Article on Palladian architecture in colonial Singapore, published by the Department of Architecture and Urban Planning
Architectural history
Architectural styles
Architectural design
British architectural styles
House styles | Palladian architecture | [
"Engineering"
] | 6,018 | [
"Design",
"Architectural history",
"Architectural design",
"Architecture"
] |
592,151 | https://en.wikipedia.org/wiki/Even%20and%20odd%20functions | In mathematics, an even function is a real function $f$ such that $f(-x) = f(x)$ for every $x$ in its domain. Similarly, an odd function is a function such that $f(-x) = -f(x)$ for every $x$ in its domain.
They are named for the parity of the powers of the power functions $x \mapsto x^n$ which satisfy each condition: the function $x^n$ is even if n is an even integer, and it is odd if n is an odd integer.
Even functions are those real functions whose graph is self-symmetric with respect to the y-axis, and odd functions are those whose graph is self-symmetric with respect to the origin.
If the domain of a real function is self-symmetric with respect to the origin, then the function can be uniquely decomposed as the sum of an even function and an odd function.
Definition and examples
Evenness and oddness are generally considered for real functions, that is real-valued functions of a real variable. However, the concepts may be more generally defined for functions whose domain and codomain both have a notion of additive inverse. This includes abelian groups, all rings, all fields, and all vector spaces. Thus, for example, a real function could be odd or even (or neither), as could a complex-valued function of a vector variable, and so on.
The given examples are real functions, to illustrate the symmetry of their graphs.
Even functions
A real function $f$ is even if, for every $x$ in its domain, $-x$ is also in its domain and
$f(-x) = f(x),$
or equivalently
$f(x) - f(-x) = 0.$
Geometrically, the graph of an even function is symmetric with respect to the y-axis, meaning that its graph remains unchanged after reflection about the y-axis.
Examples of even functions are:
The absolute value
cosine
hyperbolic cosine
Gaussian function
Odd functions
A real function $f$ is odd if, for every $x$ in its domain, $-x$ is also in its domain and
$f(-x) = -f(x),$
or equivalently
$f(x) + f(-x) = 0.$
Geometrically, the graph of an odd function has rotational symmetry with respect to the origin, meaning that its graph remains unchanged after rotation of 180 degrees about the origin.
If $0$ is in the domain of an odd function $f$, then $f(0) = 0$.
Examples of odd functions are:
The sign function
The identity function
sine
hyperbolic sine
The error function
Basic properties
Uniqueness
If a function is both even and odd, it is equal to 0 everywhere it is defined.
If a function is odd, the absolute value of that function is an even function.
Addition and subtraction
The sum of two even functions is even.
The sum of two odd functions is odd.
The difference between two odd functions is odd.
The difference between two even functions is even.
The sum of an even and an odd function is neither even nor odd, unless one of the functions is equal to zero over the given domain.
Multiplication and division
The product of two even functions is an even function.
This implies that the product of any number of even functions is an even function as well.
The product of two odd functions is an even function.
The product of an even function and an odd function is an odd function.
The quotient of two even functions is an even function.
The quotient of two odd functions is an even function.
The quotient of an even function and an odd function is an odd function.
Composition
The composition of two even functions is even.
The composition of two odd functions is odd.
The composition of an even function and an odd function is even.
The composition of any function with an even function is even (but not vice versa).
Even–odd decomposition
If a real function has a domain that is self-symmetric with respect to the origin, it may be uniquely decomposed as the sum of an even and an odd function, which are called respectively the even part (or the even component) and the odd part (or the odd component) of the function, and are defined by
$f_\text{e}(x) = \frac{f(x) + f(-x)}{2}$
and
$f_\text{o}(x) = \frac{f(x) - f(-x)}{2}.$
It is straightforward to verify that $f_\text{e}$ is even, $f_\text{o}$ is odd, and $f = f_\text{e} + f_\text{o}.$
This decomposition is unique since, if
$f(x) = g(x) + h(x),$
where $g$ is even and $h$ is odd, then $g = f_\text{e}$ and $h = f_\text{o},$ since
$2f_\text{e}(x) = f(x) + f(-x) = g(x) + g(-x) + h(x) + h(-x) = 2g(x),$ and
$2f_\text{o}(x) = f(x) - f(-x) = 2h(x).$
For example, the hyperbolic cosine and the hyperbolic sine may be regarded as the even and odd parts of the exponential function, as the first one is an even function, the second one is odd, and
$e^x = \cosh x + \sinh x.$
Fourier's sine and cosine transforms also perform even–odd decomposition by representing a function's odd part with sine waves (an odd function) and the function's even part with cosine waves (an even function).
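To make the decomposition concrete, here is a minimal numerical sketch in Python (an illustrative addition, not drawn from the article's sources) that computes the even and odd parts of the exponential function on a grid and checks that they match cosh and sinh and sum back to the original function:

```python
import numpy as np

def even_part(f, x):
    # f_e(x) = (f(x) + f(-x)) / 2
    return (f(x) + f(-x)) / 2

def odd_part(f, x):
    # f_o(x) = (f(x) - f(-x)) / 2
    return (f(x) - f(-x)) / 2

x = np.linspace(-3.0, 3.0, 101)
fe = even_part(np.exp, x)
fo = odd_part(np.exp, x)

assert np.allclose(fe, np.cosh(x))      # even part of exp is cosh
assert np.allclose(fo, np.sinh(x))      # odd part of exp is sinh
assert np.allclose(fe + fo, np.exp(x))  # the decomposition recovers f
```

The same two-line recipe applies to any function whose domain is symmetric about the origin.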
Further algebraic properties
Any linear combination of even functions is even, and the even functions form a vector space over the reals. Similarly, any linear combination of odd functions is odd, and the odd functions also form a vector space over the reals. In fact, the vector space of all real functions is the direct sum of the subspaces of even and odd functions. This is a more abstract way of expressing the property in the preceding section.
The space of functions can be considered a graded algebra over the real numbers by this property, as well as some of those above.
The even functions form a commutative algebra over the reals. However, the odd functions do not form an algebra over the reals, as they are not closed under multiplication.
Analytic properties
A function's being odd or even does not imply differentiability, or even continuity. For example, the Dirichlet function is even, but is nowhere continuous.
In the following, properties involving derivatives, Fourier series, and Taylor series are considered; these concepts are therefore assumed to be defined for the functions in question.
Basic analytic properties
The derivative of an even function is odd.
The derivative of an odd function is even.
The integral of an odd function from −A to +A is zero (where A can be finite or infinite, and the function has no vertical asymptotes between −A and A). For an odd function that is integrable over a symmetric interval, e.g. $[-A, A]$, the result of the integral over that interval is zero; that is
$\int_{-A}^{A} f(x)\,dx = 0.$
The integral of an even function from −A to +A is twice the integral from 0 to +A (where A is finite, and the function has no vertical asymptotes between −A and A. This also holds true when A is infinite, but only if the integral converges); that is
$\int_{-A}^{A} f(x)\,dx = 2\int_{0}^{A} f(x)\,dx.$
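As a quick numerical illustration of these two identities (again an added sketch, not from the source text), the snippet below integrates the odd function x³ and the even function x² over a symmetric interval with the trapezoidal rule:

```python
import numpy as np

A = 2.0
x = np.linspace(-A, A, 20001)   # symmetric grid containing x = 0

# Odd function x^3: the integral over [-A, A] vanishes.
assert abs(np.trapz(x**3, x)) < 1e-10

# Even function x^2: the integral over [-A, A] is twice that over [0, A].
full = np.trapz(x**2, x)
half = np.trapz(x[x >= 0]**2, x[x >= 0])
assert abs(full - 2.0 * half) < 1e-6
```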
Series
The Maclaurin series of an even function includes only even powers.
The Maclaurin series of an odd function includes only odd powers.
The Fourier series of a periodic even function includes only cosine terms.
The Fourier series of a periodic odd function includes only sine terms.
The Fourier transform of a purely real-valued even function is real and even.
The Fourier transform of a purely real-valued odd function is imaginary and odd.
Harmonics
In signal processing, harmonic distortion occurs when a sine wave signal is sent through a memoryless nonlinear system, that is, a system whose output at time t only depends on the input at time t and does not depend on the input at any previous times. Such a system is described by a response function $V_\text{out}(t) = f(V_\text{in}(t))$. The type of harmonics produced depends on the response function f:
When the response function is even, the resulting signal will consist of only even harmonics of the input sine wave;
The fundamental is also an odd harmonic, so will not be present.
A simple example is a full-wave rectifier.
The zero-frequency component represents the DC offset, due to the one-sided nature of even-symmetric transfer functions.
When it is odd, the resulting signal will consist of only odd harmonics of the input sine wave;
The output signal will be half-wave symmetric.
A simple example is clipping in a symmetric push-pull amplifier.
When it is asymmetric, the resulting signal may contain either even or odd harmonics;
Simple examples are a half-wave rectifier, and clipping in an asymmetrical class-A amplifier.
This does not hold true for more complex waveforms. A sawtooth wave contains both even and odd harmonics, for instance. After even-symmetric full-wave rectification, it becomes a triangle wave, which, other than the DC offset, contains only odd harmonics.
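These harmonic-content rules can be demonstrated with a discrete Fourier transform. In the Python sketch below (an illustration added here; the full-wave rectifier abs as the even response and a cubic as the odd response are arbitrary example choices), a pure sine wave is passed through each response and the surviving harmonics are listed:

```python
import numpy as np

N = 1024
t = np.arange(N) / N
x = np.sin(2 * np.pi * 5 * t)            # fundamental sits at bin 5

def present_harmonics(response, fundamental=5, tol=1e-8):
    spectrum = np.abs(np.fft.rfft(response(x)))
    bins = np.nonzero(spectrum > tol * spectrum.max())[0]
    return sorted({b // fundamental for b in bins})

print(present_harmonics(np.abs))          # even response: [0, 2, 4, ...] (DC plus even harmonics)
print(present_harmonics(lambda v: v**3))  # odd response:  [1, 3] (odd harmonics only)
```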
Generalizations
Multivariate functions
Even symmetry:
A function $f(x_1, x_2, \ldots, x_n)$ is called even symmetric if:
$f(x_1, x_2, \ldots, x_n) = f(-x_1, -x_2, \ldots, -x_n)$ for all $x_1, \ldots, x_n$
Odd symmetry:
A function $f(x_1, x_2, \ldots, x_n)$ is called odd symmetric if:
$f(x_1, x_2, \ldots, x_n) = -f(-x_1, -x_2, \ldots, -x_n)$ for all $x_1, \ldots, x_n$
Complex-valued functions
The definitions for even and odd symmetry for complex-valued functions of a real argument are similar to the real case. In signal processing, a similar symmetry is sometimes considered, which involves complex conjugation.
Conjugate symmetry:
A complex-valued function of a real argument is called conjugate symmetric if
$f(-x) = \overline{f(x)} \quad \text{for all } x.$
A complex valued function is conjugate symmetric if and only if its real part is an even function and its imaginary part is an odd function.
A typical example of a conjugate symmetric function is the cis function
$x \mapsto e^{ix} = \cos x + i\sin x.$
Conjugate antisymmetry:
A complex-valued function of a real argument is called conjugate antisymmetric if:
$f(-x) = -\overline{f(x)} \quad \text{for all } x.$
A complex valued function is conjugate antisymmetric if and only if its real part is an odd function and its imaginary part is an even function.
Finite length sequences
The definitions of odd and even symmetry are extended to N-point sequences (i.e. functions of the form $f\colon \{0, 1, \ldots, N-1\} \to \mathbb{R}$) as follows:
Even symmetry:
An N-point sequence is called conjugate symmetric if
$a_n = a^*_{N-n} \quad \text{for all } n = 1, \ldots, N-1.$
Such a sequence is often called a palindromic sequence; see also Palindromic polynomial.
Odd symmetry:
An N-point sequence is called conjugate antisymmetric if
$a_n = -a^*_{N-n} \quad \text{for all } n = 1, \ldots, N-1.$
Such a sequence is sometimes called an anti-palindromic sequence; see also Antipalindromic polynomial.
See also
Hermitian function for a generalization in complex numbers
Taylor series
Fourier series
Holstein–Herring method
Parity (physics)
Notes
References
Calculus
Parity (mathematics)
Types of functions | Even and odd functions | [
"Mathematics"
] | 2,006 | [
"Functions and mappings",
"Calculus",
"Mathematical objects",
"Mathematical relations",
"Types of functions"
] |
592,172 | https://en.wikipedia.org/wiki/FU%20Orionis%20star | In stellar evolution, an FU Orionis star (also FU Orionis object, or FUor) is a pre–main-sequence star which displays an extreme change in magnitude and spectral type. One example is the star V1057 Cyg, which became six magnitudes brighter and went from spectral type dKe to F-type supergiant during 1969–1970. These stars are named after their type star, FU Orionis.
The current model developed primarily by Lee Hartmann and Scott Jay Kenyon associates the FU Orionis flare with abrupt mass transfer from an accretion disc onto a young, low-mass T Tauri star. Mass accretion rates for these objects are estimated to be around 10⁻⁴ solar masses per year. The rise time of these eruptions is typically on the order of 1 year, but can be much longer. The lifetime of this high-accretion, high-luminosity phase is on the order of decades. However, even with such a relatively short timespan, no FU Orionis object has been observed shutting off. By comparing the number of FUor outbursts to the rate of star formation in the solar neighborhood, it is estimated that the average young star undergoes approximately 10–20 FUor eruptions over its lifetime.
The spectra of FU Orionis stars are dominated by absorption features produced in the accretion disc. The inner part of the disc produces the spectrum of an F–G supergiant, while the outer, slightly colder parts of the disc produce a K–M type supergiant spectrum that can be observed in the near-infrared. In FU Orionis stars the disc radiation dominates, which can be used to study the inner parts of the disc.
The prototypes of this class are: FU Orionis, V1057 Cygni, V1515 Cygni, and the embedded protostar V1647 Orionis, which erupted in January 2004.
See also
Orion variable
T Tauri star
EX Lup variable star (also called an EXor)
References
Juhan Frank, Andrew King, Derek Raine (2002). Accretion power in astrophysics, Third Edition, Cambridge University Press.
External links
The Furor over FUOrs (15 November 2010)
Discovery of possible FU-Ori and UX-Ori type objects (18 November 2009)
https://web.archive.org/web/20060831060814/http://www.aavso.org/vstar/vsots/0202.shtml
Star types
Stellar evolution
Articles containing video clips | FU Orionis star | [
"Physics",
"Astronomy"
] | 530 | [
"Astronomical classification systems",
"Star types",
"Astrophysics",
"Stellar evolution"
] |
592,198 | https://en.wikipedia.org/wiki/Stretch%20rule | In classical mechanics, the stretch rule (sometimes referred to as Routh's rule) states that the moment of inertia of a rigid object is unchanged when the object is stretched parallel to an axis of rotation that is a principal axis, provided that the distribution of mass remains unchanged except in the direction parallel to the axis. This operation leaves cylinders oriented parallel to the axis unchanged in radius.
This rule can be applied with the parallel axis theorem and the perpendicular axis theorem to find moments of inertia for a variety of shapes.
Derivation
The (scalar) moment of inertia of a rigid body around the z-axis is given by:
$I_z = \int_V r^2\,\rho(x, y, z)\,dV$
where $r = \sqrt{x^2 + y^2}$ is the distance of a point from the z-axis. We can expand as follows, since we are dealing with stretching over the z-axis only:
$I_z = \int_0^L \left[\iint \left(x^2 + y^2\right) \rho(x, y, z)\,dx\,dy\right] dz$
Here, $L$ is the body's height. Stretching the object by a factor of $a$ along the z-axis is equivalent to dividing the mass density by $a$ (meaning $\rho'(x, y, z) = \rho(x, y, z/a)/a$), as well as integrating over new limits $0$ and $aL$ (the new height of the object), thus leaving the total mass unchanged. This means the new moment of inertia will be:
$I'_z = \int_0^{aL} \left[\iint \left(x^2 + y^2\right) \frac{\rho(x, y, z/a)}{a}\,dx\,dy\right] dz = \int_0^{L} \left[\iint \left(x^2 + y^2\right) \rho(x, y, z')\,dx\,dy\right] dz' = I_z$
where the substitution $z' = z/a$ shows that the moment of inertia is unchanged.
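A numerical sanity check of the rule (an added sketch, not part of the original derivation): estimate the moment of inertia about the z-axis of a uniform solid cylinder by Monte Carlo sampling, before and after stretching it along z at fixed total mass. The cylinder geometry and sample count are arbitrary example choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def moment_of_inertia_z(height, radius=1.0, mass=1.0, n=200_000):
    # Uniform random points in the bounding box, kept if inside the cylinder.
    x = rng.uniform(-radius, radius, n)
    y = rng.uniform(-radius, radius, n)
    z = rng.uniform(0.0, height, n)   # the stretch direction
    inside = x**2 + y**2 <= radius**2
    # With the total mass held fixed, I_z = mass * <x^2 + y^2> over the body.
    # The integrand does not involve z, which is exactly why stretching
    # along z leaves I_z unchanged.
    return mass * np.mean(x[inside]**2 + y[inside]**2)

I_short = moment_of_inertia_z(height=1.0)
I_tall = moment_of_inertia_z(height=3.0)   # the same cylinder stretched 3x along z
print(I_short, I_tall)   # both approximate m r^2 / 2 = 0.5
```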
References
Classical mechanics
Moment (physics) | Stretch rule | [
"Physics",
"Mathematics"
] | 241 | [
"Physical quantities",
"Quantity",
"Classical mechanics",
"Mechanics",
"Moment (physics)"
] |
592,346 | https://en.wikipedia.org/wiki/Gregory%20Mathews | Gregory Macalister Mathews CBE FRSE FZS FLS (10 September 1876 – 27 March 1949) was an Australian-born amateur ornithologist who spent most of his later life in England.
Life
He was born in Biamble in New South Wales, the son of Robert H. Mathews. He was educated at The King's School, Parramatta.
Mathews made his fortune in mining shares and moved to England in 1902. In 1910, he was elected a Fellow of the Royal Society of Edinburgh. His proposers were William Eagle Clarke, Ramsay Heatley Traquair, John Alexander Harvie-Brown and William Evans.
Ornithology
Mathews was a controversial figure in Australian ornithology. He was responsible for bringing trinomial nomenclature into local taxonomy, however he was regarded as an extreme splitter. He recognised large numbers of subspecies on scant evidence and few notes. The extinct Lord Howe Pigeon was described by Mathews in 1915, using a painting as a guide. At the time, he named it Raperia godmanae for Alice Mary Godman.
His approach drew a hostile response from Archibald James Campbell, a leading Australian figure in birds at the time. Mathews later began splitting genera as well. Dominic Serventy predicted that although a great many of these subspecies ceased to be recognised, future research would have to resort to the use of some of them if and when evidence supported their distinct status.
He was Chairman of the British Ornithologists' Club from 1935 to 1938. He was made CBE in 1939 for his services to ornithology.
In 1922, Mathews described M. s. musgravei, currently recognised as a subspecies of the splendid fairy-wren, as a new species of bird.
In 1939, he was elected a Fellow of the Royal Australasian Ornithologists Union and served as its president from 1946 to 1947. Mathews built up a collection of 30,000 bird skins and a library of 5,000 books on ornithology. He donated his ornithological library to the National Library of Australia in 1939.
In 1939, Mathews donated a small collection of Aboriginal ethnographic items from Australia to the British Museum.
Family
He married Mrs Marian Wynne, a widow.
He died in Winchester on 27 March 1949.
Publications
Mathews contributed numerous papers to the ornithological literature, especially on avian taxonomy and nomenclature, as well as founding, funding, editing and being the principal contributor to the journal The Austral Avian Record. Monographic or book-length works authored or coauthored by him include:
1908 – The Handlist of the Birds of Australia. (Based on A Handlist of Birds by Bowdler Sharpe).
1910–1927 – The Birds of Australia Witherby: London. (12 volumes, assisted by Tom Iredale).
1912 – The Reference List of the Birds of Australia. (Novitates Zoologicae, 18 January 1912).
1913 – A List of the Birds of Australia. Witherby: London.
1920 – The Name List of the Birds of Australia.
1921 – A Manual of the Birds of Australia. Volume I: Orders Casuarii to Columbae. Witherby: London. (With Tom Iredale. Only one volume published of a projected four).
1924 – The Check-List of the Birds of Australia. Witherby: London. (Comprising Supplements 1-3 of The Birds of Australia).
1925 – The Bibliography of the Birds of Australia. Witherby: London. (Comprising Supplements 4 and 5 of The Birds of Australia).
1927 – Systema Avium Australasianarum. a Systematic List of the Birds of the Australasian Region. BOU: London. (2 volumes).
1928 – The Birds of Norfolk and Lord Howe Islands and the Australian South Polar Quadrant. Witherby: London.
1931 – A List of the Birds of Australasia, Including New Zealand, Lord Howe and Norfolk Islands, and the Australasian Antarctic Quadrant.
1936 – A Supplement to the Birds of Norfolk and Lord Howe Islands to which is Added those Birds of New Zealand not figured by Buller. Witherby: London.
1942 – Birds and Books: the Story of the Mathews Ornithological Library. Verity Hewitt Bookshop: Canberra.
1943 – Notes on the Order Procellariiformes. (With Edward Hallstrom).
1946 – A Working List of Australian Birds, including the Australian Quadrant and New Zealand. Shepherd Press: Sydney.
References
Robin, Libby. (2001). The Flight of the Emu: a hundred years of Australian ornithology 1901-2001. Carlton, Vic. Melbourne University Press.
External links
Find G.M. Mathews in Libraries Australia – click on the name 'Heading' to find related works in 800+ Australian library collections
Illustrations from The birds of Australia
1876 births
1949 deaths
Australian ornithologists
Australian nature writers
Taxon authorities
Australian book and manuscript collectors
Australian emigrants to the United Kingdom
People educated at The King's School, Parramatta | Gregory Mathews | [
"Biology"
] | 1,032 | [
"Taxon authorities",
"Taxonomy (biology)"
] |
592,437 | https://en.wikipedia.org/wiki/Declassification | Declassification is the process of ceasing a protective classification, often under the principle of freedom of information. Procedures for declassification vary by country. Papers may be withheld without being classified as secret, and eventually made available.
United Kingdom
Classified information has been governed by various Official Secrets Acts, the latest being the Official Secrets Act 1989. Until 1989, requested information was routinely kept secret by invoking the public interest defence; this defence was largely removed by the 1989 Act. The Freedom of Information Act 2000 largely requires information to be disclosed unless there are good reasons for secrecy.
Confidential government papers, such as the yearly cabinet papers, were routinely withheld formally (although not necessarily classified as secret) for 30 years under the thirty-year rule and usually released on New Year's Day; freedom of information legislation has relaxed this rigid approach.
United States
Executive Order 13526 establishes the mechanisms for most declassifications, within the laws passed by Congress. The originating agency assigns a declassification date, by default 25 years. After 25 years, declassification review is automatic with nine narrow exceptions that allow information to remain as classified. At 50 years, there are two exceptions, and classifications beyond 75 years require special permission. Because of changes in policy and circumstances, agencies are expected to actively review documents that have been classified for fewer than 25 years. They must also respond to Mandatory Declassification Review and Freedom of Information Act requests. The National Archives and Records Administration houses the National Declassification Center to coordinate reviews and Information Security Oversight Office to promulgate rules and enforce quality measures across all agencies. NARA reviews documents on behalf of defunct agencies and permanently stores declassified documents for public inspection. The Interagency Security Classification Appeals Panel has representatives from several agencies.
See also
Freedom of information
Freedom of information legislation
United States v. Reynolds
References
Classified information
Information privacy
Knowledge economy | Declassification | [
"Engineering"
] | 377 | [
"Cybersecurity engineering",
"Information privacy"
] |
592,486 | https://en.wikipedia.org/wiki/Refrigerant | A refrigerant is a working fluid used in cooling, heating or reverse cooling and heating of air conditioning systems and heat pumps where they undergo a repeated phase transition from a liquid to a gas and back again. Refrigerants are heavily regulated because of their toxicity and flammability and the contribution of CFC and HCFC refrigerants to ozone depletion and that of HFC refrigerants to climate change.
Refrigerants are used in a direct-expansion (DX) circulating system to transfer energy from one environment to another, typically from inside a building to outside (or vice versa). Such systems are commonly known as an air conditioner (cooling-only DX cycle), a reverse-cycle DX system (cooling and heating) or a heat pump (heating-only DX cycle). Refrigerants can carry 10 times more energy per kg than water, and 50 times more than air.
Refrigerants are controlled substances classified by international safety regulations ISO 817/5149, ASHRAE 34/15 and BS EN 378 because of their high pressures, extreme temperatures, flammability (A1 class non-flammable, A2/A2L class flammable and A3 class extremely flammable/explosive) and toxicity (B1 low, B2 medium and B3 high). The regulations relate to situations in which these refrigerants are released into the atmosphere in the event of an accidental leak, not while circulated.
Refrigerants (controlled substances) must only be handled by qualified/certified engineers for the relevant classes (in the UK, C&G 2079 for A1-class and C&G 6187-2 for A2/A2L & A3-class refrigerants).
Because of their non-flammability, non-explosivity and non-toxicity, A1-class refrigerants have also been used in open systems (consumed when used), such as fire extinguishers, inhalers, computer-room fire suppression and insulation, since 1928.
History
The first air conditioners and refrigerators employed toxic or flammable gases, such as ammonia, sulfur dioxide, methyl chloride, or propane, that could result in fatal accidents when they leaked.
In 1928 Thomas Midgley Jr. created the first non-flammable, non-toxic chlorofluorocarbon gas, Freon (R-12). The name is a trademark name owned by DuPont (now Chemours) for any chlorofluorocarbon (CFC), hydrochlorofluorocarbon (HCFC), or hydrofluorocarbon (HFC) refrigerant. Following the discovery of better synthesis methods, CFCs such as R-11, R-12, R-123 and R-502 dominated the market.
Phasing out of CFCs
In the mid-1970s, scientists discovered that CFCs were causing major damage to the ozone layer that protects the earth from ultraviolet radiation, and to the ozone holes over polar regions. This led to the signing of the Montreal Protocol in 1987, which aimed to phase out CFCs and HCFCs but did not address the contributions that HFCs made to climate change. The adoption of HCFCs such as R-22 and R-123 was accelerated, and they were used in most U.S. homes in air conditioners and in chillers from the 1980s, as they have a dramatically lower ozone depletion potential (ODP) than CFCs; but their ODP was still not zero, which led to their eventual phase-out.
Hydrofluorocarbons (HFCs) such as R-134a, R-407A, R-407C, R-404A, R-410A (a 50/50 blend of R-125/R-32) and R-507 were promoted as replacements for CFCs and HCFCs in the 1990s and 2000s. HFCs were not ozone-depleting but did have global warming potentials (GWPs) thousands of times greater than CO2 with atmospheric lifetimes that can extend for decades. This in turn, starting from the 2010s, led to the adoption in new equipment of Hydrocarbon and HFO (hydrofluoroolefin) refrigerants R-32, R-290, R-600a, R-454B, R-1234yf, R-514A, R-744 (), R-1234ze(E) and R-1233zd(E), which have both an ODP of zero and a lower GWP. Hydrocarbons and are sometimes called natural refrigerants because they can be found in nature.
The environmental organization Greenpeace provided funding to a former East German refrigerator company to research alternative ozone- and climate-safe refrigerants in 1992. The company developed a hydrocarbon mixture of propane and isobutane, or pure isobutane, called "Greenfreeze", but as a condition of the contract with Greenpeace could not patent the technology, which led to widespread adoption by other firms. However, corporate executives resisted the change through policy and political influence, citing the flammability and explosive properties of the refrigerants, and DuPont, together with other companies, blocked them in the U.S. with the U.S. EPA.
Beginning on 14 November 1994, the U.S. Environmental Protection Agency restricted the sale, possession and use of refrigerants to only licensed technicians, per rules under sections 608 and 609 of the Clean Air Act. In 1995, Germany made CFC refrigerators illegal.
In 1996 Eurammon, a European non-profit initiative for natural refrigerants, was established and comprises European companies, institutions, and industry experts.
In 1997, FCs and HFCs were included in the Kyoto Protocol to the Framework Convention on Climate Change.
In 2000 in the UK, the Ozone Regulations came into force which banned the use of ozone-depleting HCFC refrigerants such as R22 in new systems. The Regulation banned the use of R22 as a "top-up" fluid for maintenance from 2010 for virgin fluid and from 2015 for recycled fluid.
Addressing greenhouse gases
With growing interest in natural refrigerants as alternatives to synthetic refrigerants such as CFCs, HCFCs and HFCs, in 2004, Greenpeace worked with multinational corporations like Coca-Cola and Unilever, and later Pepsico and others, to create a corporate coalition called Refrigerants Naturally!. Four years later, Ben & Jerry's of Unilever and General Electric began to take steps to support production and use in the U.S. It is estimated that almost 75 percent of the refrigeration and air conditioning sector has the potential to be converted to natural refrigerants.
In 2006, the EU adopted a Regulation on fluorinated greenhouse gases (FCs and HFCs) to encourage the transition to natural refrigerants (such as hydrocarbons). It was reported in 2010 that some refrigerants were being used as recreational drugs, leading to an extremely dangerous phenomenon known as inhalant abuse.
From 2011 the European Union started to phase out refrigerants with a global warming potential (GWP) of more than 150 in automotive air conditioning (GWP = 100-year warming potential of one kilogram of a gas relative to one kilogram of CO2) such as the refrigerant HFC-134a (known as R-134a in North America) which has a GWP of 1526. In the same year the EPA decided in favour of the ozone- and climate-safe refrigerant for U.S. manufacture.
A 2018 study by the nonprofit organization "Drawdown" put proper refrigerant management and disposal at the very top of the list of climate impact solutions, with an impact equivalent to eliminating over 17 years of US carbon dioxide emissions.
In 2019 it was estimated that CFCs, HCFCs and HFCs were responsible for about 10% of direct radiative forcing from all long-lived anthropogenic greenhouse gases, and in the same year the UNEP published new voluntary guidelines; however, many countries have not yet ratified the Kigali Amendment.
From early 2020 HFCs (including R-404A, R-134a and R-410A) are being superseded: residential air-conditioning systems and heat pumps increasingly use R-32, though this still has a GWP of more than 600. Progressive devices use refrigerants with almost no climate impact, namely R-290 (propane), R-600a (isobutane) or R-1234yf (less flammable, in cars). In commercial refrigeration, CO2 (R-744) can also be used.
Requirements and desirable properties
A refrigerant needs to have: a boiling point that is somewhat below the target temperature (although boiling point can be adjusted by adjusting the pressure appropriately), a high heat of vaporization, a moderate density in liquid form, a relatively high density in gaseous form (which can also be adjusted by setting pressure appropriately), and a high critical temperature. Working pressures should ideally be containable by copper tubing, a commonly available material. Extremely high pressures should be avoided.
The ideal refrigerant would be: non-corrosive, non-toxic, non-flammable, with no ozone depletion and global warming potential. It should preferably be natural with well-studied and low environmental impact. Newer refrigerants address the issue of the damage that CFCs caused to the ozone layer and the contribution that HCFCs make to climate change, but some do raise issues relating to toxicity and/or flammability.
Common refrigerants
Refrigerants with very low climate impact
With increasing regulations, refrigerants with a very low global warming potential are expected to play a dominant role in the 21st century, in particular R-290 and R-1234yf. Starting from almost no market share in 2018, low-GWP devices are gaining market share in 2022.
Most used
Banned / Phased out
Other
Refrigerant reclamation and disposal
Coolant and refrigerants are found throughout the industrialized world, in homes, offices, and factories, in devices such as refrigerators, air conditioners, central air conditioning systems (HVAC), freezers, and dehumidifiers. When these units are serviced, there is a risk that refrigerant gas will be vented into the atmosphere either accidentally or intentionally, hence the creation of technician training and certification programs in order to ensure that the material is conserved and managed safely. Mistreatment of these gases has been shown to deplete the ozone layer and is suspected to contribute to global warming.
With the exception of isobutane and propane (R600a, R441A and R290), ammonia and CO2 under Section 608 of the United States' Clean Air Act it is illegal to knowingly release any refrigerants into the atmosphere.
Refrigerant reclamation is the act of processing used refrigerant gas which has previously been used in some type of refrigeration loop such that it meets specifications for new refrigerant gas. In the United States, the Clean Air Act of 1990 requires that used refrigerant be processed by a certified reclaimer, which must be licensed by the United States Environmental Protection Agency (EPA), and the material must be recovered and delivered to the reclaimer by EPA-certified technicians.
Classification of refrigerants
Refrigerants may be divided into three classes according to their manner of absorption or extraction of heat from the substances to be refrigerated:
Class 1: This class includes refrigerants that cool by phase change (typically boiling), using the refrigerant's latent heat.
Class 2: These refrigerants cool by temperature change or 'sensible heat', the quantity of heat being the specific heat capacity x the temperature change. They are air, calcium chloride brine, sodium chloride brine, alcohol, and similar nonfreezing solutions. The purpose of Class 2 refrigerants is to receive a reduction of temperature from Class 1 refrigerants and convey this lower temperature to the area to be cooled.
Class 3: This group consists of solutions that contain absorbed vapors of liquefiable agents or refrigerating media. These solutions function by nature of their ability to carry liquefiable vapors, which produce a cooling effect by the absorption of their heat of solution. They can also be classified into many categories.
R numbering system
The R-numbering system was developed by DuPont (which owned the Freon trademark), and systematically identifies the molecular structure of refrigerants made with a single halogenated hydrocarbon. ASHRAE has since set guidelines for the numbering system as follows:
R-X1X2X3X4
X1 = Number of unsaturated carbon-carbon bonds (omit if zero)
X2 = Number of carbon atoms minus 1 (omit if zero)
X3 = Number of hydrogen atoms plus 1
X4 = Number of fluorine atoms
Series
R-xx Methane Series
R-1xx Ethane Series
R-2xx Propane Series
R-4xx Zeotropic blend
R-5xx Azeotropic blend
R-6xx Saturated hydrocarbons (except for propane which is R-290)
R-7xx Inorganic Compounds with a molar mass < 100
R-7xxx Inorganic Compounds with a molar mass ≥ 100
Ethane Derived Chains
Number only: most symmetrical isomer
Lower-case suffix (a, b, c, etc.): indicates increasingly unsymmetrical isomers
Propane Derived Chains
Number only: if only one isomer exists; otherwise:
First lower-case suffix (a–f):
a: Cl2 central carbon substitution
b: Cl, F central carbon substitution
c: F2 central carbon substitution
d: Cl, H central carbon substitution
e: F, H central carbon substitution
f: H2 central carbon substitution
Second lower-case suffix (a, b, c, etc.): indicates increasingly unsymmetrical isomers
Propene derivatives
First lower-case suffix (x, y, z):
x: Cl substitution on central atom
y: F substitution on central atom
z: H substitution on central atom
Second lower-case suffix (a–f):
a: =CCl2 methylene substitution
b: =CClF methylene substitution
c: =CF2 methylene substitution
d: =CHCl methylene substitution
e: =CHF methylene substitution
f: =CH2 methylene substitution
Blends
Uppercase suffix (A, B, C, etc.): same blend with different compositions of refrigerants
Miscellaneous
R-Cxxx: cyclic compound
R-Exxx: an ether group is present
R-CExxx: cyclic compound with an ether group
R-4xx/5xx + uppercase suffix (A, B, C, etc.): same blend with different compositions of refrigerants
R-6xx + lowercase letter: indicates increasingly unsymmetrical isomers
R-7xx/7xxx + uppercase letter: same molar mass, different compound
R-xxxxB#: bromine is present, with the number after B indicating how many bromine atoms
R-xxxxI#: iodine is present, with the number after I indicating how many iodine atoms
R-xxx(E): trans molecule
R-xxx(Z): cis molecule
For example, R-134a has 2 carbon atoms, 2 hydrogen atoms, and 4 fluorine atoms, giving the empirical formula C2H2F4 (tetrafluoroethane). The "a" suffix indicates that the isomer is unbalanced by one atom, giving 1,1,1,2-tetrafluoroethane; R-134 (without the "a" suffix) would denote the more symmetrical isomer, 1,1,2,2-tetrafluoroethane.
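As an illustration, the digits can be decoded mechanically. The following Python sketch is a hypothetical helper (not part of any standard); it ignores isomer suffixes, bromine, and iodine, and recovers atom counts for simple single-compound refrigerants by filling the remaining carbon bonds with chlorine:

```python
def decode_r_number(number: int) -> dict:
    """Decode a simple R-number (single halogenated hydrocarbon,
    no bromine/iodine; isomer suffix ignored) into atom counts."""
    x4 = number % 10            # fluorine atoms
    x3 = (number // 10) % 10    # hydrogen atoms + 1
    x2 = (number // 100) % 10   # carbon atoms - 1
    x1 = (number // 1000) % 10  # unsaturated carbon-carbon bonds

    carbon = x2 + 1
    hydrogen = x3 - 1
    fluorine = x4
    # A saturated molecule with C carbons has 2C + 2 substituent
    # positions; each unsaturated bond removes two of them.
    chlorine = 2 * carbon + 2 - 2 * x1 - hydrogen - fluorine
    return {"C": carbon, "H": hydrogen, "F": fluorine,
            "Cl": chlorine, "double_bonds": x1}

# R-134 -> C2H2F4 (tetrafluoroethane)
print(decode_r_number(134))  # {'C': 2, 'H': 2, 'F': 4, 'Cl': 0, 'double_bonds': 0}
# R-22 -> CHClF2 (chlorodifluoromethane)
print(decode_r_number(22))   # {'C': 1, 'H': 1, 'F': 2, 'Cl': 1, 'double_bonds': 0}
```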
The same numbers are used with an R- prefix for generic refrigerants, with a "Propellant" prefix (e.g., "Propellant 12") for the same chemical used as a propellant for an aerosol spray, and with trade names for the compounds, such as "Freon 12". Recently, a practice of using abbreviations HFC- for hydrofluorocarbons, CFC- for chlorofluorocarbons, and HCFC- for hydrochlorofluorocarbons has arisen, because of the regulatory differences among these groups.
Refrigerant safety
ASHRAE Standard 34, Designation and Safety Classification of Refrigerants, assigns safety classifications to refrigerants based upon toxicity and flammability.
Using safety information provided by producers, ASHRAE assigns a capital letter to indicate toxicity and a number to indicate flammability: the letter "A" denotes the lowest toxicity, and the number 1 denotes the lowest flammability.
See also
Brine (Refrigerant)
Section 608
List of Refrigerants
References
Sources
IPCC reports
Fifth Assessment Report - Climate Change 2013
Other
External links
US Environmental Protection Agency page on the GWPs of various substances
Green Cooling Initiative on alternative natural refrigerants cooling technologies
International Institute of Refrigeration
Heating, ventilation, and air conditioning
Industrial gases | Refrigerant | [
"Chemistry"
] | 3,669 | [
"Chemical process engineering",
"Industrial gases"
] |
592,505 | https://en.wikipedia.org/wiki/Padding%20%28cryptography%29 | In cryptography, padding is any of a number of distinct practices which all include adding data to the beginning, middle, or end of a message prior to encryption. In classical cryptography, padding may include adding nonsense phrases to a message to obscure the fact that many messages end in predictable ways, e.g. sincerely yours.
Classical cryptography
Official messages often start and end in predictable ways: My dear ambassador, Weather report, Sincerely yours, etc. The primary use of padding with classical ciphers is to prevent the cryptanalyst from using that predictability to find known plaintext that aids in breaking the encryption. Random length padding also prevents an attacker from knowing the exact length of the plaintext message.
A famous example of classical padding which caused a great misunderstanding is "the world wonders" incident, which nearly caused an Allied loss at the World War II Battle off Samar, part of the larger Battle of Leyte Gulf. In that example, Admiral Chester Nimitz, the Commander in Chief, U.S. Pacific Fleet in WWII, sent the following message, asking for the whereabouts of Task Force Thirty Four, to Admiral Bull Halsey, commander of the U.S. Third Fleet, at the Battle of Leyte Gulf on October 25, 1944:
With padding (bolded) and metadata added, the message became:
Halsey's radio operator mistook some of the padding for the message and so Admiral Halsey ended up reading the following message:
Admiral Halsey interpreted the padding phrase "the world wonders" as a sarcastic reprimand, which caused him to have an emotional outburst and then lock himself in his bridge and sulk for an hour before he moved his forces to assist at the Battle off Samar. Halsey's radio operator should have been tipped off by the letters RR that "the world wonders" was padding; all other radio operators who received Admiral Nimitz's message correctly removed both padding phrases.
Many classical ciphers arrange the plaintext into particular patterns (e.g., squares, rectangles, etc.) and if the plaintext does not exactly fit, it is often necessary to supply additional letters to fill out the pattern. Using nonsense letters for this purpose has a side benefit of making some kinds of cryptanalysis more difficult.
Symmetric cryptography
Hash functions
Most modern cryptographic hash functions process messages in fixed-length blocks; all but the earliest hash functions include some sort of padding scheme. It is critical for cryptographic hash functions to employ termination schemes that prevent a hash from being vulnerable to length extension attacks.
Many padding schemes are based on appending predictable data to the final block. For example, the pad could be derived from the total length of the message. This kind of padding scheme is commonly applied to hash algorithms that use the Merkle–Damgård construction, such as MD5, SHA-1, and the SHA-2 family, including SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, and SHA-512/256.
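As an illustration, the following Python sketch shows this length-padding step as used by SHA-1/SHA-256 (MD5 differs only in encoding the length little-endian); the function name is illustrative:

```python
def md_pad(message: bytes, block_size: int = 64,
           length_bytes: int = 8, byteorder: str = "big") -> bytes:
    """Merkle-Damgard strengthening: append a 0x80 byte, then zeros,
    then the original length in bits, so the result is a whole
    number of blocks."""
    bit_length = len(message) * 8
    padded = message + b"\x80"
    # Add zeros until exactly length_bytes remain in the final block.
    while (len(padded) + length_bytes) % block_size != 0:
        padded += b"\x00"
    return padded + bit_length.to_bytes(length_bytes, byteorder)

assert len(md_pad(b"abc")) == 64
assert len(md_pad(b"a" * 56)) == 128  # no room for the length: extra block
```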
Block cipher mode of operation
Cipher-block chaining (CBC) is an example of a block cipher mode of operation. Some block cipher modes (CBC and PCBC essentially) for symmetric-key encryption algorithms require plaintext input that is a multiple of the block size, so messages may have to be padded to bring them to this length.
There is currently a shift toward streaming modes of operation instead of block modes of operation. An example of streaming-mode encryption is the counter mode of operation. Streaming modes of operation can encrypt and decrypt messages of any size and therefore do not require padding. More intricate ways of ending a message, such as ciphertext stealing or residual block termination, avoid the need for padding.
A disadvantage of padding is that it makes the plaintext of the message susceptible to padding oracle attacks. Padding oracle attacks allow the attacker to gain knowledge of the plaintext without attacking the block cipher primitive itself. Padding oracle attacks can be avoided by making sure that an attacker cannot gain knowledge about the removal of the padding bytes. This can be accomplished by verifying a message authentication code (MAC) or digital signature before removal of the padding bytes, or by switching to a streaming mode of operation.
Bit padding
Bit padding can be applied to messages of any size.
A single '1' bit is added to the message and then as many '0' bits as required (possibly none) are added. The number of '0' bits added will depend on the block boundary to which the message needs to be extended. In bit terms this is "1000 ... 0000".
This method can be used to pad messages which are any number of bits long, not necessarily a whole number of bytes long. For example, a message of 23 bits that is padded with 9 bits in order to fill a 32-bit block:
... | 1011 1001 1101 0100 0010 0111 0000 0000 |
This padding is the first step of a two-step padding scheme used in many hash functions including MD5 and SHA. In this context, it is specified by RFC1321 step 3.1.
This padding scheme is defined by ISO/IEC 9797-1 as Padding Method 2.
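A minimal Python sketch of this scheme, operating on a string of '0'/'1' characters for clarity (the function name is illustrative):

```python
def bit_pad(bits: str, block_bits: int) -> str:
    """ISO/IEC 9797-1 Padding Method 2: append a single '1' bit,
    then '0' bits up to the next block boundary."""
    padded = bits + "1"
    padded += "0" * (-len(padded) % block_bits)
    return padded

# The 23-bit message from the example above, padded to a 32-bit block:
print(bit_pad("10111001110101000010011", 32))
# -> '10111001110101000010011100000000'
```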
Byte padding
Byte padding can be applied to messages that can be encoded as an integral number of bytes.
ANSI X9.23
In ANSI X9.23, between 1 and 8 bytes are always added as padding. The block is padded with random bytes (although many implementations use 00) and the last byte of the block is set to the number of bytes added.
Example:
In the following example the block size is 8 bytes, and padding is required for 4 bytes (in hexadecimal format)
... | DD DD DD DD DD DD DD DD | DD DD DD DD 00 00 00 04 |
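A minimal Python sketch of this scheme (function names are illustrative; random fill bytes are used here, though many implementations use zeros):

```python
import os

def x923_pad(data: bytes, block_size: int = 8) -> bytes:
    """ANSI X9.23: fill bytes, with the count of added bytes in the
    last byte.  Between 1 and block_size bytes are always added."""
    n = block_size - (len(data) % block_size)
    return data + os.urandom(n - 1) + bytes([n])

def x923_unpad(data: bytes) -> bytes:
    # The last byte says how many bytes to strip.
    return data[:-data[-1]]

padded = x923_pad(b"\xDD" * 12, 8)   # 4 bytes of padding added
assert x923_unpad(padded) == b"\xDD" * 12
```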
ISO 10126
ISO 10126 (withdrawn, 2007) specifies that the padding should be done at the end of that last block with random bytes, and the padding boundary should be specified by the last byte.
Example:
In the following example the block size is 8 bytes and padding is required for 4 bytes
... | DD DD DD DD DD DD DD DD | DD DD DD DD 81 A6 23 04 |
PKCS#5 and PKCS#7
PKCS#7 is described in RFC 5652.
Padding is in whole bytes. The value of each added byte is the number of bytes that are added, i.e. N bytes, each of value N, are added. The number N of bytes added will depend on the block boundary to which the message needs to be extended.
The padding will be one of:
01
02 02
03 03 03
04 04 04 04
05 05 05 05 05
06 06 06 06 06 06
etc.
This padding method (as well as the previous two) is well-defined if and only if N is less than 256.
Example:
In the following example, the block size is 8 bytes and padding is required for 4 bytes
... | DD DD DD DD DD DD DD DD | DD DD DD DD 04 04 04 04 |
If the length of the original data is an integer multiple of the block size B, then an extra block of B bytes, each of value B, is added. This is necessary so the deciphering algorithm can determine with certainty whether the last byte of the last block is a pad byte indicating the number of padding bytes added or part of the plaintext message. Consider a plaintext message that is an integer multiple of B bytes with the last byte of plaintext being 01. With no additional information, the deciphering algorithm will not be able to determine whether the last byte is a plaintext byte or a pad byte. However, by adding B bytes, each of value B, after the 01 plaintext byte, the deciphering algorithm can always treat the last byte as a pad byte and strip the appropriate number of pad bytes off the end of the ciphertext, the number of bytes to be stripped being given by the value of the last byte.
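A minimal Python sketch of PKCS#7 padding and its removal (function names are illustrative; a real implementation must take care that the validity check does not become a padding oracle):

```python
def pkcs7_pad(data: bytes, block_size: int = 16) -> bytes:
    """PKCS#7: add n bytes, each of value n (1 <= n <= block_size).
    Only well-defined for block sizes below 256."""
    n = block_size - (len(data) % block_size)
    return data + bytes([n]) * n

def pkcs7_unpad(data: bytes) -> bytes:
    n = data[-1]
    if n < 1 or n > len(data) or data[-n:] != bytes([n]) * n:
        raise ValueError("invalid padding")  # beware padding oracles
    return data[:-n]

assert pkcs7_pad(b"\xDD" * 4, 8) == b"\xDD" * 4 + b"\x04" * 4
# A message that is already a whole number of blocks gets a full
# extra block of padding:
assert pkcs7_unpad(pkcs7_pad(b"\xDD" * 8, 8)) == b"\xDD" * 8
```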
PKCS#5 padding is identical to PKCS#7 padding, except that it has only been defined for block ciphers that use a 64-bit (8-byte) block size. In practice, the two can be used interchangeably.
The maximum block size is 255, as it is the biggest number a byte can contain.
ISO/IEC 7816-4
ISO/IEC 7816-4:2005 is identical to the bit padding scheme, applied to a plain text of N bytes. This means in practice that the first byte is a mandatory byte valued '80' (Hexadecimal) followed, if needed, by 0 to N − 1 bytes set to '00', until the end of the block is reached. ISO/IEC 7816-4 itself is a communication standard for smart cards containing a file system, and in itself does not contain any cryptographic specifications.
Example:
In the following example the block size is 8 bytes and padding is required for 4 bytes
... | DD DD DD DD DD DD DD DD | DD DD DD DD 80 00 00 00 |
The next example shows a padding of just one byte
... | DD DD DD DD DD DD DD DD | DD DD DD DD DD DD DD 80 |
Zero padding
All the bytes that are required to be padded are padded with zero. The zero padding scheme has not been standardized for encryption, although it is specified for hashes and MACs as Padding Method 1 in ISO/IEC 10118-1 and ISO/IEC 9797-1.
Example:
In the following example the block size is 8 bytes and padding is required for 4 bytes
... | DD DD DD DD DD DD DD DD | DD DD DD DD 00 00 00 00 |
Zero padding may not be reversible if the original file ends with one or more zero bytes, making it impossible to distinguish between plaintext data bytes and padding bytes. It may be used when the length of the message can be derived out-of-band. It is often applied to binary encoded strings (null-terminated string) as the null character can usually be stripped off as whitespace.
Zero padding is sometimes also referred to as "null padding" or "zero byte padding". Some implementations may add an additional block of zero bytes if the plaintext is already divisible by the block size.
Public key cryptography
In public key cryptography, padding is the process of preparing a message for encryption or signing using a specification or scheme such as PKCS#1 v2.2, OAEP, PSS, PSSR, IEEE P1363 EMSA2 and EMSA5. A modern form of padding for asymmetric primitives is OAEP applied to the RSA algorithm, when it is used to encrypt a limited number of bytes.
The operation is referred to as "padding" because originally, random material was simply appended to the message to make it long enough for the primitive. This form of padding is not secure and is therefore no longer applied. A modern padding scheme aims to ensure that the attacker cannot manipulate the plaintext to exploit the mathematical structure of the primitive and will usually be accompanied by a proof, often in the random oracle model, that breaking the padding scheme is as hard as solving the hard problem underlying the primitive.
Traffic analysis and protection via padding
Even if perfect cryptographic routines are used, the attacker can gain knowledge of the amount of traffic that was generated. The attacker might not know what Alice and Bob were talking about, but can know that they were talking and how much they talked. In some circumstances this leakage can be highly compromising. Consider for example when a military is organising a secret attack against another nation: it may suffice to alert the other nation for them to know merely that there is a lot of secret activity going on.
As another example, when encrypting Voice Over IP streams that use variable bit rate encoding, the number of bits per unit of time is not obscured, and this can be exploited to guess spoken phrases. Similarly, the burst patterns that common video encoders produce are often sufficient to identify the streaming video a user is watching uniquely. Even the total size of an object alone, such as a website, file, software package download, or online video, can uniquely identify an object, if the attacker knows or can guess a known set the object comes from. The side-channel of encrypted content length was used to extract passwords from HTTPS communications in the well-known CRIME and BREACH attacks.
Padding an encrypted message can make traffic analysis harder by obscuring the true length of its payload. The choice of length to pad a message to may be made either deterministically or randomly; each approach has strengths and weaknesses that apply in different contexts.
Randomized padding
A random number of additional padding bits or bytes may be appended to the end of a message, together with an indication at the end of how much padding was added. If the amount of padding is chosen as a uniform random number between 0 and some maximum M, for example, then an eavesdropper will be unable to determine the message's length precisely within that range. If the maximum padding M is small compared to the message's total size, then this padding will not add much overhead, but the padding will obscure only the least-significant bits of the object's total length, leaving the approximate length of large objects readily observable and hence still potentially uniquely identifiable by their length. If the maximum padding M is comparable to the size of the payload, in contrast, an eavesdropper's uncertainty about the message's true payload size is much larger, at the cost that padding may add up to 100% overhead (a 2× blow-up) to the message.
In addition, in common scenarios in which an eavesdropper has the opportunity to see many successive messages from the same sender, and those messages are similar in ways the attacker knows or can guess, then the eavesdropper can use statistical techniques to decrease and eventually even eliminate the benefit of randomized padding. For example, suppose a user's application regularly sends messages of the same length, and the eavesdropper knows or can guess that fact, based for example on fingerprinting the user's application. Alternatively, an active attacker might be able to induce an endpoint to send messages regularly, such as if the victim is a public server. In such cases, the eavesdropper can simply compute the average over many observations to determine the length of the regular message's payload.
Deterministic padding
A deterministic padding scheme always pads a message payload of a given length to form an encrypted message of a particular corresponding output length. When many payload lengths map to the same padded output length, an eavesdropper cannot distinguish or learn any information about the payload's true length within one of these length buckets, even after many observations of the identical-length messages being transmitted. In this respect, deterministic padding schemes have the advantage of not leaking any additional information with each successive message of the same payload size.
On the other hand, suppose an eavesdropper can benefit from learning about small variations in payload size, such as plus or minus just one byte in a password-guessing attack for example. If the message sender is unlucky enough to send many messages whose payload lengths vary by only one byte, and that length is exactly on the border between two of the deterministic padding classes, then these plus-or-minus one payload lengths will consistently yield different padded lengths as well (plus-or-minus one block for example), leaking exactly the fine-grained information the attacker desires. Against such risks, randomized padding can offer more protection by independently obscuring the least-significant bits of message lengths.
Common deterministic padding methods include padding to a constant block size and padding to the next-larger power of two. Like randomized padding with a small maximum amount M, however, padding deterministically to a block size much smaller than the message payload obscures only the least-significant bits of the message's true length, leaving the message's true approximate length largely unprotected. Padding messages to a power of two (or any other fixed base) reduces the maximum amount of information that the message can leak via its length from O(log M) to O(log log M). Padding to a power of two increases message size overhead by up to 100%, however, and padding to powers of larger integer bases increases maximum overhead further.
The PADMÉ scheme, proposed for padded uniform random blobs or PURBs, deterministically pads messages to lengths representable as a floating point number whose mantissa is no longer (i.e., contains no more significant bits) than its exponent. This length constraint ensures that a message leaks at most O(log log M) bits of information via its length, like padding to a power of two, but incurs much less overhead: at most 12% for tiny messages, decreasing gradually with message size.
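A Python sketch of the padded-length computation, following the published description of PADMÉ (names are illustrative; this is not a reference implementation):

```python
def padme_length(L: int) -> int:
    """Round a payload length L up so that only the most significant
    floor(log2(E)) + 1 bits of the length remain significant, where
    E = floor(log2(L))."""
    if L < 2:
        return L
    E = L.bit_length() - 1   # floor(log2(L))
    S = E.bit_length()       # floor(log2(E)) + 1
    low_bits = E - S         # number of low bits forced to zero
    mask = (1 << low_bits) - 1
    return (L + mask) & ~mask

for L in (9, 1000, 1_000_000):
    print(L, "->", padme_length(L))  # e.g. 1000 -> 1024
```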
See also
Chaffing and winnowing, mixing in large amounts of nonsense before sending
Ciphertext stealing, another approach to deal with messages that are not a multiple of the block length
Initialization vector, salt (cryptography), which are sometimes confused with padding
Key encapsulation, an alternative to padding for public key systems used to exchange symmetric keys
PURB or padded uniform random blob, an encryption discipline that minimizes leakage from either metadata or length
Russian copulation, another technique to prevent cribs
References
Further reading
XCBC: csrc.nist.gov/groups/ST/toolkit/BCM/documents/workshop2/presentations/xcbc.pdf
Cryptography
Padding algorithms | Padding (cryptography) | [
"Mathematics",
"Engineering"
] | 3,709 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
592,513 | https://en.wikipedia.org/wiki/Superlubricity | Superlubricity is a regime of relative motion in which friction vanishes or very nearly vanishes. However, the definition of "vanishing" friction level is not clear, which makes the term vague. As an ad hoc definition, a kinetic coefficient of friction less than 0.01 can be adopted. This definition also requires further discussion and clarification.
Superlubricity may occur when two crystalline surfaces slide over each other in dry incommensurate contact. This was first described in the early 1980s for Frenkel–Kontorova models and is called the Aubry transition. It has been extensively studied as a mathematical model, in atomistic simulations and in a range of experimental systems.
This effect, also called structural lubricity, was verified between two graphite surfaces in 2004.
The atoms in graphite are oriented in a hexagonal manner and form an atomic hill-and-valley landscape, which looks like an egg-crate. When the two graphite surfaces are in registry (every 60 degrees), the friction force is high. When the two surfaces are rotated out of registry, the friction is greatly reduced. This is like two egg-crates which can slide over each other more easily when they are "twisted" with respect to each other.
Observation of superlubricity in microscale graphite structures was reported in 2012, by shearing a square graphite mesa a few micrometers across, and observing the self-retraction of the sheared layer. Such effects were also theoretically described for a model of graphene and nickel layers. This observation, which is reproducible even under ambient conditions, shifts interest in superlubricity from a primarily academic topic, accessible only under highly idealized conditions, to one with practical implications for micro and nanomechanical devices.
A state of ultralow friction can also be achieved when a sharp tip slides over a flat surface and the applied load is below a certain threshold. Such a "superlubric" threshold depends on the tip-surface interaction and the stiffness of the materials in contact, as described by the Tomlinson model.
The threshold can be significantly increased by exciting the sliding system at its resonance frequency, which suggests a practical way to limit wear in nanoelectromechanical systems.
Superlubricity has also been observed between a gold AFM tip and a Teflon substrate, due to repulsive van der Waals forces, and between steel surfaces lubricated by glycerol, due to a hydrogen-bonded layer formed on the steel. Formation of a hydrogen-bonded layer was also shown to lead to superlubricity between quartz glass surfaces lubricated by biological liquid obtained from mucilage of Brasenia schreberi. Other mechanisms of superlubricity may include: (a) thermodynamic repulsion due to a layer of free or grafted macromolecules between the bodies, so that the entropy of the intermediate layer decreases at small distances due to stronger confinement; (b) electrical repulsion due to external electrical voltage; (c) repulsion due to the electrical double layer; (d) repulsion due to thermal fluctuations.
The similarity of the term superlubricity to terms such as superconductivity and superfluidity is misleading; other energy dissipation mechanisms can lead to a finite (normally small) friction force. Superlubricity is more analogous to phenomena such as superelasticity, in which substances such as Nitinol have very low, but nonzero, elastic moduli; supercooling, in which substances remain liquid below their normal freezing temperature; super black, which reflects very little light; giant magnetoresistance, in which very large but finite magnetoresistance effects are observed in alternating nonmagnetic and ferromagnetic layers; superhard materials, which are as hard as, or nearly as hard as, diamond; and superlenses, which have a resolution that, while finer than the diffraction limit, is still finite.
Macroscale
In 2015, researchers first obtained evidence for superlubricity at the macroscale. The experiments were supported by computational studies. The Mira supercomputer simulated up to 1.2 million atoms for dry environments and up to 10 million atoms for humid environments. The researchers used LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) code to carry out reactive molecular dynamics simulations. The researchers optimized LAMMPS and its implementation of ReaxFF by adding OpenMP threading, replacing MPI point-to-point communication with MPI collectives in key algorithms, and leveraging MPI I/O. These enhancements doubled performance.
Applications
Friction is known to be a major consumer of energy; for instance in a detailed study it was found that it may lead to one third of the energy losses in new automobile engines. Superlubricious coatings could reduce this. Potential applications include computer hard drives, wind turbine gears, and mechanical rotating seals for microelectromechanical and nanoelectromechanical systems.
See also
Friction force microscopy
Tomlinson model
References
Condensed matter physics | Superlubricity | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,047 | [
"Phases of matter",
"Condensed matter physics",
"Matter",
"Materials science"
] |
592,558 | https://en.wikipedia.org/wiki/WiX | Windows Installer XML Toolset (WiX, pronounced "wicks") is a free software toolset that builds Windows Installer packages from XML. It consists of a command-line environment that developers may integrate into their build processes to build MSI and MSM packages. WiX was the first Microsoft project to be released under an open-source license, the Common Public License. It was also the first Microsoft project to be hosted on an external website.
After its release in 2004, Microsoft has used WiX to package Office 2007, SQL Server 2005, Visual Studio 2005/2008, and other products.
WiX includes Votive, a Visual Studio add-in that allows creating and building WiX setup projects using the Visual Studio IDE. Votive supports syntax highlighting and IntelliSense for source files and adds a WiX setup project type to Visual Studio.
History
WiX was the first Microsoft project to be released under an open-source license, the Common Public License. Initially hosted on SourceForge, it was also the first Microsoft project to be hosted externally.
On June 6, 2010, WiX moved from SourceForge to CodePlex. On August 14, 2012, Microsoft transferred the WiX copyright to the Microsoft-sponsored Outercurve Foundation. At the same time, the license was changed from the Common Public License to the Microsoft Reciprocal License. On May 4, 2016, WiX was transferred to the .NET Foundation.
With Visual Studio 2012, the traditional setup project type was removed from Visual Studio (it has been available only as an extension since Visual Studio 2013); WiX is a recommended alternative.
Functions
WiX is a toolset designed to build Windows Installer (.msi) packages using the command line. It comes with the following tools:
Candle: compiles source files into object files
Light: combines object files into a .msi file
Lit: creates libraries that can be linked by Light.exe
Dark: decompiles a .msi file into WiX code
Heat: creates a WiX source file
Pyro: creates Patch files (.msp) without needing the Windows Installer SDK
Burn: a bootstrapper that chains multiple installation packages and their prerequisites into a single installer
See also
List of installation software
Shared Source Initiative
References
External links
Interview with Rob Mensching of Microsoft's WiX Project
C Sharp software
Free and open-source software
Free installation software
Free software programmed in C++
Free software programmed in C Sharp
Free software projects
Microsoft development tools
Microsoft free software
Windows-only free software
XML-based standards
2004 software | WiX | [
"Technology"
] | 514 | [
"Computer standards",
"XML-based standards"
] |
592,613 | https://en.wikipedia.org/wiki/YCbCr | YCbCr, Y′CbCr, or Y Pb/Cb Pr/Cr, also written as YCBCR or Y′CBCR, is a family of color spaces used as a part of the color image pipeline in digital video and photography systems.
Y′ is the luma component and CB and CR are the blue-difference and red-difference chroma components. Luma Y′ (with prime) is distinguished from luminance Y, meaning that light intensity is nonlinearly encoded based on gamma corrected RGB primaries.
Y′CbCr color spaces are defined by a mathematical coordinate transformation from an associated RGB color space with given primaries and white point. If the underlying RGB color space is absolute, the Y′CbCr color space is an absolute color space as well; conversely, if the RGB space is ill-defined, so is Y′CbCr. The transformation is defined in equations 32 and 33 of ITU-T H.273.
Rationale
Black and white television was in wide use before color television. Due to the number of existing TV sets and cameras, some form of backwards compatibility was desired for the new color broadcasts. French engineer Georges Valensi developed and patented a system for transmitting RGB color as luma and chroma signals in 1938. This would allow existing black and white televisions to process only the luma information and ignore the chroma, essentially packaging a black and white video within the color video. Because of this backwards compatibility, the system based on Valensi's idea was called compatible color. In the same way, a black-and-white broadcast could be received by a color television without any additional processing circuitry. To preserve existing broadcast frequency allocations, the new chroma information was given lower bandwidth than the luma information. This is possible because humans are more sensitive to black-and-white information than to chroma. This is called chroma subsampling.
YCbCr and Y′CbCr are a practical approximation to color processing and perceptual uniformity, where the primary colors corresponding roughly to red, green and blue are processed into perceptually meaningful information. By doing this, subsequent image/video processing, transmission and storage can do operations and introduce errors in perceptually meaningful ways. Y′CbCr is used to separate out a luma signal (Y′) that can be stored with high resolution or transmitted at high bandwidth, and two chroma components (CB and CR) that can be bandwidth-reduced, subsampled, compressed, or otherwise treated separately for improved system efficiency.
CbCr
YCbCr is sometimes abbreviated to YCC.
Typically the terms Y′CbCr, YCbCr, YPbPr and YUV are used interchangeably, leading to some confusion. The main difference is that YPbPr is used with analog images and YCbCr with digital images, leading to different scaling values for Umax and Vmax (in YCbCr both are ½) when converting to/from YUV. Y′CbCr and YCbCr differ in whether the values are gamma corrected or not.
The equations below give a better picture of the common principles and general differences between these formats.
RGB conversion
R'G'B' to Y′PbPr
Y′CbCr signals (prior to scaling and offsets to place the signals into digital form) are called YPbPr, and are created from the corresponding gamma-adjusted RGB (red, green and blue) source using three defined constants KR, KG, and KB as follows:
Y′ = KR·R′ + KG·G′ + KB·B′
PB = ½·(B′ − Y′)/(1 − KB)
PR = ½·(R′ − Y′)/(1 − KR)
where KR, KG, and KB are ordinarily derived from the definition of the corresponding RGB space, and required to satisfy KR + KG + KB = 1.
The equivalent matrix manipulation is often referred to as the "color matrix":
And its inverse:
Here, the prime (′) symbols mean gamma correction is being used; thus R′, G′ and B′ nominally range from 0 to 1, with 0 representing the minimum intensity (e.g., for display of the color black) and 1 the maximum (e.g., for display of the color white). The resulting luma (Y) value will then have a nominal range from 0 to 1, and the chroma (PB and PR) values will have a nominal range from -0.5 to +0.5. The reverse conversion process can be readily derived by inverting the above equations.
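A minimal Python sketch of this definition (function name illustrative; the BT.601 constants used in the example are defined in the next sections):

```python
def rgb_to_ypbpr(r, g, b, kr, kb):
    """Gamma-corrected R'G'B' in [0, 1] -> Y' in [0, 1] and
    Pb/Pr in [-0.5, 0.5].  Since kr + kg + kb = 1, kg is implied."""
    kg = 1.0 - kr - kb
    y = kr * r + kg * g + kb * b
    pb = 0.5 * (b - y) / (1.0 - kb)
    pr = 0.5 * (r - y) / (1.0 - kr)
    return y, pb, pr

# With BT.601 constants (kr = 0.299, kb = 0.114), pure blue maps
# to the maximum Pb value of +0.5:
print(rgb_to_ypbpr(0.0, 0.0, 1.0, kr=0.299, kb=0.114))
```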
Y′PbPr to Y′CbCr
When representing the signals in digital form, the results are scaled and rounded, and offsets are typically added. For example, the scaling and offset applied to the Y′ component per specification (e.g. MPEG-2) results in the value of 16 for black and the value of 235 for white when using an 8-bit representation. The standard has 8-bit digitized versions of CB and CR scaled to a different range of 16 to 240. Consequently, rescaling by the fraction (235-16)/(240-16) = 219/224 is sometimes required when doing color matrixing or processing in YCbCr space, resulting in quantization distortions when the subsequent processing is not performed using higher bit depths.
The scaling that results in the use of a smaller range of digital values than what might appear to be desirable for representation of the nominal range of the input data allows for some "overshoot" and "undershoot" during processing without necessitating undesirable clipping. This "headroom" and "toeroom" can also be used for extension of the nominal color gamut, as specified by xvYCC.
The value 235 accommodates a maximum overshoot of (255 − 235) / (235 − 16) = 9.1%, which is slightly larger than the theoretical maximum overshoot (Gibbs' phenomenon) of about 8.9% of the maximum (black-to-white) step. The toeroom is smaller, allowing only 16 / 219 = 7.3% undershoot, which is less than the theoretical maximum of 8.9%. In addition, because values 0 and 255 are reserved in HDMI, the room is actually slightly less.
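A sketch of the studio-swing quantization described above (function name illustrative):

```python
def ypbpr_to_ycbcr8(y, pb, pr):
    """8-bit studio-swing quantization: Y' in [0, 1] -> [16, 235];
    Pb/Pr in [-0.5, 0.5] -> [16, 240], centered on 128."""
    Y = round(16 + 219 * y)
    Cb = round(128 + 224 * pb)
    Cr = round(128 + 224 * pr)
    return Y, Cb, Cr

print(ypbpr_to_ycbcr8(1.0, 0.0, 0.0))  # (235, 128, 128): reference white
print(ypbpr_to_ycbcr8(0.0, 0.0, 0.0))  # (16, 128, 128): reference black
```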
Y′CbCr to xvYCC
Since the equations defining Y′CbCr are formed in a way that rotates the entire nominal RGB color cube and scales it to fit within a (larger) YCbCr color cube, there are some points within the Y′CbCr color cube that cannot be represented in the corresponding RGB domain (at least not within the nominal RGB range). This causes some difficulty in determining how to correctly interpret and display some Y′CbCr signals. These out-of-range Y′CbCr values are used by xvYCC to encode colors outside the BT.709 gamut.
ITU-R BT.601 conversion
The form of Y′CbCr that was defined for standard-definition television use in the ITU-R BT.601 (formerly CCIR 601) standard for use with digital component video is derived from the corresponding RGB space (ITU-R BT.470-6 System M primaries) using the constants KB = 0.114 and KR = 0.299.
From the above constants and formulas, the following can be derived for ITU-R BT.601.
Analog YPbPr from analog R'G'B' is derived as follows:
Digital Y′CbCr (8 bits per sample) is derived from analog R'G'B' as follows:
or simply componentwise
The resultant signals range from 16 to 235 for Y′ (Cb and Cr range from 16 to 240); the values from 0 to 15 are called footroom, while the values from 236 to 255 are called headroom. The same quantisation ranges (which differ between Y′ and Cb, Cr) also apply to BT.2020 and BT.709.
Alternatively, digital Y′CbCr can be derived from digital R'dG'dB'd (8 bits per sample, each using the full range with zero representing black and 255 representing white) according to the following equations:
In the formula below, the scaling factors are multiplied by 256/255. This allows for the value 256 in the denominator, which can be calculated by a single bitshift.
If the R'd G'd B'd digital source includes footroom and headroom, the footroom offset 16 needs to be subtracted first from each signal, and a scale factor of 255/219 needs to be included in the equations.
The inverse transform is:
The inverse transform without any roundings (using values coming directly from ITU-R BT.601 recommendation) is:
This form of Y′CbCr is used primarily for older standard-definition television systems, as it uses an RGB model that fits the phosphor emission characteristics of older CRTs.
ITU-R BT.709 conversion
A different form of Y′CbCr is specified in the ITU-R BT.709 standard, primarily for HDTV use. The newer form is also used in some computer-display oriented applications, as sRGB (though the matrix used for the sRGB form of YCbCr, sYCC, is still BT.601). In this case, the values of KB and KR differ, but the formulas for using them are the same. For ITU-R BT.709, the constants are KB = 0.0722 and KR = 0.2126.
This form of Y′CbCr is based on an RGB model that more closely fits the phosphor emission characteristics of newer CRTs and other modern display equipment.
The conversion matrices for BT.709 are these:
The definitions of the R', G', and B' signals also differ between BT.709 and BT.601, and differ within BT.601 depending on the type of TV system in use (625-line as in PAL and SECAM or 525-line as in NTSC), and differ further in other specifications. In different designs there are differences in the definitions of the R, G, and B chromaticity coordinates, the reference white point, the supported gamut range, the exact gamma pre-compensation functions for deriving R', G' and B' from R, G, and B, and in the scaling and offsets to be applied during conversion from R'G'B' to Y′CbCr. So proper conversion of Y′CbCr from one form to the other is not just a matter of inverting one matrix and applying the other. In fact, when Y′CbCr is designed ideally, the values of KB and KR are derived from the precise specification of the RGB color primary signals, so that the luma (Y′) signal corresponds as closely as possible to a gamma-adjusted measurement of luminance (typically based on the CIE 1931 measurements of the response of the human visual system to color stimuli).
ITU-R BT.2020 conversion
The ITU-R BT.2020 standard uses the same gamma function as BT.709. It defines:
Non-constant luminance Y'CbCr, similar to the previous entries, except with different KB and KR values.
Constant luminance Y'cCbcCrc, a formulation where Y' is the gamma-codec version of the true luminance.
For both, the coefficients derived from the primaries are KB = 0.0593 and KR = 0.2627.
For NCL, the definition is classical: Y′ = KR·R′ + KG·G′ + KB·B′; Cb = (B′ − Y′)/(2(1 − KB)); Cr = (R′ − Y′)/(2(1 − KR)). The encoding conversion can, as usual, be written as a matrix. The decoding matrix for BT.2020-NCL, given to 14 decimal places, is:
The smaller values in the matrix are not rounded; they are precise values. For systems with limited precision (8 or 10 bit, for example) a lower precision of the above matrix could be used, for example, retaining only 6 digits after decimal point.
The CL version, YcCbcCrc, codes:
Y′c = (KR·R + KG·G + KB·B)′. This is the gamma function applied to the true luminance calculated from linear RGB.
Cbc = (B′ − Y′c)/(−2NB) if NB ≤ B′ − Y′c ≤ 0, otherwise Cbc = (B′ − Y′c)/(2PB). NB and PB are the theoretical minimum and maximum of B′ − Y′c corresponding to the gamut. The rounded "practical" values are NB ≈ −0.9702, PB ≈ 0.7908. The full derivation can be found in the recommendation.
Crc = (R′ − Y′c)/(−2NR) if NR ≤ R′ − Y′c ≤ 0, otherwise Crc = (R′ − Y′c)/(2PR). Again, NR and PR are theoretical limits. The rounded values are NR ≈ −0.8592, PR ≈ 0.4968.
The CL function can be used when preservation of luminance is of primary importance (see: ), or when "there is an expectation of improved coding efficiency for delivery." The specification refers to Report ITU-R BT.2246 on this matter. BT.2246 states that CL has improved compression efficiency and luminance preservation, but NCL will be more familiar to a staff that has previously handled color mixing and other production practices in HDTV YCbCr.
BT.2020 does not define PQ and thus HDR; these are defined in SMPTE ST 2084 and BT.2100. BT.2100 introduces the use of ICTCP, a semi-perceptual colorspace derived from linear RGB with good hue linearity. It is "near-constant luminance".
SMPTE 240M conversion
The SMPTE 240M standard (used on the MUSE analog HD television system) defines YCC with the coefficients KB = 0.087 and KR = 0.212.
The coefficients are derived from SMPTE 170M primaries and white point, as used in 240M standard.
JPEG conversion
JFIF usage of JPEG supports a modified Rec. 601 Y′CbCr where Y′, CB and CR have the full 8-bit range of [0...255]. Below are the conversion equations expressed to six decimal digits of precision. (For ideal equations, see ITU-T T.871.)
Note that for the following formulae, the range of each input (R,G,B) is also the full 8-bit range of [0...255].
And back:
The above conversion is identical to sYCC when the input is given as sRGB, except that IEC 61966-2-1:1999/Amd1:2003 only gives four decimal digits.
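As an illustration of both directions of the JFIF conversion, here is a Python sketch (function names are illustrative; the coefficients are the Rec. 601-derived values given in ITU-T T.871, rounded to six decimal digits, with results clamped to the 8-bit range):

```python
def _clamp(x: float) -> int:
    return min(255, max(0, round(x)))

def rgb_to_jfif_ycbcr(r, g, b):
    """Full-range JFIF conversion; inputs and outputs in [0, 255]."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return _clamp(y), _clamp(cb), _clamp(cr)

def jfif_ycbcr_to_rgb(y, cb, cr):
    """Inverse full-range conversion."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    return _clamp(r), _clamp(g), _clamp(b)

print(rgb_to_jfif_ycbcr(255, 0, 0))    # (76, 85, 255): saturated red
print(jfif_ycbcr_to_rgb(76, 85, 255))  # approximately (255, 0, 0)
```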
JPEG also defines a "YCCK" format from Adobe for CMYK input. In this format, the "K" value is passed as-is, while CMY are used to derive YCbCr with the above matrix by assuming R = 1 − C, G = 1 − M, and B = 1 − Y. As a result, a similar set of subsampling techniques can be used.
Coefficients for BT.470-6 System B, G primaries
These coefficients are not in use and never were.
Chromaticity-derived luminance systems
H.273 also describes constant and non-constant luminance systems that are derived strictly from the primaries and white point, avoiding situations such as JPEG's use of the BT.601 matrix (derived from BT.470-6 System M) with sRGB/BT.709 default primaries.
Numerical approximations
Prior to the development of fast SIMD floating-point processors, most digital implementations of RGB → Y′UV used integer math, in particular fixed-point approximations. Approximation means that the precision of the used numbers (input data, output data and constant values) is limited, and thus a precision loss of typically about the last binary digit is accepted by whoever makes use of that option in typically a trade-off to improved computation speeds.
Y′ values are conventionally shifted and scaled to the range [16, 235] (referred to as studio swing or "TV levels") rather than using the full range of [0, 255] (referred to as full swing or "PC levels"). This practice was standardized in SMPTE-125M in order to accommodate signal overshoots ("ringing") due to filtering. U and V values, which may be positive or negative, are summed with 128 to make them always positive, giving a studio range of 16–240 for U and V. (These ranges are important in video editing and production, since using the wrong range will result either in an image with "clipped" blacks and whites, or a low-contrast image.)
Approximate 8-bit matrices for BT.601
These matrices round all factors to the closest 1/256 unit. As a result, only one 16-bit intermediate value is formed for each component, and a simple right-shift with rounding can take care of the division.
For studio-swing:
For full-swing:
Google's Skia used to use the above 8-bit full-range matrix, resulting in a slight greening effect on JPEG images encoded by Android devices, more noticeable on repeated saving. The issue was fixed in 2016, when the more accurate version was used instead. Due to SIMD optimizations in libjpeg-turbo, the accurate version is actually faster.
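A sketch of the studio-swing variant in this style, using integer coefficients in 1/256 units that are widely quoted for BT.601 (these are approximations of the exact matrix, not the matrix itself; the function name is illustrative):

```python
def rgb_to_ycbcr_fixed(r, g, b):
    """Integer-only BT.601 studio-swing approximation: factors are
    rounded to 1/256 units, so a single >> 8 performs the division.
    Inputs are full-range 8-bit R, G, B."""
    y = ((66 * r + 129 * g + 25 * b + 128) >> 8) + 16
    cb = ((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128
    cr = ((112 * r - 94 * g - 18 * b + 128) >> 8) + 128
    return y, cb, cr

print(rgb_to_ycbcr_fixed(255, 255, 255))  # (235, 128, 128): white
print(rgb_to_ycbcr_fixed(0, 0, 0))        # (16, 128, 128): black
```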
Packed pixel formats and conversion
RGB files are typically encoded in 8, 12, 16 or 24 bits per pixel. In these examples, we will assume 24 bits per pixel, which is written as RGB888. The standard byte format is simply r0, g0, b0, r1, g1, b1, ....
YCbCr Packed pixel formats are often referred to as "YUV". Such files can be encoded in 12, 16 or 24 bits per pixel. Depending on subsampling, the formats can largely be described as 4:4:4, 4:2:2, and 4:2:0p. The apostrophe after the Y is often omitted, as is the "p" (for planar) after YUV420p. In terms of actual file formats, 4:2:0 is the most common, as the data is more reduced, and the file extension is usually ".YUV". The relation between data rate and sampling (A:B:C) is defined by the ratio between Y to U and V channel. The notation of "YUV" followed by three numbers is vague: the three numbers could refer to the subsampling (as is done in "YUV420"), or it could refer to bit depth in each channel (as is done in "YUV565"). The unambiguous way to refer to these formats is via the FourCC code.
To convert from RGB to YUV or back, it is simplest to use RGB888 and 4:4:4. For 4:1:1, 4:2:2 and 4:2:0, the bytes need to be converted to 4:4:4 first.
4:4:4
4:4:4 is straightforward, as no pixel-grouping is done: the difference lies solely in how many bits each channel is given, and their arrangement. The basic scheme uses 3 bytes per pixel, with the order y0, u0, v0, y1, u1, v1 (using "u" for Cb and "v" for Cr; the same applies to content below). In computers, it is more common to see a format, which adds an alpha channel and goes a0, y0, u0, v0, a1, y1, u1, v1, because groups of 32-bits are easier to deal with.
4:2:2
4:2:2 groups 2 pixels together horizontally in each conceptual "container". Two main arrangements are:
YUY2: also called YUYV, runs in the format y0, u, y1, v.
UYVY: the byte-swapped reverse of YUY2, runs in the format u, y0, v, y1.
4:1:1
4:1:1 is rarely used. Pixels are in horizontal groups of 4.
4:2:0
4:2:0 is very commonly used. The main formats are IMC2, IMC4, I420, YV12, and NV12. All of these formats are "planar", meaning that the Y, U, and V values are grouped together instead of interspersed. They all occupy 12 bits per pixel, assuming an 8-bit channel.
IMC2 first lays the full images out in Y. It then arranges each line of chroma in the order of V0 ... Vn, U0 ... Un, where n is the number of chroma samples per line, equal to half the width of Y.
IMC4 is similar to IMC2, except it runs in U0 ... Un, V0 ... Vn.
I420 is a simpler design and is more commonly used. The entire image in Y is written out, followed by the image in U, then by the whole image in V.
YV12 follows the same general design as I420, only the order between the U and V images is flipped.
NV12 is possibly the most commonly-used 8-bit 4:2:0 format. It is the default for Android camera preview. The entire image in Y is written out, followed by interleaved lines that go U0, V0, U1, V1, etc.
There are also "tiled" variants of planar formats.
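As an illustration of NV12 addressing, the following Python sketch (function name illustrative; even frame dimensions assumed) fetches the Y, U, and V samples for a given pixel:

```python
def nv12_sample(frame: bytes, width: int, height: int, x: int, y: int):
    """Fetch (Y, U, V) for pixel (x, y) from an NV12 buffer: a
    width*height Y plane followed by interleaved U/V pairs at half
    resolution in both directions."""
    Y = frame[y * width + x]
    uv_base = width * height      # start of the interleaved UV plane
    uv_row = y // 2
    uv_col = (x // 2) * 2         # each pair is U first, then V
    U = frame[uv_base + uv_row * width + uv_col]
    V = frame[uv_base + uv_row * width + uv_col + 1]
    return Y, U, V

# A 4x2 frame occupies 4*2 + 4*2//2 = 12 bytes (12 bits per pixel).
frame = bytes(range(12))
print(nv12_sample(frame, 4, 2, 3, 1))  # (7, 10, 11)
```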
References
External links
Y′CbCr calculator, including BT.1886
Charles Poynton — Color FAQ
Charles Poynton — Video engineering
Color Space Visualization
YUV, YCbCr, YPbPr color spaces.
YCbCr Definition
Software resources for packed pixels:
Kohn, Mike. Y′UV422 to RGB using SSE/Assembly
libyuv
pixfc-sse – C library of SSE-optimized color format conversions
YUV files – sample YUV/RGB video files in many YUV formats, for testing.
Color space | YCbCr | [
"Mathematics"
] | 4,495 | [
"Color space",
"Space (mathematics)",
"Metric spaces"
] |
592,687 | https://en.wikipedia.org/wiki/Network%20security | Network security are security controls, policies, processes and practices adopted to prevent, detect and monitor unauthorized access, misuse, modification, or denial of a computer network and network-accessible resources. Network security involves the authorization of access to data in a network, which is controlled by the network administrator. Users choose or are assigned an ID and password or other authenticating information that allows them access to information and programs within their authority. Network security covers a variety of computer networks, both public and private, that are used in everyday jobs: conducting transactions and communications among businesses, government agencies and individuals. Networks can be private, such as within a company, and others which might be open to public access. Network security is involved in organizations, enterprises, and other types of institutions. It does as its title explains: it secures the network, as well as protecting and overseeing operations being done. The most common and simple way of protecting a network resource is by assigning it a unique name and a corresponding password.
Network security concept
Network security starts with authentication, commonly with a username and a password. Since this requires just one detail authenticating the user name—i.e., the password—this is sometimes termed one-factor authentication. With two-factor authentication, something the user 'has' is also used (e.g., a security token or 'dongle', an ATM card, or a mobile phone); and with three-factor authentication, something the user 'is' is also used (e.g., a fingerprint or retinal scan).
Once authenticated, a firewall enforces access policies such as what services are allowed to be accessed by the network users. Though effective in preventing unauthorized access, this component may fail to check potentially harmful content such as computer worms or Trojans being transmitted over the network. Anti-virus software or an intrusion prevention system (IPS) help detect and inhibit the action of such malware. An anomaly-based intrusion detection system may also monitor network traffic (for example, as captured with a tool such as Wireshark), which may be logged for audit purposes and for later high-level analysis. Newer systems combining unsupervised machine learning with full network traffic analysis can detect active network attackers from malicious insiders or targeted external attackers that have compromised a user machine or account.
Communication between two hosts using a network may be encrypted to maintain security and privacy.
Honeypots, essentially decoy network-accessible resources, may be deployed in a network as surveillance and early-warning tools, as the honeypots are not normally accessed for legitimate purposes. Honeypots are placed at a point in the network where they appear vulnerable and undefended, but they are actually isolated and monitored. Techniques used by the attackers that attempt to compromise these decoy resources are studied during and after an attack to keep an eye on new exploitation techniques. Such analysis may be used to further tighten security of the actual network being protected by the honeypot. A honeypot can also direct an attacker's attention away from legitimate servers. A honeypot encourages attackers to spend their time and energy on the decoy server while distracting their attention from the data on the real server. Similar to a honeypot, a honeynet is a network set up with intentional vulnerabilities. Its purpose is also to invite attacks so that the attacker's methods can be studied and that information can be used to increase network security. A honeynet typically contains one or more honeypots.
Previous research on network security was mostly about using tools to secure transactions and information flow, and how well users knew about and used these tools. More recently, however, the discussion has expanded to consider information security in the broader context of the digital economy and society: it is not just a matter of individual users and tools, but also of the larger culture of information security in the digital world.
Security management
Security management for networks is different for all kinds of situations. A home or small office may only require basic security while large businesses may require high-maintenance and advanced software and hardware to prevent malicious attacks from hacking and spamming. In order to minimize susceptibility to malicious attacks from external threats to the network, corporations often employ tools which carry out network security verifications.
Andersson and Reimers (2014) found that employees often do not see themselves as part of their organization's information security effort and often take actions that impede organizational changes.
Types of attack
Networks are subject to attacks from malicious sources. Attacks can be from two categories: "Passive" when a network intruder intercepts data traveling through the network, and "Active" in which an intruder initiates commands to disrupt the network's normal operation or to conduct reconnaissance and lateral movements to find and gain access to assets available via the network.
Types of attacks include:
Passive: network interception, in which data traveling through the network is captured
Active: network viruses (router viruses) and data modification
See also
References
Further reading
Case Study: Network Clarity , SC Magazine 2014
Cisco. (2011). What is network security?. Retrieved from cisco.com
Security of the Internet (The Froehlich/Kent Encyclopedia of Telecommunications vol. 15. Marcel Dekker, New York, 1997, pp. 231–255.)
Introduction to Network Security , Matt Curtin, 1997.
Security Monitoring with Cisco Security MARS, Gary Halleen/Greg Kellogg, Cisco Press, Jul. 6, 2007.
Self-Defending Networks: The Next Generation of Network Security, Duane DeCapite, Cisco Press, Sep. 8, 2006.
Security Threat Mitigation and Response: Understanding CS-MARS, Dale Tesch/Greg Abelar, Cisco Press, Sep. 26, 2006.
Securing Your Business with Cisco ASA and PIX Firewalls, Greg Abelar, Cisco Press, May 27, 2005.
Deploying Zone-Based Firewalls, Ivan Pepelnjak, Cisco Press, Oct. 5, 2006.
Network Security: PRIVATE Communication in a PUBLIC World, Charlie Kaufman | Radia Perlman | Mike Speciner, Prentice-Hall, 2002.
Network Infrastructure Security, Angus Wong and Alan Yeung, Springer, 2009.
Cybersecurity engineering | Network security | [
"Technology",
"Engineering"
] | 1,278 | [
"Cybersecurity engineering",
"Computer network security",
"Computer engineering",
"Computer networks engineering"
] |
592,703 | https://en.wikipedia.org/wiki/Usability%20lab | A usability lab is a place where usability testing is done. It is an environment where users are studied interacting with a system for the sake of evaluating the system's usability.
Depending on the kind of system that is evaluated, the user sits in front of a personal computer or stands in front of the system's interface, alongside a facilitator who gives the user tasks to perform. Behind a one-way mirror, a number of observers watch the interaction, make notes, and ensure the activity is recorded. Very often the testing room and the observation room are not adjacent; in this case the video and audio observation are transmitted through a (wireless) network and broadcast via a video monitor or projector and loudspeakers. Usually, sessions will be filmed and the software will log interaction details.
Benefits of usability testing
Usability is defined by how effectively users can use a product, whether a brochure, application, website, software package, or video game, to achieve their goals. Usability testing is a practice used within the fields of user-centered design and user experience that allows designers to interact with users directly about the product, so that any necessary modifications can be made to the prototype, whether it be software, a device, or a website. The purpose of the practice is to discover any missed requirements or any design that was assumed to be intuitive but ends up confusing new users. By testing user needs and how users interact with the product, designers are able to assess the product's capacity to meet its intended purpose.
Usability labs help optimize UI designs and workflows, and help teams understand the voice of the customer and what customers really do. Through in-lab sessions at a specified location, designers, stakeholders, and anyone else involved in the project observe the process of how a customer interacts with the current prototype. To understand user needs, engineers must observe people while they are actually using computer systems and collect data on system usability. In-lab usability testing usually has small and specific sample sizes to better obtain qualitative data on the product. The participants cooperate with engineers, through hands-on testing, to reveal how the user interacts with the system being tested.
"Through this process, developers are able to identify issues with the product. To aid fixing any problems, observers pay strict attention to:
Learn if participants are able to complete specified tasks successfully
Identify how long it takes to complete specified tasks
Find out how satisfied participants are with your Web site or other product
Identify changes required to improve user performance and satisfaction
Analyze the performance to see if it meets your usability objectives"
Usability and user experience
User experience is important to customer response in the market. The causes of failed designs and bad design decisions can usually be attributed to a lack of information. A poor user experience can ruin a product launch, drive users away for good and damage the reputation of a company.
Lab-based testing environment
Usability tests are both formal and informal attempts to gather data about how users experience interfaces (Angelo), devices, software, sites, and more. Usability testing also plays a role across a wide range of other product-development activities.
Tools and technology
Usability labs usually feature two rooms, one of which contains the lab with the system being tested for usability and all the other necessary equipment, such as video and audio recording devices or eye motion trackers. Here, the participant is asked to come in and is given tasks to complete that test specific ideas about the product, though sometimes the participant is allowed to explore freely to see what a certain feature does.
Audience
In formal labs, there is typically a second room with a one-way mirror. Here, the observation room is held that allows stakeholders, designers, developers, and other parties involved in the project to observe and understand that some things they might have found to be intuitive among their team to actually be more complex than the feature had to be.
Recruiting participants
Choosing participants for lab testing involves consideration. Not just anyone is a suitable participant for the in-lab test. It is vital to recruit participants who are similar to the site's users for usability testing. Developers and designers are not the users, so refrain from using internal staff as participants unless the individual has had no involvement in the design or development of the site or product and represents a target audience. It is also a good idea to compensate participants for taking time out of their schedule to take part in a voluntary experiment; however, there are restrictions. For example, federal employees cannot be paid for their time.
The number of users to test is also an important consideration when recruiting participants. Usability tests cost money and resources, which are often limited, especially on smaller-scale projects. One effective approach is to use five participants. "Zero users give you zero insights." The moment a single user has been observed in a lab setting, insight into the product is immediately gained: features in the current design that were not helping users can be identified, redesigned, and revisited. However, there is a limit to how many users should be considered, because "as you add more and more users, you learn less and less because you will keep seeing the same things again and again."
What to look for during testing
User research is the process of observing and understanding how people interact with different objects in everyday life. These can range anywhere from websites and software products to hardware and other gadgets.
Different techniques
Think-aloud experiments
Contextual interviews
Concurrent probing
Retrospective probing
First click testing
Focus groups
Individual interviews
Online surveys
Task analysis
References
EvocInsights. http://www.evocinsights.com/pdf/eVOC_Services_Overview_Usability_Labs.pdf
The Chisel Group. http://thechiselgroup.org/usability-lab/
Teced. http://teced.com/services/usability-testing-and-evaluation/lab-usability-testing/
External links
Survey of Usability Labs — Summary statistics for size and layout of 13 usability labs (1994)
Laboratory types
Usability | Usability lab | [
"Chemistry"
] | 1,239 | [
"Laboratory types"
] |
592,816 | https://en.wikipedia.org/wiki/Gate%20array | A gate array is an approach to the design and manufacture of application-specific integrated circuits (ASICs) using a prefabricated chip with components that are later interconnected into logic devices (e.g. NAND gates, flip-flops, etc.) according to custom order by adding metal interconnect layers in the factory. It was popular during the upheaval in the semiconductor industry in the 1980s, and its usage declined by the end of the 1990s.
Similar technologies have also been employed to design and manufacture analog, analog-digital, and structured arrays, but, in general, these are not called gate arrays.
Gate arrays have also been known as uncommitted logic arrays ('ULAs'), which also offered linear circuit functions, and semi-custom chips.
History
Development
Gate arrays had several concurrent development paths. Ferranti in the UK pioneered commercializing bipolar ULA technology, offering circuits of "100 to 10,000 gates and above" by 1983. The company's early lead in semi-custom chips, with the initial application of a ULA integrated circuit involving a camera from Rollei in 1972, expanding to "practically all European camera manufacturers" as users of the technology, led to the company's dominance in this particular market throughout the 1970s. However, by 1982, as many as 30 companies had started to compete with Ferranti, reducing the company's market share to around 30 percent. Ferranti's "major competitors" were other British companies such as Marconi and Plessey, both of which had licensed technology from another British company, Micro Circuit Engineering. A contemporary initiative, UK5000, also sought to produce a CMOS gate array with "5,000 usable gates", with involvement from British Telecom and a number of other major British technology companies.
IBM developed proprietary bipolar master slices that it used in mainframe manufacturing in the late 1970s and early 1980s, but never commercialized them externally. Fairchild Semiconductor also flirted briefly in the late 1960s with bipolar arrays of diode–transistor logic and transistor–transistor logic, called Micromosaic and Polycell.
CMOS (complementary metal–oxide–semiconductor) technology opened the door to the broad commercialization of gate arrays. The first CMOS gate arrays were developed by Robert Lipp in 1974 for International Microcircuits, Inc. (IMI), a Sunnyvale photo-mask shop started by Frank Deverse, Jim Tuttle and Charlie Allen, ex-IBM employees. This first product line employed 7.5 micron single-level metal CMOS technology and ranged from 50 to 400 gates. Computer-aided design (CAD) technology at the time was very rudimentary due to the low processing power available, so the design of these first products was only partially automated.
This product pioneered several features that went on to become standard in future designs. The most important were: the strict organization of n-channel and p-channel transistors in 2-3 row pairs across the chip; and running all interconnect on grids rather than minimum custom spacing, which had been the standard until then. This later innovation paved the way to full automation when coupled with the development of 2-layer CMOS arrays. Customizing these first parts was somewhat tedious and error-prone due to the lack of good software tools. IMI tapped into PC board development techniques to minimize manual customization effort. Chips at the time were designed by hand, drawing all components and interconnecting on precision gridded Mylar sheets, using colored pencils to delineate each processing layer. Rubylith sheets were then cut and peeled to create a (typically) 200x to 400x scale representation of the process layer. This was then photo-reduced to make a 1x mask. Digitization rather than rubylith cutting was just coming in as the latest technology, but initially, it only removed the rubylith stage; drawings were still manual and then "hand" digitized. PC boards, meanwhile, had moved from custom rubylith to PC tape for interconnects. IMI created to-scale photo enlargements of the base layers. Using decals of logic gate connections and PC tape to interconnect these gates, custom circuits could be quickly laid out by hand for these relatively small circuits, and photo-reduced using existing technologies.
After a falling out with IMI, Robert Lipp went on to start California Devices, Inc. (CDI) in 1978 with two silent partners, Bernie Aronson, and Brian Tighe. CDI quickly developed a product line competitive to IMI and, shortly thereafter, a 5-micron silicon gate single-layer product line with densities of up to 1,200 gates. A couple of years later, CDI followed up with "channel-less" gate arrays that reduced the row blockages caused by a more complex silicon underlayer that pre-wired the individual transistor connections to locations needed for common logic functions, simplifying the first-level metal interconnect. This increased chip densities by 40%, significantly reducing manufacturing costs.
Innovation
Early gate arrays were low-performance and relatively large and expensive compared to state-of-the-art n-MOS technology then being used for custom chips. CMOS technology was being driven by very low-power applications such as watch chips and battery-operated portable instrumentation, not performance. They were also well under the performance of the existing dominant logic technology, transistor–transistor logic. However, there were many niche applications where they were invaluable, particularly in low power, size reduction, portable and aerospace applications as well as time-to-market sensitive products. Even these small arrays could replace a board full of transistor–transistor logic gates if performance were not an issue. A common application, combining a number of smaller circuits that supported a larger LSI circuit on a board, was affectionately known as "garbage collection". And the low cost of development and custom tooling made the technology available to the most modest budgets. Early gate arrays played a large part in the CB craze in the 1970s as well as a vehicle for the introduction of other later mass-produced products such as modems and cell phones.
By the early 1980s, gate arrays were starting to move out of their niche applications to the general market. Several factors in technology and markets were converging. Size and performance were increasing; automation was maturing; and the technology became "hot" when, in 1981, IBM introduced its new flagship 3081 mainframe with a CPU comprising gate arrays. They were used in a consumer product, the ZX81, and new entrants to the market increased visibility and credibility.
In 1981, Wilfred Corrigan, Bill O'Meara, Rob Walker, and Mitchell "Mick" Bohn founded LSI Logic. Their initial intention was to commercialize emitter coupled logic gate arrays, but discovered the market was quickly moving towards CMOS. Instead, they licensed CDI's silicon gate CMOS line as a second source. This product established them in the market while they developed their own proprietary 5-micron 2-layer metal line. This latter product line was the first commercial gate array product amenable to full automation. LSI developed a suite of proprietary development tools that allowed users to design their own chip from their own facility by remote login to LSI Logic's system.
Sinclair Research ported an enhanced ZX80 design to a ULA chip for the ZX81, and later used a ULA in the ZX Spectrum. A compatible chip was made in Russia as T34VG1. Acorn Computers used several ULA chips in the BBC Micro, and later a single ULA for the Acorn Electron. Many other manufacturers from the time of the home computer boom period used ULAs in their machines. The IBM PC took over much of the personal computer market, and the sales volumes made full-custom chips more economical. Commodore's Amiga series used gate arrays for the Gary and Gayle custom chips, as their code names may suggest.
In an attempt to reduce the costs and increase the accessibility of gate array design and production, Ferranti introduced in 1982 a computer-aided design tool for their uncommitted logic array (ULA) product called ULA Designer. Although costing £46,500 to acquire, this tool promised to deliver reduced costs of around £5,000 per design plus manufacturing costs of £1-2 per chip in high volumes, in contrast to the £15,000 design costs incurred by engaging Ferranti's services for the design process. Based on a PDP-11/23 minicomputer running RSX/11M, together with graphical display, keyboard, "digitalizing board", control desk and optional plotter, the solution aimed to satisfy the design needs of gate arrays from 100 to 10,000 gates, with the design being undertaken entirely by the organisation acquiring the solution, starting with a "logic plan", proceeding through the layout of the logic in the gate array itself, and concluding with the definition of a test specification for verification of the logic and for establishing an automated testing regime. Verification of completed designs was performed by "external specialists" after the transfer of the design to a "CAD center" in Manchester, England or Sunnyvale, California, potentially over the telephone network. Prototyping completed designs took an estimated 3 to 4 weeks. The minicomputer itself was also adaptable to run as a laboratory or office system where appropriate.
Ferranti followed up on the ULA Designer with the Silicon Design System product based on the VAX-11/730 with 1 MB of RAM, 120 MB Winchester disk, and utilising a high-resolution display driven by a graphics unit with 500 KB of its own memory for "high speed windowing, painting, and editing capabilities". The software itself was available separately for organisations already likely to be using VAX-11/780 systems to provide a multi-user environment, but the "standalone system" package of hardware and software was intended to provide a more affordable solution with a "faster response" during the design process. The suite of tools involved in the use of the product included logic entry and test schedule definition (using Ferranti's own description languages), logic simulation, layout definition and checking, and mask generation for prototype gate arrays. The system also sought to support completely auto-routed designs, utilising architectural features of Ferranti's auto-routable (AR) arrays to deliver a "100-percent success auto-layout system" with this convenience incurring an increase in silicon area of approximately 25 percent.
Other British companies developed products for gate array design and fabrication. Qudos Limited, a spin-off from Cambridge University, offered a chip design product called Quickchip available for VAX and MicroVAX II systems and as a complete $11,000 turnkey solution, providing a suite of tools broadly similar to those of Ferranti's products including automatic layout, routing, rule checking and simulation functionality for the design of gate arrays. Qudos employed electron beam lithography, etching designs onto Ferranti ULA devices that formed the physical basis of these custom chips. Typical prototype production costs were stated as £100 per chip. Quickchip was subsequently ported to the Acorn Cambridge Workstation, with a low-end version for the BBC Micro, and to the Acorn Archimedes.
Alternatives
Indirect competition arose with the development of the field-programmable gate array (FPGA). Xilinx was founded in 1984, and its first products were much like early gate arrays, slow and expensive, fit only for some niche markets. However, Moore's Law quickly made them a force and, by the early 1990s, were seriously disrupting the gate array market.
Designers still wished for a way to create their own complex chips without the expense of full-custom design, and eventually, this wish was granted with the arrival of not only the FPGA, but complex programmable logic devices (CPLDs), metal configurable standard cells (MCSC), and structured ASICs. Whereas a gate array required a back-end semiconductor wafer foundry to deposit and etch the interconnections, the FPGA and CPLD had user-programmable interconnections. Today's approach is to make prototypes with FPGAs, as the risk is low and the functionality can be verified quickly. For smaller devices, production costs are sufficiently low. But large FPGAs are very expensive to produce, power-hungry, and in many cases unable to reach the required speed. To address these issues, several ASIC companies like BaySand, Faraday, Gigoptics, and others offer FPGA-to-ASIC conversion services.
Decline
While the market boomed, profits for the industry were lacking. Semiconductors underwent a series of rolling recessions during the 1980s that created a boom-bust cycle. The 1980 and 1981–1982 general recessions were followed by high-interest rates that curbed capital spending. This reduction played havoc on the semiconductor business, which at the time was highly dependent on capital spending. Manufacturers desperate to keep their fab plants full and afford constant modernization in a fast-moving industry became hyper-competitive. The many new entrants to the market drove gate array prices down to the marginal costs of the silicon manufacturers. Fabless companies such as LSI Logic and CDI survived on selling design services and computer time rather than on production revenues.
As of the early 21st century, the gate array market was a remnant of its former self, driven by the FPGA conversions done for cost or performance reasons. IMI moved out of gate arrays into mixed-signal circuits and was later acquired by Cypress Semiconductor in 2001; CDI closed its doors in 1989; and LSI Logic abandoned the market in favor of standard products and was eventually acquired by Broadcom.
Design
A gate array is a prefabricated silicon chip with most transistors having no predetermined function. These transistors can be connected by metal layers to form standard NAND or NOR logic gates. These logic gates can then be further interconnected into a complete circuit on the same or later metal layers. The creation of a circuit with a specified function is accomplished by adding this final layer or layers of metal interconnects to the chip late in the manufacturing process, allowing the function of the chip to be customized as desired. These layers are analogous to the copper layers of a printed circuit board.
The earliest gate arrays comprised bipolar transistors, usually configured as high-performance transistor–transistor logic, emitter-coupled logic, or current-mode logic configurations. CMOS (complementary metal–oxide–semiconductor) gate arrays were later developed and came to dominate the industry.
Gate array master slices with unfinished chips arrayed across a wafer are usually prefabricated and stockpiled in large quantities regardless of customer orders. The design and fabrication according to the individual customer specifications can be finished in a shorter time than standard cell or full custom design. The gate array approach reduces the non-recurring engineering mask costs as fewer custom masks need to be produced. In addition, manufacturing test tooling lead time and costs are reduced — the same test fixtures can be used for all gate array products manufactured on the same die size. Gate arrays were the predecessor of the more complex structured ASIC; unlike gate arrays, structured ASICs tend to include predefined or configurable memories and/or analog blocks.
An application circuit must be built on a gate array that has enough gates, wiring, and I/O pins. Since requirements vary, gate arrays usually come in families, with larger members having more of all resources, but correspondingly more expensive. While the designer can fairly easily count how many gates and I/O pins are needed, the number of routing tracks needed may vary considerably even among designs with the same amount of logic. (For example, a crossbar switch requires much more routing than a systolic array with the same gate count.) Since unused routing tracks increase the cost (and decrease the performance) of the part without providing any benefit, gate array manufacturers try to provide just enough tracks so that most designs that will fit in terms of gates and I/O pins can be routed. This is determined by estimates such as those derived from Rent's rule or by experiments with existing designs.
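As a sketch of the kind of estimate Rent's rule provides (the constants t and p below are illustrative assumptions; published values vary by circuit family, so this is not a sizing tool):

def rent_terminals(gates, t=2.5, p=0.6):
    # Rent's rule: a block of g gates needs roughly T = t * g**p terminals.
    return t * gates ** p

# Under these constants, a 10,000-gate design would be expected to
# need on the order of 600 external terminals.
print(round(rent_terminals(10_000)))  # 628

An array family whose pad and routing resources fall well below such estimates would leave many designs unroutable even though their gate counts fit, which is why manufacturers calibrate track counts against data from existing designs.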
The main drawbacks of gate arrays are their somewhat lower density and performance compared with other approaches to ASIC design. However, this style is often a viable approach for low production volumes.
Uses
Gate arrays were used widely in the home computers in the early to mid 1980s, including in the ZX81, ZX Spectrum, BBC Micro, Acorn Electron, Advance 86, and Commodore Amiga.
In the 1980s, the Forth Novix N4016 and HP 3000 Series 37 CPUs, both stack machines, were implemented with gate arrays, as were some graphic terminal functions. Some supporting hardware in at least 1990s DEC and HP servers was implemented with gate arrays.
References
Further reading
Databooks
External links | Gate array | [
"Technology",
"Engineering"
] | 3,538 | [
"Computer engineering",
"Gate arrays"
] |
592,830 | https://en.wikipedia.org/wiki/Monkeys%20and%20apes%20in%20space | Before humans went into space in the 1960s, several other animals were launched into space, including numerous other primates, so that scientists could investigate the biological effects of spaceflight. The United States launched flights containing primate passengers primarily between 1948 and 1961 with one flight in 1969 and one in 1985. France launched two monkey-carrying flights in 1967. The Soviet Union and Russia launched monkeys between 1983 and 1996. Most primates were anesthetized before lift-off.
Over thirty-two non-human primates flew in the space program; none flew more than once. Numerous backup primates also went through the programs but never flew. Monkeys and non-human apes from several species were used, including rhesus macaque, crab-eating macaque, squirrel monkeys, pig-tailed macaques, and chimpanzees.
United States
The first primate launched on a high-altitude flight, although not a space flight, was Albert I, a rhesus macaque, who on June 18, 1948, rode a V-2 rocket high into Earth's atmosphere. Albert I died of suffocation during the flight and may actually have died in the cramped space capsule before launch.
On June 14, 1949, Albert II survived a sub-orbital V-2 flight into space (but died on impact after a parachute failure) to become the first monkey, first primate, and first mammal in space. His flight passed the Kármán line of 100 km, which designates the beginning of space.
On September 16, 1949, Albert III died below the Kármán line, at 35,000 feet (10.7 km), in an explosion of his V-2. On December 8, Albert IV, the second mammal in space, flew on the last monkey V-2 flight and died on impact after another parachute failure after reaching 130.6 km. Alberts I, II, and IV were rhesus macaques while Albert III was a crab-eating macaque.
Monkeys later flew on Aerobee rockets. On April 18, 1951, a monkey, possibly called Albert V, died due to parachute failure. On September 20, 1951, Yorick, also called Albert VI, along with 11 mouse crewmates, reached 236,000 ft (72 km, 44.7 mi) and became the first monkey to survive the landing (the dogs Dezik and Tsygan had survived a trip to space in July of that year), although he died two hours later. Two of the mice also died after recovery; all of the deaths were thought to be related to stress from overheating in the sealed capsule in the New Mexico sun while awaiting the recovery team. Albert VI's flight surpassed the 50-mile boundary the U.S. used for spaceflight but was below the international definition of space. Patricia and Mike, two cynomolgus monkeys, flew on May 21, 1952, and survived, but their flight was only to 26 kilometers.
On December 13, 1958, Gordo, also called Old Reliable, a squirrel monkey, survived being launched aboard Jupiter AM-13 by the US Army. After flying for over 1,500 miles and reaching a height of 310 miles (500 km) before returning to Earth, Gordo landed in the South Atlantic and was killed due to mechanical failure of the parachute recovery system in the rocket nose cone.
On May 28, 1959, aboard the Jupiter AM-18, Miss Able, a rhesus macaque, and Miss Baker, a squirrel monkey from Peru, flew a successful mission. Able was born at the Ralph Mitchell Zoo in Independence, Kansas. They traveled in excess of 16,000 km/h and withstood 38 g (373 m/s²). Able died June 1, 1959, while undergoing surgery to remove an infected medical electrode, from a reaction to the anesthesia. Baker became the first monkey to survive the stresses of spaceflight and the related medical procedures. Baker died November 29, 1984, at the age of 27 and is buried on the grounds of the United States Space & Rocket Center in Huntsville, Alabama. Able was preserved, and is now on display at the Smithsonian Institution's National Air and Space Museum. Their names were taken from the 1943–1955 US military phonetic alphabet.
On December 4, 1959, from Wallops Island, Virginia, Sam, a rhesus macaque, flew on the Little Joe 2 in the Mercury program to 53 miles high. On January 21, 1960, Miss Sam, also a rhesus macaque, followed on Little Joe 1B, although her flight, a test of emergency procedures, reached a much lower altitude.
Chimpanzees Ham and Enos also flew in the Mercury program, with Ham becoming the first great ape or Hominidae in space. The names "Sam" and "Ham" were acronyms. Sam was named in homage to the School of Aerospace Medicine at Brooks Air Force Base in San Antonio, Texas, and the name "Ham" was taken from Holloman Aerospace Medicine at Holloman Air Force Base, New Mexico. Ham and Enos were among 60 chimpanzees brought to New Mexico by the U.S. Air Force for space flight tests. Six were selected to be trained at Cape Canaveral by Tony Gentry et al.
Goliath, a squirrel monkey, died in the explosion of his Atlas rocket on November 10, 1961. A rhesus macaque called Scatback flew a sub-orbital flight on December 20, 1961, but was lost at sea after landing.
Bonny, a pig-tailed macaque, flew on Biosatellite 3, a mission which lasted from June 29 to July 8, 1969. This was the first multi-day monkey flight but came after longer human spaceflights were common. He died within a day of landing.
Spacelab 3 on the Space Shuttle flight STS-51-B featured two squirrel monkeys named No. 3165 and No. 384-80. The flight was from April 29 to May 6, 1985.
France
France launched a pig-tailed macaque named Martine on a Vesta rocket on March 7, 1967, and another named Pierrette on March 13. Both flights were suborbital. Martine became the first monkey to survive more than a couple of hours after flying above the international definition of the edge of space (Ham and Enos, launched earlier by the United States, were chimpanzees).
Soviet Union and Russia
The Soviet/Russian space program used only rhesus macaques in its Bion satellite program in the 1980s and 1990s. The names of the monkeys began with sequential letters of the Russian alphabet (А, Б, В, Г, Д, Е, Ё, Ж, З...). The animals all survived their missions but for a single fatality in post-flight surgery, after which the program was canceled.
The first monkeys launched by the Soviet space program, Abrek and Bion, flew on Bion 6. They remained aloft from December 14, 1983 – December 20, 1983.
Next came Bion 7 with monkeys Verny and Gordy from July 10, 1985 – July 17, 1985.
Then Dryoma and Yerosha on Bion 8 from September 29, 1987 – October 12, 1987. After returning from space Dryoma was presented to Cuban leader Fidel Castro.
Bion 9 with monkeys Zhakonya and Zabiyaka followed from September 15, 1989, to September 28, 1989. The two took the space endurance record for monkeys at 13 days, 17 hours in space.
Monkeys Ivasha and Krosh flew on Bion 10 from December 29, 1992, to January 7, 1993. Krosh produced offspring, after rehabilitation upon returning to Earth.
Lapik and Multik were the last monkeys in space until Iran launched one of its own in 2013. The pair flew aboard Bion 11 from December 24, 1996, to January 7, 1997. Upon return, Multik died while under anesthesia for US biopsy sampling on January 8. Lapik nearly died while undergoing the identical procedure. No follow-up research has been conducted to determine whether these two incidents, together with the 1959 loss of the US monkey Able in post-flight surgery, contraindicate the administration of anesthesia during or shortly after spaceflights. Further US support of the Bion program was canceled.
Argentina
On December 23, 1969, as part of the 'Operación Navidad' (Operation Christmas), Argentina launched Juan (a tufted capuchin, native to Argentina's Misiones Province) using a two-stage Rigel 04 rocket. It ascended perhaps up to 82 kilometers and then was recovered successfully. Other sources give 30, 60 or 72 kilometers. All of these are below the international definition of space (100 km). Later, on February 1, 1970, the experience was repeated with a female monkey of the same species using an X-1 Panther rocket. Although it reached a higher altitude than its predecessor, it was lost after the capsule's parachute failed.
China
The PRC spacecraft Shenzhou 2 launched on January 9, 2001. It is rumored that inside the reentry module (precise information is lacking due to the secrecy surrounding China's space program) a monkey, dog, and rabbit rode aloft in a test of the spacecraft's life support systems. The SZ2 reentry module landed in Inner Mongolia on January 16. No images of the recovered capsule appeared in the press, leading to the widespread inference that the flight ended in failure. According to press reports citing an unnamed source, a parachute connection malfunction caused a hard landing.
Iran
On January 28, 2013, AFP and Sky News reported that Iran had sent a monkey aloft in a "Pishgam" rocket and retrieved the "shipment". Iranian media gave no details on the timing or location of the launch, while details that were reported raised questions about the claim. Pre-flight and post-flight photos clearly showed different monkeys. The confusion was due to the publishing of an archive photo from 2011 by the Iranian Student News Agency (ISNA). According to Jonathan McDowell, a Harvard astronomer, "They just mixed that footage with the footage of the 2013 successful launch."
On December 14, 2013, AFP and BBC reported that Iran again sent a monkey to space and safely returned it. Rhesus macaques Aftab (2013.01.28) and Fargam (2013.12.14) were each launched separately into space and safely returned. Researchers continue to study the effects of the space trip on their offspring.
In popular culture
The 2014 animated series All Hail King Julien: Exiled features a horde of highly intelligent chimpanzee cosmonauts, whom they claim the USSR abandoned on a Madagascar islet following the end of the Space Race. Although faithful to "Mother Russia", the chimpanzees vow to take revenge on humankind for declaring their obsolescence.
See also
Laika
Soviet space dogs
Ham (chimpanzee)
Human spaceflight
Animals in space
Space exploration
List of individual apes
List of individual monkeys
Alice King Chatham (sculptor who designed oxygen masks and safety gear for animals in the U.S. space program)
Captain Simian & the Space Monkeys (1996 television series)
Space Chimps (2008 film)
One Small Step: The Story of the Space Chimps (2008 documentary)
Animal testing on non-human primates
References
Further reading
Animals in Space: From Research Rockets to the Space Shuttle, Chris Dubbs and Colin Burgess, Springer-Praxis Books, 2007
External links
ape-o-naut
NPR article on the 50th anniversary of Able and Baker's flight
A humorous look at monkey astronaut names
Monkey astronauts
One Small Step: The Story of the Space Chimps Official Documentary Site
Argentina and the Conquest of Space (Spanish)
Animals in space
Monkeys
Collection of the Smithsonian Institution
Space | Monkeys and apes in space | [
"Chemistry",
"Biology"
] | 2,460 | [
"Animal testing",
"Space-flown life",
"Animals in space"
] |
592,897 | https://en.wikipedia.org/wiki/Hellinger%E2%80%93Toeplitz%20theorem | In functional analysis, a branch of mathematics, the Hellinger–Toeplitz theorem states that an everywhere-defined symmetric operator on a Hilbert space with inner product is bounded. By definition, an operator A is symmetric if
for all x, y in the domain of A. Note that symmetric everywhere-defined operators are necessarily self-adjoint, so this theorem can also be stated as follows: an everywhere-defined self-adjoint operator is bounded. The theorem is named after Ernst David Hellinger and Otto Toeplitz.
This theorem can be viewed as an immediate corollary of the closed graph theorem, as self-adjoint operators are closed. Alternatively, it can be argued using the uniform boundedness principle. One relies on the symmetric assumption, therefore the inner product structure, in proving the theorem. Also crucial is the fact that the given operator A is defined everywhere (and, in turn, the completeness of Hilbert spaces).
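A minimal sketch of the closed-graph argument, using only the symmetry relation above and the continuity of the inner product: suppose $x_n \to x$ and $Ax_n \to y$. Then for every $z$ in the Hilbert space,
$$\langle y, z \rangle = \lim_{n \to \infty} \langle Ax_n, z \rangle = \lim_{n \to \infty} \langle x_n, Az \rangle = \langle x, Az \rangle = \langle Ax, z \rangle ,$$
so $y = Ax$. The graph of A is therefore closed, and since A is defined on the whole (complete) space, the closed graph theorem yields boundedness.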
The Hellinger–Toeplitz theorem reveals certain technical difficulties in the mathematical formulation of quantum mechanics. Observables in quantum mechanics correspond to self-adjoint operators on some Hilbert space, but some observables (like energy) are unbounded. By Hellinger–Toeplitz, such operators cannot be everywhere defined (but they may be defined on a dense subset). Take for instance the quantum harmonic oscillator. Here the Hilbert space is L2(R), the space of square integrable functions on R, and the energy operator H is defined by (assuming the units are chosen such that ℏ = m = ω = 1)
$$[Hf](x) = -\frac{1}{2} \frac{d^2 f}{dx^2}(x) + \frac{1}{2} x^2 f(x).$$
This operator is self-adjoint and unbounded (its eigenvalues are 1/2, 3/2, 5/2, ...), so it cannot be defined on the whole of L2(R).
References
Reed, Michael and Simon, Barry: Methods of Mathematical Physics, Volume 1: Functional Analysis. Academic Press, 1980. See Section III.5.
Theorems in functional analysis
Hilbert spaces | Hellinger–Toeplitz theorem | [
"Physics",
"Mathematics"
] | 422 | [
"Hilbert spaces",
"Theorems in mathematical analysis",
"Theorems in functional analysis",
"Quantum mechanics"
] |
592,935 | https://en.wikipedia.org/wiki/Secure%20channel | In cryptography, a secure channel is a means of data transmission that is resistant to overhearing and tampering. A confidential channel is a means of data transmission that is resistant to overhearing, or eavesdropping (e.g., reading the content), but not necessarily resistant to tampering (i.e., manipulating the content). An authentic channel is a means of data transmission that is resistant to tampering but not necessarily resistant to overhearing.
In contrast to a secure channel, an insecure channel is unencrypted and may be subject to eavesdropping and tampering. Secure communications are possible over an insecure channel if the content to be communicated is encrypted prior to transmission.
Secure channels in the real world
There are no perfectly secure channels in the real world. There are, at best, only ways to make insecure channels (e.g., couriers, homing pigeons, diplomatic bags, etc.) less insecure: padlocks (between courier wrists and a briefcase), loyalty tests, security investigations, and guns for courier personnel, diplomatic immunity for diplomatic bags, and so forth.
In 1976, two researchers proposed a key exchange technique (now named after them)—Diffie–Hellman key exchange (D-H). This protocol allows two parties to generate a key only known to them, under the assumption that a certain mathematical problem (e.g., the Diffie–Hellman problem in their proposal) is computationally infeasible (i.e., very very hard) to solve, and that the two parties have access to an authentic channel. In short, that an eavesdropper—conventionally termed 'Eve', who can listen to all messages exchanged by the two parties, but who can not modify the messages—will not learn the exchanged key. Such a key exchange was impossible with any previously known cryptographic schemes based on symmetric ciphers, because with these schemes it is necessary that the two parties exchange a secret key at some prior time, hence they require a confidential channel at that time which is just what we are attempting to build.
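As an illustration of the idea, here is a minimal sketch of such an exchange over an authentic channel (the parameters below are toy values chosen for readability and are far too small for real security; deployed systems use primes of 2048 bits or more):

import secrets

# Publicly agreed parameters: a prime modulus p and a generator g.
# Assumption: these toy values only illustrate the arithmetic.
p = 4294967291  # the prime 2**32 - 5
g = 5

# Each party picks a private exponent and publishes g**private mod p.
a = secrets.randbelow(p - 2) + 1   # Alice's secret
b = secrets.randbelow(p - 2) + 1   # Bob's secret
A = pow(g, a, p)                   # sent to Bob over the authentic channel
B = pow(g, b, p)                   # sent to Alice over the authentic channel

# Both sides derive the same value; Eve observes only p, g, A and B.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob

Eve's task of recovering the shared value from p, g, A and B is the Diffie–Hellman problem mentioned above; the sketch deliberately omits the authentication machinery that protects against an attacker who can modify messages.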
Most cryptographic techniques are trivially breakable if keys are not exchanged securely or, if they actually were so exchanged, if those keys become known in some other way— burglary or extortion, for instance. An actually secure channel will not be required if an insecure channel can be used to securely exchange keys, and if burglary, bribery, or threat aren't used. The eternal problem has been and of course remains—even with modern key exchange protocols—how to know when an insecure channel worked securely (or alternatively, and perhaps more importantly, when it did not), and whether anyone has actually been bribed or threatened or simply lost a notebook (or a notebook computer) with key information in it. These are hard problems in the real world and no solutions are known—only expedients, jury rigs, and workarounds.
Future possibilities
Researchers have proposed and demonstrated quantum cryptography in order to create a secure channel.
It is not clear whether the special conditions under which it can be made to work are practical in the real world of noise, dirt, and imperfection in which most everything is required to function. Thus far, actual implementation of the technique is exquisitely finicky and expensive, limiting it to very special purpose applications. It may also be vulnerable to attacks specific to particular implementations and imperfections in the optical components of which the quantum cryptographic equipment is built. While implementations of classical cryptographic algorithms have received worldwide scrutiny over the years, only a limited amount of public research has been done to assess security of the present-day implementations of quantum cryptosystems, mostly because they are not in widespread use as of 2014.
Modeling a secure channel
Security definitions for a secure channel try to model its properties independently from its concrete instantiation. A good understanding of these properties is needed before designing a secure channel, and before being able to assess its appropriateness of employment in a cryptographic protocol. This is a topic of provable security. A definition of a secure channel that remains secure, even when used in arbitrary cryptographic protocols is an important building block for universally composable cryptography.
A universally composable authenticated channel can be built using digital signatures and a public key infrastructure.
Universally composable confidential channels are known to exist under computational hardness assumptions based on hybrid encryption and a public key infrastructure.
See also
Cryptochannel
Hybrid encryption
Secure communication
References
Secure communication
Cryptography | Secure channel | [
"Mathematics",
"Engineering"
] | 944 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
592,949 | https://en.wikipedia.org/wiki/Pharmacognosy | Pharmacognosy is the study of crude drugs obtained from medicinal plants, animals, fungi, and other natural sources. The American Society of Pharmacognosy defines pharmacognosy as "the study of the physical, chemical, biochemical, and biological properties of drugs, drug substances, or potential drugs or drug substances of natural origin as well as the search for new drugs from natural sources".
Description
The word "pharmacognosy" is derived from two Greek words: , (drug), and gnosis (knowledge) or the Latin verb cognosco (, 'with', and , 'know'; itself a cognate of the Greek verb , , meaning 'I know, perceive'), meaning 'to conceptualize' or 'to recognize'.
The term "pharmacognosy" was used for the first time by the German physician Johann Adam Schmidt (1759–1809) in his published book Lehrbuch der Materia Medica in 1811, and by Anotheus Seydler in 1815, in his Analecta Pharmacognostica.
Originally—during the 19th century and the beginning of the 20th century—"pharmacognosy" was used to define the branch of medicine or commodity sciences (Warenkunde in German) which deals with drugs in their crude, or unprepared form. Crude drugs are the dried, unprepared material of plant, animal or mineral origin, used for medicine. The study of these materials under the name Pharmakognosie was first developed in German-speaking areas of Europe, while other language areas often used the older term materia medica taken from the works of Galen and Dioscorides. In German, the term Drogenkunde ("science of crude drugs") is also used synonymously.
As late as the beginning of the 20th century, the subject had developed mainly on the botanical side, being particularly concerned with the description and identification of drugs both in their whole state and in powder form. Such branches of pharmacognosy are still of fundamental importance, particularly for botanical products (widely available as dietary supplements in the U.S. and Canada), quality control purposes, pharmacopoeial protocols and related health regulatory frameworks. At the same time, development in other areas of research has enormously expanded the subject. The advent of the 21st century brought a renaissance of pharmacognosy, and its conventional botanical approach has been broadened up to molecular and metabolomic levels.
In addition to the previously mentioned definition, the American Society of Pharmacognosy defines pharmacognosy as "the study of natural product molecules (typically secondary metabolites) that are useful for their medicinal, ecological, gustatory, or other functional properties." Similarly, the mission of the Pharmacognosy Institute at the University of Illinois at Chicago involves plant-based and plant-related health products for the benefit of human health. Other definitions are more encompassing, drawing on a broad spectrum of biological subjects, including botany, ethnobotany, marine biology, microbiology, herbal medicine, chemistry, biotechnology, phytochemistry, pharmacology, pharmaceutics, clinical pharmacy, and pharmacy practice. Fields of study related to pharmacognosy include:
medical ethnobotany: the study of traditional uses of plants for medicinal purposes;
ethnopharmacology: the study of pharmacological qualities of traditional medicinal substances;
phytotherapy: the study of medicinal use of plant extracts;
phytochemistry: the study of chemicals derived from plants (including the identification of new drug candidates derived from plant sources);
zoopharmacognosy: the process by which animals self-medicate, by selecting and using plants, soils, and insects to treat and prevent disease;
marine pharmacognosy: the study of chemicals derived from marine organisms.
Biological background
All plants produce chemical compounds as part of their normal metabolic activities. These phytochemicals are divided into (1) primary metabolites such as sugars and fats, which are found in all plants; and (2) secondary metabolites—compounds which are found in a smaller range of plants, serving more specific functions. For example, some secondary metabolites are toxins used by plants to deter predation and others are pheromones used to attract insects for pollination. It is these secondary metabolites and pigments that can have therapeutic actions in humans and which can be refined to produce drugs—examples are inulin from the roots of dahlias, quinine from the cinchona, THC and CBD from the flowers of cannabis, morphine and codeine from the poppy, and digoxin from the foxglove.
Plants synthesize a variety of phytochemicals; most belong to a few major classes:
Alkaloids are a class of chemical compounds containing a nitrogen ring. Alkaloids are produced by a large variety of organisms, including bacteria, fungi, plants, and animals, and are part of the group of natural products (also called secondary metabolites). Many alkaloids can be purified from crude extracts by acid-base extraction. Many alkaloids are toxic to other organisms.
Polyphenols ( phenolics) are compounds that contain phenol rings. The anthocyanins that give grapes their purple color, the isoflavones, the phytoestrogens from soy and the tannins that give tea its astringency are phenolics.
Glycosides are molecules in which a sugar is bound to a non-carbohydrate moiety, usually a small organic molecule. Glycosides play numerous important roles in living organisms. Many plants store chemicals in the form of inactive glycosides. These can be activated by enzyme hydrolysis, which causes the sugar part to be broken off, making the chemical available for use.
Terpenes are a large and diverse class of organic compounds, produced by a variety of plants, particularly conifers, which are often strong smelling and thus may have a protective function. They are the major components of resins, and of turpentine produced from resins. When terpenes are modified chemically, such as by oxidation or rearrangement of the carbon skeleton, the resulting compounds are generally referred to as terpenoids. Terpenes and terpenoids are the primary constituents of the essential oils of many types of plants and flowers. Essential oils are used widely as natural flavor additives for food, as fragrances in perfumery, and in traditional and alternative medicines such as aromatherapy. Synthetic variations and derivatives of natural terpenes and terpenoids also greatly expand the variety of aromas used in perfumery and flavors used in food additives. The fragrance of rose and lavender is due to monoterpenes. The carotenoids produce shades of red, yellow and orange in pumpkin, maize, and tomatoes.
Natural products chemistry
A typical protocol to isolate a pure chemical agent from natural origin is bioassay-guided fractionation, meaning step-by-step separation of extracted components based on differences in their physicochemical properties, and assessing the biological activity, followed by the next round of separation and assaying. Typically, such work is initiated after a given crude drug formulation (typically prepared by solvent extraction of the natural material) is deemed "active" in a particular in vitro assay. If the end goal of the work at hand is to identify which one(s) of the scores or hundreds of compounds are responsible for the observed in vitro activity, the path to that end is fairly straightforward:
fractionate the crude extract, e.g. by solvent partitioning or chromatography.
test the fractions thereby generated with in vitro assays.
repeat steps 1) and 2) until pure, active compounds are obtained.
determine structure(s) of active compound(s), typically by using spectroscopic methods.
In vitro activity does not necessarily translate to biological activity in humans or other living systems.
Herbal
In some countries in Asia and Africa, up to 80% of the population relies on traditional medicine (including herbal medicine) for primary health care. Native American cultures also relied on traditional medicine such as ceremonial smoking of tobacco, potlatch ceremonies, and herbalism prior to European colonization. Knowledge of traditional medicinal practices is disappearing in indigenous communities, particularly in the Amazon.
With worldwide research into pharmacology and medicine, traditional or ancient herbal medicines are often translated into modern remedies. An example is the anti-malarial drug artemisinin, isolated from Artemisia annua, a herb known in Chinese medicine to treat fever. Its plant extracts were found to have antimalarial activity, leading to the Nobel Prize-winning discovery of artemisinin.
Microscopical evaluation
Microscopic evaluation is essential for the initial identification of herbs, identifying small fragments of crude or powdered herbs, identifying adulterants (such as insects, animal feces, mold, fungi, etc.), and recognizing the plant by its characteristic tissue features. Techniques such as microscopic linear measurements, determination of leaf constants, and quantitative microscopy are also utilized in this evaluation. The determination of leaf constants includes stomatal number, stomatal index, vein islet number, vein termination number, and palisade ratio.
The stomatal index is the percentage formed by the number of stomata divided by the total number of epidermal cells, with each stoma being counted as one cell:
$$S.I. = \frac{S}{E + S} \times 100$$
where:
S.I. is the stomatal index
S is the number of stomata per unit area
E is the number of epidermal cells in the same unit area.
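As a worked example with hypothetical counts: a leaf field with S = 30 stomata and E = 270 ordinary epidermal cells per unit area gives S.I. = 30 / (270 + 30) × 100 = 10%.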
See also
Bioprospecting
List of plants used in herbalism
Pharmacognosy Reviews
References
External links
Pharmacy | Pharmacognosy | [
"Chemistry"
] | 2,042 | [
"Pharmacology",
"Pharmacognosy",
"Pharmacy"
] |
593,115 | https://en.wikipedia.org/wiki/Hydrogen%20hypothesis | The hydrogen hypothesis is a model proposed by William F. Martin and Miklós Müller in 1998 that describes a possible way in which the mitochondrion arose as an endosymbiont within a prokaryotic host in the archaea, giving rise to a symbiotic association of two cells from which the first eukaryotic cell could have arisen (symbiogenesis).
According to the hydrogen hypothesis:
The hosts that acquired the mitochondria were hydrogen-dependent archaea, possibly similar in physiology to modern methanogenic archaea, which use hydrogen and carbon dioxide to produce methane;
The future mitochondrion was a facultatively anaerobic eubacterium which produced hydrogen and carbon dioxide as byproducts of anaerobic respiration;
A symbiotic relationship between the two started, based on the host's hydrogen dependence (anaerobic syntrophy).
Mechanism
The hypothesis differs from many alternative views within the endosymbiotic theory framework, which suggest that the first eukaryotic cells evolved a nucleus but lacked mitochondria, the latter arising as a eukaryote engulfed a primitive bacterium that eventually became the mitochondrion. The hypothesis attaches evolutionary significance to hydrogenosomes and provides a rationale for their common ancestry with mitochondria. Hydrogenosomes are anaerobic mitochondria that produce ATP by, as a rule, converting pyruvate into hydrogen, carbon dioxide and acetate. Examples from modern biology are known where methanogens cluster around hydrogenosomes within eukaryotic cells. Most theories within the endosymbiotic theory framework do not address the common ancestry of mitochondria and hydrogenosomes. The hypothesis provides a straightforward explanation for the observation that eukaryotes are genetic chimeras with genes of archaeal and eubacterial ancestry. Furthermore, it would imply that archaea and eukarya split after the modern groups of archaea appeared. Most theories within the endosymbiotic theory framework predict that some eukaryotes never possessed mitochondria. The hydrogen hypothesis predicts that no primitively mitochondrion-lacking eukaryotes ever existed. In the 15 years following the publication of the hydrogen hypothesis, this specific prediction has been tested many times and found to be in agreement with observation.
In 2015, the discovery and placement of the Lokiarchaeota (an archaeal lineage possessing an expanded genetic repertoire including genes involved in membrane remodeling and actin cytoskeletal structure) as the sister group to eukaryotes called into question particular tenets of the hydrogen hypothesis, as Lokiarchaeota appear to lack methanogenesis.
See also
Archezoa
Eocyte hypothesis
References
Hydrogen biology
Biological hypotheses | Hydrogen hypothesis | [
"Biology"
] | 585 | [
"Biological hypotheses"
] |
593,135 | https://en.wikipedia.org/wiki/Cult%20%28religious%20practice%29 | Cult is the care (Latin: cultus) owed to deities and their temples, shrines, or churches; cult is embodied in ritual and ceremony. Its presence or former presence is made concrete in temples, shrines and churches, and cult images, including votive offerings at votive sites.
Etymology
Cicero defined religio as cultus deorum, "the cultivation of the gods". The "cultivation" necessary to maintain a specific deity was that god's cultus, "cult", and required "the knowledge of giving the gods their due" (scientia colendorum deorum). The noun cultus originates from the past participle of the verb colo, colere, colui, cultus, "to tend, take care of, cultivate", originally meaning "to dwell in, inhabit" and thus "to tend, cultivate land (ager); to practice agriculture", an activity fundamental to Roman identity even when Rome as a political center had become fully urbanized.
Cultus is often translated as "cult" without the negative connotations the word may have in English, or with the Old English word "worship", but it implies the necessity of active maintenance beyond passive adoration. Cultus was expected to matter to the gods as a demonstration of respect, honor, and reverence; it was an aspect of the contractual nature of Roman religion (see do ut des). Augustine of Hippo echoes Cicero's formulation when he declares, "religion is nothing other than the cultus of God."
The term "cult" first appeared in English in 1617, derived from the French culte, meaning "worship" which in turn originated from the Latin word cultus meaning "care, cultivation, worship". The meaning "devotion to a person or thing" is from 1829. Starting about 1920, "cult" acquired an additional six or more positive and negative definitions. In French, for example, sections in newspapers giving the schedule of worship for Catholic services are headed Culte Catholique, while the section giving the schedule of Protestant services is headed culte réformé.
Outward religious practice
In the specific context of the Greek hero cult, Carla Antonaccio wrote:
In the Catholic Church, outward religious practice (cultus) is the technical term for Roman Catholic devotions or veneration extended to a particular saint, not for the worship of God. Catholicism and the Eastern Orthodox Church make a major distinction between latria, the worship that is offered to God alone, and dulia, the veneration offered to the saints; the veneration of Mary is often referred to as hyperdulia.
See also
History of religion
Mythology
Place of worship
Religious fanaticism
References
Further reading
Ancient roman
Religious practices
Rituals
Ancient Roman rituals | Cult (religious practice) | [
"Biology"
] | 573 | [
"Behavior",
"Religious practices",
"Human behavior"
] |
593,140 | https://en.wikipedia.org/wiki/Liisi%20Oterma | Liisi Oterma (; 6 January 1915 – 4 April 2001) was a Finnish astronomer, the first woman to get a Ph.D. degree in astronomy in Finland.
She studied mathematics and astronomy at the University of Turku, and soon became Yrjö Väisälä's assistant and worked on the search for minor planets. She obtained her master's degree in 1938. From 1941 to 1965, Oterma worked as an observer at the university's observatory. She obtained her PhD in 1955 with a dissertation on telescope optics. She was the first Finnish woman to obtain a PhD in astronomy.
In 1959, Oterma became a docent of astronomy, and from 1965 to 1978 she was a professor at the University of Turku. In 1971, she succeeded Väisälä as the director of the Tuorla Observatory. She was director of the astronomical-optical research institute at the University of Turku from 1971 to 1975.
Oterma was interested in languages and spoke German, French, Italian, Spanish, Esperanto, Hungarian, English and also Arabic, for example. Her original plan was to study Sanskrit, but it was not offered at the University of Turku, so she ultimately settled on astronomy.
Oterma was quiet, modest in nature, and fearful of publicity. Anders Reiz, a professor at the Copenhagen Observatory, among others, said Oterma was “silent in eleven languages”. Oterma avoided appearing in photographs, and there are only a handful of pictures of her.
She discovered or co-discovered several comets, including periodic comets 38P/Stephan-Oterma, 39P/Oterma and 139P/Väisälä–Oterma. She is also credited by the Minor Planet Center (MPC) with the discovery of 54 minor planets between 1938 and 1953, and ranks 153rd on MPC's all-time discovery chart.
The Hildian asteroid 1529 Oterma, discovered by Finnish astronomer Yrjö Väisälä in 1938, was named in her honour.
Minor planets discovered
References
1915 births
2001 deaths
20th-century women scientists
20th-century astronomers
Discoverers of asteroids
Discoverers of comets
Finnish astronomers
Women astronomers
Astronomy-optics society | Liisi Oterma | [
"Astronomy"
] | 456 | [
"Women astronomers",
"Astronomers"
] |
593,212 | https://en.wikipedia.org/wiki/Twip | A twip (abbreviating "twentieth of a point" or "twentieth of an inch point") is a typographical measurement, defined as of a typographical point. One twip is inch, or 17.64 μm.
In computing
Twips are screen-independent units used to ensure that the proportions of screen elements are the same on all display systems. A twip is defined as 1/1440 of an inch (approximately 17.64 μm).
A pixel is a screen-dependent unit, standing for 'picture element'. A pixel is a dot that represents the smallest graphical measurement on a screen.
Twips are the default unit of measurement in Visual Basic (version 6 and earlier, prior to VB.NET). Converting between twips and screen pixels is achieved using the TwipsPerPixelX and TwipsPerPixelY properties or the ScaleX and ScaleY methods.
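As an illustration of the conversion arithmetic, a short sketch follows (written in Python rather than Visual Basic; the 96 dots-per-inch figure is an assumption matching common Windows displays, and is where the familiar value of 15 twips per pixel comes from):

# One twip is 1/1440 inch; at 96 pixels per inch this gives
# 1440 / 96 = 15 twips per pixel, the usual VB6 default.
TWIPS_PER_INCH = 1440

def twips_per_pixel(dpi=96):
    # Assumption: square pixels at the stated resolution.
    return TWIPS_PER_INCH / dpi

def pixels_to_twips(pixels, dpi=96):
    return pixels * twips_per_pixel(dpi)

def twips_to_pixels(twips, dpi=96):
    return twips / twips_per_pixel(dpi)

print(twips_per_pixel())      # 15.0
print(pixels_to_twips(800))   # 12000.0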
Twips can be used with Symbian OS bitmap images for automatic scaling from bitmap pixels to device pixels. They are also used in Rich Text Format from Microsoft for platform-independent exchange and they are the base length unit in OpenOffice.org and its fork LibreOffice.
Flash internally specifies most sizes in units it calls twips, but which are really 1/20 of a logical pixel; since a logical pixel is commonly rendered at 96 per inch, such a unit works out to 3/4 of an actual twip.
See also
Himetric
References
MSDN Library — com.ms.wfc.ui.CoordinateSystem.TWIP
Free On-Line Dictionary of Computing — twip
Foundation, ActionScript 3.0 Animation, Making Things Move! by Keith Peters (pbk)
Converting between twips and pixels - Ruby code
Typography
Units of length | Twip | [
"Mathematics"
] | 367 | [
"Quantity",
"Units of measurement",
"Units of length"
] |
593,231 | https://en.wikipedia.org/wiki/Trunk%20%28botany%29 | In botany, the trunk (or bole) is the stem and main wooden axis of a tree, which is an important feature in tree identification, and which often differs markedly from the bottom of the trunk to the top, depending on the species.
The trunk is the most important part of the tree for timber production.
Occurrence
Trunks occur both in "true" woody plants and non-woody plants such as palms and other monocots, though the internal physiology is different in each case. In all plants, trunks thicken over time due to the formation of secondary growth, or, in monocots, pseudo-secondary growth. Trunks can be vulnerable to damage, including sunburn.
Vocabulary
Trunks which are cut down for making lumber are generally called logs; if they are cut to a specific length, they are called bolts. The term "log" is used informally in English to describe any felled trunk that has been detached from its roots. A stump is the part of a trunk remaining in the ground after the tree has been felled, or the earth-end of an uprooted tree which retains its unearthed roots.
Structure of the trunk
The trunk consists of five main parts: The outer bark, inner bark (phloem), cambium, sapwood (live xylem), and heartwood (dead xylem). From the outside of the tree working in:
The first layer is the outer bark; this is the protective outermost layer of the trunk.
Under this is the inner bark which is called the phloem. The phloem is how the tree transports nutrients from the roots to the shoots and vice versa.
The next layer is the cambium, a very thin layer of undifferentiated cells that divide to replenish the phloem cells on the outside and the xylem cells to the inside. The cambium contains the growth meristem of the trunk.
Directly inside of the cambium is the sapwood, or the live xylem cells. These cells transport the water through the tree. The xylem also stores starch inside the tree.
At the center of the tree is the heartwood. The heartwood is made up of dead xylem cells that have been filled with resins and minerals; these keep other organisms from infecting and growing in the center of the tree.
See also
Log (disambiguation)
Tree measurement
Tree volume measurement
References
External links
Plant morphology
"Biology"
] | 523 | [
"Plant morphology",
"Plants"
] |
593,233 | https://en.wikipedia.org/wiki/Trunking | In telecommunications, trunking is a technology for providing network access to multiple clients simultaneously by sharing a set of circuits, carriers, channels, or frequencies, instead of providing individual circuits or channels for each client. This is reminiscent of the structure of a tree with one trunk and many branches. Trunking in telecommunication originated in telegraphy, and later in telephone systems where a trunk line is a communications channel between telephone exchanges.
Other applications include the trunked radio systems commonly used by police agencies.
In the form of link aggregation and VLAN tagging, trunking has been applied in computer networking.
Telecommunications
A trunk line is a circuit connecting telephone switchboards (or other switching equipment), as distinguished from local loop circuit which extends from telephone exchange switching equipment to individual telephones or information origination/termination equipment.
Trunk lines are used for connecting a private branch exchange (PBX) to a telephone service provider. When needed they can be used by any telephone connected to the PBX, while the station lines to the extensions serve only one station’s telephones. Trunking saves cost, because there are usually fewer trunk lines than extension lines, since it is unusual in most offices to have all extension lines in use for external calls at once. Trunk lines transmit voice and data in formats such as analog, T1, E1, ISDN, PRI or SIP. The dial tone lines for outgoing calls are called DDCO (Direct Dial Central Office) trunks.
In the UK and the Commonwealth countries, a trunk call was the term for long-distance calling which traverses one or more trunk lines and involving more than one telephone exchange. This is in contrast to making a local call which involves a single exchange and typically no trunk lines.
Trunking also refers to the connection of switches and circuits within a telephone exchange. Trunking is closely related to the concept of grading. Trunking allows a group of inlet switches to share access to a group of outlet circuits at the same time. Thus the service provider can provide a lesser number of circuits than might otherwise be required, allowing many users to "share" a smaller number of connections and achieve capacity savings.
Computer networks
Link aggregation
In computer networking, port trunking is the use of multiple concurrent network connections to aggregate the link speed of each participating port and cable, also called link aggregation. Such high-bandwidth link groups may be used to interconnect switches or to connect high-performance servers to a network.
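A common way to spread traffic across such an aggregated group while keeping each conversation's frames in order is to hash on address fields and pick a member link from the result. The Python sketch below is a simplified illustration of that idea, not the algorithm of any particular switch; the CRC32 hash, function name, and MAC-string inputs are arbitrary choices for this example:

```python
import zlib

def choose_link(src_mac: str, dst_mac: str, n_links: int) -> int:
    """Pick a member link of an aggregated group for one frame.
    Hashing the address pair keeps all frames of a conversation on the
    same link (preserving frame order) while different conversations
    are spread across the group."""
    key = f"{src_mac}->{dst_mac}".encode()
    return zlib.crc32(key) % n_links

# The same address pair always maps to the same of the 4 member links:
a = choose_link("aa:00:00:00:00:01", "bb:00:00:00:00:02", 4)
b = choose_link("aa:00:00:00:00:01", "bb:00:00:00:00:02", 4)
assert a == b and 0 <= a < 4
```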
VLAN
In the context of Ethernet VLANs, Cisco uses the term to mean carrying multiple VLANs through a single network link through the use of a trunking protocol. To allow for multiple VLANs on one link, frames from individual VLANs must be identified. The most common and preferred method, IEEE 802.1Q, adds a tag to the Ethernet frame labeling it as belonging to a certain VLAN. Since 802.1Q is an open standard it can work with equipment from any vendor. Cisco also has a (now deprecated) proprietary trunking protocol called Inter-Switch Link which encapsulates the Ethernet frame with its own container, which labels the frame as belonging to a specific VLAN. 3Com used proprietary Virtual LAN Trunking (VLT) before 802.1Q was defined.
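For illustration, the 802.1Q tag itself is small: it inserts the TPID value 0x8100 after the two MAC addresses, followed by a 16-bit tag control field packing a 3-bit priority, a 1-bit DEI flag, and the 12-bit VLAN ID. A minimal Python sketch of parsing such a tag from a raw Ethernet frame (the function name and example frame are this sketch's own assumptions):

```python
import struct

TPID_8021Q = 0x8100

def parse_vlan_tag(frame: bytes):
    """Return (vlan_id, priority, inner_ethertype) if the frame carries
    an 802.1Q tag, else None.  After the two 6-byte MAC addresses the
    tag holds the TPID (0x8100) and a 16-bit TCI field packing
    PCP (3 bits), DEI (1 bit) and the 12-bit VLAN ID."""
    tpid, tci, ethertype = struct.unpack_from("!HHH", frame, 12)
    if tpid != TPID_8021Q:
        return None
    return tci & 0x0FFF, tci >> 13, ethertype

# Tagged frame fragment: VLAN 100, priority 5, IPv4 (0x0800) payload.
frame = b"\x00" * 12 + struct.pack("!HHH", TPID_8021Q, (5 << 13) | 100, 0x0800)
assert parse_vlan_tag(frame) == (100, 5, 0x0800)
```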
Radio communications
In two-way radio communications, trunking refers to the ability of transmissions to be served by free channels whose availability is determined by algorithmic protocols. In conventional (i.e., not trunked) radio, users of a single service share one or more exclusive radio channels and must wait their turn to use them, analogous to the operation of a group of cashiers in a grocery store, where each cashier serves his/her own line of customers. The cashier represents each radio channel, and each customer represents a radio user transmitting on their radio.
Trunked radio systems (TRS) pool all of the cashiers (channels) into one group and use a store manager (site controller) that assigns incoming shoppers to free cashiers as determined by the store's policies (TRS protocols).
In a TRS, individual transmissions in any conversation may take place on several different channels. In the shopping analogy, this is as if a family of shoppers checks out all at once and are assigned different cashiers by the traffic manager. Similarly, if a single shopper checks out more than once, they may be assigned a different cashier each time.
Trunked radio systems provide greater efficiency at the cost of greater management overhead. The store manager's orders must be conveyed to all the shoppers. This is done by assigning one or more radio channels as the "control channel". The control channel transmits data from the site controller that runs the TRS, and is continuously monitored by all of the field radios in the system so that they know how to follow the various conversations between members of their talkgroups (families) and other talkgroups as they hop from radio channel to radio channel.
TRS's have grown massively in their complexity since their introduction, and now include multi-site systems that can cover entire states or groups of states. This is similar to the idea of a chain of grocery stores. The shopper generally goes to the nearest grocery store, but if there are complications or congestion, the shopper may opt to go to a neighboring store. Each store in the chain can talk to each other and pass messages between shoppers at different stores if necessary, and they provide backup to each other: if a store has to be closed for repair, then other stores pick up the customers.
TRS's have greater risks to overcome than conventional radio systems in that a loss of the store manager (site controller) would cause the system's traffic to no longer be managed. In this case, most of the time the TRS will automatically switch to an alternate control channel, or in more rare circumstances, conventional operation. In spite of these risks, TRS's usually maintain reasonable uptime.
TRS's are more difficult to monitor via radio scanner than conventional systems; however, larger manufacturers of radio scanners have introduced models that, with a little extra programming, are able to follow TRS's quite efficiently.
References
Communication circuits
Networks | Trunking | [
"Engineering"
] | 1,265 | [
"Telecommunications engineering",
"Communication circuits"
] |
593,255 | https://en.wikipedia.org/wiki/Venturi%20effect | The Venturi effect is the reduction in fluid pressure that results when a moving fluid speeds up as it flows from one section of a pipe to a smaller section. The Venturi effect is named after its discoverer, the 18th-century Italian physicist Giovanni Battista Venturi.
The effect has various engineering applications, as the reduction in pressure inside the constriction can be used both for measuring the fluid flow and for moving other fluids (e.g. in a vacuum ejector).
Background
In inviscid fluid dynamics, an incompressible fluid's velocity must increase as it passes through a constriction in accord with the principle of mass continuity, while its static pressure must decrease in accord with the principle of conservation of mechanical energy (Bernoulli's principle) or according to the Euler equations. Thus, any gain in kinetic energy a fluid may attain by its increased velocity through a constriction is balanced by a drop in pressure because of its loss in potential energy.
By measuring pressure, the flow rate can be determined, as in various flow measurement devices such as Venturi meters, Venturi nozzles and orifice plates.
Referring to the adjacent diagram, using Bernoulli's equation in the special case of steady, incompressible, inviscid flows (such as the flow of water or other liquid, or low-speed flow of gas) along a streamline, the theoretical static pressure drop at the constriction is given by

$p_1 - p_2 = \frac{\rho}{2}\left(v_2^2 - v_1^2\right)$

where $\rho$ is the density of the fluid, $v_1$ is the (slower) fluid velocity where the pipe is wider, and $v_2$ is the (faster) fluid velocity where the pipe is narrower (as seen in the figure). The static pressure at each position is measured using a small tube either outside and ending at the wall or into the pipe where the small tube is perpendicular to the flow direction.
Choked flow
The limiting case of the Venturi effect is when a fluid reaches the state of choked flow, where the fluid velocity approaches the local speed of sound. When a fluid system is in a state of choked flow, a further decrease in the downstream pressure environment will not lead to an increase in velocity, unless the fluid is compressed.
The mass flow rate for a compressible fluid will increase with increased upstream pressure, which will increase the density of the fluid through the constriction (though the velocity will remain constant). This is the principle of operation of a de Laval nozzle. Increasing source temperature will also increase the local sonic velocity, thus allowing increased mass flow rate, but only if the nozzle area is also increased to compensate for the resulting decrease in density.
Expansion of the section
The Bernoulli equation is invertible, and pressure should rise when a fluid slows down. Nevertheless, if there is an expansion of the tube section, turbulence will appear, and the theorem will not hold. In all experimental Venturi tubes, the pressure in the entrance is compared to the pressure in the middle section; the output section is never compared with them.
Experimental apparatus
Venturi tubes
The simplest apparatus is a tubular setup known as a Venturi tube or simply a Venturi (plural: "Venturis" or occasionally "Venturies"). Fluid flows through a length of pipe of varying diameter. To avoid undue aerodynamic drag, a Venturi tube typically has an entry cone of 30 degrees and an exit cone of 5 degrees.
Venturi tubes are often used in processes where permanent pressure loss is not tolerable and where maximum accuracy is needed in case of highly viscous liquids.
Orifice plate
Venturi tubes are more expensive to construct than simple orifice plates, and both function on the same basic principle. However, for any given differential pressure, orifice plates cause significantly more permanent energy loss.
Instrumentation and measurement
Both Venturi tubes and orifice plates are used in industrial applications and in scientific laboratories for measuring the flow rate of liquids.
Flow rate
A Venturi can be used to measure the volumetric flow rate, $Q$, using Bernoulli's principle.

Since

$Q = v_1 A_1 = v_2 A_2$

then

$Q = A_1 \sqrt{\frac{2}{\rho}\cdot\frac{p_1 - p_2}{\left(\frac{A_1}{A_2}\right)^2 - 1}} = A_2 \sqrt{\frac{2}{\rho}\cdot\frac{p_1 - p_2}{1 - \left(\frac{A_2}{A_1}\right)^2}}$
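As a numeric illustration of the formula just given, here is a short Python sketch; the function name and the example numbers are illustrative, and the ideal result ignores the discharge coefficient a real meter calibration would apply:

```python
from math import pi, sqrt

def venturi_flow_rate(d1: float, d2: float, dp: float, rho: float) -> float:
    """Ideal volumetric flow rate (m^3/s) through a Venturi.
    d1, d2: inlet and throat diameters (m), with d1 > d2;
    dp: measured static pressure drop p1 - p2 (Pa);
    rho: fluid density (kg/m^3).
    Combines continuity (Q = v1*A1 = v2*A2) with Bernoulli's equation."""
    a1 = pi * d1**2 / 4
    a2 = pi * d2**2 / 4
    return a1 * sqrt((2 * dp / rho) / ((a1 / a2) ** 2 - 1))

# Water in a 50 mm pipe necking to a 25 mm throat, with a 10 kPa drop:
q = venturi_flow_rate(0.050, 0.025, 10e3, 1000.0)
print(f"{q * 1000:.2f} L/s")  # about 2.27 L/s
```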
A Venturi can also be used to mix a liquid with a gas. If a pump forces the liquid through a tube connected to a system consisting of a Venturi to increase the liquid speed (the diameter decreases), a short piece of tube with a small hole in it, and last a Venturi that decreases speed (so the pipe gets wider again), the gas will be sucked in through the small hole because of changes in pressure. At the end of the system, a mixture of liquid and gas will appear. See aspirator and pressure head for discussion of this type of siphon.
Differential pressure
As fluid flows through a Venturi, the expansion and compression of the fluids cause the pressure inside the Venturi to change. This principle can be used in metrology for gauges calibrated for differential pressures. This type of pressure measurement may be more convenient, for example, to measure fuel or combustion pressures in jet or rocket engines.
The first large-scale Venturi meters to measure liquid flows were developed by Clemens Herschel, who used them to measure small and large flows of water and wastewater beginning at the end of the 19th century. While working for the Holyoke Water Power Company, Herschel developed the means for measuring these flows to determine the water power consumption of different mills on the Holyoke Canal System. He began developing the device in 1886, and two years later described his invention of the Venturi meter to William Unwin in a letter dated June 5, 1888.
Compensation for temperature, pressure, and mass
Fundamentally, pressure-based meters measure kinetic energy density. Bernoulli's equation (used above) relates this to mass density and volumetric flow:

$\Delta p = \frac{1}{2}\rho v^2 = \frac{1}{2}\rho \left(\frac{Q}{A}\right)^2 = k\,\rho Q^2$

where constant terms are absorbed into $k$. Using the definitions of density ($\rho = m/V$), molar concentration ($C = n/V$), and molar mass ($M = m/n$), one can also derive mass flow or molar flow (i.e. standard volume flow):

$\Delta p = k\,\rho Q^2 = \frac{k}{\rho}\,\dot{m}^2 = \frac{kM}{C}\,\dot{n}^2$

However, measurements outside the design point must compensate for the effects of temperature, pressure, and molar mass on density and concentration. The ideal gas law is used to relate actual values to design values:

$\rho = \frac{MP}{RT}, \qquad C = \frac{P}{RT}$

Substituting these two relations into the pressure-flow equations above yields the fully compensated flows:

$\Delta p = \frac{kMP}{RT}\,Q^2 = \frac{kRT}{MP}\,\dot{m}^2 = \frac{kMRT}{P}\,\dot{n}^2$

$Q$, $\dot{m}$, or $\dot{n}$ are easily isolated by dividing and taking the square root. Note that pressure-, temperature-, and mass-compensation is required for every flow, regardless of the end units or dimensions. Also, from the definitions above, we see the relations:

$\dot{m} = \rho\,Q = M\,\dot{n}$
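As an illustration of the compensation step, the sketch below rescales an indicated volumetric flow from design-point conditions to actual conditions using the ideal-gas relation $\rho = MP/RT$. The function name, argument order, and numbers are assumptions of this example, not any instrument's API:

```python
from math import sqrt

def compensate_volume_flow(q_indicated: float,
                           p: float, t: float, m: float,
                           p0: float, t0: float, m0: float) -> float:
    """Correct a dp-meter's indicated volumetric flow for operation away
    from its design point.  Since Q ~ sqrt(dp / rho) and, by the ideal
    gas law, rho = M*P/(R*T), the actual flow equals the indicated flow
    times sqrt(rho_design / rho_actual).
    (p, t, m): actual absolute pressure, temperature, molar mass;
    (p0, t0, m0): the values the meter was calibrated for."""
    rho_ratio = (m0 * p0 * t) / (m * p * t0)  # rho_design / rho_actual
    return q_indicated * sqrt(rho_ratio)

# A meter sized for air (28.97 g/mol) at 300 K and 101.325 kPa, read
# while running at 350 K and 90 kPa, indicates 1.0 m^3/s:
q = compensate_volume_flow(1.0, 90e3, 350.0, 28.97, 101.325e3, 300.0, 28.97)
print(f"{q:.3f} m^3/s")  # ~1.146: the hotter, thinner gas flows faster
```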
Examples
The Venturi effect may be observed or used in the following:
Machines
During underway replenishment, the helmsman of each ship must constantly steer away from the other ship to counter the Venturi effect, which would otherwise draw the ships together and cause a collision.
Cargo eductors on oil product and chemical ship tankers
Inspirators mix air and flammable gas in grills, gas stoves and Bunsen burners
Water aspirators produce a partial vacuum using the kinetic energy from the faucet water pressure
Steam siphons use the kinetic energy from the steam pressure to create a partial vacuum
Atomizers disperse perfume or spray paint (i.e. from a spray gun or airbrush)
Carburetors use the effect, which follows from Bernoulli's principle, to draw gasoline into an engine's intake air stream; in some designs the upstream air pressure is fed to the float bowl as an alternative to keeping the float bowl at ambient air pressure.
Cylinder heads in piston engines have multiple Venturi-like areas, such as the valve seat and the port entrance, although these are not part of the design intent but merely a byproduct, and any Venturi effect there serves no specific function.
Wine aerators infuse air into wine as it is poured into a glass
Protein skimmers filter saltwater aquaria
Automated pool cleaners use pressure-side water flow to collect sediment and debris
Clarinets use a reverse taper to speed the air down the tube, enabling better tone, response and intonation
The leadpipe of a trombone, affecting the timbre
Industrial vacuum cleaners use compressed air
Venturi scrubbers are used to clean flue gas emissions
Injectors (also called ejectors) are used to add chlorine gas to water treatment chlorination systems
Steam injectors use the Venturi effect and the latent heat of evaporation to deliver feed water to a steam locomotive boiler.
Sandblasting nozzles accelerate an air and media mixture
Bilge water can be emptied from a moving boat through a small waste gate in the hull. The air pressure inside the moving boat is greater than the pressure of the water sliding by beneath.
A scuba diving regulator uses the Venturi effect to assist maintaining the flow of gas once it starts flowing
In recoilless rifles to decrease the recoil of firing
The diffuser on an automobile
Race cars utilising ground effect to increase downforce and thus become capable of higher cornering speeds
Foam proportioners used to induct fire fighting foam concentrate into fire protection systems
Trompe air compressors entrain air into a falling column of water
The bolts in some brands of paintball markers
Low-speed wind tunnels can be considered very large Venturi because they take advantage of the Venturi effect to increase velocity and decrease pressure to simulate expected flight conditions.
Architecture
The Hawa Mahal of Jaipur also utilizes the Venturi effect, allowing cool air to pass through and making the whole area more pleasant during the high temperatures of summer.
Large cities where wind is forced between buildings - the gap between the Twin Towers of the original World Trade Center was an extreme example of the phenomenon, which made the ground level plaza notoriously windswept. In fact, some gusts were so high that pedestrian travel had to be aided by ropes.
In the south of Iraq, near the modern town of Nasiriyah, a 4000-year-old flume structure has been discovered at the ancient site of Girsu. This construction by the ancient Sumerians forced the contents of a nineteen-kilometre canal through a constriction to enable the side-channeling of water off to agricultural lands from a higher origin than would have been the case without the flume. A recent dig by archaeologists from the British Museum confirmed the finding.
Nature
In windy mountain passes, resulting in erroneous pressure altimeter readings
The mistral wind in southern France increases in speed through the Rhone valley.
See also
Joule–Thomson effect
Venturi flume
Parshall flume
References
External links
3D animation of the Differential Pressure Flow Measuring Principle (Venturi meter)
Use of the Venturi effect for gas pumps to know when to turn off (video)
Fluid dynamics | Venturi effect | [
"Chemistry",
"Engineering"
] | 2,167 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
14,534,297 | https://en.wikipedia.org/wiki/Deviance%20%28sociology%29 | Deviance or the sociology of deviance explores the actions or behaviors that violate social norms across formally enacted rules (e.g., crime) as well as informal violations of social norms (e.g., rejecting folkways and mores). Although deviance may have a negative connotation, the violation of social norms is not always a negative action; positive deviation exists in some situations. Although a norm is violated, a behavior can still be classified as positive or acceptable.
Social norms differ throughout society and between cultures. A certain act or behaviour may be viewed as deviant and receive sanctions or punishments within one society and be seen as a normal behaviour in another society. Additionally, as a society's understanding of social norms changes over time, so too does the collective perception of deviance.
Deviance is relative to the place where it was committed or to the time the act took place. Killing another human is generally considered wrong for example, except when governments permit it during warfare or for self-defense. There are two types of major deviant actions: mala in se and mala prohibita.
Types of deviance
The violation of norms can be categorized into two forms, formal deviance and informal deviance. Formal deviance can be described as a crime, which violates laws in a society. Informal deviance consists of minor violations that break unwritten rules of social life. Norms that have great moral significance are mores; under informal deviance, the violation of a more, such as a societal taboo, is judged especially harshly.
Taboo is a strong social form of behavior considered deviant by a majority. To speak of it publicly is condemned, and therefore, almost entirely avoided. The term “taboo” comes from the Tongan word “tapu” meaning "under prohibition", "not allowed", or "forbidden". Some forms of taboo are prohibited under law and transgressions may lead to severe penalties. Other forms of taboo result in shame, disrespect and humiliation. Taboo is not universal but does occur in the majority of societies. Some of the examples include murder, rape, incest, or child molestation.
Howard Becker, a labeling theorist, identified four different types of deviant behavior labels which are given as:
"Falsely accusing" an individual - others perceive the individual to be obtaining obedient or deviant behaviors.
"Pure deviance", others perceive the individual as participating in deviant and rule-breaking behavior.
"Conforming", others perceive the individual to be participating in the social norms that are distributed within societies.
"Secret deviance" which is when the individual is not perceived as deviant or participating in any rule-breaking behaviors.
Malicious compliance may furthermore pose a special case.
Theories of deviance
Deviant acts can be assertions of individuality and identity, and thus as rebellion against group norms of the dominant culture and in favor of a sub-culture. In a society, the behavior of an individual or a group determines how a deviant creates norms.
Three broad sociological classes exist that describe deviant behavior, namely, structural functionalism, symbolic interaction and conflict theory.
Structural functionalism
Structural functionalists are concerned with how various factors in a society come together and interact to form the whole. Most notably, the works of Émile Durkheim and Robert Merton have contributed to functionalist ideals.
Durkheim's normative theory of suicide
Émile Durkheim would claim that deviance was in fact a normal and necessary part of social organization. He would state four important functions of deviance:
"Deviance affirms cultural values and norms. Any definition of virtue rests on an opposing idea of vice: There can be no good without evil and no justice without crime."
Deviance defines moral boundaries, people learn right from wrong by defining people as deviant.
A serious form of deviance forces people to come together and react in the same way against it.
Deviance pushes society's moral boundaries which, in turn leads to social change.
When social deviance is committed, the collective conscience is offended. Durkheim (1897) describes the collective conscience as the set of social norms that members of a society follow. Without the collective conscience, there would be no absolute morals followed in institutions or groups.
Social integration is the attachment to groups and institutions, while social regulation is the adherence to the norms and values of society. Durkheim's theory attributes social deviance to extremes of social integration and social regulation. He identified four types of suicide based on the relationship between social integration and social regulation:
Altruistic suicide occurs when one is too socially integrated.
Egoistic suicide occurs when one is not very socially integrated.
Anomic suicide occurs when there is very little social regulation from a sense of aimlessness or despair.
Fatalistic suicide occurs when a person experiences too much social regulation.
Merton's strain theory
Robert K. Merton discussed deviance in terms of goals and means as part of his strain/anomie theory. Where Durkheim states that anomie is the confounding of social norms, Merton goes further and states that anomie is the state in which social goals and the legitimate means to achieve them do not correspond. He postulated that an individual's response to societal expectations and the means by which the individual pursued those goals were useful in understanding deviance. Specifically, he viewed collective action as motivated by strain, stress, or frustration in a body of individuals that arises from a disconnection between the society's goals and the popularly used means to achieve those goals. Often, non-routine collective behavior (rioting, rebellion, etc.) is said to map onto economic explanations and causes by way of strain. These two dimensions determine the adaptation to society according to the cultural goals, which are the society's perceptions about the ideal life, and to the institutionalized means, which are the legitimate means through which an individual may aspire to the cultural goals.
Merton described 5 types of deviance in terms of the acceptance or rejection of social goals and the institutionalized means of achieving them:
Innovation is a response due to the strain generated by our culture's emphasis on wealth and the lack of opportunities to get rich, which causes people to be "innovators" by engaging in stealing and selling drugs. Innovators accept society's goals, but reject socially acceptable means of achieving them. (e.g.: monetary success is gained through crime). Merton claims that innovators are mostly those who have been socialised with similar world views to conformists, but who have been denied the opportunities they need to be able to legitimately achieve society's goals.
Conformists accept society's goals and the socially acceptable means of achieving them (e.g.: monetary success is gained through hard work). Merton claims that conformists are mostly middle-class people in middle class jobs who have been able to access the opportunities in society such as a better education to achieve monetary success through hard work. According to Merton’s strain theory, conformists are distinguished by accepting both societal goals and the approved means of achieving them. Societal goals are the desired economic, social, or classist achievements dictated by society.
Ritualism refers to the inability to reach a cultural goal thus embracing the rules to the point where the people in question lose sight of their larger goals in order to feel respectable. Ritualists reject society's goals, but accept society's institutionalized means. Ritualists are most commonly found in dead-end, repetitive jobs, where they are unable to achieve society's goals but still adhere to society's means of achievement and social norms.
Retreatism is the rejection of both cultural goals and means, letting the person in question "drop out". Retreatists reject the society's goals and the legitimate means to achieve them. Merton sees them as true deviants, as they commit acts of deviance to achieve things that do not always go along with society's values. Robert Merton’s Strain Theory dictates that deviance in lower economic classes oftentimes is characterized by retreatism deviance. Merton claims that homelessness and addiction in lower classes is a result of individuals rebelling against both work and the desire for economic progress.
Rebellion is somewhat similar to retreatism, because the people in question also reject both the cultural goals and means, but they go one step further to a "counterculture" that supports other social orders that already exist (rule breaking). Rebels reject society's goals and legitimate means to achieve them, and instead creates new goals and means to replace those of society, creating not only new goals to achieve but also new ways to achieve these goals that other rebels will find acceptable.
Symbolic interaction
Symbolic interaction refers to the patterns of communication, interpretation, and adjustment between individuals. Both the verbal and nonverbal responses that a listener then delivers are similarly constructed in expectation of how the original speaker will react. The ongoing process is like the game of charades, only it is a full-fledged conversation.
The term "symbolic interactionism" has come into use as a label for a relatively distinctive approach to the study of human life and human conduct. With symbolic interactionism, reality is seen as social, developed interaction with others. Most symbolic interactionists believe a physical reality does indeed exist by an individual's social definitions, and that social definitions do develop in part or relation to something “real.” People thus do not respond to this reality directly, but rather to the social understanding of reality. Humans therefore exist in three realities: a physical objective reality, a social reality, and a unique. A unique is described as a third reality created out of the social reality, a private interpretation of the reality that is shown to the person by others. Both individuals and society cannot be separated far from each other for two reasons. One, being that both are created through social interaction, and two, one cannot be understood in terms without the other. Behavior is not defined by forces from the environment such as drives, or instincts, but rather by a reflective, socially understood meaning of both the internal and external incentives that are currently presented.
Herbert Blumer (1969) set out three basic premises of the perspective:
"Humans act toward things on the basis of the meanings they ascribe to those things;"
"The meaning of such things is derived from, or arises out of, the social interaction that one has with others and the society;" and
"These meanings are handled in, and modified through, an interpretative process used by the person in dealing with the things he/she encounters;"
Sutherland's differential association
In his differential association theory, Edwin Sutherland posited that criminals learn criminal and deviant behaviors and that deviance is not inherently a part of a particular individual's nature. When an individual's significant others engage in deviant and/or criminal behavior, criminal behavior will be learned as a result of this exposure. He argues that criminal behavior is learned in the same way that all other behaviors are learned, meaning that the acquisition of criminal knowledge is not unique compared to the learning of other behaviors.
Sutherland outlined some very basic points in his theory, including the idea that the learning comes from the interactions between individuals and groups, using communication of symbols and ideas. When the symbols and ideas about deviation are much more favorable than unfavorable, the individual tends to take a favorable view upon deviance and will resort to more of these behaviors.
Criminal behavior (motivations and technical knowledge), as with any other sort of behavior, is learned. One example of this would be gang activity in inner city communities. Sutherland would feel that because a certain individual's primary influential peers are in a gang environment, it is through interaction with them that one may become involved in crime.
Neutralization theory
Gresham Sykes and David Matza's neutralization theory explains how deviants justify their deviant behaviors by providing alternative definitions of their actions and by providing explanations, to themselves and others, for the lack of guilt for actions in particular situations.
There are five types of neutralization:
Denial of responsibility: the deviant believes s/he was helplessly propelled into the deviance, and that under the same circumstances, any other person would resort to similar actions;
Denial of injury: the deviant believes that the action caused no harm to other individuals or to the society, and thus the deviance is not morally wrong;
Denial of the victim: the deviant believes that individuals on the receiving end of the deviance were deserving of the results due to the victim's lack of virtue or morals;
Condemnation of the condemners: the deviant believes enforcement figures or victims have the tendency to be equally deviant or otherwise corrupt, and as a result, are hypocrites to stand against; and
Appeal to higher loyalties: the deviant believes that there are loyalties and values that go beyond the confines of the law; morality, friendships, income, or traditions may be more important to the deviant than legal boundaries.
Labeling theory
Frank Tannenbaum and Howard S. Becker created and developed the labeling theory, which is a core facet of symbolic interactionism, and often referred to as Tannenbaum's "dramatization of evil." Becker believed that "social groups create deviance by making the rules whose infraction constitutes deviance".
Labeling is a process of social reaction by the "social audience," wherein people stereotype others, judging and accordingly defining (labeling) someone's behavior as deviant or otherwise. It has been characterized as the "invention, selection, manipulation of beliefs which define conduct in a negative way and the selection of people into these categories."
As such, labeling theory suggests that deviance is caused by the deviant's being labeled as morally inferior, the deviant's internalizing the label, and finally the deviant's acting according to that specific label (i.e., an individual labelled as "deviant" will act accordingly). As time goes by, the "deviant" takes on traits that constitute deviance by committing such deviations as conform to the label; conversely, the audience has the power to stop the deviance before it ever occurs by not applying the label in the first place. Individual and societal preoccupation with the label, in other words, leads the deviant individual to follow a self-fulfilling prophecy of abidance to the ascribed label.
This theory, while very much a symbolic interactionist theory, also has elements of conflict theory, as the dominant group has the power to decide what is deviant and acceptable and enjoys the power behind the labeling process. An example of this is a prison system that labels people convicted of theft, and because of this they start to view themselves as by definition thieves, incapable of changing. "From this point of view," Howard S. Becker writes:
Deviance is not a quality of the act the person commits, but rather a consequence of the application by others of rules and sanctions to an "offender". The deviant is one to whom the label has successfully been applied; deviant behavior is behavior that people so label.
In other words, "behavior only becomes deviant or criminal if defined and interfered as such by specific people in [a] specific situation." It is important to note the salient fact that society is not always correct in its labeling, often falsely identifying and misrepresenting people as deviants, or attributing to them characteristics which they do not have. In legal terms, people are often wrongly accused, yet many of them must live with the ensuant stigma (or conviction) for the rest of their lives.
On a similar note, society often employs double standards, with some sectors of society enjoying favoritism. Certain behaviors in one group are seen to be perfectly acceptable, or can be easily overlooked, but in another are seen, by the same audiences, as abominable.
The medicalization of deviance, the transformation of moral and legal deviance into a medical condition, is an important shift that has transformed the way society views deviance. The labelling theory helps to explain this shift, as behavior that used to be judged morally are now being transformed into an objective clinical diagnosis. For example, people with drug addictions are considered "sick" instead of "bad."
Primary and secondary deviation
Edwin Lemert developed the idea of primary and secondary deviation as a way to explain the process of labeling. Primary deviance is any general deviance before the deviant is labeled as such in a particular way. Secondary deviance is any action that takes place after primary deviance as a reaction to the institutional identification of the person as a deviant.
When an actor commits a crime (primary deviance), however mild, the institution will bring social penalties down on the actor. However, punishment does not necessarily stop crime, so the actor might commit the same primary deviance again, bringing even harsher reactions from the institutions. At this point, the actor will start to resent the institution, while the institution brings harsher and harsher repression. Eventually, the whole community will stigmatize the actor as a deviant and the actor will not be able to tolerate this, but will ultimately accept his or her role as a criminal, and will commit criminal acts that fit the role of a criminal.
Primary and secondary deviation is what causes people to become harder criminals. Primary deviance is the act that first gets a person labeled deviant, through confession or reporting; secondary deviance is the deviance that follows in reaction to that label. Retrospective labeling happens when the deviant recognizes his earlier acts as deviant after the primary deviance, while prospective labeling is when the deviant recognizes future acts as deviant. The steps to becoming a criminal are:
Primary deviation;
Social penalties;
Secondary deviation;
Stronger penalties;
Further deviation with resentment and hostility towards punishers;
Community stigmatizes the deviant as a criminal;
Tolerance threshold passed;
Strengthening of deviant conduct because of stigmatizing penalties; and finally,
Acceptance as role of deviant or criminal actor.
Broken windows theory
Broken windows theory states that an increase in minor crimes such as graffiti, would eventually lead to and encourage an increase in larger transgressions. This suggests that greater policing on minor forms of deviance would lead to a decrease in major crimes. The theory has been tested in a variety of settings including New York City in the 90s. Compared to the country's average at the time, violent crime rates fell 28 percent as a result of the campaign. Critics of the theory question the direct causality of the policing and statistical changes that occurred.
Control theory
Control theory advances the proposition that weak bonds between the individual and society free people to deviate. By contrast, strong bonds make deviance costly. This theory asks why people refrain from deviant or criminal behavior, instead of why people commit deviant or criminal behavior, according to Travis Hirschi. Control theory holds that norms emerge to deter deviant behavior; without this "control", deviant behavior would happen more often. This leads to conformity and groups. People will conform to a group when they believe they have more to gain from conformity than from deviance. If a strong bond is achieved there will be less chance of deviance than if a weak bond has occurred. Hirschi argued a person follows the norms because they have a bond to society. The bond consists of four positively correlated factors: opportunity, attachment, belief, and involvement. When any of these bonds are weakened or broken one is more likely to act in defiance. Michael Gottfredson and Travis Hirschi founded their self-control theory in 1990. It stated that acts of force and fraud are undertaken in the pursuit of self-interest and reflect low self-control. A deviant act is thus rooted in the offender's own level of self-control.
Containment theory is considered by researchers such as Walter C. Reckless to be part of the control theory because it also revolves around the thoughts that stop individuals from engaging in crime. Reckless studied the unfinished approaches meant to explain the reasoning behind delinquency and crime. He recognized that societal disorganization is included in the study of delinquency and crime under social deviance, leading him to claim that the majority of those who live in unstable areas tend not to have criminal tendencies in comparison with those who live in middle-class areas. This claim opens up more possible approaches to social disorganization, and shows that the already implemented theories are in need of a deeper connection to further explore ideas of crime and delinquency. These observations brought Reckless to ask questions such as, "Why do some persons break through the tottering (social) controls and others do not? Why do rare cases in well-integrated society break through the lines of strong controls?" Reckless asserted that the intercommunication between self-control and social controls is partly responsible for the development of delinquent thoughts. Social disorganization was not related to a particular environment, but instead was involved in the deterioration of an individual's social controls. Containment theory is the idea that everyone possesses mental and social safeguards which protect the individual from committing acts of deviancy. Containment depends on the individual's ability to separate inner and outer controls for normative behavior.
More contemporary control theorists such as Robert Crutchfield take the theory into a new light, suggesting labor market experiences not only affect the attitudes and the "stakes" of individual workers, but can also affect the development of their children's views toward conformity and cause involvement in delinquency. This is an ongoing study as he has found a significant relationship between parental labor market involvement and children's delinquency, but has not empirically demonstrated the mediating role of parents' or children's attitude. In a study conducted by Tim Wadsworth, the relationship between parent's employment and children's delinquency, which was previously suggested by Crutchfield (1993), was shown empirically for the first time. The findings from this study supported the idea that the relationship between socioeconomic status and delinquency might be better understood if the quality of employment and its role as an informal social control is closely examined.
Conflict theory
In sociology, conflict theory states that society or an organization functions so that each individual participant and its groups struggle to maximize their benefits, which inevitably contributes to social change such as political changes and revolutions. Deviant behaviors are actions that do not go along with social institutions, and it is the institutions' ability to change norms, wealth, or status that comes into conflict with the individual. The legal rights of poor people might be ignored, while the middle class sides with the elites rather than the poor, thinking they might rise to the top by supporting the status quo. Conflict theory is based upon the view that the fundamental causes of crime are the social and economic forces operating within society. However, it explains white-collar crime less well.
This theory also states that the powerful define crime. This raises the question: for whom is this theory functional? In this theory, laws are instruments of oppression: tough on the powerless and less tough on the powerful.
Karl Marx
Marx did not write about deviant behavior but he wrote about alienation amongst the proletariat—as well as between the proletariat and the finished product—which causes conflict, and thus deviant behavior.
Many Marxist theorists have employed the theory of the capitalist state in their arguments. For example, Steven Spitzer utilized the theory of bourgeois control over social junk and social dynamite; and George Rusche was known to present analysis of different punishments correlated to the social capacity and infrastructure for labor. He theorized that throughout history, when more labor is needed, the severity of punishments decreases and the tolerance for deviant behavior increases. Jock Young, another Marxist writer, presented the idea that the modern world did not approve of diversity, but was not afraid of social conflict. The late modern world, however, is very tolerant of diversity. However, it is extremely afraid of social conflicts, which is an explanation given for the political correctness movement. The late modern society easily accepts difference, but it labels those that it does not want as deviant and relentlessly punishes and persecutes.
Michel Foucault
Michel Foucault believed that torture had been phased out from modern society due to the dispersion of power; there was no need any more for the wrath of the state on a deviant individual. Rather, the modern state receives praise for its fairness and dispersion of power which, instead of controlling each individual, controls the mass.
He also theorized that institutions control people through the use of discipline. For example, the modern prison (more specifically the panopticon) is a template for these institutions because it controls its inmates by the perfect use of discipline.
Foucault theorizes that, in a sense, the postmodern society is characterized by the lack of free will on the part of individuals. Institutions of knowledge, norms, and values, are simply in place to categorize and control humans.
Biological theories of deviance
The Italian school of criminology contends that biological factors may contribute to crime and deviance. Cesare Lombroso was among the first to research and develop the theory of biological deviance, which states that some people are genetically predisposed to criminal behavior. He believed that criminals were a product of earlier genetic forms. The main influence on his research was Charles Darwin and his theory of evolution. Lombroso theorized that people were born criminals, or in other words, less evolved humans who were biologically closer to our more primitive and animalistic urges. Drawing on Darwin's theory, Lombroso examined primitive remains himself with regard to deviant behaviors. He found that the skeletons he studied mostly had low foreheads and protruding jaws, characteristics that resembled primitive beings such as Homo neanderthalensis. He stated that little could be done to cure born criminals because their characteristics were biologically inherited. Over time, most of his research was disproved. His research was refuted by Karl Pearson and Charles Goring, who discovered that Lombroso had not studied enough skeletons for his research to be conclusive. When Pearson and Goring researched skeletons on their own, they tested many more and found that the bone structure had no relevance to deviant behavior. The statistical study that Charles Goring published on this research is called "The English Convict".
Other theories
The classical school of criminology comes from the works of Cesare Beccaria, Jeremy Bentham and John Howard. Beccaria assumed a utilitarian view of society along with a social contract theory of the state. He argued that the role of the state was to maximize the greatest possible utility to the maximum number of people and to minimize those actions that harm the society. He argued that deviants commit deviant acts (which are harmful to the society) because of the utility it gives to the private individual. If the state were to match the pain of punishments with the utility of various deviant behaviors, the deviant would no longer have any incentive to commit deviant acts. (Note that Beccaria argued for just punishment; as raising the severity of punishments without regard to logical measurement of utility would cause increasing degrees of social harm once it reached a certain point.)
The criminal justice system
There are three sections of the criminal justice system that function to enforce formal deviance:
Police: The police maintain public order by enforcing the law. Police use personal discretion in deciding whether and how to handle a situation. Research suggests that police are more likely to make an arrest if the offence is serious, if bystanders are present, or if the suspect is of a visible minority.
Courts: Courts rely on an adversarial process in which attorneys – one representing the defendant and one representing the Crown – present their cases in the presence of a judge who monitors legal procedures. In practice, courts resolve most cases through plea bargaining. Though efficient, this method puts less powerful people at a disadvantage.
Corrections system: Community-based corrections include probation and parole. These programs lower the cost of supervising people convicted of crimes and reduce prison overcrowding but have not been shown to reduce recidivism.
There are four jurisdictions for punishment (retribution, deterrence, rehabilitation, societal protection), which fall under one of two forms of justice that an offender will face:
Punitive justice (retribution & deterrence): This form of justice defines boundaries of acceptable behaviors, whereby an individual suffers the consequences of committing a crime and in which pain or suffering inflicted on the individual is hidden from the public.
Rehabilitative justice (rehabilitation & societal protection): This form of justice focuses on specific circumstances, whereby individuals are meant to be fixed.
See also
Notes
Further reading
Clinard, M. B., and R. F. Meier. 1968. Sociology of Deviant Behavior.
Dinitz, Simon, Russell R. Dynes, and Alfred C. Clarke. 1975. Deviance: Studies in Definition, Management, and Treatment.
Douglas, J. D., and F. C. Waksler. 1982. The Sociology of Deviance: An Introduction. Boston: Little, Brown & Co.
Gibbs, Jack P.; Erickson, Maynard L. (1975). "Major Developments in the Sociological Study of Deviance". Annual Review of Sociology. 1: 21–42.
Lauderdale, Pat. A Political Analysis of Deviance. Whitby, ON: De Sitter Publications, 2011.
MacNamara, Donal E. J., and Andrew Karmen. 1983. DEVIANTS: Victims or Victimizers? Beverly Hills, Calif.: Sage.
Pratt, Travis. n.d. "Reconsidering Gottfredson and Hirschi's General Theory of Crime: Linking the Micro- and Macro-level Sources of Self-control and Criminal Behavior Over the Life-course."
Bartel, Phil. 2012. "Deviance." Social Control and Responses to Variant Behaviour (module). Vancouver Community Network. Web. Accessed 7 April 2020.
"Types of Deviance." Criminal Justice. Acadia University. Archived from the original on 17 Oct 10. Retrieved on 23 Feb. 2012.
"Research at CSC." Correctional Service of Canada. Government of Canada. Web. Retrieved on 23 Feb 2012.
Macionis, John, and Linda Gerber. 2010. "Emile Durkheim"s Basic Insight" Sociology (7th ed.).
Macionis, John, and Linda Gerber. 2010. "The Criminal Justice System" Sociology (7th ed.).
External links
Criminology
Sociological theories | Deviance (sociology) | [
"Biology"
] | 6,265 | [
"Deviance (sociology)",
"Behavior",
"Human behavior"
] |
14,535,189 | https://en.wikipedia.org/wiki/Fisher%27s%20inequality | Fisher's inequality is a necessary condition for the existence of a balanced incomplete block design, that is, a system of subsets that satisfy certain prescribed conditions in combinatorial mathematics. It was outlined by Ronald Fisher, a population geneticist and statistician, who was concerned with the design of experiments such as studying the differences among several different varieties of plants, under each of a number of different growing conditions, called blocks.
Let:
$v$ be the number of varieties of plants;
$b$ be the number of blocks.
To be a balanced incomplete block design it is required that:
$k$ different varieties are in each block, $k < v$; no variety occurs twice in any one block;
any two varieties occur together in exactly $\lambda$ blocks;
each variety occurs in exactly $r$ blocks.
Fisher's inequality states simply that
$b \ge v$.
Proof
Let the incidence matrix $M$ be a $v \times b$ matrix defined so that $M_{i,j}$ is 1 if element $i$ is in block $j$ and 0 otherwise. Then $B = MM^{\mathsf{T}}$ is a $v \times v$ matrix such that $B_{i,i} = r$ and $B_{i,j} = \lambda$ for $i \neq j$. Since $k < v$ we have $r > \lambda$, so $B = (r - \lambda)I + \lambda J$ has determinant $(r - \lambda)^{v-1}\bigl(r + (v-1)\lambda\bigr) \neq 0$, hence $\operatorname{rank}(B) = v$; on the other hand, $\operatorname{rank}(B) \le \operatorname{rank}(M) \le b$, so $v \le b$.
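The rank argument can be checked numerically on a small example. Below is a Python sketch (NumPy is used purely for the linear algebra; the variable names are this example's own) that builds the incidence matrix of the Fano plane, a BIBD with $v = b = 7$, $r = k = 3$, $\lambda = 1$, and verifies the facts used in the proof:

```python
import itertools
import numpy as np

# The Fano plane: a BIBD with v = b = 7, r = k = 3, lambda = 1.
blocks = [{0,1,2}, {0,3,4}, {0,5,6}, {1,3,5}, {1,4,6}, {2,3,6}, {2,4,5}]
v, b = 7, len(blocks)

# Incidence matrix M: M[i, j] = 1 iff variety i lies in block j.
M = np.zeros((v, b), dtype=int)
for j, blk in enumerate(blocks):
    for i in blk:
        M[i, j] = 1

B = M @ M.T
# Diagonal entries equal r = 3, off-diagonal entries equal lambda = 1:
assert all(B[i, i] == 3 for i in range(v))
assert all(B[i, j] == 1 for i, j in itertools.permutations(range(v), 2))

# B = (r - lambda) I + lambda J is nonsingular, so
# v = rank(B) <= rank(M) <= b, which is Fisher's inequality.
assert np.linalg.matrix_rank(B) == v and v <= b
```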
Generalization
Fisher's inequality is valid for more general classes of designs. A pairwise balanced design (or PBD) is a set $X$ together with a family of non-empty subsets of $X$ (which need not have the same size and may contain repeats) such that every pair of distinct elements of $X$ is contained in exactly $\lambda$ (a positive integer) subsets. The set $X$ is allowed to be one of the subsets, and if all the subsets are copies of $X$, the PBD is called "trivial". The size of $X$ is $v$ and the number of subsets in the family (counted with multiplicity) is $b$.
Theorem: For any non-trivial PBD, $b \ge v$.
This result also generalizes the Erdős–De Bruijn theorem:
For a PBD with $\lambda = 1$ having no blocks of size 1 or size $v$, $b \ge v$, with equality if and only if the PBD is a projective plane or a near-pencil (meaning that exactly $v - 1$ of the points are collinear).
In another direction, Ray-Chaudhuri and Wilson proved in 1975 that in a $2s$-design, the number of blocks is at least $\binom{v}{s}$.
Notes
References
R. C. Bose, "A Note on Fisher's Inequality for Balanced Incomplete Block Designs", Annals of Mathematical Statistics, 1949, pages 619–620.
R. A. Fisher, "An examination of the different possible solutions of a problem in incomplete blocks", Annals of Eugenics, volume 10, 1940, pages 52–75.
Combinatorial design
Design of experiments
Families of sets
Statistical inequalities
Extremal combinatorics
Ronald Fisher | Fisher's inequality | [
"Mathematics"
] | 523 | [
"Theorems in statistics",
"Extremal combinatorics",
"Statistical inequalities",
"Combinatorial design",
"Combinatorics",
"Basic concepts in set theory",
"Families of sets",
"Inequalities (mathematics)"
] |
14,535,363 | https://en.wikipedia.org/wiki/List%20of%20Soviet%20human%20spaceflight%20missions | This is a list of the human spaceflight missions conducted by the Soviet space program. These missions belong to the Vostok, Voskhod, and Soyuz space programs.
The first patch of the Soviet space program was worn by Valentina Tereshkova, and the same patch design was later used for Voskhod 2, Soyuz 4/5 and Soyuz 11. Soyuz 3 had an official insignia that was not worn during the flight, and patches were used again in the Apollo–Soyuz program. After that, and until the Soyuz TM-12 "Juno" flight, mission patches were designed only for international missions.
Vostok program
Voskhod program
Soyuz program
First Soyuz missions to Salyut 1 (1967–1971)
1973–1977
Salyut 6 to Salyut 7 (1977–1986)
Crewed Soyuz-TM Mir missions (1987–1991)
For subsequent Soyuz missions conducted by the Russian Federal Space Agency, see List of Russian human spaceflight missions.
Notes
1 Commercially funded cosmonaut or other "spaceflight participant".
See also
List of Progress flights, with all flights of the Progress resupply craft that is based on the Soyuz spacecraft
References
Human spaceflight programs
Soviet human spaceflight missions
Vostok program
Voskhod program
Soyuz program
Soviet | List of Soviet human spaceflight missions | [
"Engineering"
] | 257 | [
"Space programs",
"Human spaceflight programs"
] |
14,535,721 | https://en.wikipedia.org/wiki/Drop%20structure | A drop structure, also known as a grade control, sill, or weir, is a manmade structure, typically small and built on minor streams, or as part of a dam's spillway, to pass water to a lower elevation while controlling the energy and velocity of the water as it passes over. Unlike most dams, drop structures are usually not built for water impoundment, diversion, or raising the water level. Mostly built on watercourses with steep gradients, they serve other purposes such as water oxygenation and erosion prevention.
Typical designs
Drop structures can be classified into three different basic types: vertical hard basin, grouted sloping boulder, and baffle chute. Each type is built depending on water flow, steepness of the site, and location.
Vertical hard basin
The vertical hard basin drop structure, also called a dissipation wall, is the basic type of drop structure. It consists of a vertical cutoff wall, usually built of concrete and laid perpendicular to the stream flow, and an impact basin, not unlike a stream pool, to catch the discharged water. The purpose of the vertical hard basin drop is to force the water into a hydraulic jump (a small standing wave). Though the simplest type of drop structure, it is highest in maintenance needs and less safe, with most problems related to the impact basin. Sediment is often deposited in the basin, requiring frequent removal, and erosion is often exacerbated downstream of the base of the structure. Understanding the turbulent flow discharged from dams, channels, and pipes, as well as the river flow itself, is very important because the large amount of kinetic energy carried by the flow can damage the bed of the river or channel and scour structures such as the saddles of bridges. One of the most efficient yet simple ways to dissipate this energy is to install a stilling basin at the discharge point to calm the flow.
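For a sense of the energy a stilling basin must dissipate, the classical hydraulic-jump relations for a wide rectangular channel can be evaluated directly. The Python sketch below uses the standard Bélanger momentum relation and head-loss formula; it is a general open-channel illustration, not a design method for any particular drop structure, and the inflow depth and velocity are made-up numbers:

```python
from math import sqrt

G = 9.81  # gravitational acceleration, m/s^2

def sequent_depth(y1: float, v1: float) -> float:
    """Downstream depth of a hydraulic jump in a wide rectangular
    channel (Belanger relation): y2/y1 = (sqrt(1 + 8*Fr1^2) - 1) / 2,
    where Fr1 = v1 / sqrt(g*y1) is the inflow Froude number."""
    fr1 = v1 / sqrt(G * y1)
    return y1 * (sqrt(1 + 8 * fr1**2) - 1) / 2

def head_loss(y1: float, v1: float) -> float:
    """Energy head (m) dissipated across the jump:
    dE = (y2 - y1)**3 / (4 * y1 * y2)."""
    y2 = sequent_depth(y1, v1)
    return (y2 - y1) ** 3 / (4 * y1 * y2)

# Flow leaving a drop 0.3 m deep at 6 m/s (Fr1 ~ 3.5):
y2 = sequent_depth(0.3, 6.0)
print(f"sequent depth {y2:.2f} m, head loss {head_loss(0.3, 6.0):.2f} m")
# -> sequent depth 1.34 m, head loss 0.70 m
```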
Grouted sloping boulder
A grouted sloping boulder drop structure is the most versatile of drop structures. Able to accommodate both broad floodplains and narrow channels, they can also handle many different drop heights. Heights of these structures usually range from to . These structures are built by creating a slope of riprap, which consists of large boulders or less commonly, blocks of concrete. These are then cemented together ("grouted") to form the drop structure. Another less-common type of drop structure, the sculpted sloping boulder drop, is derived from this. The sculpted sloping boulder drop is used to create a more natural appearance to the drop structure. Both of these structures also tend to suffer from downstream erosion.
Baffle chute
The baffle chute drop is built entirely of concrete and is effective with low maintenance needs. It typically consists of a concrete chute lined with "baffle" teeth to slow the water as it passes over the structure. Despite these appeals, however, it has very "limited structural and aesthetic flexibility, which can cause [it] to be undesirable in most urban settings."
Environmental effects
Wildlife
Drop structures have been shown to either be beneficial or detrimental to habitat in the stream. They create complexity of habitat by breaking up a stretch of stream into a series of pools. Surface turbulence, eddies, and bubbles are generated by drop structures that provide hiding and cover for fish and other aquatic organisms. Water is aerated as it passes over drop structures. Sediment is collected and sorted in scour pools, which provide energy dissipation.
On the other hand, drop structures may also become barriers to fish. The downstream channel may erode and slowly and unexpectedly increase the height of the structure, to a point where migratory fish, such as salmon, cannot pass over the structure. Other causes may be that the plunge pool is obstructed or the water flow is too shallow. Even properly functioning drop structures may impede the upstream and downstream migration of fish.
Unless the structure is designed to maintain them, existing fish spawning pools will be impacted or lost.
Erosion control
Erosion is usually reduced by drop structures, and natural river channel processes, such as channel migration, meandering, and creation of stream pools and riffles, are also reduced. Drop structures can be used for flow control and to stabilize waterways and prevent the formation of gullies. They also have the potential to operate as inlets and outlets for other conservation structures, such as culverts.
See also
Aquatic sill
Check dam
Coastal management, to prevent coastal erosion and creation of beach
Knickpoint
Sill (geology)
Weir
References
Dams by type
Stormwater management
Soil erosion | Drop structure | [
"Chemistry",
"Environmental_science"
] | 956 | [
"Water treatment",
"Stormwater management",
"Water pollution"
] |
14,535,777 | https://en.wikipedia.org/wiki/Idaeovirus | Idaeovirus is a genus of positive-sense ssRNA viruses that contains two species: Raspberry bushy dwarf virus (RBDV) and Privet idaeovirus. RBDV has two host-dependent clades: one for raspberries; the other for grapevines. Infections are a significant agricultural burden, resulting in decreased yield and quality of crops. RBDV has a synergistic relation with Raspberry leaf mottle virus, with co-infection greatly amplifying the concentration of virions in infected plants. The virus is transmitted via pollination with RBDV-infected pollen grains that first infect the stigma before causing systemic infection.
Virology
RBDV is non-enveloped with an isometric protein coat about 33 nanometres in diameter. Inside the protein coat is the viral genome, which is bipartite, with the RNA strands referred to as RNA-1 and RNA-2. RNA-1 is 5,449 nucleotides in length and contains one open reading frame (ORF) that encodes for a combined protein that has methyltransferase, helicase, and an RNA-dependent RNA polymerase domains. RNA-2 is 2,231 nucleotides in length and contains two ORFs, one at the 5' end and the other at the 3' end. The first ORF encodes for a cell-to-cell movement protein, while the second ORF is expressed as a subgenomic RNA strand. This strand, RNA-3, is 946 nucleotides in length and encodes for the coat protein. Infection has been shown to not occur if RNA-3 is either not present or is sufficiently damaged.
Life cycle
Viral replication is cytoplasmic. Entry is achieved by penetration into the host cell. Replication follows the positive-stranded RNA virus replication model, and transcription proceeds by the positive-stranded RNA virus transcription method. The virus exits the host cell by tubule-guided viral movement. Plants serve as the natural host. Transmission routes are pollen-associated.
Diagnosis
A single-step reverse transcription polymerase chain reaction (RT-PCR) has been developed to detect RBDV. Viruses are enriched by antibodies in the PCR microwells, followed by lysis of the virus particles, then reverse transcription of the viral RNA. By including reverse transcriptase and DNA polymerase in the whole process, the procedure can be conducted in a single step. These tests are sensitive enough to identify the specific strain of the virus.
Treatment
RBDV can be eradicated from infected plants by a procedure that first applies thermotherapy then cryotherapy to infected shoots. Applying heat to infected plants causes vacuoles in infected cells to enlarge, with these cells later being killed during cryotherapy. Adding either Fe-ethylenediaminetetraacetic acid or Fe-ethylenediaminedi(o)hydroxyphenylacetic acid after cryotherapy stimulates regrowth and prevents chlorosis from developing in plant shoots. Using this method, about 80% of shoots survive the initial heat therapy, with 33% surviving the cryotherapy and successfully regrowing.
References
External links
Viralzone: Idaeovirus
ICTV
Positive-sense single-stranded RNA viruses
Riboviria
Virus genera | Idaeovirus | [
"Biology"
] | 687 | [
"Viruses",
"Riboviria"
] |
14,536,459 | https://en.wikipedia.org/wiki/VIPR1 | Vasoactive intestinal polypeptide receptor 1 also known as VPAC1, is a protein, that in humans is encoded by the VIPR1 gene. VPAC1 is expressed in the brain (cerebral cortex, hippocampus, amygdala), lung, prostate, peripheral blood leukocytes, liver, small intestine, heart, spleen, placenta, kidney, thymus and testis.
Function
VPAC1 is a receptor for vasoactive intestinal peptide (VIP), a small neuropeptide. Vasoactive intestinal peptide is involved in smooth muscle relaxation, exocrine and endocrine secretion, and water and ion flux in lung and intestinal epithelia. Its actions are effected through integral membrane receptors associated with a guanine nucleotide binding protein which activates adenylate cyclase.
VIP acts in an autocrine fashion via VPAC1 to inhibit megakaryocyte proliferation and induce proplatelet formation.
Clinical significance
Patients with idiopathic achalasia show a significant difference in the distribution of SNPs affecting VIPR1.
VIP and PACAP levels were decreased in the anterior vaginal wall of stress urinary incontinence and pelvic organ prolapse patients; these peptides may participate in the pathophysiology of these diseases.
See also
Vasoactive intestinal peptide receptor
References
Further reading
G protein-coupled receptors | VIPR1 | [
"Chemistry"
] | 300 | [
"G protein-coupled receptors",
"Signal transduction"
] |
14,536,825 | https://en.wikipedia.org/wiki/Douglas%20Complex | The Douglas Complex is a high system of three linked platforms in the Irish Sea, off the North Wales coast. The Douglas oil field was discovered in 1990, and production commenced in 1996. Now operated by Eni, the complex consists of the wellhead platform, which drills into the seabed, a processing platform, which separates oil, gas and water, and thirdly an accommodation platform, which is composed of living quarters for the crew. This accommodation module was formerly the Morecambe Flame jack-up drilling rig.
The Douglas Complex is also the control hub for other platforms in the area and provides power for all platforms. It also offers recreational, catering and medical facilities for up to 80 personnel. Oil from the Lennox, Hamilton, and Hamilton North unmanned satellite platforms is received and blended at the complex.
Fluids from the Lennox installation via the gas pipeline are treated on the Douglas installation in the 3-phase (oil, gas and produced water) Lennox Production Separator. Following separation, gas flows to the Offgas Compressor suction manifold. Oil is directed to the Oil Stripper where the liquid is stripped of sour gas using a counter-current flow of stripping gas. Produced water from the separator is directed to the Produced Water Hydrocyclones where hydrocarbon liquids are removed prior to overboard disposal. Well fluids from the Douglas Wellhead tower are treated in the 3-phase Douglas Production Separator. Gas flows to the Offgas Compressor suction manifold and hydrocarbon liquids are directed to the Oil Stripper, and water to hydrocyclones as described above. Oil from the Oil Stripper is pumped by the Oil Transfer Pumps via Fiscal metering to the Main Oil Transfer Pumps to tanker loading. Gas from the Oil Stripper is compressed and sent to the Offgas Compressor.
Gas is sent through a pipeline long to a processing plant at Point of Ayr, in Flintshire, North Wales. After processing, almost the entire output is sold to E.ON to fire the combined cycle gas turbine power station at Connah's Quay, on Deeside, in Flintshire. Oil produced in Liverpool Bay is sent through another pipeline, 17 km long, to the Offshore Storage Installation, a permanently anchored barge which acts as a floating oil terminal, capable of holding of oil. From the floating terminal oil is transferred to tankers approximately once every month.
See also
List of oil fields
Geology of the United Kingdom
Oil platform
Point of Ayr
References
External links
Douglas Complex on AIS Liverpool
Petroleum Act 1998 giving location of Douglas Complex
Oil platforms
Oil fields of the United Kingdom
Irish Sea
1996 establishments in the United Kingdom | Douglas Complex | [
"Chemistry",
"Engineering"
] | 527 | [
"Oil platforms",
"Petroleum technology",
"Natural gas technology",
"Structural engineering"
] |
14,537,059 | https://en.wikipedia.org/wiki/Achlys%20%28plant%29 | Achlys is a small genus of flowering plants in the barberry family (Berberidaceae), which it shares with genera such as Berberis and Vancouveria. It is named after the Greek legendary figure associated with shade and mist, Achlys, because the plants grow in the shade.
Species
There are either two or three species, depending on the authority. Achlys triphylla and Achlys californica are both native to western North America. Another Achlys is found in Japan: some authorities treat this as a subspecies of A. triphylla, while others, especially in older treatments, call this Achlys japonica. Still others consider A. triphylla and A. californica too similar to be separate species. The common names for these plants include vanilla leaf (sometimes written as vanilla-leaf or vanillaleaf, depending on the taxonomist or flora), deer's foot and sweet after death, referring to the vanilla scent of its crushed leaves.
The Plant List recognizes two species, with A. californica regarded as a subspecies of A. triphylla.
Description
Achlys triphylla, known in western North America as vanillaleaf, is an erect perennial plant that sprouts from a creeping rhizome. Leaves are long-petioled and palmately divided into three leaflets. Flowers are small and lack sepals and petals, but instead have long showy white stamens that form single erect spikes. The leaflets give a great hint to the identification of the plant: bend back the middle leaflet and you have an upside-down set of moose antlers. Alternatively, bend back the two side leaflets and you have a goose or deer foot (hence the common name). In the Pacific Northwest, Achlys triphylla is ubiquitous in moist shady forests west of the Cascades at low to middle elevations from Vancouver island and southern British Columbia south to northern California.
The plants are spaced widely on the rhizomes, but often overlap in large networks that result in carpets of Achlys that dominate the near-surface understory. Achlys seems to prefer moist soil, so at middle to higher elevations it is easier to find them along streambanks or well-shaded ravines.
Insect repellent
When dried properly, the plants are strongly aromatic and smell of vanilla. Besides serving as an excellent tent air freshener, Achlys has been used by native tribes of southern British Columbia as an insect repellent. The dried leaves were hung in bunches in doorways to ward off flies and mosquitoes, and it's not unheard of for naturalists to rub the dried or even fresh leaves on exposed skin when hiking the Olympics or Cascades during the summer mosquito season.
Japanese Achlys are quite similar to those found in western North America.
References
External links
Jepson Manual eFlora (TJM2) treatment of Achlys
Berberidaceae
Berberidaceae genera
Flora of British Columbia
Flora of California
Flora of the West Coast of the United States
Insect repellents
Plant toxin insecticides | Achlys (plant) | [
"Chemistry"
] | 639 | [
"Plant toxin insecticides",
"Chemical ecology"
] |
14,537,541 | https://en.wikipedia.org/wiki/Internet%20Protocol%20Options | There are a number of optional parameters that may be present in an Internet Protocol version 4 datagram. They typically configure a number of behaviors such as for the method to be used during source routing, some control and probing facilities and a number of experimental features.
Available options
Each option that can be put in the IPv4 header is identified by an Option Type octet, which combines a Copied flag (1 bit), an Option Class field (2 bits), and an Option Number field (5 bits).
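A minimal sketch of that bit layout follows; the example value 0x83 is Loose Source and Record Route (option 131) as defined in RFC 791. This snippet is illustrative, not code from the article:

```python
def parse_option_type(octet: int) -> tuple[int, int, int]:
    """Split an IPv4 Option Type octet into its RFC 791 subfields."""
    copied = (octet >> 7) & 0x01  # 1 bit: copy option into every fragment?
    klass = (octet >> 5) & 0x03   # 2 bits: 0 = control, 2 = debugging/measurement
    number = octet & 0x1F         # 5 bits: option number
    return copied, klass, number

# 0x83 (decimal 131) is Loose Source and Record Route (LSRR):
# copied = 1, class = 0 (control), number = 3.
print(parse_option_type(0x83))  # -> (1, 0, 3)
```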
Loose source routing
Loose Source Routing is an IP option which can be used for address translation. LSR is also used to implement mobility in IP networks.
Loose source routing uses a source routing option in IP to record the set of routers a packet must visit. The destination of the packet is replaced with the next router the packet must visit. By setting the forwarding agent (FA) to one of the routers that the packet must visit, LSR is equivalent to tunneling. If the corresponding node stores the LSR options and reverses it, it is equivalent to the functionality in mobile IPv6.
The name loose source routing comes from the fact that only part of the path is set in advance.
Strict source routing
Strict source routing contrasts with loose source routing: every step of the route the packet will take is decided in advance.
Restrictions and considerations
The following two options are discouraged because they create security concerns: Loose Source and Record Route (LSRR) and Strict Source and Record Route (SSRR). Many routers block packets containing these options.
See also
Dynamic Source Routing
Source routing
Internet Protocol
References
Routing
Internet architecture
Network protocols | Internet Protocol Options | [
"Technology"
] | 343 | [
"Internet architecture",
"IT infrastructure"
] |
14,537,991 | https://en.wikipedia.org/wiki/Glass%20casting | Glass casting is the process in which glass objects are cast by directing molten glass into a mould where it solidifies. The technique has been used since the 15th century BCE in both Ancient Egypt and Mesopotamia. Modern cast glass is formed by a variety of processes such as kiln casting or casting into sand, graphite or metal moulds.
History
Roman period
During the Roman period, moulds consisting of two or more interlocking parts were used to create blank glass dishes. Glass could be added to the mould either by frit casting, where the mould was filled with chips of glass (called frit) and then heated to melt the glass, or by pouring molten glass into the mould. Evidence from Pompeii suggests that molten hot glass may have been introduced as early as the mid-1st century CE. Blank vessels were then annealed, fixed to lathes and cut and polished on all surfaces to achieve their final shape. Pliny the Elder indicates in his Natural History (36.193) that lathes were used in the production of most glass of the mid-1st century.
Italy is believed to have been the source of the majority of early Imperial polychrome cast glass, whereas monochrome cast glasses are more predominant elsewhere in the Mediterranean. Forms produced show clear inspiration from the Roman bronze and silver industries, and in the case of carinated bowls and dishes, from the ceramic industry. Cast vessel forms became more limited during the late 1st century, but continued in production into the second or third decade of the 2nd century. Colourless cast bowls were widespread throughout the Roman world in the late 1st and early 2nd century CE, and may have been produced at more than one centre. Some revival of the casting technique appears in the 3rd or 4th century, but appears to have produced relatively small numbers of vessels.
Modern techniques
Sand casting
Sand casting involves the use of hot molten glass poured directly into a preformed mould. It is a process similar to casting metal into a mould. The sand mould is typically prepared by using a mixture of clean sand and a small proportion of the water-absorbing clay bentonite. Bentonite acts as a binding material. In the process, a small amount of water is added to the sand-bentonite mixture, and this is well mixed and sifted before addition to an open topped container. A template is prepared (typically made of wood, or a found object or even a body part such as a hand or fist) which is tightly pressed into the sand to make a clean impression. This impression then forms the mould.
The surface of the mould can be covered in coloured glass powders or frits to give a surface colour to the sand cast glass object. When the mould preparation is complete hot glass is ladled from the furnace at temperatures of about to allow it to freely pour. The hot glass is poured directly into the mould. During the pouring process, glass or compatible objects may be placed to later give the appearance of floating in the solid glass object. This very immediate and dynamic method was pioneered and perfected in the 1980s by the Swedish artist Bertil Vallien.
Kiln casting
Kiln casting involves the preparation of a mould which is often made of a mixture of plaster and refractory materials such as silica. A model can be made from any solid material, such as wax, wood, or metal, and after taking a cast of the model (a process called investment) the model is removed from the mould. One method of forming a mould is by the Cire perdue or "lost wax" method. Using this method, a model can be made from wax and after investment the wax can be steamed or burned away in a kiln, forming a cavity. The mould is equipped with a funnel-like reservoir filled with solid glass granules or lumps. The heat resistant mould is then placed in a kiln and heated to between and to melt the glass. As the glass melts it runs into and fills the mould.
Such kiln cast work can be of very large dimensions, as in the work of Czech artists Stanislav Libenský and Jaroslava Brychtová. Kiln cast glass has become an important material for contemporary artists such as Clifford Rainey, Karen LaMonte and Tomasz Urbanowicz, author of the "United Earth" glass sculpture in the European Parliament in Strasbourg.
Pâte de verre
Pâte de verre is a form of kiln casting and literally translated means glass paste. In this process, finely crushed glass is mixed with a binding material, such as a mixture of gum arabic and water, and often with colourants and enamels. The resultant paste is applied to the inner surface of a negative mould forming a coating. When the coated mould is fired at the appropriate temperature the glass is fused creating a hollow object that can have thick or thin walls depending on the thickness of the pate de verre layers. Daum, a French commercial crystal manufacturer, produce highly sculptural pieces in pate de verre.
Graphite casting
Graphite is also used in the hot forming of glass. Graphite moulds are prepared by carving into them, machining them into curved forms, or stacking them into shapes. Molten glass is poured into a mould where it is cooled until hard enough to be removed and placed into an annealing kiln to cool slowly.
See also
References
Further reading
Glass art
Glass production
Casting (manufacturing)
Casting | Glass casting | [
"Materials_science",
"Engineering"
] | 1,120 | [
"Glass engineering and science",
"Glass production"
] |
14,538,050 | https://en.wikipedia.org/wiki/MediaLib | mediaLib (from "multimedia library") is a portable low level library for accelerating multimedia applications, with interfaces in C. It was developed by Sun Microsystems and open-sourced under the CDDL license as part of the OpenSolaris project.
It is implemented in ANSI C, but can take advantage of SIMD multimedia instructions on various processors to gain a significant performance boost. It was originally designed to leverage VIS on SPARC processors and later added support for MMX/SSE/SSE2 on Intel/AMD processors.
Since mediaLib is written in C and SIMD multimedia compiler intrinsics, it should be usable on any system that has an ANSI C compiler that supports SIMD multimedia intrinsics. Systems without SIMD intrinsics support can also use it as pure ANSI C, forgoing any extra acceleration provided by SIMD multimedia instructions. It is also included as part of Solaris 10.
mediaLib 2.5 contains about 4000 files and 2.4 million lines of code, and contains more than 3000 functions for different areas:
algebra
matrix
image
graphics
signal processing
video
audio
speech
volume rendering
Open source applications that use mediaLib include Java, JDS for Solaris, mplayer, and ogle.
There are several mediaLib versions targeting different platforms, but all share the same API, so users can switch from one platform to another without changing source code:
Standard C: written in pure ANSI C, with some general code optimization for performance
VIS/VIS2/VIS3: optimized for SPARC chips with VIS/VIS2/VIS3 multimedia instruction sets
MMX/SSE/SSE2: optimized for Intel/AMD chips with MMX/SSE/SSE2 multimedia instruction sets
Integer: optimized for chips that have no or limited floating point capabilities, such as UltraSPARC T1 and some embedded chips
Multi-threaded version: A thin wrapper layer built with OpenMP on top of mediaLib, providing flexible multithreading multimedia acceleration for applications
External links
mediaLib source code
C (programming language) libraries
Free software programmed in C
Multimedia frameworks
Multimedia software
Sun Microsystems software | MediaLib | [
"Technology"
] | 439 | [
"Multimedia",
"Multimedia software"
] |
14,538,123 | https://en.wikipedia.org/wiki/Afqa | Afqa (; also spelled Afka) is a village and municipality located in the Byblos District of the Keserwan-Jbeil Governorate, northeast of Beirut in Lebanon. It has an average elevation of 1,200 meters above sea level and a total land area of 934 hectares. Its inhabitants are predominantly Shia Muslims.
Known in ancient times as Aphaca, a name that can be interpreted as "source", the village is located in the mountains of Lebanon, about 20 kilometres from the ancient city of Byblos, which still stands just east of the town of Qartaba. It is the site of one of the finest waterfalls in the mountains of the Middle East, which feeds into the Adonis River (known today as Abraham River or Nahr Ibrahim in Arabic), and forms Lake Yammoune, with which it is also associated by legend.
In Greek mythology, Adonis was born and died at the foot of the falls in Afqa. The ruins of the celebrated temple of Aphrodite Aphakitis— the Aphrodite particular to this site— are located there. Sir Richard Francis Burton and Sir James Frazer further attribute the temple at Afqa to the honouring of Astarte or Ishtar (Ashtaroth). Afqa is aligned centrally between Baalbek and Byblos, pointing to the summer solstice sunset over the Mediterranean. It is from Byblos that the myth was told of a mystical ark that came ashore containing the bones of Osiris. The ark became stuck in a swamp until Isis found it and carried it back to Ancient Egypt.
History
Ottoman tax records, which did not differentiate different Muslim groups from each other, indicate Afqa, or "Ifqi", had 20 Muslim households and six bachelors in 1523, 38 Muslim households and five bachelors in 1530, and 25 Muslim households and 15 bachelors in 1543.
The Afqa archaeological sites were amongst 34 cultural heritage properties given enhanced protection by UNESCO to safeguard them during the Israeli invasion of Lebanon in 2024.
Physical description
The waterfall at Afqa is the source for the River Adonis and is located on a bluff that forms an immense natural amphitheater. The river emerges from a large limestone cave in the cliff wall which stores and channels water from the melted snow of the mountains before releasing it into springs and streams below. At Afqa, several watery threads flow from the cave to form numerous cataracts, a scene of great beauty. The cave has over two miles (three km) of known passageways inside.
A great and ancient temple is located here, where the goddess Aphrodite was worshipped. Eusebius, the biographer of the emperor Constantine I, wrote that the emperor ordered the temple demolished. Frazer attributes its construction to the legendary forebear of King Cinyras, who was said to have founded a sanctuary for Aphrodite (i.e. Astarte). Reconstructed on a grander scale in Hellenistic times, then destroyed by the Emperor Constantine the Great in the fourth century, it was partially rebuilt by the later fourth-century emperor, Julian the Apostate. The site was finally abandoned during the reign of Theodosius I. Massive hewn blocks and a fine column of Syenite granite still mark the site, on a terrace facing the source of the river.
The remains of a Roman aqueduct that carried the waters of the River Adonis to the inhabitants of ancient Byblos are also located here.
Edward Robinson and Eli Smith camped at the site in 1852, merely remarking on its "shapeless ruins" and the difficulty of transport of two massive columns of Syenite granite. Frazer describes the village at Afqa in his 1922 book, The Golden Bough as:
"...the miserable village which still bears the name of Afqa at the head of the wild, romantic, wooded gorge of the Adonis. The hamlet stands among groves of noble walnut trees on the brink of the lyn. A little way off the river rushes from a cavern at the foot of a mighty amphitheater of towering cliffs to plunge in a series of cascades into the awful depths of the glen. The deeper it descends, the ranker and denser grows the vegetation, which, sprouting from the crannies and fissures of the rocks, spreads a green veil over the roaring or murmuring stream in the tremendous chasm below. There is something delicious, almost intoxicating, in the freshness of these tumbling waters, in the sweetness and purity of the mountain air, in the vivid green of the vegetation.
Possible early sanctuary of El
Marvin H. Pope (Yale University) identified the home of El in the Ugaritic texts of ca. 1200 BCE, described as "at the source[s] of the [two] rivers, amid the fountains of the [two] deeps", with this famous source of the river Adonis and Yammoune, an intermittent lake on the other side of the mountain, which Pope asserted was closely associated with it in legend.
Mythology
In classical Greek mythology, Afqa is associated with the cult of Aphrodite and Adonis. According to the myth, Cinyras, the King of Cyprus seduced his daughter Myrrha who was transformed into a tree that bears her name (see:Myrrh). After several months, the tree split open and the child Adonis emerged. He was reared by Aphrodite, who became enamored of him, causing her lover Ares to grow jealous. Ares sent a vicious boar to kill Adonis. At the pool at the foot of the falls of Afqa, Adonis bled to death from a deep wound in the groin. Aphrodite despaired at his death and out of pity for her the gods allowed Adonis to ascend from Hades for a short period each year.
Each spring at Afqa, the melting snows flood the river, bringing a reddish mud into the stream from the steep mountain slopes. The red stain can be seen feeding into the river and far out to the Mediterranean Sea. Legend held this to be the blood of Adonis, renewed each year, at the time of his death. Lucian of Samosata, a Syrian by birth, describes how a local man of Byblos debunked the legend:
"'This river, my friend and guest, passes through the Libanus: now this Libanus abounds in the red earth. The violent winds which blow regularly on those days bring down into the river a quantity of earth resembling vermilion. It is this earth that turns the river to red. And thus the change in the river's colour is due, not to blood as they affirm, but to the nature of the soil.' This was the story of the man of Byblos. But even assuming that he spoke the truth, yet there certainly seems to me something supernatural in the regular coincidence of the wind and the colouring of the river."
Lucian also describes practices by the Byblians of worship which some told him centered not on Adonis, but Osiris. He writes that he mastered the secret rites of Adonis at the temple at Afqa and that the locals there asserted that the legend about Adonis was true and occurred in their country. Lucian describes the rites, annually performed, that involved the beating of breasts and wailing, and the "perform[ing] [of] their secret ritual amid signs of mourning through the whole countryside. When they have finished their mourning and wailing, they sacrifice in the first place to Adonis, as to one who has departed this life: after this they allege that he is alive again, and exhibit his effigy to the sky."
Also in the fertile valley surrounding the river, millions of scarlet anemones bloom. Known as Adonis' flowers, according to legend, they spring from his blood, spilled as he lay dying beneath the trees at Afqa, and return each year in remembrance.
In his "Terminal Essay" in the 1885 translation of The Arabian Nights, Burton describes the temple at Afqa as a place of pilgrimage for the Metawali sect of Shia Islam, where vows are addressed to the Sayyidat al-Kabirah or "the Great Lady". In the early 20th century, strips of white cloth were still being attached to the ancient fig that shadows the source, and Metawalis and Christians alike were bringing the sick to be cured at "the abode of Sa’īdat Afkā, i.e. a feminine spirit of the same name as the place. Her husband built this temple. He was killed by a wild beast, and she searched among the mountains until she found his mangled body. This is evidently a modified view of the ancient myth of Astarte and Adonis," Lewis Bayles Paton reported in 1919, with a photograph of the cloth-hung fig tree. W. F. Albright noted this survival of this "female saint" as the most remarkable among "very few direct reflections of paganism in the names and legends of modern welis."
2006 Lebanon War
During the 2006 Lebanon War, the Afqa bridge that connects Mount Lebanon with the Beqaa valley was one of five bridges destroyed by Israeli jets.
References
Bibliography
External links
A letter from Gertrude Bell describing Afqa in 1900
Map of Lebanon and geographical coordinates for Afqa
Afqa on www.geographic.org
3D Google Map of the Afqa Grotto on gmap3d
Afqa on Tageo.com
Afqa on Localiban
Populated places in Byblos District
Shia Muslim communities in Lebanon
Archaeological sites in Lebanon
Levantine mythology
Ancient Roman temples
Roman sites in Lebanon
Tourist attractions in Lebanon
Summer solstice
Temples of Aphrodite
Astarte
El (deity) | Afqa | [
"Astronomy"
] | 2,018 | [
"Time in astronomy",
"Summer solstice"
] |
14,538,226 | https://en.wikipedia.org/wiki/Software%20Patent%20Institute | Software Patent Institute (established 1992 in Ann Arbor) is an American non-profit corporation established to assist in the correct assignment of software patent. It originally had the name University of Michigan Software Patent Institute, as it was established by the Industrial Technology Institute, represented by professor Bernard Galler.
It maintains a database of software technologies, open to the public since 1995, covering the folklore of the software industry, and provides courses, the SPI Reporter bulletin, and other educational materials. The institute was controversial at its founding (opposed in particular by the League for Programming Freedom), but has nevertheless received financial support from Usenix as well as from commercial software companies such as Oracle Corporation, IBM, Apple Inc. and Microsoft. Its executive director is Roland J. Cole, based in Indianapolis.
References
Institutes based in the United States
Non-profit technology
Organizations established in 1992
University of Michigan
Information technology organizations based in North America
Software patent law
Intellectual property organizations
1992 establishments in Michigan
Organizations based in Ann Arbor, Michigan | Software Patent Institute | [
"Technology"
] | 195 | [
"Information technology",
"Non-profit technology"
] |
14,539,370 | https://en.wikipedia.org/wiki/Segregation%20%28materials%20science%29 | In materials science, segregation is the enrichment of atoms, ions, or molecules at a microscopic region in a materials system. While the terms segregation and adsorption are essentially synonymous, in practice, segregation is often used to describe the partitioning of molecular constituents to defects from solid solutions, whereas adsorption is generally used to describe such partitioning from liquids and gases to surfaces. The molecular-level segregation discussed in this article is distinct from other types of materials phenomena that are often called segregation, such as particle segregation in granular materials, and phase separation or precipitation, wherein molecules are segregated in to macroscopic regions of different compositions. Segregation has many practical consequences, ranging from the formation of soap bubbles, to microstructural engineering in materials science, to the stabilization of colloidal suspensions.
Segregation can occur in various materials classes. In polycrystalline solids, segregation occurs at defects, such as dislocations, grain boundaries, stacking faults, or the interface between two phases. In liquid solutions, chemical gradients exist near second phases and surfaces due to combinations of chemical and electrical effects.
Segregation which occurs in well-equilibrated systems due to the intrinsic chemical properties of the system is termed equilibrium segregation. Segregation that occurs due to the processing history of the sample (but that would disappear at long times) is termed non-equilibrium segregation.
History
Equilibrium segregation is associated with the lattice disorder at interfaces, where there are sites of energy different from those within the lattice at which the solute atoms can deposit themselves. The equilibrium segregation is so termed because the solute atoms segregate themselves to the interface or surface in accordance with the statistics of thermodynamics in order to minimize the overall free energy of the system. This sort of partitioning of solute atoms between the grain boundary and the lattice was predicted by McLean in 1957.
Non-equilibrium segregation, first theorized by Westbrook in 1964, occurs as a result of solutes coupling to vacancies which are moving to grain boundary sources or sinks during quenching or application of stress. It can also occur as a result of solute pile-up at a moving interface.
There are two main features of non-equilibrium segregation, by which it is most easily distinguished from equilibrium segregation. In the non-equilibrium effect, the magnitude of the segregation increases with increasing temperature and the alloy can be homogenized without further quenching because its lowest energy state corresponds to a uniform solute distribution. In contrast, the equilibrium segregated state, by definition, is the lowest energy state in a system that exhibits equilibrium segregation, and the extent of the segregation effect decreases with increasing temperature. The details of non-equilibrium segregation are not discussed here but can be found in the review by Harries and Marwick.
Importance
Segregation of a solute to surfaces and grain boundaries in a solid produces a section of material with a discrete composition and its own set of properties that can have important (and often deleterious) effects on the overall properties of the material. These 'zones' with an increased concentration of solute can be thought of as the cement between the bricks of a building. The structural integrity of the building depends not only on the material properties of the brick, but also greatly on the properties of the long lines of mortar in between.
Segregation to grain boundaries, for example, can lead to grain boundary fracture as a result of temper brittleness, creep embrittlement, stress relief cracking of weldments, hydrogen embrittlement, environmentally assisted fatigue, grain boundary corrosion, and some kinds of intergranular stress corrosion cracking. An important technique for studying impurity segregation is Auger electron spectroscopy (AES) of grain boundaries, in which special specimens are tensile-fractured directly inside the ultra-high-vacuum chamber of the Auger electron spectrometer, a method developed by Ilyin.
Segregation to grain boundaries can also affect their respective migration rates, and so affects sinterability, as well as the grain boundary diffusivity (although sometimes these effects can be used advantageously).
Segregation to free surfaces also has important consequences involving the purity of metallurgical samples. Because of the favorable segregation of some impurities to the surface of the material, a very small concentration of impurity in the bulk of the sample can lead to a very significant coverage of the impurity on a cleaved surface of the sample. In applications where an ultra-pure surface is needed (for example, in some nanotechnology applications), the segregation of impurities to surfaces requires a much higher purity of bulk material than would be needed if segregation effects did not exist. As an illustration, consider two cases in which the total fraction of impurity atoms is 0.25 (25 impurity atoms in 100 total). If these impurities are equally distributed throughout the sample, the fractional surface coverage of impurity atoms is also approximately 0.25. If, however, the same number of impurity atoms are segregated on the surface, an observation of the surface composition would yield a much higher impurity fraction (in this case, about 0.69). In fact, in this example, were impurities to completely segregate to the surface, an impurity fraction of just 0.36 could completely cover the surface of the material. In an application where surface interactions are important, this result could be disastrous.
While the intergranular failure problems noted above are sometimes severe, they are rarely the cause of major service failures (in structural steels, for example), as suitable safety margins are included in the designs. Perhaps the greater concern is that with the development of new technologies and materials with new and more extensive mechanical property requirements, and with the increasing impurity contents as a result of the increased recycling of materials, we may see intergranular failure in materials and situations not seen currently. Thus, a greater understanding of all of the mechanisms surrounding segregation might lead to being able to control these effects in the future. Modeling potentials, experimental work, and related theories are still being developed to explain these segregation mechanisms for increasingly complex systems.
Theories of segregation
Several theories describe the equilibrium segregation activity in materials. The adsorption theories for the solid-solid interface and the solid-vacuum surface are direct analogues of theories well known in the field of gas adsorption on the free surfaces of solids.
Langmuir–McLean theory for surface and grain boundary segregation in binary systems
This is the earliest theory specifically for grain boundaries, in which McLean uses a model of P solute atoms distributed at random amongst N lattice sites and p solute atoms distributed at random amongst n independent grain boundary sites. The total free energy due to the solute atoms is then:
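The stripped expression can be reconstructed from the model as described; the combinatorial term counts the arrangements of P solute atoms on N lattice sites and p solute atoms on n boundary sites (treat the exact notation as an assumption of this reconstruction, not a quotation of the original):

```latex
G = pe + PE - kT \ln\!\left[\frac{N!}{(N-P)!\,P!}\cdot\frac{n!}{(n-p)!\,p!}\right]
```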
where E and e are energies of the solute atom in the lattice and in the grain boundary, respectively, and the k ln term represents the configurational entropy of the arrangement of the solute atoms in the bulk and grain boundary. McLean used basic statistical mechanics to find the fractional monolayer of segregant, X_b, at which the system energy was minimized (the equilibrium state), differentiating G with respect to p, noting that the sum of p and P is constant. Here the grain boundary analogue of Langmuir adsorption at free surfaces becomes:
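The following is a reconstruction of the stripped equation in the conventional Langmuir–McLean notation; the symbols X_b, X_b^0, X_c and ΔG are chosen to match the definitions in the next sentence and are an assumption of this reconstruction:

```latex
\frac{X_b}{X_b^0 - X_b} = \frac{X_c}{1 - X_c}\,\exp\!\left(\frac{-\Delta G}{RT}\right)
```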
Here, X_b^0 is the fraction of the grain boundary monolayer available for segregated atoms at saturation, X_b is the actual fraction covered with segregant, X_c is the bulk solute molar fraction, and ΔG is the free energy of segregation per mole of solute.
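Because the non-interacting isotherm can be solved in closed form for X_b, it is straightforward to evaluate numerically. The sketch below assumes the reconstructed equation above; the ΔG and X_c values are illustrative, not taken from this article:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def mclean_coverage(x_c: float, dG: float, T: float, x_b0: float = 1.0) -> float:
    """Equilibrium boundary coverage X_b from the Langmuir-McLean isotherm,
    solved in closed form (non-interacting segregants)."""
    k = x_c / (1.0 - x_c) * math.exp(-dG / (R * T))
    return x_b0 * k / (1.0 + k)

# Illustrative values: 50 ppm bulk solute, dG = -40 kJ/mol.
for T in (600.0, 800.0, 1000.0):
    print(f"T = {T:.0f} K: X_b = {mclean_coverage(5e-5, -40e3, T):.3f}")
```

The coverage falls as temperature rises, which is the signature of equilibrium segregation noted earlier.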
Values of ΔG were estimated by McLean using the elastic strain energy, E_el, released by the segregation of solute atoms. The solute atom is represented by an elastic sphere fitted into a spherical hole in an elastic matrix continuum. The elastic energy associated with the solute atom is given by:
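A common form of this misfitting-sphere result, reconstructed to match the symbol definitions that follow (treat the exact prefactor as an assumption of this sketch):

```latex
E_{el} = \frac{24\pi K_1 G\, r_0 r_1 (r_1 - r_0)^2}{3 K_1 r_1 + 4 G r_0}
```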
where K_1 is the solute bulk modulus, G is the matrix shear modulus, and r_0 and r_1 are the atomic radii of the matrix and impurity atoms, respectively. This method gives values correct to within a factor of two (as compared with experimental data for grain boundary segregation), but a greater accuracy is obtained using the method of Seah and Hondros, described in the following section.
Free energy of grain boundary segregation in binary systems
Using truncated BET theory (the gas adsorption theory developed by Brunauer, Emmett, and Teller), Seah and Hondros write the solid-state analogue as:
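The stripped expression can be reconstructed as the Seah–Hondros form (again an assumption based on the surrounding definitions, with X_c^* denoting the solid solubility):

```latex
\frac{X_b}{X_b^0 - X_b} = \frac{X_c}{X_c^*}\,\exp\!\left(\frac{-\Delta G}{RT}\right)
```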
where X_c^* is the solid solubility, which is known for many elements (and can be found in metallurgical handbooks). In the dilute limit, a slightly soluble substance has X_c ≪ 1, so the above equation reduces to the same form as that found with the Langmuir–McLean theory. This equation is only valid for X_c < X_c^*. If there is an excess of solute such that a second phase appears, the solute content is limited to X_c^* and the equation becomes
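Setting X_c = X_c^* in the reconstructed form above gives the saturated isotherm (an inference from the previous equation rather than a quotation of the original):

```latex
\frac{X_b}{X_b^0 - X_b} = \exp\!\left(\frac{-\Delta G}{RT}\right)
```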
This theory for grain boundary segregation, derived from truncated BET theory, provides excellent agreement with experimental data obtained by Auger electron spectroscopy and other techniques.
More complex systems
Other models exist to model more complex binary systems. The above theories operate on the assumption that the segregated atoms are non-interacting. If, in a binary system, adjacent adsorbate atoms are allowed an interaction energy ω, such that they can attract (when ω is negative) or repel (when ω is positive) each other, the solid-state analogue of the Fowler adsorption theory is developed as
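One common way to write this solid-state Fowler isotherm, reconstructed here by analogy with the Fowler–Guggenheim gas-adsorption isotherm (the exact placement of the coordination number Z and of ω varies between treatments, so treat this form as an assumption):

```latex
\frac{X_b}{X_b^0 - X_b} = \frac{X_c}{1 - X_c}\,\exp\!\left(-\frac{\Delta G + Z\,\omega\,X_b / X_b^0}{RT}\right)
```

With ω < 0 (attraction) the exponent grows as X_b grows, which is what produces the increasingly sharp, and eventually discontinuous, rise in segregation described next.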
When ω is zero, this theory reduces to that of Langmuir and McLean. However, as ω becomes more negative, the segregation shows progressively sharper rises as the temperature falls, until eventually the rise in segregation is discontinuous at a certain temperature.
Guttman, in 1975, extended the Fowler theory to allow for interactions between two co-segregating species in multicomponent systems. This modification is vital to explaining the segregation behavior that results in the intergranular failures of engineering materials. More complex theories are detailed in the work by Guttmann and McLean and Guttmann.
The free energy of surface segregation in binary systems
The Langmuir–McLean equation for segregation, when using the regular solution model for a binary system, is valid for surface segregation (although sometimes the equation will be written replacing with ). The free energy of surface segregation is . The enthalpy is given by
where and are matrix surface energies without and with solute, is their heat of mixing, Z and are the coordination numbers in the matrix and at the surface, and is the coordination number for surface atoms to the layer below. The last term in this equation is the elastic strain energy , given above, and is governed by the mismatch between the solute and the matrix atoms. For solid metals, the surface energies scale with the melting points. The surface segregation enrichment ratio increases when the solute atom size is larger than the matrix atom size and when the melting point of the solute is lower than that of the matrix.
A chemisorbed gaseous species on the surface can also have an effect on the surface composition of a binary alloy. In the presence of a coverage of a chemisorbed species theta, it is proposed that the Langmuir-McLean model is valid with the free energy of surface segregation given by , where
and are the chemisorption energies of the gas on solute A and matrix B and is the fractional coverage. At high temperatures, evaporation from the surface can take place, causing a deviation from the McLean equation. At lower temperatures, both grain boundary and surface segregation can be limited by the diffusion of atoms from the bulk to the surface or interface.
Kinetics of segregation
In some situations where segregation is important, the segregant atoms do not have sufficient time to reach their equilibrium level as defined by the above adsorption theories. The kinetics of segregation become a limiting factor and must be analyzed as well. Most existing models of segregation kinetics follow the McLean approach. In the model for equilibrium monolayer segregation, the solute atoms are assumed to segregate to a grain boundary from two infinite half-crystals or to a surface from one infinite half-crystal. The diffusion in the crystals is described by Fick's laws. The ratio of the solute concentration in the grain boundary to that in the adjacent atomic layer of the bulk is given by an enrichment ratio, β. Most models assume β to be a constant, but in practice this is only true for dilute systems with low segregation levels. In this dilute limit, if d is one monolayer, β is given as X_b/X_c.
The kinetics of segregation can be described by the following equation:
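A reconstruction of McLean's kinetic solution, written to be consistent with the symbol list that follows (the placement of the constant b — 4 for grain boundaries, 1 for a free surface — and of the product βfd are assumptions of this sketch):

```latex
\frac{X_b(t) - X_b(0)}{X_b(\infty) - X_b(0)} = 1 - \exp\!\left(\frac{b\,D t}{\beta^2 f^2 d^2}\right)\operatorname{erfc}\!\left(\sqrt{\frac{b\,D t}{\beta^2 f^2 d^2}}\right)
```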
where b is 4 for grain boundaries and 1 for the free surface, X_b(t) is the boundary content at time t, D is the solute bulk diffusivity, f is a factor related to the atomic sizes of the solute and the matrix atoms, and d is the thickness of the boundary or surface layer. For short times, this equation is approximated by:
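Expanding the expression above for small argument (using exp(x²)erfc(x) ≈ 1 − 2x/√π) gives the short-time form, again under the notational assumptions stated above:

```latex
\frac{X_b(t) - X_b(0)}{X_b(\infty) - X_b(0)} \approx \frac{2}{\beta f d}\sqrt{\frac{b\,D t}{\pi}}
```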
In practice, β is not a constant but generally falls as segregation proceeds due to saturation. If β starts high and falls rapidly as the segregation saturates, the above equation is valid until the point of saturation.
In metal castings
All metal castings experience segregation to some extent, and a distinction is made between macrosegregation and microsegregation. Microsegregation refers to localized differences in composition between dendrite arms, and can be significantly reduced by a homogenizing heat treatment. This is possible because the distances involved (typically on the order of 10 to 100 μm) are sufficiently small for diffusion to be a significant mechanism. This is not the case in macrosegregation. Therefore, macrosegregation in metal castings cannot be remedied or removed using heat treatment.
Further reading
See also
Adsorption
Absorption (chemistry)
BET theory
Freundlich equation
Langmuir equation
Reactions on surfaces
Wetting
Henry adsorption constant
References
Materials science
Surface science | Segregation (materials science) | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,861 | [
"Applied and interdisciplinary physics",
"Materials science",
"Surface science",
"Condensed matter physics",
"nan"
] |
14,539,614 | https://en.wikipedia.org/wiki/Water%20on%20Earth | Water on Earth may refer to:
Origin of water on Earth
Water distribution on Earth
Water | Water on Earth | [
"Environmental_science"
] | 19 | [
"Water",
"Hydrology"
] |
14,541,032 | https://en.wikipedia.org/wiki/Xindy | xindy is a flexible program for sorting and formatting book indexes. It was written by Joachim Schrod as a successor to MakeIndex. xindy supports indexing for a variety of programs, including especially LaTeX and troff, and produces complex indices of the data.
xindy is cited as one of the most widely used indexing programs for LaTeX. Unlike MakeIndex, xindy features strong support for many languages in addition to English, and many standard character encodings including Unicode.
xindy is licensed under the GNU GPL.
References
External links
Free TeX software
Troff
Index (publishing)
Software using the GNU General Public License | Xindy | [
"Mathematics"
] | 139 | [
"Troff",
"Mathematical markup languages"
] |
14,541,266 | https://en.wikipedia.org/wiki/List%20of%20scorewriters | This is a list of music notation programs (excluding discontinued products) which have articles on Wikipedia.
For programs specifically for writing guitar tablature, see the list of guitar tablature software. For discontinued products, see list of discontinued scorewriters.
Free software
Denemo, a scorewriter primarily providing a front-end for LilyPond
Frescobaldi, a GUI front-end for LilyPond [Linux, FreeBSD, OS X and Microsoft Windows]
Impro-Visor, a GUI- and text-based scorewriter for constructing lead sheets and jazz solos on Linux, OS X, and Windows
LilyPond, a text-based scorewriter with several backends including PS, PDF and SVG
MuseScore, a WYSIWYG scorewriter for Linux, Windows, and OS X
MusiXTeX, a set of macros and fonts that allow music typesetting in TeX
NoteEdit, a KDE scorewriter
Rosegarden, a scorewriter for Linux
Philip's Music Writer, a text-based scorewriter originally written for Acorn RISC OS (released as a commercial program in the 1990s), later ported to POSIX and licensed under the GNU GPL
Proprietary
Windows
Capella
Cubase Score V1-5 (first run on version numbers)
Cubase SX
Cubase V4-9.5 (second run on version numbers)
Dorico
Encore
Forte (notation program)
Guitar Pro (primarily for guitars and bands, but also notates other instruments including drums)
Igor Engraver
MagicScore, plus Music Notation for MS Word and lite version MagicScore School and free versions MagicScore onLine and MagicScore Note
Mozart
Mus2
MusEdit
MusiCAD
MusicEase, notates standard music, shaped notes and tablature; transposes and imports abc music
Music Write
MusicTime Deluxe
Musink Lite, a WYSIWYM scorewriter and publication tool for Windows
Notion
Notation Composer
NoteWorthy Composer
Overture, plus lite version Score Writer
SCORE, one of the earliest scorewriters to be used for commercial publishing, no longer developed or sold
ScoreCloud – Audio, manual or MIDI input analysis to musical notation, and editor
Sibelius, Sibelius First, Sibelius Artist, and Sibelius Ultimate
SmartScore Pro (music scanning and scorewriting. Lite versions: SmartScore Songbook, MIDI, Piano and Guitar Editions)
Mac
ConcertWare (obsolete)
Cubase Score V1-5 (first run of version numbers)
Cubase SX
Cubase V4-9.5 (second run of version numbers)
Dorico
Emagic, makers of Notator (bought by Apple in 2002; Windows version no longer developed or supported)
Encore
Guitar Pro (primarily for guitars and bands, but also notates other instruments including drums)
Igor Engraver
Logic Pro, Logic Express (successor to Notator and Notator Logic)
Mosaic (Mac OS 9 only)
Mus2
MusicEase, notates standard music, shaped notes and tablature; transposes and imports abc music
MusicTime Deluxe
Notion
Overture plus lite version Score Writer
ScoreCloud – Audio, manual or MIDI input analysis to musical notation, and editor
Sibelius, Sibelius First, Sibelius Artist, and Sibelius Ultimate
SmartScore Pro (music-scanning and music-scoring. Lesser versions: SmartScore Songbook, MIDI, Piano and Guitar Editions)
Other
Aegis Sonix (Amiga)
Cubase Score V1-2 (Atari ST)
Deluxe Music Construction Set (Amiga)
Music Construction Set (IBM PC, Apple II, Atari 8-bit, C64)
MusicPrinter Plus (MS-DOS)
ScoreCloud Express (iOS)
See also
Comparison of scorewriters
Comparison of MIDI editors and sequencers
List of guitar tablature software
List of music software
References
Lists of software
Scorewriters | List of scorewriters | [
"Technology"
] | 801 | [
"Computing-related lists",
"Lists of software"
] |
14,541,675 | https://en.wikipedia.org/wiki/ATP%20citrate%20synthase | ATP citrate synthase (also ATP citrate lyase (ACLY)) is an enzyme that in animals catalyzes an important step in fatty acid biosynthesis. By converting citrate to acetyl-CoA, the enzyme links carbohydrate metabolism, which yields citrate as an intermediate, with fatty acid biosynthesis, which consumes acetyl-CoA. In plants, ATP citrate lyase generates cytosolic acetyl-CoA precursors of thousands of specialized metabolites, including waxes, sterols, and polyketides.
Function
ATP citrate lyase is the primary enzyme responsible for the synthesis of cytosolic acetyl-CoA in many tissues. The enzyme is a tetramer of apparently identical subunits. In animals, the product, acetyl-CoA, is used in several important biosynthetic pathways, including lipogenesis and cholesterogenesis. It is activated by insulin.
In plants, ATP citrate lyase generates acetyl-CoA for cytosolically-synthesized metabolites; Acetyl-CoA is not transported across subcellular membranes of plants. Such metabolites include: elongated fatty acids (used in seed oils, membrane phospholipids, the ceramide moieties of sphingolipids, cuticle, cutin, and suberin); flavonoids; malonic acid; acetylated phenolics, alkaloids, isoprenoids, anthocyanins, and sugars; and, mevalonate-derived isoprenoids (e.g., sesquiterpenes, sterols, brassinosteroids); malonyl and acyl-derivatives (d-amino acids, malonylated flavonoids, acylated, prenylated and malonated proteins). De novo fatty acid biosynthesis in plants occurs in plastids; thus, ATP citrate lyase is not relevant to this pathway.
Reaction
ATP citrate lyase is responsible for catalyzing the conversion of citrate and Coenzyme A (CoA) to acetyl-CoA and oxaloacetate, driven by hydrolysis of ATP. In the presence of ATP and CoA, citrate lyase catalyzes the cleavage of citrate to yield acetyl CoA, oxaloacetate, adenosine diphosphate (ADP), and orthophosphate (Pi):
citrate + ATP + CoA → oxaloacetate + Acetyl-CoA + ADP + Pi
This enzyme was formerly given the EC number 4.1.3.8.
Location
The enzyme is cytosolic in plants and animals.
Structure
The enzyme is composed of two subunits in green plants (including Chlorophyceae, Marchantimorpha, Bryopsida, Pinaceae, monocotyledons, and eudicots), species of fungi, glaucophytes, Chlamydomonas, and prokaryotes.
Animal ACL enzymes are homomeric; a fusion of the ACLA and ACLB genes probably occurred early in the evolutionary history of this kingdom.
The mammalian ATP citrate lyase has an N-terminal citrate-binding domain that adopts a Rossmann fold, followed by a CoA-binding domain and a CoA-ligase domain, and finally a C-terminal citrate synthase domain. The cleft between the CoA-binding and citrate synthase domains forms the active site of the enzyme, where both citrate and acetyl-coenzyme A bind.
In 2010, a structure of truncated human ATP citrate lyase was determined using X-ray diffraction to a resolution of 2.10 Å. In 2019, a full length structure of human ACLY in complex with the substrates coenzyme A, citrate and Mg.ADP was determined by X-ray crystallography to a resolution of 3.2 Å. Moreover, in 2019 a full length structure of ACLY in complex with an inhibitor was determined by cryo-EM methods to a resolution of 3.7 Å. Additional structures of heteromeric ACLY-A/B from the green sulfur bacteria Chlorobium limicola and the archaeon Methanosaeta concilii show that the architecture of ACLY is evolutionarily conserved. Full length ACLY structures showed that the tetrameric protein oligomerizes via its C-terminal domain. The C-terminal domain had not been observed in the previously determined truncated crystal structures. The C-terminal region of ACLY assembles in a tetrameric module that is structurally similar to citryl-CoA lyase (CCL) found in deep branching bacteria. This CCL module catalyses the cleavage of the citryl-CoA intermediate into the products acetyl-CoA and oxaloacetate.
In 2019, cryo-EM structures of human ACLY, alone or bound to substrates or products, were reported as well. ACLY forms a homotetramer with a rigid citrate synthase homology (CSH) module, flanked by four flexible acetyl-CoA synthetase homology (ASH) domains; CoA is bound at the CSH–ASH interface in mutually exclusive productive or unproductive conformations. The structure of a catalytic mutant of ACLY in the presence of the ATP, citrate and CoA substrates reveals CoA and a phospho-citrate intermediate in the N-terminal domain. Cryo-EM structures of product-bound and substrate-bound ACLY were also determined, at 3.0 Å and 3.1 Å respectively. An EM structure of the mutant E599Q in complex with CoA and the phospho-citrate intermediate was determined at a resolution of 2.9 Å. Comparison between these structures of apo-ACLY and ligand-bound ACLY demonstrated conformational changes in the ASH domain (the N-terminal domain) when different ligands bind.
Pharmacology
The enzyme's action can be inhibited by the coenzyme A-conjugate of bempedoic acid, a compound which lowers LDL cholesterol in humans. The drug was approved by the Food and Drug Administration in February 2020 for use in the United States.
References
Further reading
External links
EC 2.3.3
Citric acid cycle | ATP citrate synthase | [
"Chemistry"
] | 1,363 | [
"Carbohydrate metabolism",
"Citric acid cycle"
] |
14,542,297 | https://en.wikipedia.org/wiki/Formylmethanofuran%E2%80%94tetrahydromethanopterin%20N-formyltransferase | In enzymology, a formylmethanofuran-tetrahydromethanopterin N-formyltransferase (EC 2.3.1.101) is an enzyme that catalyzes the chemical reaction
formylmethanofuran + 5,6,7,8-tetrahydromethanopterin ⇌ methanofuran + 5-formyl-5,6,7,8-tetrahydromethanopterin
Thus, the two substrates of this enzyme are formylmethanofuran and 5,6,7,8-tetrahydromethanopterin, whereas its two products are methanofuran and 5-formyl-5,6,7,8-tetrahydromethanopterin.
This enzyme belongs to the family of transferases, specifically those acyltransferases transferring groups other than aminoacyl groups. The systematic name of this enzyme class is formylmethanofuran:5,6,7,8-tetrahydromethanopterin 5-formyltransferase. Other names in common use include formylmethanofuran-tetrahydromethanopterin formyltransferase, formylmethanofuran:tetrahydromethanopterin formyltransferase, N-formylmethanofuran(CHO-MFR):tetrahydromethanopterin(H4MPT) formyltransferase, FTR, and formylmethanofuran:5,6,7,8-tetrahydromethanopterin N5-formyltransferase. This enzyme participates in folate biosynthesis.
Ftr from the thermophilic methanogen Methanopyrus kandleri (which has an optimum growth temperature of 98 °C) is a hyperthermophilic enzyme that is absolutely dependent on the presence of lyotropic salts for activity and thermostability. The crystal structure of Ftr has been determined and reveals a homotetramer composed essentially of two dimers. Each subunit is subdivided into two tightly associated lobes, both consisting of a predominantly antiparallel beta sheet flanked by alpha helices, forming an alpha/beta sandwich structure. The approximate location of the active site was detected in a region close to the dimer interface. Ftr from the mesophilic methanogen Methanosarcina barkeri and from the sulphate-reducing archaeon Archaeoglobus fulgidus have similar structures.
In the methylotrophic bacterium Methylobacterium extorquens, Ftr interacts with three other polypeptides to form an Ftr/hydrolase complex which catalyses the hydrolysis of formyl-tetrahydromethanopterin to formate during growth on C1 substrates.
Structural studies
As of late 2007, five structures have been solved for this class of enzymes.
References
Further reading
Protein domains
EC 2.3.1
Enzymes of known structure | Formylmethanofuran—tetrahydromethanopterin N-formyltransferase | [
"Biology"
] | 656 | [
"Protein domains",
"Protein classification"
] |
17,324,908 | https://en.wikipedia.org/wiki/LongPen | The LongPen is a remote type of autopen. This signing device was conceived of by writer Margaret Atwood in 2004 and debuted in 2006. It allows a person to write remotely in ink anywhere connected to the Internet, via a touchscreen device operating a robotic hand. It can also support an audio and video conversation between the endpoints, such as a fan and author, while a book is being signed.
The system was used by Conrad Black, who was under arrest, to "attend" a book signing event without leaving his home.
See also
List of Canadian inventions and discoveries
Interactive whiteboard
Polygraph (duplicating device)
Autopen
Telautograph, another remote signing device, patented by Elisha Gray in 1888
References
Pointing-device text input
Computer output devices
Margaret Atwood | LongPen | [
"Technology"
] | 161 | [
"Computing stubs",
"Computer hardware stubs"
] |
17,325,007 | https://en.wikipedia.org/wiki/GLITS | Graham's Line Identification Tone System (GLITS) is a test signal for stereo systems devised by BBC TV Sound Supervisor and Fellow of the IPS Graham Haines in the mid-1980s. It comprises a 1 kHz tone at 0 dBu (-18 dBFS) on both channels, with interruptions which identify the channels.
The left channel is interrupted once for 250 ms every 4 seconds. 250 ms later the right channel has two interruptions of 250 ms spaced by 250 ms.
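A minimal NumPy sketch of one GLITS cycle follows. The 48 kHz sample rate and the exact phasing (right-channel gaps at 0.5–0.75 s and 1.0–1.25 s, i.e. starting 250 ms after the left gap ends) are assumptions consistent with, but not fixed by, the description above; mapping -18 dBFS to a linear peak amplitude is likewise a convention choice.

```python
import numpy as np

FS    = 48_000                 # sample rate [Hz]; an assumption, not part of the spec
AMP   = 10 ** (-18 / 20)       # -18 dBFS taken as linear peak amplitude (a convention choice)
CYCLE = 4.0                    # the GLITS pattern repeats every 4 seconds

def glits_cycle(fs=FS):
    """Return one 4-second stereo GLITS cycle as an (N, 2) float array."""
    t = np.arange(int(CYCLE * fs)) / fs
    tone = AMP * np.sin(2 * np.pi * 1000.0 * t)    # 1 kHz line-up tone
    left, right = tone.copy(), tone.copy()

    def mute(ch, start, dur=0.250):                # a 250 ms interruption
        ch[int(start * fs):int((start + dur) * fs)] = 0.0

    mute(left, 0.00)     # left: one 250 ms gap per cycle
    mute(right, 0.50)    # right: two 250 ms gaps, 250 ms apart,
    mute(right, 1.00)    # starting 250 ms after the left gap ends
    return np.stack([left, right], axis=1)
```

Because each channel's gap pattern is distinct, a listener or meter can tell not only left from right but also that each channel belongs to a stereo pair.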
This arrangement has an advantage over the EBU stereo ident tone in that each channel is explicitly identified as belonging to a stereo pair. The EBU Technical Document Multichannel Audio Line-up Tone (Tech 3304) defines stereo lineup tone as having an interruption in the left channel only, lasting 250 ms every 3 s.
Multichannel GLITS
There is now an official EBU standard for a multichannel 5.1 ident tone, BLITS (Black and Lane's Ident Tones for Surround), which is also described in the Tech 3304 paper, along with an alternative film-style multichannel ident tone system for arrays larger than 5.1.
BLITS plays a sequence of tones (based on the musical notes A and E) at -18 dBFS on each channel in the AES channel-format order (L, R, C, LFE, Ls, Rs), followed by an EBU-style ident on just the front left and right channels, again at -18 dBFS and with four interruptions on the left channel. The four interruptions provide a unique confirmation that a stereo or mono downmix came from a 5.1 source and avoid any possible confusion with stereo EBU or GLITS downmixes. The final BLITS tone sequence is a 2 kHz tone at -24 dBFS on all six channels; the lower source signal level ensures that any derived downmixes remain close to -18 dBFS.
The alternative EBU multichannel ident tone follows a format more closely associated with the film industry. A sustained 80 Hz tone runs on the LFE channel throughout the sequence. After a 3-second period of constant 1 kHz, -18 dBFS tone on all main channels, each channel is identified in turn with a 0.5 s pulse of 1 kHz tone, separated from its neighbours by 0.5 s of silence. The ident sequence starts at Front Left and continues clockwise through each available channel. The amount of time between the 3-second constant tone periods indicates the total number of channels in the system; e.g., a 7.1 system will have an ident sequence lasting 8 seconds.
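As a quick check of the arithmetic in the film-style scheme: each identified channel contributes a 0.5 s pulse plus 0.5 s of silence, i.e. 1 s per channel, and taking the stated 7.1 → 8 s example at face value, all eight channels of a 7.1 array count toward the total. A hypothetical helper:

```python
def ident_duration_s(n_channels: int) -> float:
    """Film-style ident length: 0.5 s pulse + 0.5 s silence per identified channel."""
    return n_channels * (0.5 + 0.5)

assert ident_duration_s(8) == 8.0   # 7.1 array (8 channels), as stated above
```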
Snell & Wilcox have used the following on the embedded audio in their VALID8 (Video Audio Line-up & IDentification) equipment:
Channel 1 (L) 980 Hz one 250 ms interruption every 4 seconds
Channel 2 (R) 980 Hz two 250 ms interruptions every 4 seconds
Channel 3 (C) 432 Hz one 250 ms interruption every 4 seconds
Channel 4 (Lfe) 432 Hz two 250 ms interruptions every 4 seconds (probably not audible from a subwoofer)
Channel 5 (Ls) 990 Hz one 250 ms interruption every 4 seconds
Channel 6 (Rs) 990 Hz two 250 ms interruptions every 4 seconds
Channel 7 (Lo) 436 Hz one 250 ms interruption every 4 seconds
Channel 8 (Ro) 436 Hz two 250 ms interruptions every 4 seconds
References
Broadcast engineering
Test items
British inventions | GLITS | [
"Engineering"
] | 700 | [
"Broadcast engineering",
"Electronic engineering"
] |
17,325,025 | https://en.wikipedia.org/wiki/Great%20Observatories%20Origins%20Deep%20Survey | The Great Observatories Origins Deep Survey, or GOODS, is an astronomical survey combining deep observations from three of NASA's Great Observatories: the Hubble Space Telescope, the Spitzer Space Telescope, and the Chandra X-ray Observatory, along with data from other space-based telescopes, such as XMM-Newton, and some of the world's most powerful ground-based telescopes.
GOODS is intended to enable astronomers to study the formation and evolution of galaxies in the distant, early universe.
The Great Observatories Origins Deep Survey consists of optical and near-infrared imaging taken with the Advanced Camera for Surveys on the Hubble Space Telescope, the Very Large Telescope and the 4-m telescope at Kitt Peak National Observatory, together with infrared data from the Spitzer Space Telescope. These are added to pre-existing X-ray data from the Chandra X-ray Observatory and ESA's XMM-Newton. The survey covers two fields of 10' by 16': one centered on the Hubble Deep Field North (12h 36m 55s, +62° 14m 15s) and the other on the Chandra Deep Field South (3h 32m 30s, −27° 48m 20s).
The two GOODS fields are the most data-rich areas of the sky in terms of depth and wavelength coverage.
Instruments
GOODS consists of data from the following space-based observatories:
The Hubble Space Telescope (optical imaging with the Advanced Camera for Surveys)
The Spitzer Space Telescope (infrared imaging)
The Chandra X-ray Observatory (X-ray)
XMM-Newton (an X-ray telescope belonging to the European Space Agency)
The Herschel Space Observatory (an infrared telescope belonging to the ESA)
Hubble Space Telescope images
GOODS used the Hubble Space Telescope's Advanced Camera for Surveys with four filters, centered at 435, 606, 775 and 850 nm. The resulting map covers 30 times the area of the Hubble Deep Field, at roughly one photometric magnitude lower sensitivity, and has enough resolution to allow the study of 1 kpc-scale objects at redshifts up to 6. It also provides photometric redshifts for over 60,000 galaxies within the field, providing an excellent sample for studying bright galaxies at high redshifts.
Herschel
In May 2010, scientists announced that the infrared data from the Herschel Space Observatory was joining the GOODS dataset, after initial analysis of data using Herschel's PACS and SPIRE instruments. In October 2009, Herschel observed the GOODS-North field, and in January 2010 the GOODS-South field. In so doing, Herschel identified sources for the Cosmic Infrared Background.
Findings
Direct collapse black holes
Two objects studied in the GOODS survey, GOODS-S 29323 and GOODS-S 33160, show evidence of being seeds for direct-collapse black holes, a potential mechanism for black hole formation in the early universe in which a cloud of gas collapses directly into a black hole. GOODS-S 29323 has a redshift of 9.73 (a light-travel distance of about 13.2 billion light years from Earth), and GOODS-S 33160 has a redshift of 6.06. Such distances probe the early universe, when matter was present in large, dense concentrations. Under the direct-collapse model, such dense gas clouds, collapsing under their own gravity, could have formed the earliest known supermassive black holes found at the centers of many galaxies. The high infrared radiation in the spectra of these two objects would ordinarily imply extremely high star-formation rates, but it also fits the model of a direct-collapse black hole. Additionally, X-ray radiation is present in these objects, thought to originate from the hot accretion disk of a collapsing black hole.
GOODS-S 29323 is located in the constellation Fornax, at right ascension 03h 32m 28s and declination –27° 48′ 30″.
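As a hedged sketch, the quoted "13.2 billion light years" corresponds to a light-travel (lookback) time, which can be reproduced with astropy's built-in Planck 2018 cosmology; the exact figure depends on the cosmological parameters assumed.

```python
from astropy.cosmology import Planck18 as cosmo

for name, z in [("GOODS-S 29323", 9.73), ("GOODS-S 33160", 6.06)]:
    t = cosmo.lookback_time(z)   # time elapsed since the observed light was emitted
    print(f"{name}: z = {z} -> lookback time of {t:.1f}")
# z = 9.73 gives roughly 13.3 Gyr with these parameters, consistent with the
# "13.2 billion light years" light-travel figure quoted above.
```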
Gallery
References
External links
Astronomical surveys
Extragalactic astronomy
Hubble Space Telescope images
Great Observatories program | Great Observatories Origins Deep Survey | [
"Astronomy"
] | 825 | [
"Astronomical surveys",
"Works about astronomy",
"Great Observatories program",
"Space telescopes",
"Extragalactic astronomy",
"Astronomical objects",
"Astronomical sub-disciplines"
] |
17,325,370 | https://en.wikipedia.org/wiki/Serbia%20Zijin%20Bor%20Copper | Serbia Zijin Bor Copper, formerly known as RTB Bor, is a copper mining and smelting complex located in Bor, Serbia.
History
Formation and expansion
The first geological explorations of copper ore in the Bor area were conducted in 1897 and covered the area then called "Tilva Roš". The explorations were carried out by the Serbian industrialist Đorđe Vajfert, who later attracted capital investment from France and set up a company called the "French Society of the Bor Mines, the Concession St. George". The company, with its headquarters in Paris, started operations on 1 June 1904. The French capital remained in Bor until the end of World War II.
1951–1988: SFR Yugoslavia
In 1951, the company's assets were nationalized by the Government of SFR Yugoslavia. Since then, the company Bor was in the state ownership.
From 1951 until 1988, the company changed its organizational structure several times, from an "organization of associated labor" to the state-owned enterprise "RTB Bor".
1990s–2000s
During 1993, following the breakup of SFR Yugoslavia and the outbreak of the Yugoslav Wars, RTB Bor made various investments which initiated new mining operations, such as the new open-pit mine "Cerovo".
Since the mid-1990s, and during the time of sanctions on the Federal Republic of Yugoslavia, production at RTB Bor dropped significantly from the very prosperous 1970s and 1980s. This has been due both to diminishing reserves and to the inability to obtain new equipment that could most efficiently extract the remaining ore, which is no longer of such a high grade. Because copper mining is the key basis of Bor's economy, the decreased production of the 1990s and 2000s had significant effects on Bor's inhabitants.
2007–2008 failed purchases
In March 2007, the Government of Serbia sold RTB Bor to the Romanian company Cuprom for a sum of US$400 million. Cuprom pledged to modernize the production facilities at RTB Bor and the Majdanpek mine in order to improve productivity. However, after Cuprom failed to meet a financing deadline, the Government of Serbia cancelled the deal and the complex was put up for privatization once again.
In February 2008, following the second tender, RTB Bor was sold to the Austrian A-TEC for a sum of $466 million plus obligation to invest $180.4 million in facilities.
After the contract was signed, A-TEC delivered the first $150 million. However, problems arose when A-TEC missed its deadline for the second payment of $230 million, being unable to secure bank guarantees during the global recession caused by the financial crisis of 2007–2008. A-TEC did not get back the $150 million it had already paid. The Government of Serbia later voted to scrap the contract and offer Oleg Deripaska's Strikeforce Mining and Resources (SMR), as the second-ranked bidder, a chance to purchase RTB Bor. However, after a round of negotiations, SMR decided not to increase its first offer and the second tender officially failed.
2008–2017
For more than two decades, RTB Bor was among the most unprofitable Serbian companies, with accumulated debt of more than 1 billion euros. Nevertheless, the Government of Serbia kept investing hundreds of millions of euros in new production facilities, and even wrote off the company's debts worth 1 billion euros to government-owned companies such as Elektroprivreda Srbije.
Even with high copper prices on global markets, RTB Bor continued to post financial losses. For the 2015 calendar year the net loss was around 110 million euros, and for 2016 it amounted to 42 million euros.
In 2017, Greek Mytilineos Holdings won a multi-year trial against RTB Bor before the Geneva Arbitration Tribunal, seeking $40 million for failure to fulfill the contract and subsequent financial losses. During the 1990s, RTB Bor imported the copper concentrate from Mytilineos, processed it, but never sent back 4,000 tonnes of processed copper to the Greek company. Mytilineos has also launched several other lawsuits against RTB Bor over the non-fulfilled contracts signed during the 1990s.
In 2017, according to the general director Spaskovski, RTB Bor had a positive net result after years of net losses, with $306 million (€255 million) of revenues and $73 million (€61 million) of EBITDA. For 2017, around 18 million tonnes of ore was mined, of which 235,000 tonnes of concentrate was processed and finally, 43,000 tonnes of copper, 5 tonnes of silver and 700 kilograms of gold was obtained. Around 75% of the processed copper is exported, while the rest is being further processed by domestic copper companies "Valjaonica bakra Sevojno" and "Pometon".
2017–present
In 2017, the Government of Serbia was obliged to find a strategic partner or buyer by March 2018, in a memorandum with the International Monetary Fund (IMF). The sale was later postponed until June 2018. Three companies – Zijin Mining from China, Diamond Fields International from Canada and U Gold from Russia – placed bids in a tender for a strategic partner. The Serbian government has chosen the Chinese Zijin Mining Group as its strategic partner for the copper mining and smelting complex, RTB Bor.
On 31 August 2018, the Chinese mining company Zijin Mining took over 63% of the company's shares in a $1.26 billion deal with the Government of Serbia. On 18 December 2018, Zijin Mining formally took over the company under the new name "Zijin Bor Copper"; it was later announced that "Serbia" would be added to the name. For the 2018 calendar year, Zijin Bor Copper had net income of around 760 million euros, most of it resulting from the conversion of debt into shares.
Organization
RTB Bor Group is composed of the following subsidiaries:
RBB – Copper Mine Bor
RBM – Copper Mine Majdanpek
TIR – Smelter and Refinery
The ore deposits of Serbia Zijin Bor Copper are located in the southwestern part of the Carpathian Mountains and are mostly of porphyry type within the Upper Bor District eruptive area. The currently undeveloped underground deposit "Borska Reka", located within the Jama mine, represents a very significant potential mineral resource.
Criticism
Air pollution
Several protests have been held in Bor, in eastern Serbia, over excessive air pollution that has intensified since Zijin took over the copper miner Rudarsko-Topioničarski Basen (RTB) in late 2018. Since January 2019, Bor has been struggling with excessive air pollution, with sulfur dioxide (SO2) levels topping 2,000 micrograms per cubic meter, far above the maximum allowed 350. Protesters demanded that the city government urgently adopt a plan so that the line ministry and state inspectorates can react to the alarming pollution levels in Bor. As early as April 2019, an inspector ordered the company to take action against air pollution threatening human health and the environment, because it was emitting excessive SO2. Zijin then explained in a letter to the Ministry of Environment that a power outage had caused the pollution. However, an inspection a few months later, in August, revealed another failing: Zijin did not have a system for wet dust removal during the transportation of tailings at the Bor mine, which also threatened human health and the environment. Zijin was ordered to solve the problem, and the company later told the Ministry that a dust-suppression system had been installed and put into trial operation. In November 2019, CINS sought an interview with Zijin on the topic of air pollution, to which the company responded with a press release. It said that by the end of the year the company would have a total of five dust-spraying machines to neutralize SO2. Documentation obtained by CINS shows that by that time two of the purchased machines had been in operation for about two months, but pollution data showed that this had no significant effect on the reduction of sulfur dioxide.
Gallery
See also
List of copper production by company
Valjaonica bakra Sevojno
Bor mine
Borska Reka mine
Dumitru Potok mine
Mali Krivelj mine
Majdanpek mine
Veliki Krivelj mine
References
External links
Rudnik dugova at insajder.net
Bor, Serbia
1904 establishments in Serbia
2003 mergers and acquisitions
2018 mergers and acquisitions
Companies based in Bor
Copper mining companies of Serbia
D.o.o. companies in Serbia
Energy companies of Serbia
Metal companies of Serbia
Non-renewable resource companies established in 1904
Serbian brands
Smelting
Companies of Yugoslavia
Smelters of Yugoslavia
Smelters of Serbia
Copper smelters | Serbia Zijin Bor Copper | [
"Chemistry"
] | 1,860 | [
"Metallurgical processes",
"Smelting"
] |
17,325,657 | https://en.wikipedia.org/wiki/Officinalis | Officinalis, or officinale, is a Medieval Latin epithet denoting organisms—mainly plants—with uses in medicine, herbalism and cookery. It commonly occurs as a specific epithet, the second term of a two-part botanical name. Officinalis is used to modify masculine and feminine nouns, while officinale is used for neuter nouns.
Etymology
The word literally means 'of or belonging to an officina', the storeroom of a monastery, where medicines and other necessaries were kept. Officina was a contraction of opificina, from opifex (gen. opificis) 'worker, maker, doer' (from opus 'work') + -fex, -ficis 'one who does', from facere 'do, perform'. When Linnaeus invented the binomial system of nomenclature, he gave the specific name officinalis, in the 1735 (1st edition) of his Systema Naturae, to plants (and sometimes animals) with an established medicinal, culinary, or other use.
Species
Althaea officinalis (marshmallow)
Anchusa officinalis (bugloss)
Asparagus officinalis (asparagus)
Avicennia officinalis (mangrove)
Bistorta officinalis (European bistort)
Borago officinalis (borage)
Buddleja officinalis (pale butterflybush)
Calendula officinalis (pot marigold)
Cinchona officinalis (quinine)
Cochlearia officinalis (scurvygrass)
Corallina officinalis (a seaweed)
Cornus officinalis (cornelian cherry)
Cyathula officinalis (ox knee)
Cynoglossum officinale (houndstongue)
Euphrasia officinalis (eyebright)
Fumaria officinalis (fumitory)
Galega officinalis (goat's rue)
Gratiola officinalis (hedge hyssop)
Guaiacum officinale (lignum vitae)
Hyssopus officinalis (hyssop)
Jasminum officinale (jasmine)
Laricifomes officinalis (a wood fungus)
Levisticum officinale (lovage)
Lithospermum officinale (gromwell)
Magnolia officinalis
Melilotus officinalis (ribbed melilot)
Melissa officinalis (lemon balm)
Morinda officinalis (Indian mulberry)
Nasturtium officinale (watercress)
Paeonia officinalis (common paeony)
Parietaria officinalis (upright pellitory)
Pulmonaria officinalis (lungwort)
Rheum officinale (a rhubarb)
Rosa gallica 'Officinalis' (apothecary rose)
Salvia officinalis (sage)
Sanguisorba officinalis (great burnet)
Saponaria officinalis (soapwort)
Scindapsus officinalis (long pepper)
Sepia officinalis (cuttlefish)
Sisymbrium officinale (hedge mustard)
Spongia officinalis (bath sponge)
Stachys officinalis (betony)
Styrax officinalis (drug snowbell)
Symphytum officinale (comfrey)
Taraxacum officinale (dandelion)
Valeriana officinalis (valerian)
Verbena officinalis (vervain)
Veronica officinalis (speedwell)
Zingiber officinale (ginger)
See also
Sativum or Sativa, the Medieval Latin epithet denoting certain cultivated plants
References
Taxonomy (biology)
Latin biological phrases | Officinalis | [
"Biology"
] | 810 | [
"Latin biological phrases",
"Taxonomy (biology)"
] |
17,326,023 | https://en.wikipedia.org/wiki/International%20Arctic%20Buoy%20Program | The International Arctic Buoy Program is headquartered at the Polar Science Center, Applied Physics Laboratory, University of Washington, in Seattle, Washington, United States. The program's objectives include to provide meteorological and oceanographic data in order to support operations and research for UNESCO's World Climate Research Programme and the World Weather Watch Programme of the United Nations' World Meteorological Organization.
IABP participating countries include Canada, China, France, Germany, Japan, Norway, Russia, and the United States. Together, they share the costs of the program.
The IABP has deployed more than 700 buoys since it began operations in 1991, succeeding the Arctic Ocean Buoy Program (operational since 19 January 1979). Commonly, 25 to 40 buoys operate at any given time and provide real-time position, pressure, temperature, and interpolated ice velocity. In support of the International Polar Year, the IABP will deploy over 120 buoys, at over 80 different locations, during the period of April–August 2008.
The organization's annual meeting provides discussion on instrumentation, forecasting, observations, and outlook.
References
External links
Official website
Slide show, PBS, February 6, 2008
Buoyage
Organizations established in 1991
International environmental organizations
Meteorological research institutes
Hydrology organizations
Arctic research
1991 establishments in Washington (state)
University of Washington organizations | International Arctic Buoy Program | [
"Environmental_science"
] | 272 | [
"Hydrology",
"Hydrology organizations"
] |
17,326,215 | https://en.wikipedia.org/wiki/Logical%20spreadsheet | A logical spreadsheet is a spreadsheet in which formulas take the form of logical constraints rather than function definitions.
In traditional spreadsheet systems, such as Excel, cells are partitioned into "directly specified" cells and "computed" cells and the formulas used to specify the values of computed cells are "functional", i.e. for every combination of values of the directly specified cells, the formulas specify unique values for the computed cells. Logical Spreadsheets relax these restrictions by dispensing with the distinction between directly specified cells and computed cells and generalizing from functional definitions to logical constraints.
As an illustration of the difference between traditional spreadsheets and logical spreadsheets, consider a simple numerical spreadsheet with three cells a, b, and c. Each cell accepts a single integer as its value, and there is a formula stating that the value of the third cell is the sum of the values of the other two.
Implemented as a traditional spreadsheet, this spreadsheet would allow the user to enter values into cells a and b, and it would automatically compute cell c. For example, if the user were to type 1 into a and 2 into b, it would compute the value 3 for c.
Implemented as a logical spreadsheet, the user would be able to enter values into any of the cells. The user could type 1 into a and 2 into b, and the spreadsheet would compute the value 3 for c. Alternatively, the user could type 2 into b and 3 into c, and the spreadsheet would compute the value 1 for a. And so forth.
In this case, the formula is functional, and the function is invertible. In general, the formulas need not be functional and the functions need not be invertible. For example, in this case, we could write formulas involving inequalities and non-invertible functions (such as square root). More generally, we could build spreadsheets with symbolic, rather than numeric data, and write arbitrary logical constraints on this data.
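A toy Python sketch of the idea for the single constraint a + b = c follows; real logical spreadsheets use general constraint solvers rather than hand-written inversions like this, so treat the function below as illustrative only.

```python
def propagate(cells):
    """One propagation step for the single constraint a + b == c.

    `cells` maps cell names to an int or None (empty). Whichever cell is
    empty is computed from the other two; there is no fixed split into
    "input" and "computed" cells, which is the point of the example.
    """
    a, b, c = cells["a"], cells["b"], cells["c"]
    if a is not None and b is not None and c is None:
        cells["c"] = a + b
    elif b is not None and c is not None and a is None:
        cells["a"] = c - b
    elif a is not None and c is not None and b is None:
        cells["b"] = c - a
    return cells

print(propagate({"a": 1, "b": 2, "c": None}))   # {'a': 1, 'b': 2, 'c': 3}
print(propagate({"a": None, "b": 2, "c": 3}))   # {'a': 1, 'b': 2, 'c': 3}
```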
References
J. Bongard et al.: Reports on the 2006 AAAI Fall Symposia, AI Magazine 28(1), 88-92, 2007.
I. Cervesato: NEXCEL, A Deductive Spreadsheet, The Knowledge Engineering Review, Vol. 00:0, 1-24, Cambridge University Press, 2004.
G. Fischer, C. Rathke: Knowledge-Based Spreadsheets, in Proceedings of the 7th National Conference on Artificial Intelligence, St. Paul, Minnesota, 21–26 August 1988, AAAI Press, Menlo Park, California, 802-807, 1988.
D. Gunning: Deductive Spreadsheets, Defense Advanced Research Projects Agency Small Business Innovation Research, 2004.3-Topic SB043-040, 2004.
M. Kassoff, L. Zen, A. Garg, M. Genesereth: Predicalc: A Logical Spreadsheet Management System, in Proceedings of the 31st International Conference on Very Large Databases, Trondheim, Norway, 30 August – 2 September 2005, ACM, New York, New York, 1247-1250, 2005.
M. Kassoff, M. Genesereth: Predicalc, A Logical Spreadsheet Management System, The Knowledge Engineering Review, Vol. 22:3, 281-295, Cambridge University Press, 2007.
M. Spenke, C. Beilken: A Spreadsheet Interface for Logic Programming, in K. Bice and C. H. Lewis (eds), Proceedings of ACM CHI 89 Human Factors in Computing Systems, Austin, Texas, 30 April - 4 June 1989, ACM Press, New York, New York, 75-80, 1989.
M. van Emden, M. Ohki, A. Takeuchi: Spreadsheets with Incremental Queries as a User Interface for Logic Programming, New Generation Computing 4(3), 287-304, 1986.
http://news.stanford.edu/news/2007/april25/logic-042507.html
https://dbgroup.ncsu.edu/?p=9
http://logic.stanford.edu/spreadsheet/
Spreadsheet software | Logical spreadsheet | [
"Mathematics"
] | 891 | [
"Spreadsheet software",
"Mathematical software"
] |
17,326,228 | https://en.wikipedia.org/wiki/History%20of%20structural%20engineering | The history of structural engineering dates back to at least 2700 BC when the step pyramid for Pharaoh Djoser was built by Imhotep, the first architect in history known by name. Pyramids were the most common major structures built by ancient civilizations because it is a structural form which is inherently stable and can be almost infinitely scaled (as opposed to most other structural forms, which cannot be linearly increased in size in proportion to increased loads).
Another notable engineering feat from antiquity still in use today is the qanat water management system.
Qanat technology was developed in the time of the Medes, the predecessors of the Persian Empire, in modern-day Iran, which has the oldest and longest qanat (more than 3,000 years old and longer than 71 km); the technology later spread to other cultures that had contact with the Persians.
Throughout ancient and medieval history most architectural design and construction was carried out by artisans, such as stone masons and carpenters, rising to the role of master builder. No theory of structures existed and understanding of how structures stood up was extremely limited, and based almost entirely on empirical evidence of 'what had worked before'. Knowledge was retained by guilds and seldom supplanted by advances. Structures were repetitive, and increases in scale were incremental.
No record exists of the first calculations of the strength of structural members or the behaviour of structural material, but the profession of structural engineer only really took shape with the Industrial Revolution and the re-invention of concrete (see History of concrete). The physical sciences underlying structural engineering began to be understood in the Renaissance and have been developing ever since.
Early structural engineering
The recorded history of structural engineering starts with the ancient Egyptians. In the 27th century BC, Imhotep was the first structural engineer known by name and constructed the first known step pyramid in Egypt. In the 26th century BC, the Great Pyramid of Giza was constructed in Egypt. It remained the largest man-made structure for millennia and was considered an unsurpassed feat in architecture until the 19th century AD.
The understanding of the physical laws that underpin structural engineering in the Western world dates back to the 3rd century BC, when Archimedes published his work On the Equilibrium of Planes in two volumes, in which he sets out the Law of the Lever: magnitudes are in equilibrium at distances reciprocally proportional to their weights.
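In modern notation (a standard restatement rather than Archimedes' own wording), the law is the moment balance

$$W_1 \, d_1 = W_2 \, d_2,$$

where $W_1$ and $W_2$ are the weights and $d_1$ and $d_2$ their distances from the fulcrum.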
Archimedes used the principles derived to calculate the areas and centers of gravity of various geometric figures including triangles, paraboloids, and hemispheres. Archimedes's work on this and his work on calculus and geometry, together with Euclidean geometry, underpin much of the mathematics and understanding of structures in modern structural engineering.
The ancient Romans made great bounds in structural engineering, pioneering large structures in masonry and concrete, many of which are still standing today. They include aqueducts, thermae, columns, lighthouses, defensive walls and harbours. Their methods are recorded by Vitruvius in his De Architectura written in 25 BC, a manual of civil and structural engineering with extensive sections on materials and machines used in construction. One reason for their success is their accurate surveying techniques based on the dioptra, groma and chorobates.
During the High Middle Ages (11th to 14th centuries) builders were able to balance the side thrust of vaults with that of flying buttresses and side vaults, to build tall spacious structures, some of which were built entirely of stone (with iron pins only securing the ends of stones) and have lasted for centuries.
In the 15th and 16th centuries, despite lacking beam theory and calculus, Leonardo da Vinci produced many engineering designs based on scientific observation and rigour, including a design for a bridge to span the Golden Horn. Though dismissed at the time, the design has since been judged to be both feasible and structurally valid.
The foundations of modern structural engineering were laid in the 17th century by Galileo Galilei, Robert Hooke and Isaac Newton with the publication of three great scientific works. In 1638 Galileo published Dialogues Relating to Two New Sciences, outlining the sciences of the strength of materials and the motion of objects (essentially defining gravity as a force giving rise to a constant acceleration). It was the first establishment of a scientific approach to structural engineering, including the first attempts to develop a theory for beams. This is also regarded as the beginning of structural analysis, the mathematical representation and design of building structures.
This was followed in 1676 by Robert Hooke's first statement of Hooke's Law, providing a scientific understanding of elasticity of materials and their behaviour under load.
Eleven years later, in 1687, Sir Isaac Newton published Philosophiae Naturalis Principia Mathematica, setting out his Laws of Motion, providing for the first time an understanding of the fundamental laws governing structures.
Also in the 17th century, Sir Isaac Newton and Gottfried Leibniz both independently developed the Fundamental theorem of calculus, providing one of the most important mathematical tools in engineering.
Further advances in the mathematics needed to allow structural engineers to apply the understanding of structures gained through the work of Galileo, Hooke and Newton during the 17th century came in the 18th century when Leonhard Euler pioneered much of the mathematics and many of the methods which allow structural engineers to model and analyse structures. Specifically, he developed the Euler–Bernoulli beam equation with Daniel Bernoulli (1700–1782) circa 1750 - the fundamental theory underlying most structural engineering design.
Daniel Bernoulli, with Johann (Jean) Bernoulli (1667–1748), is also credited with formulating the theory of virtual work, providing a tool using equilibrium of forces and compatibility of geometry to solve structural problems. In 1717 Jean Bernoulli wrote to Pierre Varignon explaining the principle of virtual work, while in 1726 Daniel Bernoulli wrote of the "composition of forces".
In 1757 Leonhard Euler went on to derive the Euler buckling formula, greatly advancing the ability of engineers to design compression elements.
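In modern notation (standard statements rather than the original 18th-century forms), the Euler–Bernoulli beam equation and the Euler buckling load for a pin-ended column of length $L$ are

$$\frac{\mathrm{d}^2}{\mathrm{d}x^2}\!\left(EI\,\frac{\mathrm{d}^2 w}{\mathrm{d}x^2}\right) = q(x), \qquad P_{\mathrm{cr}} = \frac{\pi^2 EI}{L^2},$$

where $w(x)$ is the deflection, $q(x)$ the distributed load, $E$ the elastic modulus, and $I$ the second moment of area of the cross-section.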
Modern developments in structural engineering
Throughout the late 19th and early 20th centuries, materials science and structural analysis underwent development at a tremendous pace.
Though elasticity was understood in theory well before the 19th century, it was not until 1821 that Claude-Louis Navier formulated the general theory of elasticity in a mathematically usable form. In his leçons of 1826 he explored a great range of different structural theory, and was the first to highlight that the role of a structural engineer is not to understand the final, failed state of a structure, but to prevent that failure in the first place. In 1826 he also established the elastic modulus as a property of materials independent of the second moment of area, allowing engineers for the first time to both understand structural behaviour and structural materials.
Towards the end of the 19th century, in 1873, Carlo Alberto Castigliano presented his dissertation "Intorno ai sistemi elastici", which contains his theorem for computing displacement as partial derivative of the strain energy.
In 1824, Portland cement was patented by the engineer Joseph Aspdin as "a superior cement resembling Portland Stone", British Patent no. 5022. Although different forms of cement already existed (Pozzolanic cement was used by the Romans as early as 100 B.C. and even earlier by the ancient Greek and Chinese civilizations) and were in common usage in Europe from the 1750s, the discovery made by Aspdin used commonly available, cheap materials, making concrete construction an economical possibility.
Developments in concrete continued with the construction in 1848 of a rowing boat built of ferrocement - the forerunner of modern reinforced concrete - by Joseph-Louis Lambot. He patented his system of mesh reinforcement and concrete in 1855, one year after W.B. Wilkinson also patented a similar system. This was followed in 1867 when a reinforced concrete planting tub was patented by Joseph Monier in Paris, using steel mesh reinforcement similar to that used by Lambot and Wilkinson. Monier took the idea forward, filing several patents for tubs, slabs and beams, leading eventually to the Monier system of reinforced structures, the first use of steel reinforcement bars located in areas of tension in the structure.
Steel construction was first made possible in the 1850s when Henry Bessemer developed the Bessemer process to produce steel. He gained patents for the process in 1855 and 1856 and successfully completed the conversion of cast iron into cast steel in 1858. Eventually mild steel would replace both wrought iron and cast iron as the preferred metal for construction.
During the late 19th century, great advancements were made in the use of cast iron, gradually replacing wrought iron as a material of choice. Ditherington Flax Mill in Shrewsbury, designed by Charles Bage, was the first building in the world with an interior iron frame. It was built in 1797. In 1792 William Strutt had attempted to build a fireproof mill at Belper in Derby (Belper West Mill), using cast iron columns and timber beams within the depths of brick arches that formed the floors. The exposed beam soffits were protected against fire by plaster. This mill at Belper was the world's first attempt to construct fireproof buildings, and is the first example of fire engineering. This was later improved upon with the construction of Belper North Mill, a collaboration between Strutt and Bage, which by using a full cast iron frame represented the world's first "fire proofed" building.
The Forth Bridge was built by Benjamin Baker, Sir John Fowler and William Arrol in 1889, using steel, after the original design for the bridge by Thomas Bouch was rejected following the collapse of his Tay Rail Bridge. The Forth Bridge was one of the first major uses of steel, and a landmark in bridge design. Also in 1889, the wrought-iron Eiffel Tower was built by Gustave Eiffel and Maurice Koechlin, demonstrating the potential of construction using iron, despite the fact that steel construction was already being used elsewhere.
During the late 19th century, Russian structural engineer Vladimir Shukhov developed analysis methods for tensile structures, thin-shell structures, lattice shell structures and new structural geometries such as hyperboloid structures. Pipeline transport was pioneered by Vladimir Shukhov and the Branobel company in the late 19th century.
Again taking reinforced concrete design forwards, from 1892 onwards François Hennebique's firm used his patented reinforced concrete system to build thousands of structures throughout Europe. Thaddeus Hyatt in the US and Wayss & Freytag in Germany also patented systems. The firm AG für Monierbauten constructed 200 reinforced concrete bridges in Germany between 1890 and 1897. The great pioneering uses of reinforced concrete, however, came during the first third of the 20th century, with Robert Maillart and others furthering the understanding of its behaviour. Maillart noticed that many concrete bridge structures were significantly cracked, and as a result left the cracked areas out of his next bridge design, correctly believing that if the concrete was cracked, it was not contributing to the strength. This resulted in the revolutionary Salginatobel Bridge design. Wilhelm Ritter formulated the truss theory for the shear design of reinforced concrete beams in 1899, and Emil Mörsch improved this in 1902. He went on to demonstrate that treating concrete in compression as a linear-elastic material was a conservative approximation of its behaviour. Concrete design and analysis have been progressing ever since, with the development of analysis methods such as yield line theory, based on plastic analysis of concrete (as opposed to linear-elastic analysis), and many different variations on the model for stress distributions in concrete in compression.
Prestressed concrete, pioneered by Eugène Freyssinet with a patent in 1928, gave a novel approach in overcoming the weakness of concrete structures in tension. Freyssinet constructed an experimental prestressed arch in 1908 and later used the technology in a limited form in the Plougastel Bridge in France in 1930. He went on to build six prestressed concrete bridges across the Marne River, firmly establishing the technology.
Structural engineering theory was again advanced in 1930 when Professor Hardy Cross developed his Moment distribution method, allowing the real stresses of many complex structures to be approximated quickly and accurately.
In the mid-20th century John Fleetwood Baker went on to develop the plasticity theory of structures, providing a powerful tool for the safe design of steel structures. The possibility of creating structures with complex geometries, beyond analysis by hand-calculation methods, first arose in 1941 when Alexander Hrennikoff submitted his D.Sc. thesis at MIT on the topic of discretization of plane elasticity problems using a lattice framework. This was the forerunner to the development of finite element analysis. In 1942, Richard Courant developed a mathematical basis for finite element analysis. This led in 1956 to the publication by M. J. Turner, R. W. Clough, H. C. Martin, and L. J. Topp of a paper on the "Stiffness and Deflection of Complex Structures". This paper introduced the name "finite-element method" and is widely recognised as the first comprehensive treatment of the method as it is known today.
High-rise construction, though possible from the late 19th century onwards, was greatly advanced during the second half of the 20th century. Fazlur Khan designed structural systems that remain fundamental to many modern high rise constructions and which he employed in his structural designs for the John Hancock Center in 1969 and Sears Tower in 1973. Khan's central innovation in skyscraper design and construction was the idea of the "tube" and "bundled tube" structural systems for tall buildings. He defined the framed tube structure as "a three dimensional space structure composed of three, four, or possibly more frames, braced frames, or shear walls, joined at or near their edges to form a vertical tube-like structural system capable of resisting lateral forces in any direction by cantilevering from the foundation." Closely spaced interconnected exterior columns form the tube. Horizontal loads, for example wind, are supported by the structure as a whole. About half the exterior surface is available for windows. Framed tubes allow fewer interior columns, and so create more usable floor space. Where larger openings like garage doors are required, the tube frame must be interrupted, with transfer girders used to maintain structural integrity. The first building to apply the tube-frame construction was in the DeWitt-Chestnut Apartment Building which Khan designed in Chicago. This laid the foundations for the tube structures used in most later skyscraper constructions, including the construction of the World Trade Center.
Another innovation that Fazlur Khan developed was the concept of X-bracing, which reduced the lateral load on the building by transferring the load into the exterior columns. This allowed for a reduced need for interior columns thus creating more floor space, and can be seen in the John Hancock Center. The first sky lobby was also designed by Khan for the John Hancock Center in 1969. Later buildings with sky lobbies include the World Trade Center, Petronas Twin Towers and Taipei 101.
In 1987 Jörg Schlaich and Kurt Schafer published the culmination of almost ten years of work on the strut and tie method for concrete analysis - a tool to design structures with discontinuities such as corners and joints, providing another powerful tool for the analysis of complex concrete geometries.
In the late 20th and early 21st centuries the development of powerful computers has allowed finite element analysis to become a significant tool for structural analysis and design. The development of finite element programs has led to the ability to accurately predict the stresses in complex structures, and allowed great advances in structural engineering design and architecture. In the 1960s and 70s computational analysis was used in a significant way for the first time on the design of the Sydney Opera House roof. Many modern structures could not be understood and designed without the use of computational analysis.
Developments in the understanding of materials and structural behaviour in the latter part of the 20th century have been significant, with detailed understanding being developed of topics such as fracture mechanics, earthquake engineering, composite materials, temperature effects on materials, dynamics and vibration control, fatigue, creep and others. The depth and breadth of knowledge now available in structural engineering, and the increasing range of different structures and the increasing complexity of those structures has led to increasing specialisation of structural engineers.
See also
Base isolation
History of construction
History of architecture
History of sanitation and water supply
Qanat water management system
References
External links
"World Expos. A history of structures". Isaac López César. A history of architectural structures over the last 150 years.
3rd-millennium BC introductions
Structural engineering | History of structural engineering | [
"Engineering"
] | 3,343 | [
"Construction",
"History of construction",
"History of structural engineering",
"Structural engineering"
] |
17,326,435 | https://en.wikipedia.org/wiki/HyperStudio | HyperStudio is a creativity tool software program distributed by Software MacKiev. It was originally created by Roger Wagner in 1989 as "HyperStudio 1.0 for the Apple IIGS"; later versions introduced support for Mac and Windows.
It can be described as a multimedia authoring tool, and it provides relatively simple methods for combining varied media. It has been available for purchase off and on over the years, and is now being marketed by Software MacKiev as "Version 5.1", which is aimed mostly at an educational market.
References
External links
Evan Trent, About This Particular Macintosh
Indiana University, "Indiana University Knowledge Base"
1988 software
HyperCard products | HyperStudio | [
"Technology"
] | 139 | [
"Hypermedia",
"HyperCard products"
] |
17,327,236 | https://en.wikipedia.org/wiki/Secnidazole | Secnidazole (trade names Flagentyl, Sindose, Secnil, Solosec) is a nitroimidazole anti-infective. Structurally, it is a methylated analogue of metronidazole. Effectiveness in the treatment of dientamoebiasis has been reported. It has also been tested against Atopobium vaginae.
In the United States, secnidazole is FDA approved for the treatment of bacterial vaginosis and trichomoniasis in adult women.
References
Further reading
Nitroimidazole antibiotics
Antiprotozoal agents | Secnidazole | [
"Biology"
] | 125 | [
"Antiprotozoal agents",
"Biocides"
] |
17,327,394 | https://en.wikipedia.org/wiki/Emmy%20Noether%20bibliography | Emmy Noether was a German mathematician. This article lists the publications upon which her reputation is built (in part).
First epoch (1908–1919)
Second epoch (1920–1926)
In the second epoch, Noether turned her attention to the theory of rings. With her paper Moduln in nichtkommutativen Bereichen, insbesondere aus Differential- und Differenzenausdrücken, Hermann Weyl states, "It is here for the first time that the Emmy Noether appears whom we all know, and who changed the face of algebra by her work."
Jahresbericht der Deutschen Mathematiker-Vereinigung, 34 (Abt. 2), 101.
28 (1926). Ableitung der Elementarteilertheorie aus der Gruppentheorie. Jahresbericht der Deutschen Mathematiker-Vereinigung, 34 (Abt. 2), 104.
29 (1925). Gruppencharaktere und Idealtheorie. Jahresbericht der Deutschen Mathematiker-Vereinigung, 34 (Abt. 2), 144. Group representations, modules and ideals: the first of four papers showing the close connection between these three subjects. See also publications #32, #33, and #35.
30 (1926). Der Endlichkeitssatz der Invarianten endlicher linearer Gruppen der Charakteristik p. Nachrichten der Königlichen Gesellschaft der Wissenschaften zu Göttingen, Math.-phys. Klasse, 1926, 28–35. By applying ascending and descending chain conditions to finite extensions of a ring, Noether shows that the algebraic invariants of a finite group are finitely generated even in positive characteristic.
31 (1926). Abstrakter Aufbau der Idealtheorie in algebraischen Zahl- und Funktionenkörpern. Mathematische Annalen, 96, 26–61. Ideals: a seminal paper in which Noether determined the minimal set of conditions required for a primary ideal to be representable as a power of prime ideals, as Richard Dedekind had done for algebraic numbers. Three conditions were required: an ascending chain condition, a dimension condition, and the condition that the ring be integrally closed.
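The ascending chain condition invoked in entries 30 and 31 is, in its standard modern formulation,

$$I_1 \subseteq I_2 \subseteq I_3 \subseteq \cdots \;\Longrightarrow\; \exists\, n \text{ with } I_n = I_{n+1} = I_{n+2} = \cdots,$$

i.e., every ascending chain of ideals eventually stabilizes; rings satisfying it are now called Noetherian in her honor.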
Third epoch (1927–1935)
In the third epoch, Emmy Noether focused on non-commutative algebras, and unified much earlier work on the representation theory of groups.
References
Bibliography
.
External links
List of Emmy Noether's publications by Dr. Cordula Tollmien
List of Emmy Noether's publications in the eulogy by Bartel Leendert van der Waerden
Partial listing of important works at the Contributions of 20th century Women to Physics at UCLA
MacTutor biography of Emmy Noether
Abstract algebra
Bibliographies by writer
Bibliographies of German writers
Science bibliographies | Emmy Noether bibliography | [
"Mathematics"
] | 656 | [
"Abstract algebra",
"Algebra"
] |
17,328,425 | https://en.wikipedia.org/wiki/Viscoplasticity | Viscoplasticity is a theory in continuum mechanics that describes the rate-dependent inelastic behavior of solids. Rate-dependence in this context means that the deformation of the material depends on the rate at which loads are applied. The inelastic behavior that is the subject of viscoplasticity is plastic deformation which means that the material undergoes unrecoverable deformations when a load level is reached. Rate-dependent plasticity is important for transient plasticity calculations. The main difference between rate-independent plastic and viscoplastic material models is that the latter exhibit not only permanent deformations after the application of loads but continue to undergo a creep flow as a function of time under the influence of the applied load.
The elastic response of viscoplastic materials can be represented in one dimension by Hookean spring elements. Rate-dependence can be represented by nonlinear dashpot elements in a manner similar to viscoelasticity. Plasticity can be accounted for by adding sliding frictional elements as shown in Figure 1. In the figure, $E$ is the modulus of elasticity, $\eta$ is the viscosity parameter and $N$ is a power-law-type parameter that characterizes the non-linear dashpot ($\sigma = \lambda\,\dot{\varepsilon}^{1/N}$). The sliding element can have a yield stress ($\sigma_y$) that is strain-rate dependent, or even constant, as shown in Figure 1c.
Viscoplasticity is usually modeled in three-dimensions using overstress models of the Perzyna or Duvaut-Lions types. In these models, the stress is allowed to increase beyond the rate-independent yield surface upon application of a load and then allowed to relax back to the yield surface over time. The yield surface is usually assumed not to be rate-dependent in such models. An alternative approach is to add a strain rate dependence to the yield stress and use the techniques of rate independent plasticity to calculate the response of a material.
For metals and alloys, viscoplasticity is the macroscopic behavior caused by a mechanism linked to the movement of dislocations in grains, with superposed effects of inter-crystalline gliding. The mechanism usually becomes dominant at temperatures greater than approximately one third of the absolute melting temperature. However, certain alloys exhibit viscoplasticity at room temperature (300 K). For polymers, wood, and bitumen, the theory of viscoplasticity is required to describe behavior beyond the limit of elasticity or viscoelasticity.
In general, viscoplasticity theories are useful in areas such as:
the calculation of permanent deformations,
the prediction of the plastic collapse of structures,
the investigation of stability,
crash simulations,
systems exposed to high temperatures such as turbines in engines, e.g. a power plant,
dynamic problems and systems exposed to high strain rates.
History
Research on plasticity theories started in 1864 with the work of Henri Tresca, Saint-Venant (1870) and Levy (1871) on the maximum shear criterion. An improved plasticity model was presented in 1913 by von Mises, which is now referred to as the von Mises yield criterion. In viscoplasticity, the development of a mathematical model dates back to 1910 with the representation of primary creep by Andrade's law. In 1929, Norton developed a one-dimensional dashpot model which linked the rate of secondary creep to the stress. In 1934, Odqvist generalized Norton's law to the multi-axial case.
Concepts such as the normality of plastic flow to the yield surface and flow rules for plasticity were introduced by Prandtl (1924) and Reuss (1930). In 1932, Hohenemser and Prager proposed the first model for slow viscoplastic flow. This model provided a relation between the deviatoric stress and the strain rate for an incompressible Bingham solid. However, the application of these theories did not begin before 1950, when limit theorems were discovered.
In 1960, the first IUTAM Symposium "Creep in Structures" organized by Hoff provided a major development in viscoplasticity with the works of Hoff, Rabotnov, Perzyna, Hult, and Lemaitre for the isotropic hardening laws, and those of Kratochvil, Malinini and Khadjinsky, Ponter and Leckie, and Chaboche for the kinematic hardening laws. Perzyna, in 1963, introduced a viscosity coefficient that is temperature and time dependent. The formulated models were supported by the thermodynamics of irreversible processes and the phenomenological standpoint. The ideas presented in these works have been the basis for most subsequent research into rate-dependent plasticity.
Phenomenology
For a qualitative analysis, several characteristic tests are performed to describe the phenomenology of viscoplastic materials. Some examples of these tests are
hardening tests at constant stress or strain rate,
creep tests at constant force, and
stress relaxation at constant elongation.
Strain hardening test
One consequence of yielding is that as plastic deformation proceeds, an increase in stress is required to produce additional strain. This phenomenon is known as strain (or work) hardening. For a viscoplastic material the hardening curves are not significantly different from those of a rate-independent plastic material. Nevertheless, three essential differences can be observed.
At the same strain, the higher the rate of strain the higher the stress
A change in the rate of strain during the test results in an immediate change in the stress–strain curve.
The concept of a plastic yield limit is no longer strictly applicable.
The hypothesis of partitioning the strains by decoupling the elastic and plastic parts is still applicable where the strains are small, i.e.,

$\varepsilon = \varepsilon_{\mathrm{e}} + \varepsilon_{\mathrm{vp}}$

where $\varepsilon_{\mathrm{e}}$ is the elastic strain and $\varepsilon_{\mathrm{vp}}$ is the viscoplastic strain. To obtain the stress–strain behavior shown in blue in the figure, the material is initially loaded at a strain rate of 0.1/s. The strain rate is then instantaneously raised to 100/s and held constant at that value for some time. At the end of that time period the strain rate is dropped instantaneously back to 0.1/s and the cycle is continued for increasing values of strain. There is clearly a lag between the strain-rate change and the stress response. This lag is modeled quite accurately by overstress models (such as the Perzyna model) but not by models of rate-independent plasticity that have a rate-dependent yield stress.
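The following is a minimal explicit time-stepping sketch of a one-dimensional Perzyna-type overstress model that reproduces this lag; the material parameters and the specific power-law flow rule are illustrative assumptions, not values from any particular study.

```python
import numpy as np

E       = 200e3   # elastic modulus [MPa]; all values here are illustrative assumptions
sigma_y = 250.0   # rate-independent yield stress [MPa]
gamma   = 1.0     # fluidity parameter [1/s]
N       = 5.0     # rate-sensitivity exponent

def run(strain_rate_segments, dt=5e-6):
    """Explicit integration of a 1D Perzyna overstress model:
        sigma_dot  = E * (eps_dot - eps_vp_dot)
        eps_vp_dot = gamma * <(|sigma| - sigma_y)/sigma_y>**N * sign(sigma)
    where <.> is the Macaulay bracket (zero below yield)."""
    sigma, out = 0.0, []
    for rate, duration in strain_rate_segments:
        for _ in range(int(round(duration / dt))):
            over = max(abs(sigma) - sigma_y, 0.0) / sigma_y   # normalized overstress
            eps_vp_dot = gamma * over**N * np.sign(sigma)
            sigma += E * (rate - eps_vp_dot) * dt
            out.append(sigma)
    return np.array(out)

# Strain rate jumps 0.1/s -> 100/s -> 0.1/s, as in the test described above:
# the stress climbs above, then relaxes back toward, the slow-rate flow curve.
history = run([(0.1, 0.5), (100.0, 0.05), (0.1, 0.5)])
```

The lag appears because the stress can only relax back toward the rate-independent yield surface at the finite rate set by the overstress term, rather than jumping instantaneously when the strain rate changes.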
Creep test
Creep is the tendency of a solid material to slowly move or deform permanently under constant stress. Creep tests measure the strain response due to a constant stress, as shown in Figure 3. The classical creep curve represents the evolution of strain as a function of time in a material subjected to uniaxial stress at a constant temperature. The creep test is performed by applying a constant force/stress and analyzing the strain response of the system. In general, as shown in Figure 3b, this curve usually shows three phases or periods of behavior:
A primary creep stage, also known as transient creep, is the starting stage during which hardening of the material leads to a decrease in the rate of flow which is initially very high.
The secondary creep stage, also known as the steady state, is where the strain rate is constant.
A tertiary creep phase in which there is an increase in the strain rate up to the fracture strain.
Relaxation test
As shown in Figure 4, the relaxation test is defined as the stress response due to a constant strain for a period of time. In viscoplastic materials, relaxation tests demonstrate the stress relaxation in uniaxial loading at a constant strain. In fact, these tests characterize the viscosity and can be used to determine the relation which exists between the stress and the rate of viscoplastic strain. The decomposition of strain rate is
The elastic part of the strain rate is given by
For the flat region of the strain–time curve, the total strain rate is zero. Hence we have,
Therefore, the relaxation curve can be used to determine the rate of viscoplastic strain and hence the viscosity of the dashpot in a one-dimensional viscoplastic material model. The residual value that is reached when the stress has plateaued at the end of a relaxation test corresponds to the upper limit of elasticity. For some materials such as rock salt such an upper limit of elasticity occurs at a very small value of stress, and relaxation tests can be continued for more than a year without any observable plateau in the stress.
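A minimal numerical sketch of this procedure follows, assuming a one-dimensional spring in series with a Norton-type power-law dashpot; the modulus, viscous coefficient and exponent are illustrative values, not fitted material data. At constant total strain the stress decays as elastic strain is converted into viscoplastic strain.

```python
# 1-D stress relaxation: spring (modulus E) in series with a power-law
# dashpot.  With the total strain held constant, sigma_dot = -E * eps_vp_rate,
# where eps_vp_rate = (sigma / lam)**N.  All parameters are illustrative.
E = 200e9      # elastic modulus, Pa
lam = 1e10     # viscous coefficient (illustrative units)
N = 3.0        # Norton exponent

sigma = 300e6  # stress at the start of relaxation, Pa
dt, t_end = 1.0, 3600.0
for _ in range(int(t_end / dt)):
    eps_vp_rate = (sigma / lam) ** N   # power-law viscoplastic strain rate
    sigma -= E * eps_vp_rate * dt      # total strain rate is zero
print(f"stress after {t_end:.0f} s of relaxation: {sigma / 1e6:.1f} MPa")
```

The level the stress decays toward corresponds to the upper limit of elasticity discussed above; in this particular sketch that limit is zero, since the model has no yield stress.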
It is important to note that relaxation tests are extremely difficult to perform, because maintaining the constant-strain condition in a test requires considerable delicacy.
Rheological models of viscoplasticity
One-dimensional constitutive models for viscoplasticity based on spring-dashpot-slider elements include the perfectly viscoplastic solid, the elastic perfectly viscoplastic solid, and the elastoviscoplastic hardening solid. The elements may be connected in series or in parallel. In models where the elements are connected in series the strain is additive while the stress is equal in each element. In parallel connections, the stress is additive while the strain is equal in each element. Many of these one-dimensional models can be generalized to three dimensions for the small strain regime. In the subsequent discussion, the time rates of strain and stress are written as ε̇ and σ̇, respectively.
Perfectly viscoplastic solid (Norton-Hoff model)
In a perfectly viscoplastic solid, also called the Norton-Hoff model of viscoplasticity, the stress (as for viscous fluids) is a function of the rate of permanent strain. The effect of elasticity is neglected in the model, i.e., ε_e = 0, and hence there is no initial yield stress, i.e., σ_y = 0. The viscous dashpot has a response given by
σ = η ε̇_vp,
where η is the viscosity of the dashpot. In the Norton-Hoff model the viscosity is a nonlinear (power-law) function of the applied stress, where N is a fitting parameter and λ is the kinematic viscosity of the material. Then the viscoplastic strain rate is given by the relation
In one-dimensional form, the Norton-Hoff model can be expressed as
When N = 1 the solid is viscoelastic.
If we assume that plastic flow is isochoric (volume preserving), then the above relation can be expressed in the more familiar form
where s is the deviatoric stress tensor, ε̇_eq is the von Mises equivalent strain rate, and K and N are material parameters. The equivalent strain rate is defined as
These models can be applied to metals and alloys at temperatures higher than two-thirds of their absolute melting point (in kelvins), and to polymers/asphalt at elevated temperature. The responses for strain hardening, creep, and relaxation tests of such materials are shown in Figure 6.
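The sketch below evaluates this deviatoric flow rule for a sample stress state; K and N are illustrative parameters, not values fitted to any particular metal.

```python
import numpy as np

# Power-law (Norton) flow: equivalent viscoplastic strain rate from the
# von Mises equivalent stress, eps_rate = (sigma_eq / K)**N, with isochoric
# flow directed along the stress deviator.
K, N = 150e6, 4.0                                  # illustrative parameters

sigma = np.array([[120e6, 30e6, 0.0],
                  [30e6,  80e6, 0.0],
                  [0.0,    0.0, 40e6]])            # Cauchy stress, Pa
s = sigma - np.trace(sigma) / 3.0 * np.eye(3)      # deviatoric stress
sigma_eq = np.sqrt(1.5 * np.tensordot(s, s))       # von Mises stress
eps_rate = (sigma_eq / K) ** N                     # scalar flow rate
eps_vp_rate = 1.5 * eps_rate * s / sigma_eq        # tensorial, volume-preserving
print(f"von Mises stress: {sigma_eq / 1e6:.1f} MPa, "
      f"equivalent strain rate: {eps_rate:.3e} /s")
```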
Elastic perfectly viscoplastic solid (Bingham–Norton model)
Two types of elementary approaches can be used to build up an elastic-perfectly viscoplastic model. In the first situation, the sliding friction element and the dashpot are arranged in parallel and then connected in series to the elastic spring as shown in Figure 7. This model is called the Bingham–Maxwell model (by analogy with the Maxwell model and the Bingham model) or the Bingham–Norton model. In the second situation, all three elements are arranged in parallel. Such a model is called a Bingham–Kelvin model by analogy with the Kelvin model.
For elastic-perfectly viscoplastic materials, the elastic strain is no longer considered negligible but the rate of plastic strain is only a function of the initial yield stress and there is no influence of hardening. The sliding element represents a constant yielding stress when the elastic limit is exceeded irrespective of the strain. The model can be expressed as
where η is the viscosity of the dashpot element. If the dashpot element has a response that is of the Norton form
we get the Bingham–Norton model
Other expressions for the strain rate can also be observed in the literature with the general form
The responses for strain hardening, creep, and relaxation tests of such material are shown in Figure 8.
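A one-dimensional sketch of the elastic-perfectly viscoplastic response follows: below the yield stress the sliding element locks, and above it the overstress drives the dashpot. The yield stress, viscosity and exponent are illustrative values only.

```python
# 1-D Bingham-Norton overstress rule: no viscoplastic flow below sigma_y;
# above it, the overstress drives a Norton-form dashpot.
sigma_y = 100.0e6   # yield stress of the sliding element, Pa
eta = 1.0e11        # dashpot viscosity (illustrative units)
n = 1.5             # Norton exponent of the dashpot

def viscoplastic_strain_rate(sigma):
    overstress = abs(sigma) - sigma_y
    if overstress <= 0.0:
        return 0.0                          # purely elastic regime
    rate = (overstress / eta) ** n          # Norton-form dashpot response
    return rate if sigma > 0 else -rate     # flow follows the stress sign

for s in (50e6, 100e6, 150e6, 200e6):
    print(f"sigma = {s / 1e6:5.0f} MPa -> "
          f"eps_vp_rate = {viscoplastic_strain_rate(s):.3e} /s")
```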
Elastoviscoplastic hardening solid
An elastic-viscoplastic material with strain hardening is described by equations similar to those for an elastic-viscoplastic material with perfect plasticity. However, in this case the stress depends both on the plastic strain rate and on the plastic strain itself. For an elastoviscoplastic material the stress, after exceeding the yield stress, continues to increase beyond the initial yielding point. This implies that the yield stress in the sliding element increases with strain and the model may be expressed in generic terms as
This model is adopted when metals and alloys are at medium and higher temperatures and wood under high loads. The responses for strain hardening, creep, and relaxation tests of such a material are shown in Figure 9.
Strain-rate dependent plasticity models
Classical phenomenological viscoplasticity models for small strains are usually categorized into two types:
the Perzyna formulation
the Duvaut–Lions formulation
Perzyna formulation
In the Perzyna formulation the plastic strain rate is assumed to be given by a constitutive relation of the form
where f is a yield function, σ is the Cauchy stress, q is a set of internal variables (such as the plastic strain ε_p), and τ is a relaxation time. The notation ⟨⟩ denotes the Macaulay brackets. The flow rule used in various versions of the Chaboche model is a special case of Perzyna's flow rule and has the form
where σ_y is the quasistatic value of the flow stress and χ is a backstress. Several models for the backstress also go by the name Chaboche model.
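A one-dimensional sketch of the Perzyna overstress rule is given below; the relaxation time, exponent and quasistatic yield stress are illustrative choices.

```python
# Perzyna flow rule in 1-D: eps_vp_rate = (1/tau) * <f / sigma_y>**N, with
# yield function f = |sigma| - sigma_y.  The Macaulay bracket <x> = max(x, 0)
# switches the flow off inside the yield surface.
tau = 10.0          # relaxation time, s
N = 1.0             # rate-sensitivity exponent
sigma_y = 250.0e6   # quasistatic yield stress, Pa

def perzyna_rate(sigma):
    f = abs(sigma) - sigma_y              # yield function
    overstress = max(f / sigma_y, 0.0)    # normalized Macaulay bracket
    return (overstress ** N) / tau

for s in (200e6, 250e6, 300e6, 400e6):
    print(f"sigma = {s / 1e6:4.0f} MPa -> eps_vp_rate = {perzyna_rate(s):.3e} /s")
```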
Duvaut–Lions formulation
The Duvaut–Lions formulation is equivalent to the Perzyna formulation and may be expressed as
where C is the elastic stiffness tensor and σ̄ is the closest point projection of the stress state onto the boundary of the region that bounds all possible elastic stress states. The quantity σ̄ is typically found from the rate-independent solution to a plasticity problem.
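In one dimension the closest point projection reduces to clipping the stress to the elastic interval, which makes the structure of the formulation easy to see; the sketch below uses illustrative values.

```python
# 1-D Duvaut-Lions relaxation: eps_vp_rate = (sigma - sigma_bar) / (E * tau),
# where sigma_bar is the projection of sigma onto the elastic range
# [-sigma_y, +sigma_y] (the rate-independent solution).
E = 200.0e9         # elastic stiffness, Pa
tau = 5.0           # relaxation time, s
sigma_y = 250.0e6   # bound of the elastic range, Pa

def duvaut_lions_rate(sigma):
    sigma_bar = max(-sigma_y, min(sigma_y, sigma))   # closest point projection
    return (sigma - sigma_bar) / (E * tau)

print(f"eps_vp_rate at 400 MPa: {duvaut_lions_rate(400e6):.3e} /s")
```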
Flow stress models
The quantity σ_y represents the evolution of the yield surface. The yield function is often expressed as an equation consisting of some invariant of stress and a model for the yield stress (or plastic flow stress). An example is von Mises or J₂ plasticity. In those situations the plastic strain rate is calculated in the same manner as in rate-independent plasticity. In other situations, the yield stress model provides a direct means of computing the plastic strain rate.
Numerous empirical and semi-empirical flow stress models are used in computational plasticity. The following temperature and strain-rate dependent models provide a sampling of the models in current use:
the Johnson–Cook model
the Steinberg–Cochran–Guinan–Lund model.
the Zerilli–Armstrong model.
the Mechanical threshold stress model.
the Preston–Tonks–Wallace model.
The Johnson–Cook (JC) model is purely empirical and is the most widely used of the five. However, this model exhibits an unrealistically small strain-rate dependence at high temperatures. The Steinberg–Cochran–Guinan–Lund (SCGL) model is semi-empirical: the model is purely empirical and strain-rate independent at high strain-rates, while a dislocation-based extension is used at low strain-rates. The SCGL model is used extensively by the shock physics community. The Zerilli–Armstrong (ZA) model is a simple physically based model that has been used extensively. A more complex model that is based on ideas from dislocation dynamics is the Mechanical Threshold Stress (MTS) model. This model has been used to model the plastic deformation of copper, tantalum, alloys of steel, and aluminum alloys. However, the MTS model is limited to strain-rates less than around 10⁷/s. The Preston–Tonks–Wallace (PTW) model is also physically based and has a form similar to the MTS model. However, the PTW model has components that can model plastic deformation in the overdriven shock regime (strain-rates greater than 10⁷/s). Hence this model is valid for the largest range of strain-rates among the five flow stress models.
Johnson–Cook flow stress model
The Johnson–Cook (JC) model is purely empirical and gives the following relation for the flow stress (σ_y)
where ε_p is the equivalent plastic strain, ε̇_p is the
plastic strain-rate, and A, B, C, m, n are material constants.
The normalized strain-rate and temperature in equation (1) are defined as
where ε̇_p0 is the effective plastic strain-rate of the quasi-static test used to determine the yield and hardening parameters A, B and n. This is not, as is often thought, just a parameter to make the strain-rate non-dimensional. T_0 is a reference temperature, and T_m is a reference melt temperature. For conditions where T* < 0, we assume that m = 1.
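The model is simple to evaluate, as the sketch below shows. The constants are commonly quoted Johnson-Cook parameters for OFHC copper, used here purely as an illustration; the branch for T* < 0 implements the m = 1 convention noted above.

```python
import math

# Johnson-Cook flow stress:
#   sigma = (A + B*eps**n) * (1 + C*ln(eps_rate / eps_rate_0)) * (1 - T_star**m)
# with T_star = (T - T_0) / (T_m - T_0).
A, B, n = 90e6, 292e6, 0.31     # quoted OFHC copper values (illustrative)
C, m = 0.025, 1.09
eps_rate_0 = 1.0                # quasi-static reference strain rate, 1/s
T0, Tm = 298.0, 1356.0          # reference and melt temperatures, K

def johnson_cook(eps, eps_rate, T):
    T_star = (T - T0) / (Tm - T0)
    rate_term = 1.0 + C * math.log(eps_rate / eps_rate_0)
    if T_star < 0.0:
        temp_term = 1.0 - T_star        # m = 1 is assumed when T_star < 0
    else:
        temp_term = 1.0 - T_star ** m
    return (A + B * eps ** n) * rate_term * temp_term

print(f"{johnson_cook(0.1, 1e3, 500.0) / 1e6:.1f} MPa")
```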
Steinberg–Cochran–Guinan–Lund flow stress model
The Steinberg–Cochran–Guinan–Lund (SCGL) model is a semi-empirical model that was developed by Steinberg et al. for high strain-rate situations and extended to low strain-rates and bcc materials by Steinberg and Lund. The flow stress in this model is given by
where σ_a is the athermal component of the flow stress, f(ε_p) is a function that represents strain hardening, σ_t is the thermally activated component of the flow stress, μ(p,T) is the pressure- and temperature-dependent shear modulus, and μ_0 is the shear modulus at standard temperature and pressure. The saturation value of the athermal stress is σ_max. The saturation of the thermally activated stress is the Peierls stress (σ_p). The shear modulus for this model is usually computed with the Steinberg–Cochran–Guinan shear modulus model.
The strain hardening function (f) has the form
where β and n are work hardening parameters, and ε_pi is the initial equivalent plastic strain.
The thermal component (σ_t) is computed using a bisection algorithm from the following equation,
where 2U_k is the energy to form a kink-pair in a dislocation segment of length L_d, k_b is the Boltzmann constant, and σ_p is the Peierls stress. The constants C_1, C_2 are given by the relations
where ρ_d is the dislocation density, L_d is the length of a dislocation segment, a is the distance between Peierls valleys, b is the magnitude of the Burgers vector, ν is the Debye frequency, w is the width of a kink loop, and D is the drag coefficient.
Zerilli–Armstrong flow stress model
The Zerilli–Armstrong (ZA) model is based on simplified dislocation mechanics. The general form of the equation for the flow stress is
In this model, σ_a is the athermal component of the flow stress given by
where σ_g is the contribution due to solutes and initial dislocation density, k_h is the microstructural stress intensity, l is the average grain diameter, K is zero for fcc materials, and B, B_0 are material constants.
In the thermally activated terms, the functional forms of the exponents α and β are
where α_0, α_1, β_0, β_1 are material parameters that depend on the type of material (fcc, bcc, hcp, alloys). The Zerilli–Armstrong model has been modified for better performance at high temperatures.
Mechanical threshold stress flow stress model
The Mechanical Threshold Stress (MTS) model has the form
where σ_a is the athermal component of mechanical threshold stress, σ_i is the component of the flow stress due to intrinsic barriers to thermally activated dislocation motion and dislocation-dislocation interactions, σ_e is the component of the flow stress due to microstructural evolution with increasing deformation (strain hardening), (S_i, S_e) are temperature and strain-rate dependent scaling factors, and μ_0 is the shear modulus at 0 K and ambient pressure.
The scaling factors take the Arrhenius form
where k_b is the Boltzmann constant, b is the magnitude of the Burgers' vector, (g_0i, g_0e) are normalized activation energies, (ε̇, ε̇_0i, ε̇_0e) are the strain-rate and reference strain-rates, and (q_i, p_i, q_e, p_e) are constants.
The strain hardening component of the mechanical threshold stress (σ_e) is given by an empirical modified Voce law
where
and θ(σ_e) is the hardening due to dislocation accumulation, θ_IV is the contribution due to stage-IV hardening, (a_0, a_1, a_2, a_3, α) are constants, σ_es is the stress at zero strain hardening rate, σ_0es is the saturation threshold stress for deformation at 0 K, a is a constant, and ε̇_es0 is the maximum strain-rate. Note that the maximum strain-rate is usually limited to about 10⁷/s.
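Because the full MTS expressions are lengthy, the sketch below integrates only a generic Voce-type hardening law of the kind the model builds on: the hardening rate decays linearly as the stress approaches saturation. The initial hardening rate and saturation stress are illustrative stand-ins, not fitted MTS parameters.

```python
import numpy as np

# Generic Voce hardening: d(sigma_e)/d(eps) = theta_0 * (1 - sigma_e/sigma_es),
# which integrates in closed form to a saturating exponential.
theta_0 = 2000.0e6     # initial hardening rate, Pa (illustrative)
sigma_es = 400.0e6     # saturation stress, Pa (illustrative)

eps = np.linspace(0.0, 0.5, 6)
sigma_e = sigma_es * (1.0 - np.exp(-theta_0 * eps / sigma_es))
for e, s in zip(eps, sigma_e):
    print(f"eps = {e:.1f} -> sigma_e = {s / 1e6:6.1f} MPa")
```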
Preston–Tonks–Wallace flow stress model
The Preston–Tonks–Wallace (PTW) model attempts to provide a model for the flow stress for extreme strain-rates (up to 10¹¹/s) and temperatures up to melt. A linear Voce hardening law is used in the model. The PTW flow stress is given by
with
where τ̂_s is a normalized work-hardening saturation stress, s_0 is the value of τ̂_s at 0 K, τ̂_y is a normalized yield stress, θ is the hardening constant in the Voce hardening law, and d is a dimensionless material parameter that modifies the Voce hardening law.
The saturation stress and the yield stress are given by
where s_∞ is the value of τ̂_s close to the melt temperature, (y_0, y_∞) are the values of τ̂_y at 0 K and close to melt, respectively, (κ, γ) are material constants, and (s_1, y_1, y_2) are material parameters for the high strain-rate regime, and
where ρ is the density and M is the atomic mass.
See also
Viscoelasticity
Bingham plastic
Dashpot
Creep (deformation)
Plasticity (physics)
Continuum mechanics
Quasi-solid
References
Continuum mechanics
Plasticity (physics) | Viscoplasticity | [
"Physics",
"Materials_science"
] | 4,446 | [
"Deformation (mechanics)",
"Classical mechanics",
"Plasticity (physics)",
"Continuum mechanics"
] |
17,328,563 | https://en.wikipedia.org/wiki/Modulating%20retro-reflector | A modulating retro-reflector (MRR) system combines an optical retro-reflector and an optical modulator to allow optical communications and sometimes other functions such as programmable signage.
Free space optical communication technology has emerged in recent years as an attractive alternative to the conventional radio frequency (RF) systems. This emergence is due in large part to the increasing maturity of lasers and compact optical systems that enable exploitation of the inherent advantages (over RF) of the much shorter wavelengths characteristic of optical and near-infrared carriers:
Larger bandwidth
Low probability of intercept
Immunity from interference or jamming
Frequency spectrum allocation issue relief
Smaller, lighter, lower power
Technology
An MRR couples or combines an optical retroreflector with a modulator to reflect modulated optical signals directly back to an optical receiver or transceiver, allowing the MRR to function as an optical communications device without emitting its own optical power. This can allow the MRR to communicate optically over long distances without needing substantial on-board power supplies. The function of the retroreflection component is to direct the reflection back to or near to the source of the light. The modulation component changes the intensity of the reflection. The idea applies to optical communication in a broad sense including not only laser-based data communications but also human observers and road signs. A number of technologies have been proposed, investigated, and developed for the modulation component, including actuated micromirrors, frustrated total internal reflection, electro-optic modulators (EOMs), piezo-actuated deflectors, multiple quantum well (MQW) devices, and liquid crystal modulators, though any one of numerous known optical modulation technologies could be used in theory. These approaches have many advantages and disadvantages relative to one another with respect to such features as power use, speed, modulation range, compactness, retroreflection divergence, cost, and many others.
In a typical optical communications arrangement, the MRR with its related electronics is mounted on a convenient platform and connected to a host computer which has the data that are to be transferred. A remotely located optical transmitter/receiver system usually consisting of a laser, telescope, and detector provides an optical signal to the modulating retro-reflector. The incident light from the transmitter system is both modulated by the MRR and reflected directly back toward the transmitter (via the retroreflection property). Figure 1 illustrates the concept.
One modulating retro-reflector at the Naval Research Laboratory (NRL) in the United States uses a semiconductor based MQW shutter capable of modulation rates up to 10 Mbit/s, depending on link characteristics. (See "Modulating Retro-reflector Using Multiple Quantum Well Technology", U.S. Patent No. 6,154,299, awarded November, 2000.)
The optical nature of the technology provides communications that are not susceptible to issues related to electromagnetic frequency allocation. The multiple quantum well modulating retro-reflector has the added advantages of being compact, lightweight, and requires very little power. The small-array MRR provides up to an order of magnitude in consumed power savings over an equivalent RF system. However, MQW modulators also have relatively small modulation ranges compared to other technologies.
The concept of a modulating retro-reflector is not new, dating back to the 1940s. Various demonstrations of such devices have been built over the years, though the demonstration of the first MQW MRR in 1993 was notable in achieving significant data rates. However, MRRs are still not widely used, and most research and development in that area is confined to rather exploratory military applications, as free-space optical communications in general tends to be a rather specialized niche technology.
Qualities often considered desirable in MRRs (obviously depending on the application) include a high switching speed, low power consumption, large area, wide field-of-view, and high optical quality. It should also function at certain wavelengths where appropriate laser sources are available, be radiation-tolerant (for non-terrestrial applications), and be rugged. Mechanical shutters and ferroelectric liquid crystal (FLC) devices, for example, are too slow, heavy, or are not robust enough for many applications. Some modulating retro-reflector systems are desired to operate at data rates of megabits per second (Mbit/s) and higher and over large temperature ranges characteristic of installation out-of-doors and in space.
Multiple Quantum Well Modulators
Semiconductor MQW modulators are one of the few technologies that meet all the requirements needed for United States Navy applications, and consequently the Naval Research Laboratory is particularly active in developing and promoting that approach. When used as a shutter, MQW technology offers many advantages: it is robust solid state, operates at low voltages (less than 20 V) and low power (tens of milliwatts), and is capable of very high switching speeds. MQW modulators have been run at Gbit/s data rates in fiber optic applications.
When a moderate (~15 V) voltage is placed across the shutter in reverse bias, the absorption feature changes, shifting to longer wavelengths and dropping in magnitude. Thus, the transmission of the device near this absorption feature changes dramatically, allowing a signal to be encoded in an on-off-keying format onto the carrier interrogation beam.
This modulator consists of 75 periods of InGaAs wells surrounded by AlGaAs barriers. The device is grown on an n-type GaAs wafer and is capped by a p-type contact layer, thus forming a PIN diode. This device is a transmissive modulator designed to work at a wavelength of 980 nm, compatible with many good laser diode sources. These materials have very good performance operating in reflection architectures. Choice of modulator type and configuration architecture is application-dependent.
Once grown, the wafer is fabricated into discrete devices using a multi-step photolithography process consisting of etching and metallization steps. The NRL experimental devices have a 5 mm aperture, though larger devices are possible and are being designed and developed. It is important to point out that while MQW modulators have been used in many applications to date, modulators of such a large size are uncommon and require special fabrication techniques.
MQW modulators are inherently quiet devices, accurately reproducing the applied voltage as a modulated waveform. An important parameter is contrast ratio, defined as Imax/Imin. This parameter affects the overall signal-to-noise ratio. Its magnitude depends on the drive voltage applied to the device and the wavelength of the interrogating laser relative to the exciton peak. The contrast ratio increases as the voltage goes up until a saturation value is reached. Typically, the modulators fabricated at NRL have had contrast ratios between 1.75:1 and 4:1 for applied voltages between 10 V and 25 V, depending on the structure.
There are three important considerations in the manufacture and fabrication of a given device: inherent maximum modulation rate vs. aperture size; electrical power consumption vs. aperture size; and yield.
Inherent Maximum Modulation Rate vs. Aperture Size
The fundamental limit in the switching speed of the modulator is the resistance-capacitance limit. A key tradeoff is area of the modulator vs. area of the clear aperture. If the modulator area is small, the capacitance is small, hence the modulation rate can be faster. However, for longer application ranges on the order of several hundred meters, larger apertures are needed to close the link. For a given modulator, the speed of the shutter scales inversely as the square of the modulator diameter.
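The inverse-square scaling can be made concrete with a small calculation; the reference rate and diameter below are illustrative assumptions, not measured device data.

```python
# RC-limited modulation rate: speed scales as the inverse square of the
# modulator diameter, B(D) = B_ref * (D_ref / D)**2.
B_ref = 10e6    # modulation rate of a reference device, bit/s (assumed)
D_ref = 0.5     # diameter of that reference device, cm (assumed)

for D in (0.5, 1.0, 2.0):
    B = B_ref * (D_ref / D) ** 2
    print(f"D = {D:.1f} cm -> RC-limited rate ~ {B / 1e6:.2f} Mbit/s")
```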
Electrical Power Consumption vs. Aperture Size
When the drive voltage waveform is optimized, the electrical power consumption of a MQW modulating retro-reflector varies as:
P ∝ Dmod⁴ × V² × B² × Rs
where Dmod is the diameter of the modulator, V is the voltage applied to the modulator (fixed by the required optical contrast ratio), B is the maximum data rate of the device, and Rs is the sheet resistance of the device. Thus a large power penalty may be paid for increasing the diameter of the MQW shutter.
Yield
MQW devices must be operated at high reverse bias fields to achieve good contrast ratios. In perfect quantum well material this is not a problem, but the presence of a defect in the semiconductor crystal can cause the device to break down at voltages below those necessary for operation. Specifically, a defect will cause an electrical short that prevents development of the necessary electrical field across the intrinsic region of the PIN diode. The larger the device, the higher the probability of such a defect. Thus, if a defect occurs in the manufacture of a large monolithic device, the whole shutter is lost.
To address these issues, NRL has designed and fabricated segmented devices as well as monolithic modulators. That is, a given modulator might be "pixellated" into several segments, each driven with the same signal. This technique allows both higher speed and larger apertures to be achieved. The "pixellization" inherently reduces the sheet resistance of the device, decreasing the resistance-capacitance time and reducing electrical power consumption. For example, a one centimeter monolithic device might require 400 mW to support a one Mbit/s link. A similar nine-segment device would require 45 mW to support the same link with the same overall effective aperture. A transmissive device with nine "pixels" and an overall diameter of 0.5 cm was shown to support over 10 Mbit/s.
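The quoted figures can be checked against the Dmod⁴ scaling law given earlier, assuming the pixels tile the same overall aperture area so that each pixel has diameter D/√k:

```python
# Power check for a segmented modulator: with V, B and Rs held fixed,
# P ∝ Dmod**4, so k equal-area pixels (each of diameter D / sqrt(k)) draw
# k * (1 / sqrt(k))**4 = 1/k times the monolithic power.
P_monolithic = 400.0   # mW, one 1 cm device (from the example above)
k = 9                  # number of pixels covering the same aperture

P_segmented = k * P_monolithic * (1 / k ** 0.5) ** 4
print(f"nine-pixel power: {P_segmented:.0f} mW")   # ~44 mW, close to 45 mW
```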
This fabrication technique allows for higher speeds, larger apertures, and increased yield. If a single "pixel" is lost due to defects but is one of nine or sixteen, the contrast ratio necessary to provide the requisite signal-to-noise to close a link is still high. There are considerations that make fabrication of a segmented device more complicated, including bond wire management on the device, driving multiple segments, and temperature stabilization.
An additional important characteristic of the modulator is its optical wavefront quality. If the modulator causes aberrations in the beam, the returned optical signal will be attenuated and insufficient light may be present to close the link.
Applications
MRR systems are used in:
Ground-to-Air Communications
Ground-to-Satellite Communications
Internal Electronics Bus Interaction/Communication
Inter- and Intra-Office Communications
Vehicle-to-Vehicle Communications
Industrial Manufacturing
See also
Free space optical communications
Optical Communications
Retro-reflector
References
Optical communications
Optical devices | Modulating retro-reflector | [
"Materials_science",
"Engineering"
] | 2,169 | [
"Optical communications",
"Glass engineering and science",
"Telecommunications engineering",
"Optical devices"
] |
17,329,587 | https://en.wikipedia.org/wiki/Nucleotide%20universal%20IDentifier | The nucleotide universal IDentifier (nuID) in molecular biology, is designed to uniquely and globally identify oligonucleotide microarray probes.
Background
Oligonucleotide probes of microarrays that are sequence identical may have different identifiers between manufacturers and even between different versions of the same company's microarray; and sometimes the same identifier is reused and represents a completely different oligonucleotide, resulting in ambiguity and potential mis-identification of the genes hybridizing to that probe. This also makes data interpretation and integration of different batches of data difficult. nuID was designed to solve these problems. It is a unique, non-degenerate encoding scheme that can be used as a universal representation to identify an oligonucleotide across manufacturers. The design of nuID was inspired by the fact that the raw sequence of the oligonucleotide is the true definition of identity for a probe: the encoding algorithm uniquely and non-degenerately transforms the sequence itself into a compact identifier (a lossless compression). In addition, a redundancy check (checksum) was added to validate the integrity of the identifier. These two steps, encoding plus checksum, result in an nuID, which is a unique, non-degenerate, permanent, robust and efficient representation of the probe sequence. For commercial applications that require the sequence identity to be confidential, an encryption schema can also be added for nuID. The utility of nuIDs has been implemented for the annotation of Illumina microarrays, which can be downloaded from the Bioconductor website. It also has universal applicability as a source-independent naming convention for oligomers.
The nuID schema has three significant advantages over using the oligo sequence directly as an identifier: first, it is more compact due to the base-64 encoding; second, it has built-in error detection and self-identification; and third, it can be encrypted in cases where the sequences are preferred not to be disclosed. For more details, please refer to the nuID paper. The implementation of the nuID encoding and decoding algorithms can be found in the lumi package.
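The encoding idea can be illustrated with a short sketch: pack each nucleotide into 2 bits, prepend a checksum byte, and render the result in base 64. The toy_nuid function below is a hypothetical stand-in written for illustration only; the published nuID scheme defines its own base-64 alphabet, padding and checksum rules (see the lumi package for the reference implementation).

```python
import base64

# Toy sequence-to-identifier encoding in the spirit of nuID (illustrative,
# not the published specification).  Note: a real nuID is fully lossless and
# self-identifying; this sketch would need the sequence length to decode
# probes with leading 'A's.
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

def toy_nuid(seq):
    bits = 0
    for nucleotide in seq:
        bits = (bits << 2) | CODE[nucleotide]      # 2 bits per nucleotide
    payload = bits.to_bytes((2 * len(seq) + 7) // 8, "big")
    checksum = sum(payload) % 256                  # toy redundancy check
    return base64.urlsafe_b64encode(bytes([checksum]) + payload).decode()

probe = "ACGTACGTACGTACGTACGTACGTAC"   # hypothetical 26-mer probe sequence
ident = toy_nuid(probe)
print(ident, f"({len(ident)} chars vs {len(probe)} nt)")
```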
See also
Illumina Inc. and its beadArray technology
lumi Bioconductor package of processing Illumina expression microarray
References
External links
nuID annotation website
Official Lumi Website
Official Bioconductor Website
Microarrays | Nucleotide universal IDentifier | [
"Chemistry",
"Materials_science",
"Biology"
] | 519 | [
"Biochemistry methods",
"Genetics techniques",
"Bioinformatics stubs",
"Microtechnology",
"Microarrays",
"Biotechnology stubs",
"Biochemistry stubs",
"Bioinformatics",
"Molecular biology techniques"
] |
17,330,825 | https://en.wikipedia.org/wiki/Perveance | Perveance is a notion used in the description of charged particle beams. The value of perveance indicates how significant the space charge effect is on the beam's motion. The term is used primarily for electron beams, in which motion is often dominated by the space charge.
Origin of the word
The word was probably created from Latin pervenio–to attain.
Definition
For an electron gun, the gun perveance is defined as the coefficient of proportionality between the space-charge-limited current, I, and the gun anode voltage, U, raised to the three-halves power in the Child-Langmuir law, I = P·U^(3/2).
The same notion is used for non-relativistic beams propagating through a vacuum chamber. In this case, the beam is assumed to have been accelerated in a stationary electric field so that U is the potential difference between the emitter and the vacuum chamber, and the ratio
P = I / U^(3/2) is referred to as the beam perveance.
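The calculation is direct, as the sketch below shows; the current and voltage are illustrative values. Perveance is often quoted in microperveance, 1 μperv = 10⁻⁶ A/V^(3/2).

```python
# Beam perveance from the Child-Langmuir scaling: P = I / V**1.5.
I = 0.5     # space-charge-limited current, A (assumed)
V = 10e3    # accelerating voltage, V (assumed)

P = I / V ** 1.5
print(f"perveance: {P:.2e} A/V^1.5 = {P * 1e6:.2f} microperv")
```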
In equations describing motion of relativistic beams, contribution of the space charge appears as a dimensionless parameter called the generalized perveance defined as
,
where I_A (for electrons) is the Budker (or Alfven) current; β and γ are the relativistic factors, and f_e is the neutralization factor.
Examples
The 6S4A is an example of a high perveance triode. The triode section of a 6AU8A becomes a high-perveance diode when its control grid is employed as the anode. Each section of a 6AL5 is a high-perveance diode as opposed to a 1J3 which requires over 100 V to reach only 2 mA.
Perveance does not relate directly to current handling. Another high-perveance diode, the diode section of a 33GY7, shows similar perveance to a 6AL5, but handles 15 times greater current, at almost 13 times maximum peak inverse voltage.
References
Accelerator physics
Experimental particle physics | Perveance | [
"Physics"
] | 400 | [
"Applied and interdisciplinary physics",
"Experimental physics",
"Particle physics",
"Experimental particle physics",
"Accelerator physics"
] |
17,332,785 | https://en.wikipedia.org/wiki/WaveMaker | WaveMaker is a Java-based low-code development platform designed for building software applications and platforms. The company, WaveMaker Inc., is based in Mountain View, California. The platform is intended to assist enterprises in speeding up their application development and IT modernization initiatives through low-code capabilities. Additionally, for independent software vendors (ISVs), WaveMaker serves as a customizable low-code component that integrates into their products.
The WaveMaker Platform is a licensed software platform allowing organizations to establish their own end-to-end application platform-as-a-service (PaaS) for the creation and operation of custom apps. It allows developers and business users to create apps that are customizable. These applications can seamlessly consume APIs, visualize data, and automatically adapt to multi-device responsive interfaces.
WaveMaker's low-code platform allows organizations to deploy applications on either public or private cloud infrastructure. Containers can be deployed on top of virtual machines or directly on bare metal. The software features a graphical user interface (GUI) console for managing IT app infrastructure, leveraging the capabilities of Docker containerization.
The solution offers functionalities for automating application deployment, managing the application lifecycle, overseeing release management, and controlling deployment workflows and access permissions:
Apps for web, tablet, and smartphone interfaces
Enterprise technologies like Java, Hibernate, Spring, AngularJS, JQuery
Docker-provided APIs and CLI
Software stack packaging, container provisioning, stack and app upgrading, replication, and fault tolerance
WaveMaker Studio
WaveMaker RAD Platform is built around WaveMaker Studio, a WYSIWYG rapid development tool that allows business users to compose an application using a drag-and-drop method. WaveMaker Studio supports rapid application development (RAD) for the web, similar to what products like PowerBuilder and Lotus Notes provided for client-server computing.
WaveMaker Studio allows developers to produce an application once, then automatically adjust it for a particular target platform, whether a PC, mobile phone, or tablet. Applications created using the WaveMaker Studio follow a model–view–controller architecture.
WaveMaker Studio has been downloaded more than two million times. The Studio community consists of 30,000 registered users. Applications generated by WaveMaker Studio are licensed under the Apache license.
Studio 8 was released on September 25, 2015. The prior version, Studio 7, marked some notable development milestones: it was based on the AngularJS framework, whereas previous Studio versions (6.5, 6.6, 6.7) used the Dojo Toolkit. Some of the features of WaveMaker Studio 7 include:
Automatic generation of Hibernate mapping, and Hibernate queries from database schema import.
Automatic creation of Enterprise Data Widgets based on schema import. Each widget can display data from a database table as a grid or edit form. Edit form implements create, update, and delete functions automatically.
WYSIWYG Ajax development studio runs in a browser.
Deployment to Tomcat, IBM WebSphere, Weblogic, JBoss.
Mashup tool to assemble web applications based on SOAP, REST and RSS web services, Java Services and databases.
Supports existing CSS, HTML and Java code.
The ability to deploy a standard Java .war file.
Technologies and frameworks
WaveMaker allows users to build applications that run on "Open Systems Stack" based on the following technologies and frameworks: AngularJS, Bootstrap, NVD3, HTML, CSS, Apache Cordova, Hibernate, Spring, Spring Security, Java. The various supported integrations include:
Databases: Oracle, MySQL, Microsoft SQL Server, PostgreSQL, IBM DB2, HSQLDB
Authentication: LDAP, Active Directory, CAS, Custom Java Service, Database
Version Control: Bitbucket (or Stash), GitHub, Apache Subversion
Deployment: Amazon AWS, Microsoft Azure, WaveMaker Private Cloud (Docker containerization), IBM Web Sphere, Apache Tomcat, SpringSource tcServer, Oracle WebLogic Server, JBoss(WildFly), GlassFish
App Stores: Google Play, Apple App Store, Windows Store
History
In 2003, WaveMaker was founded as ActiveGrid. Then, in 2007, it was rebranded as Wavemaker. It was acquired by VMware in 2011. In March 2013, support for the WaveMaker project was discontinued.
In May 2013, Pramati Technologies acquired the assets of WaveMaker. In February 2014, Wavemaker Studio 6.7 was released, which was the last open source version of Studio. In September 2014 WaveMaker Inc. launched the WaveMaker RAD Platform, which allowed organizations to run their own application platform for building and running apps.
In March 2023, WaveMaker released version 11.5, which includes enhanced low-code development capabilities and new AI-driven tools to streamline the application development process.
References
External links
JavaScript libraries
Ajax (programming)
Web frameworks
Linux integrated development environments
Java development tools
Unix programming tools
User interface builders
Java platform software
Cloud computing providers
Cloud platforms
Web applications
Rich web application frameworks
JavaScript
JavaScript web frameworks
Self-hosting software
Web development software
IOS development software
Android (operating system) development software
Mobile software programming tools | WaveMaker | [
"Technology"
] | 1,087 | [
"Cloud platforms",
"Computing platforms"
] |
17,333,662 | https://en.wikipedia.org/wiki/Stern | The stern is the back or aft-most part of a ship or boat, technically defined as the area built up over the sternpost, extending upwards from the counter rail to the taffrail. The stern lies opposite the bow, the foremost part of a ship. Originally, the term only referred to the aft port section of the ship, but eventually came to refer to the entire back of a vessel. The stern end of a ship is indicated with a white navigation light at night.
Sterns on European and American wooden sailing ships began with two principal forms: the square or transom stern and the elliptical, fantail, or merchant stern, and were developed in that order. The hull sections of a sailing ship located before the stern were composed of a series of U-shaped rib-like frames set in a sloped or "cant" arrangement, with the last frame before the stern being called the fashion timber(s) or fashion piece(s), so called for "fashioning" the after part of the ship. This frame is designed to support the various beams that make up the stern.
In 1817 the British naval architect Sir Robert Seppings introduced the concept of a rounded stern. The square stern had been an easy target for enemy cannon, and could not support the weight of heavy stern chase guns. But Seppings' design left the rudder head exposed, and was regarded by many as simply ugly—no American warships were designed with such sterns, and the round stern was quickly superseded by the elliptical stern. The United States began building the first elliptical stern warship in 1820, a decade before the British. became the first sailing ship to sport such a stern. Though a great improvement over the transom stern in terms of its vulnerability to attack when under fire, elliptical sterns still had obvious weaknesses which the next major stern development—the iron-hulled cruiser stern—addressed far better and with significantly different materials.
Types
Transom
In naval architecture, the term transom has two meanings. First, it can be any of the individual beams that run side-to-side or "athwart" the hull at any point abaft the fashion timber; second, it can refer specifically to the flat or slightly curved surface that is the very back panel of a transom stern. In this sense, a transom stern is the product of the use of a series of transoms, and hence the two terms have blended.
The stern of a traditional sailing ship housed the captain's quarters and became increasingly large and elaborate between the 15th and 18th centuries, especially in the baroque era, when wedding-cake-like structures became so heavy that crews sometimes threw the decoration overboard rather than be burdened with its useless weight. Until a new form of stern appeared in the 19th century, the transom stern was a floating house—and required just as many timbers, walls, windows, and frames. The stern frame provided the foundational structure of the transom stern, and was composed of the sternpost, wing transom, and fashion piece.
Abaft the fashion timber, the transom stern was composed of two different kinds of timbers:
Transoms – These timbers extend across the low parts of the hull near the rudder, and are secured (notched and/or bolted) to the sternpost. The transom located at the base of the stern, and the uppermost of the main transoms, was typically called the wing transom; the principal transom below this and level with the lower deck was called the deck transom; between these two were a series of filling transoms. If the stern had transoms above the wing transom, they would no longer be affixed to the sternpost. The first of these might be called a counter transom; next up was the window sill transom; above that, the spar deck transom. The larger the vessel, the more numerous and wider the transoms required to support its stern.
Stern timbers (also called stern frames) – These timbers are mounted vertically in a series; each timber typically rests or "steps" on the wing transom and then stretches out (aft) and upward. Those not reaching all the way to the taffrail are called short stern timbers, while those that do are called long stern timbers. The two outermost of these timbers, located at the corners of the stern, are called the side-counter timbers or outer stern timbers. It is the stern timbers collectively which determine the backward slope of the square stern, called its rake – that is, if the stern timbers end up producing a final transom that falls vertically to the water, this is considered a transom with no rake; if the stern timbers produce a stern with some degree of slope; such a stern is considered a raked stern.
The flat surface of any transom stern may begin either at or above the waterline of the vessel. The geometric line which stretches from the wing transom to the archboard is called the counter; a large vessel may have two such counters, called a lower counter and a second or upper counter. The lower counter stretches from directly above the wing transom to the lower counter rail, and the upper counter from the lower counter rail to the upper counter rail, immediately under the stern's lowest set of windows (which in naval parlance were called "lights").
Elliptical
The visual unpopularity of Seppings's rounded stern was soon rectified by Sir William Symonds. In this revised stern, a set of straight post timbers (also called "whiskers", "horn timbers", or "fan tail timbers") stretches from the keel diagonally aft and upward. It rests on the top of the sternpost and runs on either side of the rudder post (thus creating the "helm port" through which the rudder passes) to a point well above the vessel's waterline. Whereas the timbers of the transom stern all heeled on the wing transom, the timbers of the elliptical stern all heel on the whiskers, to which they are affixed at a 45° angle (i.e., "canted") when viewed from overhead and decrease in length as they are installed aft until the curvature is complete. The finished stern has a continuous curved edge around the outside and is raked aft.
Other names for the elliptical stern include a "counter stern", in reference to its very long counter, and a "cutaway stern". The elliptical stern began use during the age of sail, but remained very popular for both merchant and warships well into the nautical age of steam and through the first eight decades of steamship construction (roughly 1840–1920). Despite the design's leaving the rudder exposed and vulnerable in combat situations, many counter-sterned warships survived both World Wars, and stylish high-end vessels sporting them were coming off the ways into the 1950s, including the US-flagged sisters SS Constitution and SS Independence.
Cruiser
As ships of wooden construction gave way to iron and steel, the cruiser stern—another design without transoms and known variously as the canoe stern, parabolic stern, and the double-ended stern—became the next prominent development in ship stern design, particularly in warships of the earlier half of the 20th century. The intent of this re-design was to protect the steering gear by bringing it below the armor deck. The stern now came to a point rather than a flat panel or a gentle curve, and the counter reached from the sternpost all the way to the taffrail in a continuous arch. It was soon discovered that vessels with cruiser sterns experienced less water resistance when under way than those with elliptical sterns, and between World War I and World War II most merchant ship designs soon followed suit.
Others
None of these three main types of stern has vanished from the modern naval architectural repertoire, and all three continue to be used in one form or another by designers for many uses. Variations on these basic designs have resulted in an outflow of "new" stern types and names, only some of which are itemized here.
The reverse stern, reverse transom stern, sugar-scoop, or retroussé stern is a kind of transom stern that is raked backwards (common on modern yachts, rare on vessels before the 20th century); the vertical transom stern or plumb stern is raked neither forward nor back, but falls directly from the taffrail down to the wing transom. The rocket ship stern is a term for an extremely angled retroussé stern. A double ended ship with a very narrow square counter formed from the bulwarks or upper deck above the head of the rudder is said to have a pink stern or pinky stern. The torpedo stern or torpedo-boat stern describes a kind of stern with a low rounded shape that is nearly flat at the waterline, but which then slopes upward in a conical fashion towards the deck (practical for small high-speed power boats with very shallow drafts).
A Costanzi stern is a type of stern designed for use on ocean-going vessels. Its hard-chined design is a compromise between the 'spoon-shaped' stern usually found on ocean liners, and the flat transom, often required for fitting azimuth thrusters. The design allows for improved seagoing characteristics. It is the stern design on Queen Mary 2, and was originally proposed for SS Oceanic and Eugenio C, both constructed in the 1960s.
A lute stern is to be found on inshore craft on the Sussex, England, shore. It comprises a watertight transom with the topside planking extended aft to form a non-watertight counter which is boarded across the fashion timbers curving outward aft from the transom.
Some working boats and modern replicas have a similar form of counter, built to be watertight as described in the "transom stern" section above. These are being confused with lute sterns, but as a lute is not watertight, a better term is needed. Chapelle in American Small Sailing Craft refers to a Bermudan boat with this form of counter, using the term "square tuck stern" to describe it. The term "tuck" is used in the northwest of England for this area of the hull at the sternpost, and for the bulkhead across the counter if one is fitted.
The fantail stern was found on many 19th-century tea clippers and the ill-fated RMS Titanic.
A bustle stern refers to any kind of stern (transom, elliptical, etc.) that has a large "bustle" or blister at the waterline below the stern to prevent the stern from "squatting" when getting underway. It only appears in sailboats, never in power-driven craft.
An ice horn is a triangular stern component that protects a ship's rudder and prop while traveling in reverse.
Image gallery
References
Nautical terminology
Shipbuilding
Watercraft components | Stern | [
"Engineering"
] | 2,224 | [
"Shipbuilding",
"Marine engineering"
] |
17,334,297 | https://en.wikipedia.org/wiki/List%20of%20reservoirs%20by%20volume | The classification of a reservoir by volume is not as straightforward as it may seem. As the name implies, water is held in reserve by a reservoir so it can serve a purpose. For example, in Thailand, reservoirs tend to store water from the wet season to prevent flooding, then release it during the dry season for farmers to grow rice. For this type of reservoir, almost the entire volume of the reservoir functions for the purpose it was built. Hydroelectric power generation, on the other hand, requires many dams to build up a large volume before operation can begin. For this type of reservoir only a small portion of the water held behind the dam is useful. Therefore, knowing the purpose for which a reservoir has been constructed, and knowing how much water can be used for that purpose, helps determine how much water is in possible reserve.
Terminology
The following terms are used in connection with the volume of reservoirs:
Expanded versus artificial lakes
The list below largely ignores many natural lakes that have been augmented with the addition of a relatively minor dam. For example, a small dam, two hydroelectric plants, and locks on the outlet of Lake Superior make it possible to artificially control the lake level. Certainly, the great majority of the lake is natural. However, the control of water that can be held in reserve means a portion of the vast lake functions as a reservoir.
Recognition of lakes like Lake Superior greatly changes the list below. For example, the Francis H. Clergue Generating Station and Saint Marys Falls Hydropower Plant, which are both on the lake's outlet, operate with just 5.9 meters total head. This is a small head compared to that of other dams. However, when viewed against the 81,200 km2 area of the lake, even a small range in Lake Superior's water level means its active volume is greater than the nominal volume of the largest reservoir in the table below.
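The arithmetic is simple: active storage is surface area times the controllable range of lake level. The 0.3 m range used below is an assumed figure for illustration, not a regulatory value.

```python
# Rough active-storage estimate for Lake Superior.
area_km2 = 81_200       # lake surface area, km^2 (from the text above)
level_range_m = 0.3     # assumed controllable range of lake level, m

active_volume_km3 = area_km2 * (level_range_m / 1000.0)
print(f"active volume ~ {active_volume_km3:.1f} km^3")
```

Even this conservative range yields roughly 24 km³ of active storage, which illustrates the point about regulated natural lakes.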
List
See also
List of reservoirs by surface area
List of conventional hydroelectric power stations
List of largest reservoirs in the United States
References
Lists of buildings and structures
Lists of bodies of water
Reservoirs | List of reservoirs by volume | [
"Physics",
"Mathematics"
] | 410 | [
"Scalar physical quantities",
"Physical quantities",
"Quantity",
"Size",
"Extensive quantities",
"Volume",
"Wikipedia categories named after physical quantities"
] |
17,334,438 | https://en.wikipedia.org/wiki/Faggot%20cell | Faggot cells are cells normally found in the hypergranular form of acute promyelocytic leukemia (FAB - M3). These promyelocytes (not blast cells) have numerous Auer rods in the cytoplasm which gives the appearance of a bundle of sticks, from which the cells are given their name.
See also
Buttock cell
References
Human cells
Pathology
Hematology
Acute myeloid leukemia | Faggot cell | [
"Biology"
] | 89 | [
"Pathology"
] |
17,335,283 | https://en.wikipedia.org/wiki/NNC%2063-0532 | NNC 63-0532 is a nociceptoid drug used in scientific research. It acts as a potent and selective agonist for the nociceptin receptor, also known as the ORL-1 (opiate receptor-like 1) receptor.
The function of this receptor is still poorly understood, but it is thought to have roles in many disorders such as pain, drug addiction, development of tolerance to opioid drugs, and psychological disorders such as anxiety and depression. Research into the function of this receptor is an important focus of current pharmaceutical development, and selective agents such as NNC 63-0532 are essential for this work.
References
Opioids
1-Naphthyl compounds
Piperidines
Methyl esters
Spiro compounds
Imidazolidinones
Nociceptin receptor agonists | NNC 63-0532 | [
"Chemistry"
] | 166 | [
"Organic compounds",
"Spiro compounds"
] |
17,335,555 | https://en.wikipedia.org/wiki/New%20Croton%20Aqueduct | The New Croton Aqueduct is an aqueduct in the New York City water supply system in Westchester County, New York carrying the water of the Croton Watershed. Built roughly parallel to the Old Croton Aqueduct which it originally augmented, the new aqueduct opened in 1890. The old aqueduct remained in service until 1955, when supply from the Delaware and Catskill Aqueducts was sufficient to allow taking it off line.
Waters of the New Croton Aqueduct flow to the Jerome Park Reservoir in the Bronx before entering Croton Water Filtration Plant in Van Cortlandt Park for treatment, then out to distribution.
Overview
The Croton Watershed is one of three systems that provide water to New York City, joined by the waters of the Delaware and Catskill Aqueducts. The Croton system comprises 12 reservoirs and 3 controlled lakes.
History
The New Croton Aqueduct opened on July 15, 1890, replacing the Old Croton Aqueduct. The newer aqueduct is a brick-lined tunnel, in diameter and long, running from the New Croton Reservoir in Westchester County to the Jerome Park Reservoir in the Bronx. Water flows then proceed toward the Croton Water Filtration Plant for treatment. Treated water is distributed to certain areas of the Bronx and Manhattan.
In the late 1990s, the city stopped using water from the Croton system due to numerous water quality issues. In 1997 the U.S. Environmental Protection Agency (EPA), the U.S. Department of Justice and the State of New York filed suit against the city for violating the Safe Drinking Water Act and the New York State Sanitary Code. The city government agreed to rehabilitate the New Croton Aqueduct and build a filtration plant. The filtration system protects the public from disease-causing microorganisms such as Giardia and Cryptosporidium. The Croton Water Filtration Plant was activated in May 2015.
See also
New York City water supply system
Water supply network
References
Aqueducts in New York (state)
Geography of the Bronx
Interbasin transfer
Transportation buildings and structures in Westchester County, New York
Water infrastructure of New York City | New Croton Aqueduct | [
"Environmental_science"
] | 424 | [
"Hydrology",
"Interbasin transfer"
] |
17,335,615 | https://en.wikipedia.org/wiki/5%27-Guanidinonaltrindole | 5'-Guanidinonaltrindole (5'-GNTI) is an opioid antagonist used in scientific research which is highly selective for the κ opioid receptor. It is 5x more potent and 500 times more selective than the commonly used κ-opioid antagonist norbinaltorphimine. It has a slow onset and long duration of action, and produces antidepressant effects in animal studies. It also increases allodynia by interfering with the action of the κ-opioid peptide dynorphin.
In addition to activity at the KOR, 5'-GNTI has been found to act as a positive allosteric modulator of the α1A-adrenergic receptor (EC50 = 41 nM), and this may contribute to its "severe transient effects".
See also
6'-Guanidinonaltrindole
Binaltorphimine
JDTic
References
Alpha-1 adrenergic receptor agonists
Guanidines
Indolomorphinans
Irreversible antagonists
Kappa-opioid receptor antagonists
Hydroxyarenes
Semisynthetic opioids
Tertiary alcohols
Cyclopropyl compounds | 5'-Guanidinonaltrindole | [
"Chemistry"
] | 250 | [
"Guanidines",
"Functional groups"
] |
17,336,523 | https://en.wikipedia.org/wiki/Descriptive%20interpretation | According to Rudolf Carnap, in logic, an interpretation is a descriptive interpretation (also called a factual interpretation) if at least one of the undefined symbols of its formal system becomes, in the interpretation, a descriptive sign (i.e., the name of single objects, or observable properties). In his Introduction to Semantics (Harvard Uni. Press, 1942) he makes a distinction between formal interpretations which are logical interpretations (also called mathematical interpretation or logico-mathematical interpretation) and descriptive interpretations: a formal interpretation is a descriptive interpretation if it is not a logical interpretation.
Attempts to axiomatize the empirical sciences, Carnap said, use a descriptive interpretation to model reality: the aim of these attempts is to construct a formal system for which reality is the only interpretation. The world is an interpretation (or model) of these sciences only insofar as these sciences are true.
Any non-empty set may be chosen as the domain of a descriptive interpretation, and all n-ary relations among the elements of the domain are candidates for assignment to any predicate of degree n.
Examples
A sentence is either true or false under an interpretation which assigns values to the logical variables. We might for example make the following assignments:
Individual constants
a: Socrates
b: Plato
c: Aristotle
Predicates:
Fα: α is sleeping
Gαβ: α hates β
Hαβγ: α made β hit γ
Sentential variables:
p "It is raining."
Under this interpretation the sentences discussed above would represent the following English statements:
p: "It is raining."
F(a): "Socrates is sleeping."
H(b,a,c): "Plato made Socrates hit Aristotle."
∀x(F(x)): "Everybody is sleeping."
∃z(G(a,z)): "Socrates hates somebody."
∃x∀y∃z(H(x,y,z)): "Somebody made everybody hit somebody."
∀x∃z(F(x) ∧ G(a,z)): "Everybody is sleeping and Socrates hates somebody."
∃x∀y∃z(G(a,z) ∨ H(x,y,z)): "Either Socrates hates somebody or somebody made everybody hit somebody."
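On a finite domain such an interpretation can be evaluated mechanically, as the sketch below shows. The extensions chosen for F, G and H are arbitrary stand-ins so that the code runs; they are not claims about the philosophers.

```python
# Evaluating the example sentences under a finite descriptive interpretation.
domain = {"Socrates", "Plato", "Aristotle"}
a, b, c = "Socrates", "Plato", "Aristotle"
F = lambda x: x in {"Socrates"}                                # x is sleeping
G = lambda x, y: (x, y) in {("Socrates", "Plato")}             # x hates y
H = lambda x, y, z: (x, y, z) in {("Plato", "Socrates", "Aristotle")}

print("F(a):", F(a))                # "Socrates is sleeping."
print("H(b,a,c):", H(b, a, c))      # "Plato made Socrates hit Aristotle."
print("forall x F(x):", all(F(x) for x in domain))
print("exists z G(a,z):", any(G(a, z) for z in domain))
print("exists x forall y exists z H(x,y,z):",
      any(all(any(H(x, y, z) for z in domain) for y in domain)
          for x in domain))
```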
Sources
Semantics
Formal languages
Interpretation (philosophy) | Descriptive interpretation | [
"Mathematics"
] | 472 | [
"Formal languages",
"Mathematical logic"
] |
17,336,718 | https://en.wikipedia.org/wiki/Desert%20Garden%20Conservatory | The Desert Garden Conservatory is a large botanical greenhouse and part of the Huntington Library, Art Collections and Botanical Gardens, in San Marino, California. It was constructed in 1985. The Desert Garden Conservatory is adjacent to the Huntington Desert Garden itself. The garden houses one of the most important collections of cacti and other succulent plants in the world, including a large number of rare and endangered species. The Desert Garden Conservatory serves The Huntington and public communities as a conservation facility, research resource and genetic diversity preserve. John N. Trager is the Desert Collection curator.
There are an estimated 10,000 succulent species worldwide, about 1,500 of them classified as cacti. The Huntington Desert Garden Conservatory now contains more than 2,200 accessions, representing more than 43 plant families, 1,261 different species and subspecies, and 246 genera. The plant collection contains examples from the world's major desert regions, including the southern United States, Argentina, Bolivia, Chile, Brazil, Canary Islands, Madagascar, Malawi, Mexico and South Africa. The Desert Collection plays a critical role as a repository of biodiversity, in addition to serving as an outreach and education center.
Propagation program to save rare and endangered plants
Some studies estimate that as many as two-thirds of the world's flora and fauna may become extinct during the course of the 21st century, the result of global warming and encroaching development. Scientists alarmed by these prospects are working diligently to propagate plants outside their natural habitats, in protected areas. Ex-situ cultivation, as this practice is known, can serve as a stopgap for plants that will otherwise be lost to the world as their habitats disappear. To this end, The Huntington has a program to protect and propagate endangered plant species, designated International Succulent Introductions (ISI).
The aim of the ISI program is to propagate and distribute new or rare succulents to collectors, nurseries and institutions to further research and appreciation of these remarkable plants. The ISI distributes as many as 40 new succulent varieties every year. Field-collected plants, cuttings or seeds are not sold, only seedlings, grafts and rooted cuttings produced under nursery conditions without detriment to wild populations.
The Schick hybrids
The Schick hybrids are derived primarily from crosses of Harry Johnson's Paramount hybrids, created in the 1930s and 40s, and from successive crosses of their progeny. Like the Paramount hybrids, the Schick hybrids can flower several times in a season and, with increasing age, can produce greater numbers of flowers. Under the Huntington's growing conditions the first flush of flowers is typically in April with successive flushes occurring in May, June and July, and, in some hybrids, even into August, September and October. These horticulturally significant cultivars are also available through The Huntington's ISI program.
Interior images of the Desert Garden Conservatory
Plants in the Desert Garden Conservatory
Cactaceae
Other families represented
See also
Huntington Desert Garden
Greenhouse
Solar greenhouse (technical)
Seasonal thermal energy storage (STES)
Cactus
References
External links
Cactus and Succulent Society of America
Huntington Library
Greenhouses in the United States
Huntington
Cactus gardens
Huntington
Buildings and structures in California
1985 establishments in California | Desert Garden Conservatory | [
"Biology"
] | 670 | [
"Flora",
"Desert flora"
] |
11,857,532 | https://en.wikipedia.org/wiki/Collision%20problem | The r-to-1 collision problem is an important theoretical problem in complexity theory, quantum computing, and computational mathematics. The collision problem most often refers to the 2-to-1 version: given an even integer n and a function f: {1, ..., n} → {1, ..., n}, we are promised that f is either 1-to-1 or 2-to-1. We are only allowed to make queries about the value of f(i) for any i in {1, ..., n}. The problem then asks how many such queries we need to make to determine with certainty whether f is 1-to-1 or 2-to-1.
Classical solutions
Deterministic
Solving the 2-to-1 version deterministically requires n/2 + 1 queries, and in general distinguishing r-to-1 functions from 1-to-1 functions requires n/r + 1 queries.
This is a straightforward application of the pigeonhole principle: if a function is r-to-1, it takes only n/r distinct values, so after n/r + 1 queries we are guaranteed to have found a collision. If a function is 1-to-1, then no collision exists. Thus, n/r + 1 queries suffice. If we are unlucky, then the first n/r queries could return distinct answers, so n/r + 1 queries is also necessary.
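A minimal sketch of the deterministic test in Python (illustrative only; the functions and n are invented for the example), querying a fixed set of n/r + 1 inputs:

def distinguish(f, n, r=2):
    """Under the promise that f is 1-to-1 or r-to-1: any n//r + 1 queries
    of an r-to-1 function must repeat an output (pigeonhole), so finding
    no collision among them certifies that f is 1-to-1."""
    outputs = set()
    for x in range(1, n // r + 2):           # n/r + 1 queries
        y = f(x)
        if y in outputs:
            return "r-to-1"                  # collision found
        outputs.add(y)
    return "1-to-1"                          # no collision possible

print(distinguish(lambda x: (x + 1) // 2, n=10_000))   # r-to-1
print(distinguish(lambda x: x, n=10_000))              # 1-to-1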
Randomized
If we allow randomness, the problem is easier. By the birthday paradox, if we choose (distinct) queries at random, then with high probability we find a collision in any fixed 2-to-1 function after Θ(√n) queries.
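A minimal sketch of the randomized strategy in Python (illustrative; the example function and query budget are assumptions, not part of the problem statement), which samples distinct inputs and reports the first repeated output:

import random

def find_collision(f, n, num_queries):
    """Query f at num_queries distinct random points; return a colliding
    pair (x1, x2) with f(x1) == f(x2), or None if none is found."""
    seen = {}                                # output value -> input queried
    for x in random.sample(range(1, n + 1), num_queries):
        y = f(x)
        if y in seen:
            return (seen[y], x)              # evidence that f is not 1-to-1
        seen[y] = x
    return None                              # consistent with f being 1-to-1

# Example 2-to-1 function on {1, ..., n}: inputs 2k-1 and 2k collide.
n = 10_000
f = lambda x: (x + 1) // 2
# About 3*sqrt(n) queries find a collision with high probability.
print(find_collision(f, n, num_queries=int(3 * n ** 0.5)))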
Quantum solution
The BHT algorithm, which uses Grover's algorithm, solves this problem optimally by making only O(n^(1/3)) queries to f.
References
Algorithms
Polynomial-time problems | Collision problem | [
"Mathematics"
] | 320 | [
"Applied mathematics",
"Algorithms",
"Mathematical logic",
"Computational problems",
"Polynomial-time problems",
"Mathematical problems"
] |
11,857,892 | https://en.wikipedia.org/wiki/Potable%20water%20diving | Potable water diving is diving inside a tank that is used for potable water. This is usually done for inspection and cleaning tasks. A person who is trained to do this work may be described as a potable water diver. The risks to the diver associated with potable water diving are related to the access, confined spaces and outlets for the water. The risk of contamination of the water is managed by isolating the diver in a clean dry-suit and helmet or full-face mask which are decontaminated before the dive.
Scope
Divers can inspect water storage tanks, towers and clearwells without draining them or taking them out of service. The work is classified as commercial diving, and diver qualifications, equipment and dive team composition will generally be regulated. Using a specially equipped pump or airlift system, the diver can remove loose sediment without damaging painted surfaces. This allows the chlorine in the system to function more effectively. Divers are an effective means to clean and inspect potable water storage tanks because all of the maintenance can be done while the tank remains in service and full of water, and most of the interior surfaces of the tank can be easily accessed, though it may be necessary to close all inlet and outlet valves during the operation as they may present an unacceptable pressure-difference hazard.
Hazards
Diving in a confined space presents specific hazards related to the possibility of an unbreathable atmosphere above the water surface, and tight access openings. Large tanks and water towers present hazards of access by ladder and working at height. The recovery of an unconscious diver can be complicated by inaccessibility and special extrication equipment will be needed on site to deal with this possibility. Diving teams may require confined space training and working at heights certification and must follow the appropriate standards or code of practice for this work.
The diver should wear a diving harness, connected to a safety rope, so that in case of an emergency the dive tender can pull the diver up. Diving contractors always need to check the safety legislation appropriate to their local jurisdiction, and perform a job safety analysis for the specific site.
Differential pressure hazards are also usually present in operational storage tanks, and a lockout-tagout procedure for outlets is normally required to minimise the risk.
Equipment
Diving in potable water uses the same type of equipment that would be used for diving in contaminated water, and for a similar reason. It is necessary to prevent contamination, but in this case it is the diving medium which must not be contaminated, as decontamination takes place before the diver enters the water. The equipment used should be dedicated to this application to minimise the contamination risk. On the other hand, a leak into the suit is of little consequence. Wireless communications do not work well in metal and concrete structures, so hard-wired diver telephone systems are the standard. Umbilicals should have as little place to trap contaminants as reasonably practicable - umbilicals held together by twisting the components like laid rope are preferred to umbilicals held together by tape or a casing. Gas supply and the control point for communications and gas control may have to be at some distance from the access opening, so communications between team members is important.
A hoist system is often necessary as a means for recovering an unconscious diver from the enclosed space of the tank. Simple tripod frames are commonly used to support the hoist system over the access opening. Other hoisting systems may be used, providing that they do not unduly risk contaminating the water. The diver's harness must be suitable for lifting the diver out of the water without further injury, in a posture that allows the diver to be hoisted out through the access opening.
Regional requirements
In the USA commercial diving operations require at least one trained tender, a diver, and a supervisor. In some other countries a standby diver is required at all professional diving operations. Surface-supplied air with two-way voice communications with the diver and a safety rope are preferred and in some jurisdictions may be obligatory. In the US the Occupational Safety & Health Administration regulation 29 CFR Part 1910, Subpart T allows scuba with a rope for basic communications.
References
Underwater diving procedures
Commercial diving
Water supply
Underwater work | Potable water diving | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 853 | [
"Hydrology",
"Water supply",
"Environmental engineering"
] |
11,859,732 | https://en.wikipedia.org/wiki/Norton%20AntiBot | Norton AntiBot, developed by Symantec, monitored applications for damaging behavior. The application was designed to prevent computers from being hijacked and controlled by hackers. According to Symantec, over 6 million computers have been hijacked, and the majority of users are unaware of their computers being hacked.
AntiBot was designed to be used in conjunction with other antivirus software. Unlike traditional antivirus products, AntiBot does not use signatures; signature-based detection entails a delay between when a vendor discovers a virus and when it distributes the signature, and during that delay computers can be infected. Instead, AntiBot attempts to identify a virus through its actions, since viruses are malicious by nature. However, AntiBot was not intended to replace an antivirus product. The program uses technology licensed from Sana Security.
The product was discontinued after AVG acquired Sana Security in January 2009; AVG developed a standalone program similar to AntiBot called AVG Identity Protection, which was also discontinued and integrated into AVG Internet Security 2011. Product updates and technical support were available from Symantec for one year after a customer's last purchase or renewal.
History
Ed Kim, director of product management at Symantec, highlighted the rise of botnets. A botnet is a collection of compromised computers, known as bots, which hackers usually control for malicious purposes. Two main uses of botnets include identity theft and e-mail spam. Kim cited a 29 percent increase in bots from the first half of 2006 to the second half. In all, there were six million active bots by the end of 2006.
On 7 June 2007, Symantec released a beta version of Norton AntiBot. AntiBot was designed to supplement a user's existing antivirus software. Unlike traditional antivirus software, AntiBot does not use signatures to identify malware. Instead, it monitors running applications for damaging or malicious behavior, licensing technology from Sana Security.
AntiBot can also supplement SONAR technology by Symantec, found in Norton AntiVirus 2007, Norton Internet Security 2007, and Norton 360. Similar to AntiBot, SONAR monitors for malicious behavior. However, SONAR does not run continuously in the background; only during a virus scan in those specific products.
AntiBot was made available to the general public on 17 July 2007. On 16 January 2009, AVG announced their plans to acquire Sana Security were finalized. J.R. Smith, CEO of AVG Technologies, highlighted the 40,000 unique malware samples their analysts see each day. He noted the time frame between when a sample is analyzed and a signature is created, emphasizing the need for "instant protection", since hackers are constantly modifying their malicious software to evade signature detection. Often, there are several strains, or variations, of one virus, each with a different classification and signature.
Symantec confirmed ceasing sales and distribution of Norton AntiBot in early 2009. Product help and updates would still be available for one year following a customer's last purchase or renewal.
Reception
PC Magazine noted AntiBot's above average ability to identify malicious programs based on behavior and the fact it did not mistakenly mark a legitimate program as malicious during testing. However, on some infected systems AntiBot failed to install or caused blue screens because it failed to completely remove a virus.
A technical limitation is that AntiBot cannot detect inactive malware since there is no behavior for the software to monitor.
References
Discontinued software
Computer security software
Gen Digital software
Windows-only proprietary software
AntiBot | Norton AntiBot | [
"Engineering"
] | 710 | [
"Cybersecurity engineering",
"Computer security software"
] |
11,859,948 | https://en.wikipedia.org/wiki/Halbert%20L.%20Dunn%20Award | The Halbert L. Dunn Award is the most prestigious award presented by the National Association for Public Health Statistics and Information Systems (NAPHSIS). The award has been presented since 1981 providing national recognition of outstanding and lasting contributions to the field of vital and health statistics at the national, state, or local level.
The award was established in honor of the late Halbert L. Dunn, M.D., Director of the National Office of Vital Statistics from 1936 to 1960. Dr. Dunn was highly instrumental in encouraging the states to establish state vital statistics associations and played a major role in developing NAPHSIS. The award is presented at the Hal Dunn Awards Luncheon during the association’s annual meeting.
The winners of the Halbert L. Dunn Award have been:
Source: NAPHSIS
1981 Deane Huxtable
1982 Loren Chancellor
1983 Vito Logrillo
1984 Carl Erhardt
1985 Irvin Franzen
1986 W. D. "Don" Carroll
1987 Margaret Shackelford
1988 John Brockert, State Registrar, Utah
1989 Margaret Watts
1990 John Patterson
1991 Patricia Potrzebowski, State Registrar, Pennsylvania
1992 Rose Trasatti, National Association for Public Health Statistics and Information Systems (NAPHSIS)
1993 Garland Land, State Registrar, Missouri
1994 George Van Amburg
1995 Jack Smith
1996 no award
1997 Ray Nashold
1998 Iwao Moriyama
1999 no award
2000 George Gay
2001 Dorothy Harshbarger, State Registrar, Alabama
2002 Lorne Phillips, State Registrar, Kansas
2003 Mary Anne Freedman, Director of the Division of Vital Statistics, NCHS
2004 no award
2005 Joe Carney
2006 Dan Friedman
2007 Harry Rosenberg, National Center for Health Statistics
2008 Alvin T. Onaka, Registrar, Hawaii
2009 Marshall Evans, National Center for Health Statistics
2010 Steven Schwartz, Registrar, New York City
2011 Charles Rothwell, Director, National Center for Health Statistics
2012 no award
2013 Stephanie Ventura, Director of Reproductive Statistics Branch, National Center for Health Statistics
2014 Bruce Cohen, Director of Research, MA Department of Health
2015 Isabelle Horon, State Registrar, Maryland
2016 Rose Trasatti Heim, NAPHSIS
2017 Jennifer Woodward, State Registrar, Oregon
2018 Glenn Copeland, State Registrar, Michigan
2019 Delton Atkinson, National Center for Health Statistics
2020 no award
2021 no award
2022 Jeff Duncan, State Registrar, Michigan
See also
List of mathematics awards
List of medicine awards
References
Civil registration and vital statistics
Medicine awards
Statistical awards
Awards established in 1981 | Halbert L. Dunn Award | [
"Technology"
] | 489 | [
"Science and technology awards",
"Medicine awards"
] |
11,861,063 | https://en.wikipedia.org/wiki/Global%20serializability | In concurrency control of databases, transaction processing (transaction management), and other transactional distributed applications, global serializability (or modular serializability) is a property of a global schedule of transactions. A global schedule is the unified schedule of all the individual database (and other transactional object) schedules in a multidatabase environment (e.g., federated database). Complying with global serializability means that the global schedule is serializable, has the serializability property, while each component database (module) has a serializable schedule as well. In other words, serializability of each component schedule alone does not guarantee serializability of the global schedule; assuming that a collection of serializable components provides overall system serializability is usually incorrect. A need for correctness across databases in multidatabase systems makes global serializability a major goal for global concurrency control (or modular concurrency control). With the proliferation of the Internet, cloud computing, grid computing, and small, portable, powerful computing devices (e.g., smartphones), as well as the increase in systems management sophistication, the need for atomic distributed transactions, and thus effective global serializability techniques to ensure correctness in and among distributed transactional applications, seems to increase.
In a federated database system or any other more loosely defined multidatabase system, which are typically distributed in a communication network, transactions span multiple (and possibly distributed) databases. Enforcing global serializability in such a system, where different databases may use different types of concurrency control, is problematic. Even if every local schedule of a single database is serializable, the global schedule of a whole system is not necessarily serializable. The massive communication exchanges of conflict information needed between databases to reach conflict serializability globally would lead to unacceptable performance, primarily due to computer and communication latency. Achieving global serializability effectively over different types of concurrency control has been an open problem for many years.
The global serializability problem
Problem statement
The difficulties described above translate into the following problem:
Find an efficient (high-performance and fault tolerant) method to enforce Global serializability (global conflict serializability) in a heterogeneous distributed environment of multiple autonomous database systems. The database systems may employ different concurrency control methods. No limitation should be imposed on the operations of either local transactions (confined to a single database system) or global transactions (span two or more database systems).
Quotations
Lack of an appropriate solution for the global serializability problem has driven researchers to look for alternatives to serializability as a correctness criterion in a multidatabase environment (e.g., see Relaxing global serializability below), and the problem has been characterized as difficult and open. The following quotation, with similar quotations in numerous other articles, demonstrates the mindset about it by the end of the year 1991:
"Without knowledge about local as well as global transactions, it is highly unlikely that efficient global concurrency control can be provided... Additional complications occur when different component DBMSs [Database Management Systems] and the FDBMSs [Federated Database Management Systems] support different concurrency mechanisms... It is unlikely that a theoretically elegant solution that provides conflict serializability without sacrificing performance (i.e., concurrency and/or response time) and availability exists."
Proposed solutions
Several solutions, some partial, have been proposed for the global serializability problem. Among them:
Global conflict graph (serializability graph, precedence graph) checking; see the sketch after this list
Distributed Two phase locking (Distributed 2PL)
Distributed Timestamp ordering
Tickets (local logical timestamps which define local total orders, and are propagated to determine global partial order of transactions)
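The first technique above can be illustrated concretely. Below is a minimal Python sketch (an illustration, not a production protocol; the transaction names and conflict pairs are invented) that merges the conflict orderings reported by the component databases into one global precedence graph and declares the global schedule conflict-serializable exactly when that graph is acyclic:

from collections import defaultdict

def is_globally_serializable(conflicts):
    """conflicts: (t1, t2) pairs, one per conflict reported by any component
    database, meaning t1 accessed the shared item before t2. The global
    schedule is conflict-serializable iff the merged graph is acyclic."""
    graph = defaultdict(set)
    for t1, t2 in conflicts:
        graph[t1].add(t2)
    WHITE, GRAY, BLACK = 0, 1, 2                 # DFS colors for cycle search
    color = defaultdict(int)

    def has_cycle(node):
        color[node] = GRAY
        for nxt in graph[node]:
            if color[nxt] == GRAY or (color[nxt] == WHITE and has_cycle(nxt)):
                return True
        color[node] = BLACK
        return False

    return not any(color[n] == WHITE and has_cycle(n) for n in list(graph))

print(is_globally_serializable([("T1", "T2")]))                # True
print(is_globally_serializable([("T1", "T2"), ("T2", "T1")]))  # False: global cycle

The second call shows the classic failure mode: each database's local ordering is serializable on its own, but the two local orders disagree, producing a global cycle.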
Relaxing global serializability
Some techniques have been developed for relaxed global serializability (i.e., they do not guarantee global serializability; see also Relaxing serializability). Among them (with several publications each):
Quasi serializability
Two-level serializability
Another common reason nowadays for global serializability relaxation is the requirement of availability of internet products and services. This requirement is typically answered by large-scale data replication. The straightforward solution for synchronizing replicas' updates of the same database object is to include all these updates in a single atomic distributed transaction. However, with many replicas such a transaction is very large, and may span several computers and networks, some of which are likely to be unavailable. Thus such a transaction is likely to end in abort and miss its purpose.
Consequently, optimistic replication (lazy replication) is often utilized (e.g., in many products and services by Google, Amazon, Yahoo, and the like), while global serializability is relaxed and compromised for eventual consistency. In this case relaxation is done only for applications that are not expected to be harmed by it.
Classes of schedules defined by relaxed global serializability properties either contain the global serializability class, or are incomparable with it. What differentiates techniques for relaxed global conflict serializability (RGCSR) properties from those of relaxed conflict serializability (RCSR) properties that are not RGCSR is typically the different way global cycles (span two or more databases) in the global conflict graph are handled. No distinction between global and local cycles exists for RCSR properties that are not RGCSR. RCSR contains RGCSR. Typically RGCSR techniques eliminate local cycles, i.e., provide local serializability (which can be achieved effectively by regular, known concurrency control methods); however, obviously they do not eliminate all global cycles (which would achieve global serializability).
References
Data management
Databases
Transaction processing
Concurrency control | Global serializability | [
"Technology"
] | 1,156 | [
"Data management",
"Data"
] |
11,861,355 | https://en.wikipedia.org/wiki/Phosphogypsum | Phosphogypsum (PG) is the calcium sulfate hydrate formed as a by-product of the production of fertilizer, particularly phosphoric acid, from phosphate rock. It is mainly composed of gypsum (CaSO4·2H2O). Although gypsum is a widely used material in the construction industry, phosphogypsum is usually not used, but is stored indefinitely because of its weak radioactivity caused by the presence of naturally occurring uranium (U) and thorium (Th), and their daughter isotopes radium (Ra), radon (Rn) and polonium (Po). On the other hand, it includes several valuable components—calcium sulphates and elements such as silicon, iron, titanium, magnesium, aluminum, and manganese. However, the long-term storage of phosphogypsum is controversial. About five tons of phosphogypsum are generated per ton of phosphoric acid production. Annually, the estimated generation of phosphogypsum worldwide is 100 to 280 million metric tons.
Production and properties
Phosphogypsum is a by-product from the production of phosphoric acid by treating phosphate ore (apatite) with sulfuric acid according to the following reaction:
Ca5(PO4)3X + 5 H2SO4 + 10 H2O → 3 H3PO4 + 5 (CaSO4 · 2 H2O) + HX
where X may include OH, F, Cl, or Br
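The roughly five-ton figure can be sanity-checked against this stoichiometry. A back-of-the-envelope sketch in Python (idealized: real plants report higher ratios because acid output is customarily quoted on a P2O5 or dilute basis and the rock carries impurities):

M_gypsum = 172.17        # g/mol, CaSO4·2H2O
M_H3PO4 = 97.99          # g/mol
M_P2O5 = 141.94          # g/mol; 3 H3PO4 contain the phosphorus of 1.5 P2O5
print(5 * M_gypsum / (3 * M_H3PO4))      # ≈ 2.9 t PG per t of pure H3PO4
print(5 * M_gypsum / (1.5 * M_P2O5))     # ≈ 4.0 t PG per t of P2O5, near the quoted ~5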
It is radioactive due to the presence of naturally occurring uranium (5–10 ppm) and thorium, and their daughter nuclides radium, radon, polonium, etc. Marine-deposited phosphate typically has a higher level of radioactivity than igneous phosphate deposits, because uranium is present in seawater at about 3 ppb (roughly 85 ppb of total dissolved solids). Uranium is concentrated during the formation of evaporite deposits as dissolved solids precipitate in order of solubility, with easily dissolved materials such as sodium chloride remaining in solution longer than less soluble materials like uranium or sulfates. Other components of phosphogypsum include silica (5–10%), fluoride (F, ~1%), phosphorus (P, ~0.5%), iron (Fe, ~0.1%), aluminum (Al, ~0.1%), barium (Ba, 50 ppm), lead (Pb, ~5 ppm), chromium (Cr, ~3 ppm), selenium (Se, ~1 ppm), and cadmium (Cd, ~0.3 ppm). About 90% of the Po and Ra from the raw ore is retained in the phosphogypsum. Thus it can be considered technologically enhanced naturally occurring radioactive material (TENORM).
Use
Various applications have been proposed for using phosphogypsum, including using it as material for:
Artificial reefs and oyster beds
Cover for landfills
Road pavement
Roof tiles
Soil conditioner
According to Taylor (2009), "up to 15% of world PG production is used to make building materials, as a soil amendment and as a set controller in the manufacture of Portland cement". The rest remains in stack.
In the United States
The United States Environmental Protection Agency (EPA) banned most applications of phosphogypsum having a 226Ra concentration of greater than 10 picocuries/gram (0.4 Bq/g) in 1990. As a result, phosphogypsum which exceeds this limit is stored in large stacks, since extracting such low concentrations of radium is either not possible or not economical with current technology, whether for use of the gypsum or recovery of the radium. Given the traditional definition of the curie via the specific activity of 226Ra (about 1 Ci/g), this limit is equivalent to about 10 micrograms of radium per metric ton, or a concentration of 10 parts per trillion. (See below.)
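A worked check of that conversion (assuming, as above, the historical definition of the curie as the specific activity of 226Ra, about 1 Ci/g):

limit_pCi_per_g = 10                        # EPA reuse limit
Ra226_Ci_per_g = 1.0                        # assumed: historical curie definition
g_Ra_per_g = limit_pCi_per_g * 1e-12 / Ra226_Ci_per_g
print(g_Ra_per_g)                           # 1e-11 g/g, i.e. 10 parts per trillion
print(g_Ra_per_g * 1e6, "g Ra per metric ton")   # 1e-05 g = 10 micrograms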
EPA approved the use of phosphogypsum for road construction during the Trump Administration in 2020, saying that the approval came at the request of The Fertilizer Institute, which advocates for the fertilizer industry. Environmentalists opposed the decision, saying that using the radioactive material in this way can pose health risks. In 2021, the EPA withdrew the rule authorizing the use of phosphogypsum in road construction.
The state of Florida has approximately 80% of the world's phosphogypsum production capacity. In May 2023, the Florida legislature passed a bill requiring the Florida Department of Transportation to study the use of phosphogypsum in road construction, including demonstration projects, though this would require federal approval. The law, which requires the department to complete a study and make a recommendation by April 1, 2024, was signed into law by Governor Ron DeSantis on June 29, 2023.
In China
China's phosphate fertilizer production exceeded that of the US in 2005, and with it came the problem of excess phosphogypsum. By 2018, inappropriate storage has become a major problem in the Yangtze River watershed, with phosphorus accounting for 56% of all breaches of water quality standards. Phosphorus, which still remains in phosphogypsum, can lead to eutrophication of bodies of water and hence algal blooms or even anoxic events ("dead zones") in the lower layers of a body of water. The total amount of phosphogypsum in storage by 2020 exceeds 600 Mt, with 75 Mt produced each year.
The construction industry is the number one user of phosphogypsum in 2020, with 10.5 Mt used as concrete set retarder and 3.5 Mt used in drywall. It is also used as a chemical feedstock for producing sulfates, and as a soil conditioner similar to regular gypsum. The total consumption in 2020 was 31 Mt, much lower than the rate of accumulation. There has been a significant push to expand the use of phosphogypsum on the national level since 2016, being part of two consecutive five-year plans.
Phosphogypsum may require pre-processing to remove contaminants before use. Phosphorus (P) significantly retards curing and reduces the strength of the material, an important concern in construction. Fluorine (F) may accumulate in crops. Although Chinese phosphogypsum generally contains lower levels of toxic heavy metals and radioactive elements, some sources nevertheless exceed acceptable radioactivity limits for building material, or produce crops with unacceptable amounts of arsenic (As), lead (Pb), cadmium (Cd), or mercury (Hg). Barriers to further use include the cost of heavy metal removal and considerable variation among sources of phosphogypsum.
Pollution and cleanup
Phosphogypsum may pollute the environment through its phosphorus content (causing eutrophication), its toxic heavy metal content, and its radioactivity. PG releases radon, which can accumulate indoors if PG is used as a construction material. Open-air stores also release radon at a level potentially hazardous to workers. Radon is a noble gas that is heavier than air and thus tends to accumulate in poorly ventilated underground spaces like mines or cellars. Naturally occurring radon is considered the second most common cause of lung cancer after smoking. More substantial, however, is the leaching of the contents of phosphogypsum into the water table and consequently the soil, exacerbated by the fact that PG is often transported as a slurry. Accumulation of water inside gypstacks can lead to weakening of the stack structure, a cause of several alarms in the United States.
The main approach to reducing PG pollution is to act before it leaches into the environment. This can mean recycling purified materials from PG in a variety of applications (see above) or converting it into a more stable form for storage. Cement paste backfill converts hazardous mining waste, such as PG, into a cement paste, and then uses the paste to fill in voids created by mining the rocks.
Bioremediation may be used to clean up already contaminated water and soil. Microbes can remove heavy metals, radioactive material, and any organic pollutants within, and reduce the sulfate material. With suitable soil amendments and additives, PG can also support the growth of hardy plants, hopefully preventing further erosion.
Gyp stacks
Because phosphogypsum reuse is often uneconomical due to impurities, mining companies commonly dump the waste into man-made hills called "phosphogypsum stacks" or into waste ponds near the mine. Waste ponds are open-air reservoirs that contain a variety of different types of industrial and agricultural waste, including at least 70 phosphogypsum stacks (from phosphate mines used for fertilizer production). A leaking phosphogypsum waste pond in Florida that nearly collapsed in 2021, averted only by allowing waste to flow into Tampa Bay, highlights the dangers and near-disasters associated with wastewater ponds throughout the country.
Central Florida has a large quantity of phosphate deposits, particularly in the Bone Valley region. The marine-deposited phosphate ore from central Florida is weakly radioactive, and as such, the phosphogypsum by-product (in which the radionuclides are somewhat concentrated) is too radioactive to be used for most applications. As a result, there are about a billion tons of phosphogypsum stacked in 25 stacks in Florida (22 are in central Florida) and about 30 million additional tons are generated each year.
See also
Red mud - highly alkaline waste product from aluminum processing
Mine tailings - general issue of waste products left after mining
Acid mine drainage - highly acidic waters produced from interactions between water oxygen and sulfur compounds deposited under reducing conditions
References
Further reading
Radioactive waste
Sulfates | Phosphogypsum | [
"Chemistry",
"Technology"
] | 2,057 | [
"Sulfates",
"Salts",
"Environmental impact of nuclear power",
"Hazardous waste",
"Radioactivity",
"Radioactive waste"
] |
11,862,677 | https://en.wikipedia.org/wiki/Sandhawk | The Sandhawk was a sounding rocket developed by Sandia National Laboratories in 1966. This single-stage, sub-orbital rocket had a mass of 700 kg (1,540 lb), a takeoff thrust of 80 kN (18,000 lbf), and could reach heights of around 200 km. Sandia launched eight of these rockets between 1966 and 1974 as part of experiments conducted for the United States Atomic Energy Commission. About 25% of the launches failed.
The Sandhawk was also used as the second stage of other sounding rockets, such as Terrier-Sandhawk, launched 30 times, and as the first stage of Dualhawk, with TE-416 Tomahawk as the second stage.
References
Sounding rockets of the United States
Sandia National Laboratories | Sandhawk | [
"Astronomy"
] | 155 | [
"Rocketry stubs",
"Astronomy stubs"
] |
11,862,679 | https://en.wikipedia.org/wiki/Signal%20subspace | In signal processing, signal subspace methods are empirical linear methods for dimensionality reduction and noise reduction. These approaches have attracted significant interest and investigation in recent years in the context of speech enhancement, speech modeling, and speech classification research. The signal subspace is also used in radio direction finding using the MUSIC algorithm.
Essentially the methods represent the application of a principal components analysis (PCA) approach to ensembles of observed time-series obtained by sampling, for example sampling an audio signal. Such samples can be viewed as vectors in a high-dimensional vector space over the real numbers. PCA is used to identify a set of orthogonal basis vectors (basis signals) which capture as much as possible of the energy in the ensemble of observed samples. The vector space spanned by the basis vectors identified by the analysis is then the signal subspace. The underlying assumption is that information in speech signals is almost completely contained in a small linear subspace of the overall space of possible sample vectors, whereas additive noise is typically distributed through the larger space isotropically (for example when it is white noise).
By projecting a sample on a signal subspace, that is, keeping only the component of the sample that is in the signal subspace defined by linear combinations of the first few most energized basis vectors, and throwing away the rest of the sample, which is in the remainder of the space orthogonal to this subspace, a certain amount of noise filtering is then obtained.
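A minimal sketch of this projection in NumPy (illustrative; the rank k and the test signal are assumptions of the example, not part of any standard method): the basis signals are obtained from the SVD of the centered sample matrix, and denoising keeps only the component of each sample inside the span of the first k basis vectors:

import numpy as np

def subspace_denoise(X, k):
    """Project each row of X onto the span of the k most energetic
    (empirical) basis signals, discarding the orthogonal remainder."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Rows of Vt are orthonormal basis vectors ordered by captured energy.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:k]                              # the signal subspace
    return Xc @ basis.T @ basis + mean          # hard projection

# Illustrative test: 100 noisy copies of a sinusoid, assumed low-rank signal.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * 5 * t)
X = clean + 0.5 * rng.standard_normal((100, t.size))
X_hat = subspace_denoise(X, k=2)
print(np.linalg.norm(X - clean), ">", np.linalg.norm(X_hat - clean))

Because the projection is a hard cut-off, every component outside the k-dimensional subspace is discarded entirely, in contrast to the smooth grading of a Wiener filter described below.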
Signal subspace noise-reduction can be compared to Wiener filter methods. There are two main differences:
The basis signals used in Wiener filtering are usually harmonic sine waves, into which a signal can be decomposed by Fourier transform. In contrast, the basis signals used to construct the signal subspace are identified empirically, and may for example be chirps, or particular characteristic shapes of transients after particular triggering events, rather than pure sinusoids.
The Wiener filter grades smoothly between linear components that are dominated by signal, and linear components that are dominated by noise. The noise components are filtered out, but not quite completely; the signal components are retained, but not quite completely; and there is a transition zone which is partly accepted. In contrast, the signal subspace approach represents a sharp cut-off: an orthogonal component either lies within the signal subspace, in which case it is 100% accepted, or orthogonal to it, in which case it is 100% rejected. This reduction in dimensionality, abstracting the signal into a much shorter vector, can be a particularly desired feature of the method.
In the simplest case signal subspace methods assume white noise, but extensions of the approach to colored noise removal and the evaluation of the subspace-based speech enhancement for robust speech recognition have also been reported.
References
Signal processing
Noise reduction | Signal subspace | [
"Technology",
"Engineering"
] | 566 | [
"Telecommunications engineering",
"Computer engineering",
"Signal processing"
] |
11,863,361 | https://en.wikipedia.org/wiki/History%20of%20molecular%20evolution | The history of molecular evolution starts in the early 20th century with "comparative biochemistry", but the field of molecular evolution came into its own in the 1960s and 1970s, following the rise of molecular biology. The advent of protein sequencing allowed molecular biologists to create phylogenies based on sequence comparison, and to use the differences between homologous sequences as a molecular clock to estimate the time since the last common ancestor. In the late 1960s, the neutral theory of molecular evolution provided a theoretical basis for the molecular clock, though both the clock and the neutral theory were controversial, since most evolutionary biologists held strongly to panselectionism, with natural selection as the only important cause of evolutionary change. After the 1970s, nucleic acid sequencing allowed molecular evolution to reach beyond proteins to highly conserved ribosomal RNA sequences, the foundation of a reconceptualization of the early history of life.
Early history
Before the rise of molecular biology in the 1950s and 1960s, a small number of biologists had explored the possibilities of using biochemical differences between species to study evolution.
Alfred Sturtevant predicted the existence of chromosomal inversions in 1921 and, with Dobzhansky, constructed one of the first molecular phylogenies, based on 17 strains of Drosophila pseudoobscura, from the accumulation of chromosomal inversions observed in the hybridization of polytene chromosomes.
Ernest Baldwin worked extensively on comparative biochemistry beginning in the 1930s, and Marcel Florkin pioneered techniques for constructing phylogenies based on molecular and biochemical characters in the 1940s. However, it was not until the 1950s that biologists developed techniques for producing biochemical data for the quantitative study of molecular evolution.
The first molecular systematics research was based on immunological assays and protein "fingerprinting" methods. Alan Boyden—building on immunological methods of George Nuttall—developed new techniques beginning in 1954, and in the early 1960s Curtis Williams and Morris Goodman used immunological comparisons to study primate phylogeny. Others, such as Linus Pauling and his students, applied newly developed combinations of electrophoresis and paper chromatography to proteins subject to partial digestion by digestive enzymes to create unique two-dimensional patterns, allowing fine-grained comparisons of homologous proteins.
Beginning in the 1950s, a few naturalists also experimented with molecular approaches—notably Ernst Mayr and Charles Sibley. While Mayr quickly soured on paper chromatography, Sibley successfully applied electrophoresis to egg-white proteins to sort out problems in bird taxonomy, soon supplemented that with DNA hybridization techniques—the beginning of a long career built on molecular systematics.
While such early biochemical techniques found grudging acceptance in the biology community, for the most part they did not impact the main theoretical problems of evolution and population genetics. This would change as molecular biology shed more light on the physical and chemical nature of genes.
Genetic load, the classical/balance controversy, and the measurement of heterozygosity
At the time that molecular biology was coming into its own in the 1950s, there was a long-running debate—the classical/balance controversy—over the causes of heterosis, the increase in fitness observed when inbred lines are outcrossed. In 1950, James F. Crow offered two different explanations (later dubbed the classical and balance positions) based on the paradox first articulated by J. B. S. Haldane in 1937: the effect of deleterious mutations on the average fitness of a population depends only on the rate of mutations (not the degree of harm caused by each mutation) because more-harmful mutations are eliminated more quickly by natural selection, while less-harmful mutations remain in the population longer. H. J. Muller dubbed this "genetic load".
Muller, motivated by his concern about the effects of radiation on human populations, argued that heterosis is primarily the result of deleterious homozygous recessive alleles, the effects of which are masked when separate lines are crossed—this was the dominance hypothesis, part of what Dobzhansky labeled the classical position. Thus, ionizing radiation and the resulting mutations produce considerable genetic load even if death or disease does not occur in the exposed generation, and in the absence of mutation natural selection will gradually increase the level of homozygosity. Bruce Wallace, working with J. C. King, used the overdominance hypothesis to develop the balance position, which left a larger place for overdominance (where the heterozygous state of a gene is more fit than the homozygous states). In that case, heterosis is simply the result of the increased expression of heterozygote advantage. If overdominant loci are common, then a high level of heterozygosity would result from natural selection, and mutation-inducing radiation may in fact facilitate an increase in fitness due to overdominance. (This was also the view of Dobzhansky.)
Debate continued through the 1950s, gradually becoming a central focus of population genetics. A 1958 study of Drosophila by Wallace suggested that radiation-induced mutations increased the viability of previously homozygous flies, providing evidence for heterozygote advantage and the balance position; Wallace estimated that 50% of loci in natural Drosophila populations were heterozygous. Motoo Kimura's subsequent mathematical analyses reinforced what Crow had suggested in 1950: that even if overdominant loci are rare, they could be responsible for a disproportionate amount of genetic variability. Accordingly, Kimura and his mentor Crow came down on the side of the classical position. Further collaboration between Crow and Kimura led to the infinite alleles model, which could be used to calculate the number of different alleles expected in a population, based on population size, mutation rate, and whether the mutant alleles were neutral, overdominant, or deleterious. Thus, the infinite alleles model offered a potential way to decide between the classical and balance positions, if accurate values for the level of heterozygosity could be found.
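For the neutral case, the infinite alleles model yields a closed-form equilibrium prediction: the standard result (Crow and Kimura, 1964) gives expected heterozygosity H = θ/(1 + θ), with θ = 4Nμ for a diploid population of effective size N and neutral mutation rate μ. A minimal sketch (the numeric values are illustrative, not from the historical papers):

def expected_heterozygosity(N, mu):
    """Equilibrium heterozygosity under the neutral infinite alleles model."""
    theta = 4 * N * mu                 # population mutation parameter
    return theta / (1 + theta)

# Illustrative values: effective size 1e6, neutral mutation rate 1e-7 per locus.
print(expected_heterozygosity(N=1e6, mu=1e-7))   # ≈ 0.29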
By the mid-1960s, the techniques of biochemistry and molecular biology—in particular protein electrophoresis—provided a way to measure the level of heterozygosity in natural populations: a possible means to resolve the classical/balance controversy. In 1963, Jack L. Hubby published an electrophoresis study of protein variation in Drosophila; soon after, Hubby began collaborating with Richard Lewontin to apply Hubby's method to the classical/balance controversy by measuring the proportion of heterozygous loci in natural populations. Their two landmark papers, published in 1966, established a significant level of heterozygosity for Drosophila (12%, on average). However, these findings proved difficult to interpret. Most population geneticists (including Hubby and Lewontin) rejected the possibility of widespread neutral mutations; explanations that did not involve selection were anathema to mainstream evolutionary biology. Hubby and Lewontin also ruled out heterozygote advantage as the main cause because of the segregation load it would entail, though critics argued that the findings actually fit well with overdominance hypothesis.
Protein sequences and the molecular clock
While evolutionary biologists were tentatively branching out into molecular biology, molecular biologists were rapidly turning their attention toward evolution.
After developing the fundamentals of protein sequencing with insulin between 1951 and 1955, Frederick Sanger and his colleagues had published a limited interspecies comparison of the insulin sequence in 1956. Francis Crick, Charles Sibley and others recognized the potential for using biological sequences to construct phylogenies, though few such sequences were yet available. By the early 1960s, techniques for protein sequencing had advanced to the point that direct comparison of homologous amino acid sequences was feasible. In 1961, Emanuel Margoliash and his collaborators completed the sequence for horse cytochrome c (a longer and more widely distributed protein than insulin), followed in short order by a number of other species.
In 1962, Linus Pauling and Emile Zuckerkandl proposed using the number of differences between homologous protein sequences to estimate the time since divergence, an idea Zuckerkandl had conceived around 1960 or 1961. This began with Pauling's long-time research focus, hemoglobin, which was being sequenced by Walter Schroeder; the sequences not only supported the accepted vertebrate phylogeny, but also the hypothesis (first proposed in 1957) that the different globin chains within a single organism could also be traced to a common ancestral protein. Between 1962 and 1965, Pauling and Zuckerkandl refined and elaborated this idea, which they dubbed the molecular clock, and Emil L. Smith and Emanuel Margoliash expanded the analysis to cytochrome c. Early molecular clock calculations agreed fairly well with established divergence times based on paleontological evidence. However, the essential idea of the molecular clock—that individual proteins evolve at a regular rate independent of a species' morphological evolution—was extremely provocative (as Pauling and Zuckerkandl intended it to be).
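The clock logic can be stated as a simple formula: differences accumulate along both lineages after a split, so the divergence time is the per-site difference divided by twice the substitution rate. A minimal sketch (illustrative numbers only, and deliberately naive: it ignores multiple substitutions at the same site, which later corrections address):

def divergence_time(num_differences, sites, rate):
    """Naive molecular-clock estimate: time since the common ancestor is
    (differences per site) / (2 * substitutions per site per year)."""
    return (num_differences / sites) / (2 * rate)

# Illustrative: 12 differences over 100 sites at 1e-9 subs/site/yr.
print(divergence_time(12, 100, 1e-9))       # 6.0e7 years since divergence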
The "molecular wars"
From the early 1960s, molecular biology was increasingly seen as a threat to the traditional core of evolutionary biology. Established evolutionary biologists—particularly Ernst Mayr, Theodosius Dobzhansky and G. G. Simpson, three of the founders of the modern evolutionary synthesis of the 1930s and 1940s—were extremely skeptical of molecular approaches, especially when it came to the connection (or lack thereof) to natural selection. Molecular evolution in general—and the molecular clock in particular—offered little basis for exploring evolutionary causation. According to the molecular clock hypothesis, proteins evolved essentially independently of the environmentally determined forces of selection; this was sharply at odds with the panselectionism prevalent at the time. Moreover, Pauling, Zuckerkandl, and other molecular biologists were increasingly bold in asserting the significance of "informational macromolecules" (DNA, RNA and proteins) for all biological processes, including evolution. The struggle between evolutionary biologists and molecular biologists—with each group holding up their discipline as the center of biology as a whole—was later dubbed the "molecular wars" by Edward O. Wilson, who experienced firsthand the domination of his biology department by young molecular biologists in the late 1950s and the 1960s.
In 1961, Mayr began arguing for a clear distinction between functional biology (which considered proximate causes and asked "how" questions) and evolutionary biology (which considered ultimate causes and asked "why" questions). He argued that both disciplines and individual scientists could be classified on either the functional or evolutionary side, and that the two approaches to biology were complementary. Mayr, Dobzhansky, Simpson and others used this distinction to argue for the continued relevance of organismal biology, which was rapidly losing ground to molecular biology and related disciplines in the competition for funding and university support. It was in that context that Dobzhansky first published his famous statement, "nothing in biology makes sense except in the light of evolution", in a 1964 paper affirming the importance of organismal biology in the face of the molecular threat; Dobzhansky characterized the molecular disciplines as "Cartesian" (reductionist) and organismal disciplines as "Darwinian".
Mayr and Simpson attended many of the early conferences where molecular evolution was discussed, critiquing what they saw as the overly simplistic approaches of the molecular clock. The molecular clock, based on uniform rates of genetic change driven by random mutations and drift, seemed incompatible with the varying rates of evolution and environmentally-driven adaptive processes (such as adaptive radiation) that were among the key developments of the evolutionary synthesis. At the 1962 Wenner-Gren conference, the 1964 Colloquium on the Evolution of Blood Proteins in Bruges, Belgium, and the 1964 Conference on Evolving Genes and Proteins at Rutgers University, they engaged directly with the molecular biologists and biochemists, hoping to maintain the central place of Darwinian explanations in evolution as its study spread to new fields.
Gene-centered view of evolution
Though not directly related to molecular evolution, the mid-1960s also saw the rise of the gene-centered view of evolution, spurred by George C. Williams's Adaptation and Natural Selection (1966). Debate over units of selection, particularly the controversy over group selection, led to increased focus on individual genes (rather than whole organisms or populations) as the theoretical basis for evolution. However, the increased focus on genes did not mean a focus on molecular evolution; in fact, the adaptationism promoted by Williams and other evolutionary theorists further marginalized the apparently non-adaptive changes studied by molecular evolutionists.
The neutral theory of molecular evolution
The intellectual threat of molecular evolution became more explicit in 1968, when Motoo Kimura introduced the neutral theory of molecular evolution. Based on the available molecular clock studies (of hemoglobin from a wide variety of mammals, cytochrome c from mammals and birds, and triosephosphate dehydrogenase from rabbits and cows), Kimura (assisted by Tomoko Ohta) calculated an average rate of DNA substitution of one base pair change per 300 base pairs (encoding 100 amino acids) per 28 million years. For mammal genomes, this indicated a substitution rate of one every 1.8 years, which would produce an unsustainably high substitution load unless the preponderance of substitutions was selectively neutral. Kimura argued that neutral mutations occur very frequently, a conclusion compatible with the results of the electrophoretic studies of protein heterozygosity. Kimura also applied his earlier mathematical work on genetic drift to explain how neutral mutations could come to fixation, even in the absence of natural selection; he soon convinced James F. Crow of the potential power of neutral alleles and genetic drift as well.
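Kimura's genome-wide figure follows from simple arithmetic. A back-of-the-envelope check (the genome size of 4e9 base pairs is an assumption of this sketch, close to the haploid value used at the time, not a quantity stated above):

rate_per_bp_per_year = 1 / (300 * 28e6)     # one change per 300 bp per 28 Myr
genome_bp = 4e9                             # assumed mammalian genome size
subs_per_year = rate_per_bp_per_year * genome_bp
print(1 / subs_per_year, "years per substitution")   # ≈ 2.1, the order of the 1.8-year figure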
Kimura's theory—described only briefly in a letter to Nature—was followed shortly after with a more substantial analysis by Jack L. King and Thomas H. Jukes—who titled their first paper on the subject "Non-Darwinian Evolution". Though King and Jukes produced much lower estimates of substitution rates and the resulting genetic load in the case of non-neutral changes, they agreed that neutral mutations driven by genetic drift were both real and significant. The fairly constant rates of evolution observed for individual proteins was not easily explained without invoking neutral substitutions (though G. G. Simpson and Emil Smith had tried). Jukes and King also found a strong correlation between the frequency of amino acids and the number of different codons encoding each amino acid. This pointed to substitutions in protein sequences as being largely the product of random genetic drift.
King and Jukes' paper, especially with the provocative title, was seen as a direct challenge to mainstream neo-Darwinism, and it brought molecular evolution and the neutral theory to the center of evolutionary biology. It provided a mechanism for the molecular clock and a theoretical basis for exploring deeper issues of molecular evolution, such as the relationship between rate of evolution and functional importance. The rise of the neutral theory marked synthesis of evolutionary biology and molecular biology—though an incomplete one.
With their work on firmer theoretical footing, in 1971 Emile Zuckerkandl and other molecular evolutionists founded the Journal of Molecular Evolution.
The neutralist-selectionist debate and near-neutrality
The critical responses to the neutral theory that soon appeared marked the beginning of the neutralist-selectionist debate. In short, selectionists viewed natural selection as the primary or only cause of evolution, even at the molecular level, while neutralists held that neutral mutations were widespread and that genetic drift was a crucial factor in the evolution of proteins. Kimura became the most prominent defender of the neutral theory—which would be his main focus for the rest of his career. With Ohta, he refocused his arguments on the rate at which drift could fix new mutations in finite populations, the significance of constant protein evolution rates, and the functional constraints on protein evolution that biochemists and molecular biologists had described. Though Kimura had initially developed the neutral theory partly as an outgrowth of the classical position within the classical/balance controversy (predicting high genetic load as a consequence of non-neutral mutations), he gradually deemphasized his original argument that segregational load would be impossibly high without neutral mutations (which many selectionists, and even fellow neutralists King and Jukes, rejected).
From the 1970s through the early 1980s, both selectionists and neutralists could explain the observed high levels of heterozygosity in natural populations, by assuming different values for unknown parameters. Early in the debate, Kimura's student Tomoko Ohta focused on the interaction between natural selection and genetic drift, which was significant for mutations that were not strictly neutral, but nearly so. In such cases, selection would compete with drift: most slightly deleterious mutations would be eliminated by natural selection or chance; some would move to fixation through drift. The behavior of this type of mutation, described by an equation that combined the mathematics of the neutral theory with classical models, became the basis of Ohta's nearly neutral theory of molecular evolution.
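The interaction Ohta studied can be made concrete with Kimura's diffusion approximation for the fixation probability of a new mutation, a standard result (Kimura, 1962) rather than Ohta's own equation. A minimal sketch (population sizes and selection coefficients are illustrative):

import math

def fixation_probability(N, s):
    """Kimura's diffusion result for a new mutation (initial frequency
    1/(2N)) with selection coefficient s in a diploid population of
    effective size N; s = 0 reduces to the neutral value 1/(2N)."""
    if s == 0:
        return 1 / (2 * N)
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

print(fixation_probability(N=10_000, s=0.0))     # 5e-05: pure drift
print(fixation_probability(N=10_000, s=-1e-5))   # slightly deleterious: reduced, not zero
print(fixation_probability(N=100, s=-1e-5))      # small population: drift dominates, ~1/(2N)

The third call shows the nearly neutral regime: in small populations a slightly deleterious mutation fixes at close to the neutral rate, while in large populations selection suppresses it.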
In 1973, Ohta published a short letter in Nature suggesting that a wide variety of molecular evidence supported the theory that most mutation events at the molecular level are slightly deleterious rather than strictly neutral. Molecular evolutionists were finding that while rates of protein evolution (consistent with the molecular clock) were fairly independent of generation time, rates of noncoding DNA divergence were inversely proportional to generation time. Noting that population size is generally inversely proportional to generation time, Tomoko Ohta proposed that most amino acid substitutions are slightly deleterious while noncoding DNA substitutions are more neutral. In this case, the faster rate of neutral evolution in proteins expected in small populations (due to genetic drift) is offset by longer generation times (and vice versa), but in large populations with short generation times, noncoding DNA evolves faster while protein evolution is retarded by selection (which is more significant than drift for large populations).
Between then and the early 1990s, many studies of molecular evolution used a "shift model" in which the negative effect on the fitness of a population due to deleterious mutations shifts back to an original value when a mutation reaches fixation. In the early 1990s, Ohta developed a "fixed model" that included both beneficial and deleterious mutations, so that no artificial "shift" of overall population fitness was necessary. According to Ohta, however, the nearly neutral theory largely fell out of favor in the late 1980s, because of the mathematically simpler neutral theory for the widespread molecular systematics research that flourished after the advent of rapid DNA sequencing. As more detailed systematics studies started to compare the evolution of genome regions subject to strong selection versus weaker selection in the 1990s, the nearly neutral theory and the interaction between selection and drift have once again become an important focus of research.
Microbial phylogeny
While early work in molecular evolution focused on readily sequenced proteins and relatively recent evolutionary history, by the late 1960s some molecular biologists were pushing further toward the base of the tree of life by studying highly conserved nucleic acid sequences. Carl Woese, a molecular biologist whose earlier work was on the genetic code and its origin, began using small subunit ribosomal RNA to reclassify bacteria by genetic (rather than morphological) similarity. Work proceeded slowly at first, but accelerated as new sequencing methods were developed in the 1970s and 1980s. In 1977, Woese and George Fox announced that some bacteria, such as methanogens, lacked the rRNA units that Woese's phylogenetic studies were based on; they argued that these organisms were actually distinct enough from conventional bacteria and the so-called higher organisms to form their own kingdom, which they called archaebacteria. Though controversial at first (and challenged again in the late 1990s), Woese's work became the basis of the modern three-domain system of Archaea, Bacteria, and Eukarya (replacing the five-kingdom system that had emerged in the 1960s).
Work on microbial phylogeny also brought molecular evolution closer to cell biology and origin of life research. The deep differences between archaea and other forms of life pointed to the importance of RNA in the early history of life. In his work with the genetic code, Woese had suggested RNA-based life had preceded the current forms of DNA-based life, as had several others before him—an idea that Walter Gilbert would later call the "RNA world". In many cases, genomics research in the 1990s produced phylogenies contradicting the rRNA-based results, leading to the recognition of widespread lateral gene transfer across distinct taxa. Combined with the probable endosymbiotic origin of organelle-filled eukarya, this pointed to a far more complex picture of the origin and early history of life, one which might not be describable in the traditional terms of common ancestry.
References
Notes
Dietrich, Michael R. "The Origins of the Neutral Theory of Molecular Evolution." Journal of the History of Biology, Vol. 27, No. 1 (Spring 1994), pp. 21–59
Crow, James F. "Motoo Kimura, 13 November 1924 – 13 November 1994." Biographical Memoirs of Fellows of the Royal Society, Vol. 43 (November 1997), pp. 254–265
Kreitman, Martin. "The neutralist-selectionist debate: The neutral theory is dead. Long live the neutral theory." BioEssays, Vol. 18, No. 8 (1996), pp. 678–684
Ohta, Tomoko. "The neutralist-selectionist debate: The current significance and standing of neutral and nearly neutral theories." BioEssays, Vol. 18, No. 8 (1996), pp. 673–677
Sapp, Jan. Genesis: The Evolution of Biology. New York: Oxford University Press, 2003.
Wilson, Edward O. Naturalist. Warner Books, 1994.
External links
Perspectives on Molecular Evolution - maintained by historian of science Michael R. Dietrich
Molecular evolution
Molecular evolution | History of molecular evolution | [
"Chemistry",
"Biology"
] | 4,577 | [
"History of biology by subdiscipline",
"Evolutionary processes",
"Molecular evolution",
"Molecular biology"
] |
11,863,699 | https://en.wikipedia.org/wiki/Panavision%20cameras | Panavision has been a manufacturer of cameras for the motion picture industry since the 1950s, beginning with anamorphic widescreen lenses. The lightweight Panaflex is credited with revolutionizing filmmaking. Other influential cameras include the Millennium XL and the digital video Genesis.
Panavision Silent Reflex
Panavision Silent Reflex (1967)
Panavision Super PSR (R-200 or Super R-200)
Panaflex
Panaflex (1972)
Panaflex-X (1974)
Panaflex Lightweight (1975) The Panaflex Lightweight is a sync-sound 35 mm motion picture camera, stripped of all components not essential for work with "floating camera" systems such as the Steadicam. Contemporary cameras such as the Panavision Gold II can weigh as much as depending on configuration. The Panaflex Lightweight II (1993) is crystal-controlled in one-frame increments between four and 36 frames per second, and has a fixed focal-plane shutter. 200°, 180°, 172.8° or 144° shutters can be installed by Panavision prior to rental per the customer's order. This camera is still available through Panavision.
Panaflex Gold (1976)
Panaflex Gold II (1987) The Panaflex Gold II is a sync-sound 35 mm motion picture camera. It is capable of crystal sync at 24, 25, or 29.97 frames per second, and the non-sync speed is variable from 4–34 fps (frames per second) according to Panavision; the Gold II can safely run up to 40 fps, crystal controlled, with a special board which can be fitted on request. It has a focal-plane shutter which can be adjusted from 50 to 200° while the camera is running, either by an external control unit or by manually turning a knob. Improvements over the Panavision Gold include a brighter viewfinder. While the movement remains essentially the same as the original Panaflex movement introduced in 1972, the Gold II's dual registration pins are "full-fitting" according to Panavision, implying a more precise grip on the film during exposure and thus greater sharpness. This camera is still available through Panavision.
Panastar
Panastar (1977)
Panastar II (1987) The Panastar II is an MOS 35 mm motion picture camera. It is capable of 4–120 fps both forward and reverse, though reverse running requires a reversing magazine, with camera timing crystal-controlled at one-frame increments. It has a focal-plane shutter which can be adjusted between 45° and 180° while the camera is running, either by using an external remote control or manually turning a knob. Improvements over the original Panastar include a weight reduction of , a more accurate digital shutter angle readout, the inclusion of the Panaglow ground glass illuminator, and the ability to adjust the speed of the camera in single-frame increments without need for an external speed control, rather that being tied to the preset running speeds of the first Panastar. At high speeds, the Panastar II is incredibly loud, often leading those unfamiliar with its operation to question whether it is functioning properly.
Panaflex Platinum
Platinum (1986): The Panaflex Platinum is a sync-sound 35 mm motion picture camera, intended as the replacement for the Gold and Gold II series of cameras. It is capable of 4–36 fps forward and reverse in frame increments, and is crystal-controlled at all speeds. It has a focal-plane shutter which can be adjusted from 50 to 200° while the camera is running, either by an external control unit or by manually turning a knob. While the movement remains essentially the same as the original Panaflex movement introduced in 1972, the Platinum's dual registration pins are "full-fitting" according to Panavision, implying a more precise grip on the film during exposure and thus greater sharpness.
Panaflex Millennium
Millennium (1997) The Panaflex Millennium is a sync-sound 35 mm motion picture camera. Where the Panavision Platinum was mostly an evolution and refinement of the original 1972 Panaflex, the Millennium is a totally new design, having a new twin-sprocket drum incorporated with the movement, major electronics revisions, and a general weight reduction from 24 to . The Millennium is capable of 3–50 fps forward and reverse, though reverse running requires a reversing magazine, and it has a focal-plane shutter, the aperture angle of which can be adjusted electronically while the camera is running, between 11.2° and 180°, allowing for four stops of exposure ramping within a shot with no iris adjustment. All of the focus, iris, and zoom motor controls have been moved to the camera's internal circuitry, removing the need for cumbersome external circuit boxes, and it has a camera integrated into the lens light, allowing the first assistant camera to see witness marks without having to physically look at the lens. It also has a brighter viewfinder than the Platinum, multiple run switches, and footage counters on either side of the camera for easier readings.
Millennium XL (1999)
Millennium XL2 (2004)
Digital Video
Sony HDW-900 CineAlta (2002)
Panavision Genesis (2005)
Millennium Digital XL (2016): The Millennium DXL is an 8K digital cinema camera based on the Red Digital Cinema Weapon platform. It marks the first digital camera in Panavision's portfolio to carry the Millennium name. The Red-designed and -produced 8K sensor is 40.96 mm wide, putting it about halfway between Super 35 (24.92 mm) and 65 mm (48.59 mm) in size.
Weighing in at 10 lbs, this 16-bit, 35.5 Megapixel CMOS sensor camera system operates at a maximum frame rate of 60 fps at 8K Full Frame (8192 x 4320) resolution, and 75 fps at 8K 2.4:1 (8192 x 3456). Along with its ability to capture up to 15 stops of dynamic range, the DXL can record 8K RAW, .r3d (supported in RED SDK) with simultaneous 4K proxy (ProRes or DNx), for up to 1 hour on a single magazine.
It supports Light Iron Color, a proprietary image-mapping color profile that is compatible with all currently popular gamuts and transfer curves.
The Millennium DXL is the first camera to capture 4K anamorphic at 21 megapixels. With the 8K HDR sensor, this camera is optimized for Panavision large format lenses, including the Sphero 65, System 65, Super Panavision 70, Ultra Panavision 70, and the new Primo 70 and Primo Artiste (2017) wirelessly motorized lenses.
Millennium Digital XL2 (2018)
65mm
Super Panavision 70 (1965)
Panavision Super 70
Panavision System 65 (1991)
System 65 is Panavision's most modern range of cameras for the 5/65 format. It comprises two separate camera offerings:
Panaflex 65SPFX (intended for studio use)
Panaflex 65HSSM (suitable for location use but MOS only).
See also
Super Panavision 70
Ultra Panavision 70
References
Movie cameras
Cameras
Television technology | Panavision cameras | [
"Technology"
] | 1,550 | [
"Information and communications technology",
"Television technology"
] |
11,864,519 | https://en.wikipedia.org/wiki/Approximate%20Bayesian%20computation | Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics that can be used to estimate the posterior distributions of model parameters.
In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate.
ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection.
ABC has rapidly gained popularity in recent years, in particular for the analysis of complex problems arising in the biological sciences, e.g. in population genetics, ecology, epidemiology, systems biology, and in radio propagation.
History
The first ABC-related ideas date back to the 1980s. Donald Rubin, when discussing the interpretation of Bayesian statements in 1984, described a hypothetical sampling mechanism that yields a sample from the posterior distribution. This scheme was more of a conceptual thought experiment to demonstrate what type of manipulations are done when inferring the posterior distributions of parameters. The description of the sampling mechanism coincides exactly with that of the ABC-rejection scheme, and this article can be considered to be the first to describe approximate Bayesian computation. However, a two-stage quincunx was constructed by Francis Galton in the late 1800s that can be seen as a physical implementation of an ABC-rejection scheme for a single unknown (parameter) and a single observation. Another prescient point was made by Rubin when he argued that in Bayesian inference, applied statisticians should not settle for analytically tractable models only, but instead consider computational methods that allow them to estimate the posterior distribution of interest. This way, a wider range of models can be considered. These arguments are particularly relevant in the context of ABC.
In 1984, Peter Diggle and Richard Gratton suggested using a systematic simulation scheme to approximate the likelihood function in situations where its analytic form is intractable. Their method was based on defining a grid in the parameter space and using it to approximate the likelihood by running several simulations for each grid point. The approximation was then improved by applying smoothing techniques to the outcomes of the simulations. While the idea of using simulation for hypothesis testing was not new, Diggle and Gratton seemingly introduced the first procedure using simulation to do statistical inference under a circumstance where the likelihood is intractable.
Although Diggle and Gratton's approach had opened a new frontier, their method was not yet exactly identical to what is now known as ABC, as it aimed at approximating the likelihood rather than the posterior distribution. An article of Simon Tavaré and co-authors was first to propose an ABC algorithm for posterior inference. In their seminal work, inference about the genealogy of DNA sequence data was considered, and in particular the problem of deciding the posterior distribution of the time to the most recent common ancestor of the sampled individuals. Such inference is analytically intractable for many demographic models, but the authors presented ways of simulating coalescent trees under the putative models. A sample from the posterior of model parameters was obtained by accepting/rejecting proposals based on comparing the number of segregating sites in the synthetic and real data. This work was followed by an applied study on modeling the variation in human Y chromosome by Jonathan K. Pritchard and co-authors using the ABC method. Finally, the term approximate Bayesian computation was established by Mark Beaumont and co-authors, extending further the ABC methodology and discussing the suitability of the ABC-approach more specifically for problems in population genetics. Since then, ABC has spread to applications outside population genetics, such as systems biology, epidemiology, and phylogeography.
Approximate Bayesian computation can be understood as a kind of Bayesian version of indirect inference.
Several efficient Monte Carlo based approaches have been developed to perform sampling from the ABC posterior distribution for purposes of estimation and prediction problems. A popular choice is the SMC samplers algorithm, adapted to the ABC context in the SMC-ABC method.
Method
Motivation
A common incarnation of Bayes' theorem relates the conditional probability (or density) of a particular parameter value θ given data D to the probability of D given θ by the rule

p(θ|D) = p(D|θ) p(θ) / p(D),

where p(θ|D) denotes the posterior, p(D|θ) the likelihood, p(θ) the prior, and p(D) the evidence (also referred to as the marginal likelihood or the prior predictive probability of the data). Note that the denominator p(D) normalizes the total probability of the posterior density to one; it can be calculated by integrating the numerator over θ, i.e. p(D) = ∫ p(D|θ) p(θ) dθ.
The prior p(θ) represents beliefs or knowledge (such as e.g. physical constraints) about θ before D is available. Since the prior narrows down uncertainty, the posterior estimates have less variance, but might be biased. For convenience the prior is often specified by choosing a particular distribution among a set of well-known and tractable families of distributions, such that both the evaluation of prior probabilities and random generation of values of θ are relatively straightforward. For certain kinds of models, it is more pragmatic to specify the prior p(θ) using a factorization of the joint distribution of all the elements of θ in terms of a sequence of their conditional distributions. If one is only interested in the relative posterior plausibilities of different values of θ, the evidence p(D) can be ignored, as it constitutes a normalising constant, which cancels for any ratio of posterior probabilities. It remains, however, necessary to evaluate the likelihood p(D|θ) and the prior p(θ). For numerous applications, it is computationally expensive, or even completely infeasible, to evaluate the likelihood, which motivates the use of ABC to circumvent this issue.
The ABC rejection algorithm
All ABC-based methods approximate the likelihood function by simulations, the outcomes of which are compared with the observed data. More specifically, with the ABC rejection algorithm — the most basic form of ABC — a set of parameter points is first sampled from the prior distribution. Given a sampled parameter point θ̂, a data set D̂ is then simulated under the statistical model specified by θ̂. If the generated D̂ is too different from the observed data D, the sampled parameter value is discarded. In precise terms, θ̂ is accepted with tolerance ε ≥ 0 if:

ρ(D̂, D) ≤ ε,

where the distance measure ρ(D̂, D) determines the level of discrepancy between D̂ and D based on a given metric (e.g. Euclidean distance). A strictly positive tolerance is usually necessary, since the probability that the simulation outcome coincides exactly with the data (event D̂ = D) is negligible for all but trivial applications of ABC, which would in practice lead to rejection of nearly all sampled parameter points. The outcome of the ABC rejection algorithm is a sample of parameter values approximately distributed according to the desired posterior distribution, and, crucially, obtained without the need to explicitly evaluate the likelihood function.
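In code, the rejection scheme is only a few lines. The following Python sketch is illustrative rather than canonical: the helper names (abc_rejection, prior_sample, simulate, rho) and the toy normal-mean usage are assumptions made here, standing in for whatever a concrete model provides.

import numpy as np

def abc_rejection(observed, prior_sample, simulate, rho, eps, n_draws):
    """Basic ABC rejection: keep prior draws whose simulated data
    fall within tolerance eps of the observed data."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()       # draw a candidate parameter from the prior
        synthetic = simulate(theta)  # simulate a data set under the model
        if rho(synthetic, observed) <= eps:
            accepted.append(theta)   # kept draws approximate the posterior
    return np.array(accepted)

# Toy usage: infer the mean of a unit-variance normal via its sample mean.
rng = np.random.default_rng(1)
data = rng.normal(3.0, 1.0, size=100)
posterior_draws = abc_rejection(
    observed=data.mean(),
    prior_sample=lambda: rng.uniform(-10, 10),
    simulate=lambda th: rng.normal(th, 1.0, size=100).mean(),
    rho=lambda a, b: abs(a - b),
    eps=0.1,
    n_draws=20000,
)

Shrinking eps trades acceptance rate for fidelity to the true posterior, which is exactly the bias versus cost trade-off discussed later in this article.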
Summary statistics
The probability of generating a data set D̂ with a small distance to D typically decreases as the dimensionality of the data increases. This leads to a substantial decrease in the computational efficiency of the above basic ABC rejection algorithm. A common approach to lessen this problem is to replace D with a set of lower-dimensional summary statistics S(D), which are selected to capture the relevant information in D. The acceptance criterion in the ABC rejection algorithm becomes:

ρ(S(D̂), S(D)) ≤ ε.

If the summary statistics are sufficient with respect to the model parameters θ, the efficiency increase obtained in this way does not introduce any error. Indeed, by definition, sufficiency implies that all information in D about θ is captured by S(D).
As elaborated below, it is typically impossible, outside the exponential family of distributions, to identify a finite-dimensional set of sufficient statistics. Nevertheless, informative but possibly insufficient summary statistics are often used in applications where inference is performed with ABC methods.
Example
An illustrative example is a bistable system that can be characterized by a hidden Markov model (HMM) subject to measurement noise. Such models are employed for many biological systems: They have, for example, been used in development, cell signaling, activation/deactivation, logical processing and non-equilibrium thermodynamics. For instance, the behavior of the Sonic hedgehog (Shh) transcription factor in Drosophila melanogaster can be modeled with an HMM. The (biological) dynamical model consists of two states: A and B. If the probability of a transition from one state to the other is defined as θ in both directions, then the probability to remain in the same state at each time step is 1 − θ. The probability to measure the state correctly is γ (and conversely, the probability of an incorrect measurement is 1 − γ).
Due to the conditional dependencies between states at different time points, calculation of the likelihood of time series data is somewhat tedious, which illustrates the motivation to use ABC. A computational issue for basic ABC is the large dimensionality of the data in an application like this. The dimensionality can be reduced using the summary statistic ω, which is the frequency of switches between the two states. The absolute difference |ω(D̂) − ω(D)| is used as a distance measure with tolerance ε = 2. The posterior inference about the parameter θ can be done following the five steps presented below.
Step 1: Assume that the observed data form the state sequence AAAABAABBAAAAAABAAAA, which is generated using θ = 0.25 and γ = 0.8. The associated summary statistic—the number of switches between the states in the experimental data—is ω_E = 6.
Step 2: Assuming nothing is known about θ, a uniform prior in the interval [0, 1] is employed. The parameter γ is assumed to be known and fixed to the data-generating value γ = 0.8, but it could in general also be estimated from the observations. A total of n parameter points θ_i, i = 1, …, n, are drawn from the prior, and the model is simulated for each of the parameter points, which results in n sequences of simulated data. In this example, n = 5; in practice, n would need to be much larger to obtain an appropriate approximation.
Step 3: The summary statistic ω_i is computed for each sequence of simulated data D̂_i.
Step 4: The distance between the observed and simulated transition frequencies, ρ(ω_i, ω_E) = |ω_i − ω_E|, is computed for all parameter points. Parameter points for which the distance is smaller than or equal to ε = 2 are accepted as approximate samples from the posterior.
Step 5: The posterior distribution is approximated with the accepted parameter points. The posterior distribution should have a non-negligible probability for parameter values in a region around the true value of θ in the system if the data are sufficiently informative. In this example, the posterior probability mass is evenly split between the values 0.08 and 0.43.
The posterior probabilities are obtained via ABC with large n by utilizing the summary statistic (with ε = 0 and ε = 2) and the full data sequence (with ε = 0). These are compared with the true posterior, which can be computed exactly and efficiently using the Viterbi algorithm. The summary statistic utilized in this example is not sufficient, as the deviation from the theoretical posterior is significant even under the stringent requirement of ε = 0. A much longer observed data sequence would be needed to obtain a posterior concentrated around θ = 0.25, the true value of θ.
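The five steps can be condensed into a short simulation, shown below as a hedged Python sketch: the function names are invented here, and the numeric settings simply mirror the example above.

import numpy as np

rng = np.random.default_rng(0)

def simulate_chain(theta, gamma, length=20):
    """Simulate the two-state chain: switch state with probability theta,
    then flip each observation with measurement-error probability 1 - gamma."""
    states = np.empty(length, dtype=int)
    states[0] = 0
    for t in range(1, length):
        states[t] = states[t - 1] ^ (rng.random() < theta)  # switch w.p. theta
    return states ^ (rng.random(length) > gamma)            # mismeasure w.p. 1 - gamma

def switches(seq):
    """Summary statistic omega: number of switches between states."""
    return int(np.sum(seq[1:] != seq[:-1]))

omega_obs = 6                        # switches in the observed sequence (Step 1)
gamma = 0.8                          # measurement accuracy, assumed known (Step 2)
thetas = rng.uniform(0, 1, 100000)   # draws from the uniform prior (Step 2)

# Steps 3-5: simulate, summarize, and keep draws within tolerance eps = 2
accepted = [th for th in thetas
            if abs(switches(simulate_chain(th, gamma)) - omega_obs) <= 2]

A histogram of the accepted values then approximates the ABC posterior for θ.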
This example application of ABC uses simplifications for illustrative purposes. More realistic applications of ABC are available in a growing number of peer-reviewed articles.
Model comparison with ABC
Outside of parameter estimation, the ABC framework can be used to compute the posterior probabilities of different candidate models. In such applications, one possibility is to use rejection sampling in a hierarchical manner. First, a model is sampled from the prior distribution for the models. Then, parameters are sampled from the prior distribution assigned to that model. Finally, a simulation is performed as in single-model ABC. The relative acceptance frequencies for the different models now approximate the posterior distribution for these models. Again, computational improvements for ABC in the space of models have been proposed, such as constructing a particle filter in the joint space of models and parameters.
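A hedged sketch of this hierarchical rejection scheme follows; the mapping from model names to (prior weight, parameter sampler, simulator) triples is an interface invented here for illustration.

from collections import Counter
import random

def abc_model_choice(observed, models, rho, eps, n_draws):
    """Hierarchical ABC rejection over models: `models` maps a model name to
    (model_prior_weight, prior_sample, simulate). Relative acceptance counts
    approximate the posterior model probabilities."""
    names = list(models)
    weights = [models[m][0] for m in names]
    counts = Counter()
    for _ in range(n_draws):
        m = random.choices(names, weights)[0]  # sample a model from its prior
        _, prior_sample, simulate = models[m]
        theta = prior_sample()                 # sample parameters given that model
        if rho(simulate(theta), observed) <= eps:
            counts[m] += 1                     # an accepted draw credits this model
    total = sum(counts.values())
    return {m: counts[m] / total for m in names} if total else dict(counts)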
Once the posterior probabilities of the models have been estimated, one can make full use of the techniques of Bayesian model comparison. For instance, to compare the relative plausibilities of two models M1 and M2, one can compute their posterior ratio, which is related to the Bayes factor B_{1,2}:

p(M1|D) / p(M2|D) = B_{1,2} · p(M1) / p(M2), where B_{1,2} = p(D|M1) / p(D|M2).

If the model priors are equal—that is, p(M1) = p(M2)—the Bayes factor equals the posterior ratio.
In practice, as discussed below, these measures can be highly sensitive to the choice of parameter prior distributions and summary statistics, and thus conclusions of model comparison should be drawn with caution.
Pitfalls and remedies
As for all statistical methods, a number of assumptions and approximations are inherently required for the application of ABC-based methods to real modeling problems. For example, setting the tolerance parameter ε to zero ensures an exact result, but typically makes computations prohibitively expensive. Thus, values of ε larger than zero are used in practice, which introduces a bias. Likewise, sufficient statistics are typically not available and instead, other summary statistics are used, which introduces an additional bias due to the loss of information. Additional sources of bias—for example, in the context of model selection—may be more subtle.
At the same time, some of the criticisms that have been directed at the ABC methods, in particular within the field of phylogeography, are not specific to ABC and apply to all Bayesian methods or even all statistical methods (e.g., the choice of prior distribution and parameter ranges). However, because of the ability of ABC-methods to handle much more complex models, some of these general pitfalls are of particular relevance in the context of ABC analyses.
This section discusses these potential risks and reviews possible ways to address them.
Approximation of the posterior
A non-negligible tolerance ε comes with the price that one samples from p(θ | ρ(D̂, D) ≤ ε) instead of the true posterior p(θ|D). With a sufficiently small tolerance, and a sensible distance measure, the resulting distribution should often approximate the actual target distribution reasonably well. On the other hand, a tolerance that is large enough that every point in the parameter space becomes accepted will yield a replica of the prior distribution. There are empirical studies of the difference between p(θ | ρ(D̂, D) ≤ ε) and p(θ|D) as a function of ε, and theoretical results for an upper ε-dependent bound for the error in parameter estimates. The accuracy of the posterior (defined as the expected quadratic loss) delivered by ABC as a function of ε has also been investigated. However, the convergence of the distributions when ε approaches zero, and how it depends on the distance measure used, is an important topic that has yet to be investigated in greater detail. In particular, it remains difficult to disentangle errors introduced by this approximation from errors due to model mis-specification.
As an attempt to correct some of the error due to a non-zero ε, the usage of local linear weighted regression with ABC to reduce the variance of the posterior estimates has been suggested. The method assigns weights to the parameters according to how well simulated summaries adhere to the observed ones and performs linear regression between the summaries and the weighted parameters in the vicinity of observed summaries. The obtained regression coefficients are used to correct sampled parameters in the direction of observed summaries. An improvement was suggested in the form of nonlinear regression using a feed-forward neural network model. However, it has been shown that the posterior distributions obtained with these approaches are not always consistent with the prior distribution, which led to a reformulation of the regression adjustment that respects the prior distribution.
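A minimal sketch of the local linear adjustment, in the style just described, is given below for a one-dimensional parameter and summary; the function name and the Epanechnikov kernel choice are assumptions of this sketch.

import numpy as np

def regression_adjust(thetas, summaries, s_obs, eps):
    """Weight accepted draws by an Epanechnikov kernel on their distance to the
    observed summary, fit a weighted linear regression of theta on the summary,
    and shift each accepted draw toward s_obs along the fitted slope."""
    d = np.abs(summaries - s_obs)
    keep = d <= eps                              # rejection step
    th, s, w = thetas[keep], summaries[keep], 1.0 - (d[keep] / eps) ** 2
    X = np.column_stack([np.ones_like(s), s - s_obs])
    W = np.diag(w)
    # Weighted least squares: (alpha, beta) = (X'WX)^{-1} X'W theta
    alpha, beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ th)
    return th - beta * (s - s_obs)               # adjusted posterior draws

Here alpha is the fitted value at the observed summary, and the adjusted draws typically have lower variance than the raw rejection output.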
Finally, statistical inference using ABC with a non-zero tolerance is not inherently flawed: under the assumption of measurement errors, the optimal ε can in fact be shown to be not zero. Indeed, the bias caused by a non-zero tolerance can be characterized and compensated by introducing a specific form of noise to the summary statistics. Asymptotic consistency for such "noisy ABC" has been established, together with formulas for the asymptotic variance of the parameter estimates for a fixed tolerance.
Choice and sufficiency of summary statistics
Summary statistics may be used to increase the acceptance rate of ABC for high-dimensional data. Low-dimensional sufficient statistics are optimal for this purpose, as they capture all relevant information present in the data in the simplest possible form. However, low-dimensional sufficient statistics are typically unattainable for statistical models where ABC-based inference is most relevant, and consequently, some heuristic is usually necessary to identify useful low-dimensional summary statistics. The use of a set of poorly chosen summary statistics will often lead to inflated credible intervals due to the implied loss of information, which can also bias the discrimination between models. A review of methods for choosing summary statistics is available, which may provide valuable guidance in practice.
One approach to capture most of the information present in data would be to use many statistics, but the accuracy and stability of ABC appears to decrease rapidly with an increasing number of summary statistics. Instead, a better strategy is to focus on the relevant statistics only—relevancy depending on the whole inference problem, on the model used, and on the data at hand.
An algorithm has been proposed for identifying a representative subset of summary statistics, by iteratively assessing whether an additional statistic introduces a meaningful modification of the posterior. One of the challenges here is that a large ABC approximation error may heavily influence the conclusions about the usefulness of a statistic at any stage of the procedure. Another method decomposes into two main steps. First, a reference approximation of the posterior is constructed by minimizing the entropy. Sets of candidate summaries are then evaluated by comparing the ABC-approximated posteriors with the reference posterior.
With both of these strategies, a subset of statistics is selected from a large set of candidate statistics. Instead, the partial least squares regression approach uses information from all the candidate statistics, each being weighted appropriately. Recently, a method for constructing summaries in a semi-automatic manner has attracted considerable interest. This method is based on the observation that the optimal choice of summary statistics, when minimizing the quadratic loss of the parameter point estimates, can be obtained through the posterior mean of the parameters, which is approximated by performing a linear regression based on the simulated data. Summary statistics for model selection have been obtained using multinomial logistic regression on simulated data, treating competing models as the label to predict.
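A compressed sketch of this semi-automatic construction, assuming the simulated data sets have already been flattened into a numeric feature matrix, might read as follows (all names here are illustrative):

import numpy as np

def semiauto_summary(thetas, sim_features):
    """Semi-automatic idea: regress simulated parameters on simulated data
    features; the fitted linear predictor of the posterior mean is then used
    as a one-dimensional summary statistic for subsequent ABC runs."""
    X = np.column_stack([np.ones(len(sim_features)), sim_features])
    coef, *_ = np.linalg.lstsq(X, thetas, rcond=None)  # ordinary least squares
    return lambda features: coef[0] + features @ coef[1:]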
Methods for the identification of summary statistics that could also simultaneously assess the influence on the approximation of the posterior would be of substantial value. This is because the choice of summary statistics and the choice of tolerance constitute two sources of error in the resulting posterior distribution. These errors may corrupt the ranking of models and may also lead to incorrect model predictions.
Bayes factor with ABC and summary statistics
It has been shown that the combination of insufficient summary statistics and ABC for model selection can be problematic. Indeed, if one lets the Bayes factor based on the summary statistic S(D) be denoted by B^S_{1,2}, the relation between B_{1,2} and B^S_{1,2} takes the form:

B_{1,2} = (p(D | S(D), M1) / p(D | S(D), M2)) · B^S_{1,2}.

Thus, a summary statistic S(D) is sufficient for comparing two models M1 and M2 if and only if:

p(D | S(D), M1) = p(D | S(D), M2),

which results in B_{1,2} = B^S_{1,2}. It is also clear from the equation above that there might be a huge difference between B_{1,2} and B^S_{1,2} if the condition is not satisfied, as can be demonstrated by toy examples. Crucially, it was shown that sufficiency for M1 or M2 alone, or for both models, does not guarantee sufficiency for ranking the models. However, it was also shown that any sufficient summary statistic for a model M in which both M1 and M2 are nested is valid for ranking the nested models.
The computation of Bayes factors on S(D) may therefore be misleading for model selection purposes, unless the ratio between the Bayes factors on D and S(D) were available, or at least could be approximated reasonably well. Alternatively, necessary and sufficient conditions on summary statistics for a consistent Bayesian model choice have recently been derived, which can provide useful guidance.
However, this issue is only relevant for model selection when the dimension of the data has been reduced. ABC-based inference, in which the actual data sets are directly compared—as is the case for some systems biology applications—circumvents this problem.
Indispensable quality controls
As the above discussion makes clear, any ABC analysis requires choices and trade-offs that can have a considerable impact on its outcomes. Specifically, the choice of competing models/hypotheses, the number of simulations, the choice of summary statistics, or the acceptance threshold cannot currently be based on general rules, but the effect of these choices should be evaluated and tested in each study.
A number of heuristic approaches to the quality control of ABC have been proposed, such as the quantification of the fraction of parameter variance explained by the summary statistics. A common class of methods aims at assessing whether or not the inference yields valid results, regardless of the actually observed data. For instance, given a set of parameter values, which are typically drawn from the prior or the posterior distributions for a model, one can generate a large number of artificial datasets. In this way, the quality and robustness of ABC inference can be assessed in a controlled setting, by gauging how well the chosen ABC inference method recovers the true parameter values, and also the true models if multiple structurally different models are considered simultaneously.
Another class of methods assesses whether the inference was successful in light of the given observed data, for example, by comparing the posterior predictive distribution of summary statistics to the summary statistics observed. Beyond that, cross-validation techniques and predictive checks represent promising future strategies to evaluate the stability and out-of-sample predictive validity of ABC inferences. This is particularly important when modeling large data sets, because then the posterior support of a particular model can appear overwhelmingly conclusive, even if all proposed models in fact are poor representations of the stochastic system underlying the observation data. Out-of-sample predictive checks can reveal potential systematic biases within a model and provide clues on how to improve its structure or parametrization.
Fundamentally novel approaches for model choice that incorporate quality control as an integral step in the process have recently been proposed. ABC allows, by construction, estimation of the discrepancies between the observed data and the model predictions, with respect to a comprehensive set of statistics. These statistics are not necessarily the same as those used in the acceptance criterion. The resulting discrepancy distributions have been used for selecting models that are in agreement with many aspects of the data simultaneously, and model inconsistency is detected from conflicting and co-dependent summaries. Another quality-control-based method for model selection employs ABC to approximate the effective number of model parameters and the deviance of the posterior predictive distributions of summaries and parameters. The deviance information criterion is then used as a measure of model fit. It has also been shown that the models preferred based on this criterion can conflict with those supported by Bayes factors. For this reason, it is useful to combine different methods for model selection to obtain correct conclusions.
Quality controls are achievable and indeed performed in many ABC-based works, but for certain problems, the assessment of the impact of the method-related parameters can be challenging. However, the rapidly increasing use of ABC can be expected to provide a more thorough understanding of the limitations and applicability of the method.
General risks in statistical inference exacerbated in ABC
This section reviews risks that are, strictly speaking, not specific to ABC but are relevant for other statistical methods as well. However, the flexibility offered by ABC to analyze very complex models makes them highly relevant to discuss here.
Prior distribution and parameter ranges
The specification of the range and the prior distribution of parameters strongly benefits from previous knowledge about the properties of the system. One criticism has been that in some studies the “parameter ranges and distributions are only guessed based upon the subjective opinion of the investigators”, which is connected to classical objections of Bayesian approaches.
With any computational method, it is typically necessary to constrain the investigated parameter ranges. The parameter ranges should if possible be defined based on known properties of the studied system, but may for practical applications necessitate an educated guess. However, theoretical results regarding objective priors are available, which may for example be based on the principle of indifference or the principle of maximum entropy. On the other hand, automated or semi-automated methods for choosing a prior distribution often yield improper densities. As most ABC procedures require generating samples from the prior, improper priors are not directly applicable to ABC.
One should also keep the purpose of the analysis in mind when choosing the prior distribution. In principle, uninformative and flat priors that exaggerate our subjective ignorance about the parameters may still yield reasonable parameter estimates. However, Bayes factors are highly sensitive to the prior distribution of parameters. Conclusions on model choice based on Bayes factors can be misleading unless the sensitivity of conclusions to the choice of priors is carefully considered.
Small number of models
Model-based methods have been criticized for not exhaustively covering the hypothesis space. Indeed, model-based studies often revolve around a small number of models, and due to the high computational cost to evaluate a single model in some instances, it may then be difficult to cover a large part of the hypothesis space.
An upper limit to the number of considered candidate models is typically set by the substantial effort required to define the models and to choose between many alternative options. There is no commonly accepted ABC-specific procedure for model construction, so experience and prior knowledge are used instead. Although more robust procedures for a priori model choice and formulation would be beneficial, there is no one-size-fits-all strategy for model development in statistics: sensible characterization of complex systems will always necessitate a great deal of detective work and use of expert knowledge from the problem domain.
Some opponents of ABC contend that since only few models—subjectively chosen and probably all wrong—can be realistically considered, ABC analyses provide only limited insight. However, there is an important distinction between identifying a plausible null hypothesis and assessing the relative fit of alternative hypotheses. Since useful null hypotheses that potentially hold true can only seldom be put forward in the context of complex models, the predictive ability of statistical models as explanations of complex phenomena is far more important than the test of a statistical null hypothesis in this context. It is also common to average over the investigated models, weighted based on their relative plausibility, to infer model features (e.g., parameter values) and to make predictions.
Large datasets
Large data sets may constitute a computational bottleneck for model-based methods. It was, for example, pointed out that in some ABC-based analyses, part of the data have to be omitted. A number of authors have argued that large data sets are not a practical limitation, although the severity of this issue depends strongly on the characteristics of the models. Several aspects of a modeling problem can contribute to the computational complexity, such as the sample size, number of observed variables or features, time or spatial resolution, etc. However, with increasing computing power, this issue will potentially be less important.
Instead of sampling parameters for each simulation from the prior, it has been proposed alternatively to combine the Metropolis-Hastings algorithm with ABC, which was reported to result in a higher acceptance rate than for plain ABC. Naturally, such an approach inherits the general burdens of MCMC methods, such as the difficulty to assess convergence, correlation among the samples from the posterior, and relatively poor parallelizability.
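A Marjoram-style ABC-MCMC step can be sketched as follows; the random-walk proposal and the helper names are assumptions of this sketch, and theta0 is assumed to have positive prior density.

import numpy as np

def abc_mcmc(observed, theta0, prior_pdf, simulate, rho, eps, n_steps,
             step_sd=0.1, rng=None):
    """ABC-MCMC sketch: propose a symmetric random-walk move, and accept it
    with the Metropolis-Hastings prior ratio only when the simulated data
    land within tolerance eps of the observations."""
    rng = rng or np.random.default_rng()
    chain, theta = [theta0], theta0
    for _ in range(n_steps):
        prop = theta + rng.normal(0.0, step_sd)   # symmetric proposal
        if rho(simulate(prop), observed) <= eps:  # ABC surrogate for the likelihood
            if rng.random() < min(1.0, prior_pdf(prop) / prior_pdf(theta)):
                theta = prop                      # accept the move
        chain.append(theta)                       # otherwise keep the current state
    return np.array(chain)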
Likewise, the ideas of sequential Monte Carlo (SMC) and population Monte Carlo (PMC) methods have been adapted to the ABC setting. The general idea is to iteratively approach the posterior from the prior through a sequence of target distributions. An advantage of such methods, compared to ABC-MCMC, is that the samples from the resulting posterior are independent. In addition, with sequential methods the tolerance levels need not be specified prior to the analysis, but are adjusted adaptively.
It is relatively straightforward to parallelize a number of steps in ABC algorithms based on rejection sampling and sequential Monte Carlo methods. It has also been demonstrated that parallel algorithms may yield significant speedups for MCMC-based inference in phylogenetics, which may be a tractable approach also for ABC-based methods. Yet an adequate model for a complex system is very likely to require intensive computation irrespective of the chosen method of inference, and it is up to the user to select a method that is suitable for the particular application in question.
Curse of dimensionality
High-dimensional data sets and high-dimensional parameter spaces can require an extremely large number of parameter points to be simulated in ABC-based studies to obtain a reasonable level of accuracy for the posterior inferences. In such situations, the computational cost is severely increased and may in the worst case render the computational analysis intractable. These are examples of well-known phenomena, which are usually referred to with the umbrella term curse of dimensionality.
To assess how severely the dimensionality of a data set affects the analysis within the context of ABC, analytical formulas have been derived for the error of the ABC estimators as functions of the dimension of the summary statistics. In addition, Blum and François have investigated how the dimension of the summary statistics is related to the mean squared error for different correction adjustments to the error of ABC estimators. It was also argued that dimension reduction techniques are useful to avoid the curse-of-dimensionality, due to a potentially lower-dimensional underlying structure of summary statistics. Motivated by minimizing the quadratic loss of ABC estimators, Fearnhead and Prangle have proposed a scheme to project (possibly high-dimensional) data into estimates of the parameter posterior means; these means, now having the same dimension as the parameters, are then used as summary statistics for ABC.
ABC can be used to infer problems in high-dimensional parameter spaces, although one should account for the possibility of overfitting (e.g., see the model selection methods discussed above). However, the probability of accepting the simulated values for the parameters under a given tolerance with the ABC rejection algorithm typically decreases exponentially with increasing dimensionality of the parameter space (due to the global acceptance criterion). Although no computational method (based on ABC or not) seems to be able to break the curse-of-dimensionality, methods have recently been developed to handle high-dimensional parameter spaces under certain assumptions (e.g., based on polynomial approximation on sparse grids, which could potentially heavily reduce the simulation times for ABC). However, the applicability of such methods is problem dependent, and the difficulty of exploring parameter spaces should in general not be underestimated. For example, the introduction of deterministic global parameter estimation led to reports that the global optima obtained in several previous studies of low-dimensional problems were incorrect. For certain problems, it might therefore be difficult to know whether the model is incorrect or, as discussed above, whether the explored region of the parameter space is inappropriate. More pragmatic approaches are to cut the scope of the problem through model reduction, discretisation of variables and the use of canonical models such as noisy models. Noisy models exploit information on the conditional independence between variables.
Software
A number of software packages are currently available for application of ABC to particular classes of statistical models.
The suitability of individual software packages depends on the specific application at hand, the computer system environment, and the algorithms required.
See also
Markov chain Monte Carlo
Empirical Bayes
Method of moments (statistics)
References
External links
Bayesian statistics
Statistical approximations | Approximate Bayesian computation | [
"Mathematics"
] | 6,461 | [
"Statistical approximations",
"Mathematical relations",
"Approximations"
] |
11,864,935 | https://en.wikipedia.org/wiki/Haar-like%20feature | Haar-like features are digital image features used in object recognition. They owe their name to their intuitive similarity with Haar wavelets and were used in the first real-time face detector.
Working with only image intensities (i.e., the RGB pixel values at each and every pixel of image) made the task of feature calculation computationally expensive. A publication by Papageorgiou et al. discussed working with an alternate feature set based on Haar wavelets instead of the usual image intensities. Paul Viola and Michael Jones adapted the idea of using Haar wavelets and developed the so-called Haar-like features. A Haar-like feature considers adjacent rectangular regions at a specific location in a detection window, sums up the pixel intensities in each region and calculates the difference between these sums. This difference is then used to categorize subsections of an image.
For example, with a human face, it is a common observation that among all faces the region of the eyes is darker than the region of the cheeks. Therefore, a common Haar feature for face detection is a set of two adjacent rectangles that lie above the eye and the cheek region. The position of these rectangles is defined relative to a detection window that acts like a bounding box to the target object (the face in this case).
In the detection phase of the Viola–Jones object detection framework, a window of the target size is moved over the input image, and for each subsection of the image the Haar-like feature is calculated. This difference is then compared to a learned threshold that separates non-objects from objects. Because such a Haar-like feature is only a weak learner or classifier (its detection quality is slightly better than random guessing) a large number of Haar-like features are necessary to describe an object with sufficient accuracy. In the Viola–Jones object detection framework, the Haar-like features are therefore organized in something called a classifier cascade to form a strong learner or classifier.
The key advantage of a Haar-like feature over most other features is its calculation speed. Due to the use of integral images, a Haar-like feature of any size can be calculated in constant time (approximately 60 microprocessor instructions for a 2-rectangle feature).
Rectangular Haar-like features
A simple rectangular Haar-like feature can be defined as the difference of the sum of pixels of areas inside the rectangle, which can be at any position and scale within the original image. This modified feature set is called 2-rectangle feature. Viola and Jones also defined 3-rectangle features and 4-rectangle features. The values indicate certain characteristics of a particular area of the image. Each feature type can indicate the existence (or absence) of certain characteristics in the image, such as edges or changes in texture. For example, a 2-rectangle feature can indicate where the border lies between a dark region and a light region.
Fast computation of Haar-like features
One of the contributions of Viola and Jones was to use summed-area tables, which they called integral images. Integral images can be defined as two-dimensional lookup tables in the form of a matrix with the same size as the original image. Each element of the integral image contains the sum of all pixels located in the up-left region of the original image (in relation to the element's position). This allows computing the sum of any rectangular area in the image, at any position or scale, using only four lookups:

sum = I(C) + I(A) − I(B) − I(D),

where A is the top-left corner of the rectangle, B its top-right corner, D its bottom-left corner, C its bottom-right corner, and the four points are looked up in the integral image I.
Each Haar-like feature may need more than four lookups, depending on how it was defined. Viola and Jones's 2-rectangle features need six lookups, 3-rectangle features need eight lookups, and 4-rectangle features need nine lookups.
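A compact Python sketch of the integral image and of a 2-rectangle feature built from it is given below; the function names are our own for illustration, not from any particular library.

import numpy as np

def integral_image(img):
    """I[y, x] holds the sum of all pixels above and to the left of (y, x),
    inclusive; built with two cumulative sums."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(I, y0, x0, y1, x1):
    """Sum of img[y0:y1+1, x0:x1+1] using at most four lookups into I."""
    total = I[y1, x1]
    if y0 > 0:
        total -= I[y0 - 1, x1]
    if x0 > 0:
        total -= I[y1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += I[y0 - 1, x0 - 1]
    return total

def two_rect_feature(I, y0, x0, h, w):
    """2-rectangle Haar-like feature: sum of the top half minus the sum of
    the bottom half of an h-by-w window at (y0, x0); h must be even."""
    half = h // 2
    top = rect_sum(I, y0, x0, y0 + half - 1, x0 + w - 1)
    bottom = rect_sum(I, y0 + half, x0, y0 + h - 1, x0 + w - 1)
    return top - bottom

Because rect_sum runs in constant time, the feature cost is independent of window size, which is the property that makes exhaustive scanning over positions and scales feasible.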
Tilted Haar-like features
Lienhart and Maydt introduced the concept of a tilted (45°) Haar-like feature. This was used to increase the dimensionality of the set of features in an attempt to improve the detection of objects in images. This was successful, as some of these features are able to describe the object in a better way. For example, a 2-rectangle tilted Haar-like feature can indicate the existence of an edge at 45°.
Messom and Barczak extended the idea to a generic rotated Haar-like feature. Although the idea is sound mathematically, practical problems prevent the use of Haar-like features at any angle. In order to be fast, detection algorithms use low-resolution images, which introduces rounding errors. For this reason, rotated Haar-like features are not commonly used.
References
Further reading
Haar A. Zur Theorie der orthogonalen Funktionensysteme, Mathematische Annalen, 69, pp. 331–371, 1910.
Bioinformatics
Feature detection (computer vision) | Haar-like feature | [
"Engineering",
"Biology"
] | 1,025 | [
"Bioinformatics",
"Biological engineering"
] |
11,864,966 | https://en.wikipedia.org/wiki/Exophiala%20jeanselmei | Exophiala jeanselmei is a saprotrophic fungus in the family Herpotrichiellaceae. Four varieties have been discovered: Exophiala jeanselmei var. heteromorpha, E. jeanselmei var. lecanii-corni, E. jeanselmei var. jeanselmei, and E. jeanselmei var. castellanii. Other species in the genus Exophiala such as E. dermatitidis and E. spinifera have been reported to have similar annellidic conidiogenesis and may therefore be difficult to differentiate.
History
Exophiala jeanselmei was first isolated in 1928 by Jeanselme from a case of black mycetoma on the foot. The nomenclature was based on the fungus' morphological characteristics; hence, it was originally classified as Torula jeanselmei because of its yeast-like shape when grown in culture. It was later reclassified by McGinnis and Padhye in 1977 as Exophiala jeanselmei after further research on conidiogenesis.
Morphology
In culture, E. jeanselmei produces slow-growing colonies that are green-black in color. Cultures manifest a combination of mycelial and yeast-like growth forms; however, the yeast-like form typically predominates. Black aerial mycelium develops on the colony surface that consists of hyphae with swellings at regular intervals. Conidia are variable in size and are often formed in clusters at the tip of annellidic conidiogenous cells. The conidia are narrowly ellipsoidal in shape and 2.6–5.9 μm × 1.2–2.5 μm in size. Immature sexual fruiting bodies called ascomata have been reported, but their rare occurrence is thought to be due to a lack of mating compatibility. Exophiala jeanselmei is affiliated with the ascomycete genus Capronia.
Ecology
Exophiala jeanselmei is commonly found in soil, plants, and water, and can also be isolated from decaying wood, as this fungus is a saprotroph in nature. This species has world-wide occurrence but is particularly noted in Asia and more commonly in tropical and subtropical regions. The genus Exophiala has been isolated from hydrocarbon-rich environments as well as from hot, humid, and oligotrophic environments such as dishwashers, steam bath facilities and bathrooms that only provide low levels of nutrients. It has been proposed that the conditions usually found within dishwashers, such as high temperature, moisture and alkaline pH, can provide an alternative habitat for human pathogenic species. The fungus has optimal growth at 30 °C, but growth is inhibited at 40 °C. Most strains isolated from soil cannot grow at temperatures higher than 30 °C, while strains isolated from humans can grow at higher temperatures such as the 37 °C of the human body. This adaptation of E. jeanselmei evolved to allow survival on human hosts, and it is a distinguishing factor that helps in determining the pathogenicity of a particular strain. A feature that distinguishes E. jeanselmei from Cladosporium, which forms very similar colonies, is that E. jeanselmei is not proteolytic. It is able to assimilate glucose, galactose, maltose, and sucrose, but not lactose.
Pathogenesis
Exophiala jeanselmei has versatile adaptability and acts as an opportunistic pathogen. Infections are more common in immunocompromised people but can also occur in healthy people whose skin has been wounded, via traumatic implantation. Chronic steroid use has been found to increase the severity of inflammation. There have also been cases where infections by E. jeanselmei occurred during solid organ transplants. Infections frequently cause inflammation in the cutaneous and subcutaneous tissues of the skin, causing phaeomycotic cysts and chromoblastomycosis, and can occasionally cause eumycetoma, a chronic granulomatous disease in the form of black grains. Mycetoma, a common form of clinical manifestation of E. jeanselmei, is a chronic granulomatous inflammatory disease that forms abscesses and draining sinuses in more advanced stages. In mycotic mycetoma, vesicles of cyst-like structures are formed. Dissemination, endocarditis and arthritis could arise from an opportunistic infection by E. jeanselmei, and it was also isolated from phaeohyphomycosis with sclerotic round bodies. There have been several cases of E. jeanselmei being the etiological agent of phaeohyphomycosis in domesticated cats, where diagnoses were confirmed by sequencing the fungus' ribosomal RNA. The grains of this fungus are small, black in color and have soft centers. Rare cases of keratitis, infection of the cornea, have also identified E. jeanselmei as the etiological agent.
In vitro susceptibility and treatment
The minimum inhibitory concentration (MIC) of fluconazole for E. jeanselmei is very high, and flucytosine and miconazole also have relatively high MICs, which indicates that the fungus is fairly resistant to these drugs. Amphotericin B, ketoconazole, and voriconazole have lower MICs, and E. jeanselmei is most susceptible to itraconazole and terbinafine. Novel drugs such as echinocandin and caspofungin also have favorable antifungal activity against Exophiala jeanselmei isolates. However, the relationship between in vitro susceptibility and the efficacy of antifungal agents in clinical manifestations of this fungus is currently unknown, so in vitro success may or may not correlate directly with clinical outcomes.
Previous cases of black grain mycetoma caused by E. jeanselmei have been treated successfully, and cases of phaeohyphomycosis caused by this fungus completely cured, by administering itraconazole. E. jeanselmei also showed some susceptibility to treatment with antifungal agents such as amphotericin B, voriconazole and posaconazole. Amphotericin B used to be the most potent antifungal treatment for severe fungal infections, but due to its strong association with severe side effects such as nephrotoxicity, its use is now often replaced with azoles and echinocandins. The use of combinations of surgical excision and pharmacological treatments is usually the preferred way to treat severe diseases caused by this fungus.
References
Fungi described in 1928
Eurotiomycetes
Fungus species | Exophiala jeanselmei | [
"Biology"
] | 1,400 | [
"Fungi",
"Fungus species"
] |
11,865,049 | https://en.wikipedia.org/wiki/Trichosporon | Trichosporon is a genus of anamorphic fungi in the family Trichosporonaceae. All species of Trichosporon are yeasts with no known teleomorphs (sexual states). Most are typically isolated from soil, but several species occur as a natural part of the skin microbiota of humans and other animals. Proliferation of Trichosporon yeasts in the hair can lead to an unpleasant but non-serious condition known as white piedra. Trichosporon species can also cause severe opportunistic infections (trichosporonosis) in immunocompromised individuals.
Taxonomy
The genus was first described by the German dermatologist Gustav Behrend in 1890, based on yeasts isolated from the hairs of a moustache where they were causing the condition known as "white piedra". Behrend called his new species Trichosporon ovoides. Friedrich Küchenmeister and Rabenhorst had, however, previously described a species in 1867 from the hairs of a wig. They thought that the organism was an alga and placed it in the genus Pleurococcus as Pleurococcus beigelii. The French mycologist Vuillemin later realized it was a yeast and transferred it to the genus Trichosporon, considering it to be an earlier name for Trichosporon ovoides.
Over 100 additional yeast species were referred to Trichosporon by later authors. With the advent of DNA sequencing, however, it became clear that many of these additional species belonged in other genera. Based on cladistic analysis of DNA sequences, 12 species are now accepted in the genus.
DNA sequencing has also shown that white piedra can be caused by more than one Trichosporon species. As a result, Trichosporon beigelii has become a name of uncertain application. McPartland & Goff selected a neotype strain that makes T. beigelii synonymous with Cutaneotrichosporon cutaneum. Guého and others, however, have argued that T. beigelii should be discarded as a dubious name and that Behrend's original T. ovoides (for which a neotype strain has also been selected) should become the type. As a result of this uncertainty, the name T. beigelii is now obsolete.
Description and habitat
Trichosporon species are distinguished microscopically by having yeast cells that germinate to produce hyaline hyphae that disarticulate at the septa, the hyphal compartments acting as arthroconidia (asexual propagules). No teleomorphic (sexual) states are known.
Species of Trichosporon and related genera are widespread and have been isolated from a wide range of substrates, including human hair (Trichosporon ovoides), soil (Cutaneotrichosporon guehoae), cabbages (Apiotrichum brassicae), cheese (T. caseorum), scarab beetles (Apiotrichum scarabaeorum), parrot droppings (T. coremiiforme), and sea water (Cutaneotrichosporon dermatis).
Human pathogens
Several Trichosporon species occur naturally as part of the microbiota of human skin. Occasionally, particularly in circumstances of high humidity, the fungus can proliferate, causing an unpleasant but harmless hair condition known as white piedra. Soft, pale nodules containing yeast cells and arthroconidia form on hairs of the scalp and body. The species responsible include Trichosporon ovoides, T. inkin, T. asahii, Cutaneotrichosporon mucoides, T. asteroides, and Cutaneotrichosporon cutaneum. The obsolete name T. beigelii was formerly applied to all or any of these species.
Much more serious opportunistic infections, collectively called trichosporonosis, have been reported in immunocompromised individuals. Species said to be agents of trichosporonosis are T. asahii, T. asteroides, Cutaneotrichosporon cutaneum, Cutaneotrichosporon dermatis, T. dohaense, T. inkin, Apiotrichum loubieri, Cutaneotrichosporon mucoides, and T. ovoides.
Species
Trichosporon aquatile
Trichosporon asahii
Trichosporon asteroides
Trichosporon caseorum
Trichosporon coremiiforme
Trichosporon dohaense
Trichosporon faecale
Trichosporon inkin
Trichosporon insectorum
Trichosporon japonicum
Trichosporon lactis
Trichosporon ovoides
References
External links
Tremellomycetes
Yeasts | Trichosporon | [
"Biology"
] | 1,036 | [
"Yeasts",
"Fungi"
] |
11,865,154 | https://en.wikipedia.org/wiki/DBpedia | DBpedia (from "DB" for "database") is a project aiming to extract structured content from the information created in the Wikipedia project. This structured information is made available on the World Wide Web using OpenLink Virtuoso. DBpedia allows users to semantically query relationships and properties of Wikipedia resources, including links to other related datasets.
The project was heralded as "one of the more famous pieces" of the decentralized Linked Data effort by Tim Berners-Lee, one of the Internet's pioneers. As of June 2021, DBpedia contained over 850 million triples.
Background
The project was started by people at the Free University of Berlin and Leipzig University in collaboration with OpenLink Software, and is now maintained by people at the University of Mannheim and Leipzig University. The first publicly available dataset was published in 2007. The data is made available under free licenses (CC BY-SA), allowing others to reuse the dataset; it does not use an open data license to waive the sui generis database rights.
Wikipedia articles consist mostly of free text, but also include structured information embedded in the articles, such as "infobox" tables (the pull-out panels that appear in the top right of the default view of many Wikipedia articles, or at the start of the mobile versions), categorization information, images, geo-coordinates and links to external Web pages. This structured information is extracted and put in a uniform dataset which can be queried.
Dataset
The 2016-04 release of the DBpedia data set describes 6.0 million entities, out of which 5.2 million are classified in a consistent ontology, including 1.5 million persons, 810,000 places, 135,000 music albums, 106,000 films, 20,000 video games, 275,000 organizations, 301,000 species and 5,000 diseases. DBpedia uses the Resource Description Framework (RDF) to represent extracted information and consists of 9.5 billion RDF triples, of which 1.3 billion were extracted from the English edition of Wikipedia and 5.0 billion from other language editions.
From this data set, information spread across multiple pages can be extracted. For example, book authorship can be put together from pages about the work or about the author.
One of the challenges in extracting information from Wikipedia is that the same concepts can be expressed using different parameters in infobox and other templates, such as |birthplace= and |placeofbirth=. Because of this, queries about where people were born would have to search for both of these properties in order to get more complete results. As a result, the DBpedia Mapping Language has been developed to help map these properties to an ontology while reducing the number of synonyms. Due to the large diversity of infoboxes and properties in use on Wikipedia, the process of developing and improving these mappings has been opened to public contributions.
Version 2014 was released in September 2014. A main change since previous versions was the way abstract texts were extracted. Specifically, running a local mirror of Wikipedia and retrieving rendered abstracts from it made extracted texts considerably cleaner. Also, a new data set extracted from Wikimedia Commons was introduced.
As of June 2021, DBpedia contains over 850 million triples.
Examples
DBpedia extracts factual information from Wikipedia pages, allowing users to find answers to questions where the information is spread across multiple Wikipedia articles. Data is accessed using an SQL-like query language for RDF called SPARQL.
For example, suppose one were interested in the Japanese shōjo manga series Tokyo Mew Mew and wanted to find the genres of other works written by its illustrator Mia Ikumi. DBpedia combines information from Wikipedia's entries on Tokyo Mew Mew, on Mia Ikumi, and on this author's works such as Super Doll Licca-chan and Koi Cupid. Since DBpedia normalises information into a single database, the following query can be asked without needing to know exactly which entry carries each fragment of information, and will list related genres:
PREFIX dbprop: <http://dbpedia.org/ontology/>
PREFIX db: <http://dbpedia.org/resource/>
SELECT ?who ?WORK ?genre WHERE {
 # Find the author of Tokyo Mew Mew,
 db:Tokyo_Mew_Mew dbprop:author ?who .
 # then every work by that same author,
 ?WORK dbprop:author ?who .
 # and, where recorded, each work's genre.
 OPTIONAL { ?WORK dbprop:genre ?genre } .
}
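To run this query programmatically, one minimal sketch uses the third-party Python library SPARQLWrapper against DBpedia's public endpoint; the endpoint URL and the availability of the live service are assumptions and may change over time.

from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper

# Point the client at DBpedia's public SPARQL endpoint (assumed URL).
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
PREFIX dbprop: <http://dbpedia.org/ontology/>
PREFIX db: <http://dbpedia.org/resource/>
SELECT ?who ?WORK ?genre WHERE {
 db:Tokyo_Mew_Mew dbprop:author ?who .
 ?WORK dbprop:author ?who .
 OPTIONAL { ?WORK dbprop:genre ?genre } .
}
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

# Each binding maps variable names to {"type": ..., "value": ...} dicts;
# OPTIONAL variables may be absent, hence the .get() fallback.
for row in results["results"]["bindings"]:
    work = row["WORK"]["value"]
    genre = row.get("genre", {}).get("value", "(no genre recorded)")
    print(work, "->", genre)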
Use cases
DBpedia has a broad scope of entities covering different areas of human knowledge. This makes it a natural hub for connecting datasets, where external datasets can link to its concepts. The DBpedia dataset is interlinked on the RDF level with various other Open Data datasets on the Web. This enables applications to enrich DBpedia data with data from these datasets. There are more than 45 million interlinks between DBpedia and external datasets including: Freebase, OpenCyc, UMBEL, GeoNames, MusicBrainz, CIA World Fact Book, DBLP, Project Gutenberg, DBtune Jamendo, Eurostat, UniProt, Bio2RDF, and US Census data. The Thomson Reuters initiative OpenCalais, the Linked Open Data project of The New York Times, the Zemanta API and DBpedia Spotlight also include links to DBpedia. The BBC uses DBpedia to help organize its content. Faviki uses DBpedia for semantic tagging. Samsung also includes DBpedia in its "Knowledge Sharing Platform".
Such a rich source of structured cross-domain knowledge is fertile ground for artificial intelligence systems. DBpedia was used as one of the knowledge sources in IBM Watson's Jeopardy! winning system.
Amazon provides a DBpedia Public Data Set that can be integrated into Amazon Web Services applications.
Data about creators from DBpedia can be used for enriching artworks' sales observations.
The crowdsourcing software company Ushahidi built a prototype of its software that leveraged DBpedia to perform semantic annotations on citizen-generated reports. The prototype incorporated the "YODIE" (Yet another Open Data Information Extraction system) service developed by the University of Sheffield, which uses DBpedia to perform the annotations. The goal for Ushahidi was to improve the speed and ease with which incoming reports could be validated and managed.
DBpedia Spotlight
DBpedia Spotlight is a tool for annotating mentions of DBpedia resources in text. This allows linking unstructured information sources to the Linked Open Data cloud through DBpedia. DBpedia Spotlight performs named entity extraction, including entity detection and name resolution (in other words, disambiguation). It can also be used for named entity recognition and other information extraction tasks. DBpedia Spotlight aims to be customizable for many use cases. Instead of focusing on a few entity types, the project strives to support the annotation of all 3.5 million entities and concepts from more than 320 classes in DBpedia. The project started in June 2010 at the Web Based Systems Group at the Free University of Berlin.
DBpedia Spotlight is publicly available as a web service for testing and a Java/Scala API licensed via the Apache License. The DBpedia Spotlight distribution includes a jQuery plugin that allows developers to annotate pages anywhere on the Web by adding one line to their page. Clients are also available in Java or PHP. The tool handles various languages through its demo page and web services. Internationalization is supported for any language that has a Wikipedia edition.
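As an illustration of the web service, the following sketch sends an annotation request with the Python requests library; the endpoint URL, parameter names and JSON field names are assumptions modelled on the hosted demo service and may differ between deployments.

import requests

# Assumed public Spotlight instance; self-hosted deployments use their own URL.
resp = requests.get(
    "https://api.dbpedia-spotlight.org/en/annotate",
    params={"text": "Berlin is the capital of Germany.", "confidence": 0.5},
    headers={"Accept": "application/json"},
)
resp.raise_for_status()

# Each annotated resource links a surface form in the text to a DBpedia URI.
for resource in resp.json().get("Resources", []):
    print(resource["@surfaceForm"], "->", resource["@URI"])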
Archivo ontology database
Since 2020, the DBpedia project has provided a regularly updated database of web-accessible ontologies written in the OWL ontology language. Archivo also provides a four-star rating scheme for the ontologies it scrapes, based on accessibility, quality, and related fitness-for-use criteria. For instance, SHACL compliance for graph-based data is evaluated when appropriate. Ontologies should also contain metadata about their characteristics and specify a public license describing their terms of use. The Archivo database contains 1368 entries.
History
DBpedia was initiated in 2007 by Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak and Zachary Ives.
See also
BabelNet
Semantic MediaWiki
Wikidata
YAGO (database)
References
External links
Free software culture and documents
Open data
Semantic Web
Knowledge bases
History of Wikipedia
Java platform
Free software programmed in Scala | DBpedia | [
"Technology"
] | 1,780 | [
"Computing platforms",
"Java platform"
] |
11,865,483 | https://en.wikipedia.org/wiki/BRN-3 | BRN-3 is a group of related transcription factors in the POU family. They are also known as class 4 POU domain homeobox proteins.
There are three BRN-3 proteins encoded by the following genes:
BRN3A (POU4F1, )
BRN3B (POU4F2, )
BRN3C (POU4F3, )
Nomenclature
The BRN or Brn prefix is an abbreviation for "brain"; the longer name is "Brain-specific homeobox". The name of the group may also be abbreviated as POU4, Pou4, POU IV, or POU-IV.
References
External links
Transcription factors | BRN-3 | [
"Chemistry",
"Biology"
] | 146 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
11,865,929 | https://en.wikipedia.org/wiki/Tantangara%20Dam | Tantangara Dam is a major ungated concrete gravity dam with concrete chute spillway across the Murrumbidgee River in Tantangara, upstream of Adaminaby in the Snowy Mountains region of New South Wales, Australia. The dam is part of the Snowy Mountains Scheme, a vast hydroelectricity and irrigation complex constructed in south-east Australia between 1949 and 1974 and now run by Snowy Hydro. The purpose of the dam includes water management and conservation, with much of the impounded headwaters diverted to Lake Eucumbene. The impounded reservoir is called Tantangara Reservoir.
Location and features
Commenced in 1958 and completed in 1960, the Tantangara Dam is located on the Murrumbidgee River, approximately downstream of its confluence with Gurrangorambla Creek and is wholly within the Kosciuszko National Park. Her Royal Highness Princess Alexandra of Kent visited the dam in 1959, during its construction.
The dam was constructed by Utah-Brown & Root Sudamericana on behalf of the Snowy Mountains Hydro-Electric Authority, and is now managed by Snowy Hydro Limited. The concrete gravity dam of is high, with a crest length of . At 100 per cent capacity, the dam wall holds back of water. The surface area of Tantangara Reservoir is and the catchment area is . The spillway across the Murrumbidgee River is capable of discharging .
Water flows from Tantangara Reservoir to Lake Eucumbene via the diameter Murrumbidgee-Eucumbene tunnel falling in the process. Flow is controlled by a regulating gate such that a maximum of is allowed. Flow downstream into the Murrumbidgee River is controlled at the dam and comprises two shafts, at the outlet tower and tapering to diameter before passing through a diameter nozzle to the river diversion tunnel, with a capacity of .
Water flows
Environmental Water
The Snowy Water Initiative (SWI) is an agreement for water recovery and environmental flows between the NSW, Victorian and Australian Governments, and Snowy Hydro Limited (SHL) which is set out in the Snowy Water Inquiry Outcomes Implementation Deed 2002 (SWIOID 2002). The SWI provides three main environmental water programs as part of rebalancing the impacts of the Snowy Hydro Scheme on montane rivers. These three programs are increased flows for: (i) Snowy River, (ii) River Murray, and (iii) Snowy Montane Rivers.
The Snowy Montane River Increased Flows (SMRIF) program identifies five montane rivers to receive environmental water. The water availability for SMRIF is linked to the water availability for Snowy River Increased Flows (SRIF) (Williams 2017), which is determined by the water recovery in the western rivers and the preceding climatic conditions. The SWIOID 2002 provides for SHL to forego up to 150 gigawatt hours (GW h) of electricity generation to allow for environmental releases to be made to SMRIF. This value of 150 GW h is converted into a volumetric allocation, but the conversion factor differs depending on the location of the releases in the Snowy Mountains Scheme, and thus influences the overall volume released. In some locations water released can be re-used to generate electricity, so a smaller conversion factor is applied (SWIOID 2002); where water is lost to the Snowy Scheme, however, a higher conversion factor is applied.
Releases to the Murrumbidgee River are made from Tantangara dam, a much larger structure than the other release points, for which a gated release structure is available. Accordingly, the daily flow release strategy for the Murrumbidgee River differs from the other SMRIF release points.
The releases comprise two components: (i) the SMRIF and (ii) a Base Passing Flow (BPF). The BPF has several key components:
A volume of 2 GL per year is targeted over the longer term.
A 32 ML/day discharge is to be targeted at Mittagang Crossing, near Cooma.
A maximum of 3.5 GL is set for any one year.
The Base Passing Flow releases typically occur during drier weather.
A modified 'flow scaling' approach used to set SRIF releases to the Snowy River has also been applied to the SMRIF in the Murrumbidgee River. This modified approach uses the recorded flows in a nearby natural catchment (in a year where similar volumes of flow occurred) to set daily releases, as sketched below. For releases to the Murrumbidgee River from Tantangara dam, the initial daily flow targets were set using the flow sequence for the Murrumbidgee River above Tantangara (station No. 410535). Daily targets are then amended to account for operational implementation.
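As a rough illustration of the flow-scaling idea, the sketch below rescales a reference year's daily flow record so that its annual total matches the environmental allocation; the function name, the synthetic data and the simple proportional scaling are illustrative assumptions and do not reproduce the operational SWIOID method.

import numpy as np

def scale_daily_release_targets(reference_daily_flows_ml, allocation_gl):
    # Rescale a reference catchment's daily flows (ML/day) so that the
    # annual total equals the environmental allocation (GL).
    total_ml = reference_daily_flows_ml.sum()
    factor = (allocation_gl * 1000.0) / total_ml  # convert GL to ML
    return reference_daily_flows_ml * factor

# Example: a synthetic, skewed reference year and the 17.7 GL average allocation.
rng = np.random.default_rng(0)
reference = rng.gamma(shape=2.0, scale=40.0, size=365)  # ML/day, synthetic
targets = scale_daily_release_targets(reference, 17.7)
print(f"annual total: {targets.sum() / 1000.0:.1f} GL")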
The SWIOID 2002 identifies a target of 27 GL (i.e. 30% of the mean annual natural flow, MANF) in a full allocation scenario. Since 2005–06, environmental water has been released to the upper Murrumbidgee River to repair the condition of the river. Over the period 2005 to 2018, an average of 17.7 GL per year of environmental water was released to the upper Murrumbidgee River. The annual allocation varied between 4 GL per year during the drought and 42.3 GL per year once sufficient water had been recovered.
The environmental water portfolio does not allow for environmental water to be released every day of the year from Tantangara Dam. Over the period 2005 to 2018, various environmental flow strategies attempted to re-instate a winter-spring montane flow pattern. These approaches attempted to improve instream habitat and ecological processes as the basis for river recovery, rather than following the traditional Australian e-water approach of managing rarity of aquatic biota.
A 2011 report by the Snowy Scientific Committee stated that the Tantangara Dam was starving the upper Murrumbidgee River of environmental water flows from the Snowy Mountains; needed to restore river health. The committee claimed that there was an apparent "administrative and managerial void", with no river management strategy and no proper monitoring because of a lack of regulatory resources.
Water Transfers
In 2005, the Australian Capital Territory Government explored options for augmenting the water supply for Canberra: a long tunnel alternative (weir, connecting tunnel, outflow pipes, and hydro-power plant construction to link the Murrumbidgee with Corin Reservoir) and/or a Murrumbidgee River flow alternative (weir, pumping station and pipeline construction to link the Murrumbidgee with Googong Reservoir). In 2009, the ACT Government endorsed a recommendation from ACTEW for implementation of the Tantangara Transfer Project, which involves transferring water from the Murrumbidgee River (below the Burrinjuck and Blowering dams) in New South Wales to the ACT via the Snowy Mountains Scheme.
Power station
The reservoir is a key part of the Snowy 2.0 Pumped Storage Power Station. It will act as the top storage for a pumped hydro power station.
Recreation
Water levels are held at 20% in the summer months so that the Port Philip Trail remains above the water. There are good populations of both brown trout and rainbow trout within the reservoir. The wet winter of 2016 saw levels exceed 70% in October, the highest level for two decades.
Gallery
See also
List of dams and reservoirs in New South Wales
References
External links
Engineering projects
Murrumbidgee River
Snowy Mountains Scheme
Gravity dams
Dams completed in 1960
Dams in New South Wales
Snowy Monaro Regional Council | Tantangara Dam | [
"Engineering"
] | 1,524 | [
"nan"
] |
11,866,035 | https://en.wikipedia.org/wiki/Cottrell%20equation | In electrochemistry, the Cottrell equation describes the change in electric current with respect to time in a controlled potential experiment, such as chronoamperometry. Specifically it describes the current response when the potential is a step function in time. It was derived by Frederick Gardner Cottrell in 1903. For a simple redox event, such as the ferrocene/ferrocenium couple, the current measured depends on the rate at which the analyte diffuses to the electrode. That is, the current is said to be "diffusion controlled". The Cottrell equation describes the case for an electrode that is planar but can also be derived for spherical, cylindrical, and rectangular geometries by using the corresponding Laplace operator and boundary conditions in conjunction with Fick's second law of diffusion.
where,
= current, in units of A
= number of electrons (to reduce/oxidize one molecule of analyte , for example)
= Faraday constant, 96485 C/mol
= area of the (planar) electrode in cm2
= initial concentration of the reducible analyte in mol/cm3;
= diffusion coefficient for species in cm2/s
= time in s.
Deviations from linearity in the plot of $i$ vs. $t^{-1/2}$ sometimes indicate that the redox event is associated with other processes, such as association of a ligand, dissociation of a ligand, or a change in geometry. Deviations from linearity can be expected at very short time scales due to non-ideality in the potential step. At long time scales, buildup of the diffusion layer causes a shift from a linearly dominated to a radially dominated diffusion regime, which causes another deviation from linearity.
In practice, the Cottrell equation simplifies to $i = k t^{-1/2}$, where $k$ is the collection of constants for a given system ($n$, $F$, $A$, $c_{j}^{0}$, $D_{j}$).
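To make this scaling concrete, the sketch below evaluates the Cottrell current for a hypothetical one-electron couple and recovers the constant $k$ from the slope of $i$ against $t^{-1/2}$; all parameter values are illustrative assumptions, not data from this article.

import numpy as np

# Illustrative (assumed) parameters for a one-electron redox couple.
n = 1          # electrons transferred per molecule
F = 96485.0    # Faraday constant, C/mol
A = 0.1        # planar electrode area, cm^2
c0 = 1.0e-6    # initial analyte concentration, mol/cm^3 (i.e. 1 mM)
D = 1.0e-5     # diffusion coefficient, cm^2/s

t = np.linspace(0.01, 10.0, 500)                # time, s
i = n * F * A * c0 * np.sqrt(D / (np.pi * t))   # Cottrell current, A

# A plot of i against t**-0.5 is a straight line through the origin;
# its slope is k = n*F*A*c0*sqrt(D/pi).
k = np.polyfit(t**-0.5, i, 1)[0]
print(f"fitted slope k = {k:.3e} A*s^0.5")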
See also
Voltammetry
Electroanalytical methods
Limiting current
Anson equation
References
Electrochemical equations | Cottrell equation | [
"Chemistry",
"Mathematics"
] | 402 | [
"Mathematical objects",
"Equations",
"Electrochemistry",
"Electrochemistry stubs",
"Physical chemistry stubs",
"Electrochemical equations"
] |