id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
3,077,259 | https://en.wikipedia.org/wiki/Purlin | A purlin (or historically purline, purloyne, purling, perling) is a longitudinal, horizontal, structural member in a roof. In traditional timber framing there are three basic types of purlin: purlin plate, principal purlin, and common purlin.
Purlins also appear in steel frame construction. Steel purlins may be painted or greased for protection from the environment.
Etymology
Information on the origin of the term "purlin" is scant. The Oxford Dictionary suggests a French origin, with the earliest quote using a variation of purlin in 1447, though the accuracy of this claim has been disputed.
In wood construction
Purlin plate
A purlin plate in wood construction is also called an "arcade plate" in European English, "under purlin", and "principal purlin". The term plate means a major, horizontal, supporting timber. Purlin plates are beams which support the mid-span of rafters and are supported by posts. By supporting the rafters they allow longer spans than the rafters alone could span, thus allowing a wider building. Purlin plates are very commonly found in large old barns in North America. A crown plate has similarities to a purlin plate but supports collar beams in the middle of a timber-framed building.
Principal purlin
Principal purlins in wood construction, also called "major purlins" and "side purlins," are supported by principal rafters and support common rafters in what is known as a "double roof" (a roof framed with a layer of principal rafters and a layer of common rafters). Principal purlins are further classified by how they connect to the principal rafters: "through purlins" pass over the top; "butt purlins" tenon into the sides of the principal rafters; and "clasped purlins," of which only one historic U.S. example is known, are captured by a collar beam. Through purlins are further categorized as "trenched," "back," or "clasped;" butt purlins are classified as "threaded," "tenoned," and/or "staggered."
Common purlin
Common purlins in wood construction are also described as a "major-rafter minor-purlin system". Common purlins are typically "trenched through" the top sides (backs) of principal rafters and carry vertical roof sheathing (the key to identifying this type of roof system). Common purlin roofs in North America are found in areas settled by the English and may have been a new invention in the Massachusetts Bay Colony. No examples of framed buildings with common purlin roofs have been reported in England, although some stone barns in England have vertically boarded, common purlin roofs. Historically, these roofs are found in New England, with the highest concentration in Maine, and in isolated parts of New York and along the St. Lawrence River in Canada. One of the oldest surviving examples is in the Coffin House in Newbury, Massachusetts, from 1678. The appeal of a common purlin roof may be that it allows a board roof, that is, a roof of nothing but vertically laid boards with the seams covered by battens or another layer of boards.
In steel construction
In steel construction, the term purlin typically refers to roof framing members that span parallel to the building eave, and support the roof decking or sheeting. The purlins are in turn supported by rafters or walls. Purlins are most commonly used in Steel Framed Building Systems, where Z-shapes are utilized in a manner that allows flexural continuity between spans.
Steel industry practice assigns structural shapes representative designations for convenient shorthand description on drawings and documentation: channel sections, with or without flange stiffeners, are usually referenced as C shapes; channel sections without flange stiffeners are also referenced as U shapes; point-symmetric sections that are shaped similar to the letter Z are referenced as Z shapes. Section designations can be regional and even specific to a manufacturer. In steel building construction, secondary members such as purlins (roof) and girts (wall) are frequently cold-formed steel C, Z or U sections, or mill-rolled C sections.
Cold formed members can be efficient on a weight basis relative to mill rolled sections for secondary member applications. Additionally, Z sections can be nested for transportation bundling and, on the building, lapped at the supports to develop a structurally efficient continuous beam across multiple supports.
Gallery
Note: The sketches in this section reference terminology commonly used in the UK and Australia.
See also
Girt
Joist
Roof construction
Timber roof trusses
References
External links
An Illustrated glossary of roofs and roofing terms.
Roofs
Building engineering
Structural system
Timber framing | Purlin | [
"Technology",
"Engineering"
] | 977 | [
"Structural engineering",
"Timber framing",
"Building engineering",
"Structural system",
"Civil engineering",
"Roofs",
"Architecture"
] |
3,077,649 | https://en.wikipedia.org/wiki/Sociology%20of%20the%20body | Sociology of the body is a branch of sociology studying the representations and social uses of the human body in modern societies.
Early theories
According to Thomas Laqueur, prior to the eighteenth century the predominant model for a social understanding of the body was the "one-sex model" or "one-flesh model". On this view there was a single model of the body, and differences between the sexes and races were treated as differences of degree; for example, the vagina was seen simply as a weaker version of the penis and was even thought to emit sperm.
This was changed by the Enlightenment. In the sixteenth century, Europe began to participate in the slave trade, and in order to justify it a large quantity of literature was produced depicting the deviant sexuality and savagery of the African (Fanon, 1976). In the eighteenth century, the ideas of egalitarianism and universal and inalienable rights were becoming the intellectual norm; yet the subordination of women could not be justified within this framework.
To resolve this contradiction, the biology of incommensurability was created. It claimed, in essence, that the different sexes and races were better adjusted for different tasks, and it was therefore used to argue for the necessity of discrimination and subordination. For example, craniometry was used to portray people of African descent as less evolved than those of European descent (Gould, 1981).
This was also combined with the technological developments which were taking place, leading to people seeing the body as a machine and therefore understandable, classifiable and repairable, one of the first examples of this was the work of William Harvey in the early seventeenth century.
Another early key area of development was the Cartesian Dichotomy. This saw the mind and the body as separated and led to the principle of interaction between the two being an accepted theory on the body until the development of the Structuralist approach in the twentieth century.
The importance of studying the body
Especially important within the sociology of the body tradition is the sociology of health and illness. This is because illness may obviously reduce the level of normal functioning of the body. Also, increasingly people in society believe that illness is prevented by fulfilling activities leading to a healthy body (thus changing one's lifestyle) such as dieting and exercise, as well as avoiding anything that can cause damage to the body, like smoking. Moreover, medical science is now able to alter our bodies through plastic surgery, transplanting organs, reproductive aids and even change in an unborn baby's genetic structure.
Historical physical practices
Many researchers have worked on this topic in France. The first was probably Jean-Marie Brohm, who wrote a book titled Body and Politics in 1974 (Delarge), followed by numerous authors: Georges Vigarello wrote Le Corps redressé in 1978, Christian Pociello Sports et Société in 1981, André Rauch Le souci du corps in 1983, and Jacques Gleyse Archéologie de l'Education physique au XXe siècle en France in 1995 and L'Instrumentalisation du corps in 1997. Gleyse works especially on the topic of the links between words and flesh.
Various journals publish papers in this domain in France: STAPS International Journal of Sport Science and Physical Education, Corps & Culture, and Corps.
Body image and related disorders
Sociology of the body has become deeply affected by the way in which members of society view one another, which in turn shapes the way in which people, as individuals, view themselves. At one end of the spectrum are eating disorders such as anorexia nervosa, bulimia nervosa, binge-eating disorder, and body dysmorphic disorder; at the other end is a growing epidemic of obesity, especially in the US. Both extremes have become far more widespread over the last few decades, due in part to growing mass media coverage, the norms it promotes within society, and the ever-growing pressure to look and feel a certain way.
Anorexia nervosa is an eating disorder marked by the refusal to maintain a healthy weight due to an intense fear of weight gain, together with a disturbance in the perception of the individual's body shape and weight. The disorder is seen in severely underweight individuals who engage in purging, which includes self-induced vomiting and the misuse of laxatives, diuretics, or enemas. People who have anorexia nervosa often think about food and do have an appetite, but the fear of weight gain dominates the desire for food, leaving them starving. If these individuals do eat, they seek out low-calorie foods and also engage in excessive exercise.
Bulimia nervosa is marked by binge-eating and inappropriate methods of preventing weight gain, together with negative self-evaluation about body shape and weight. Binge-eating means eating, in a limited amount of time, an amount of food that is larger than most people would eat in one sitting. It happens mostly in private and is triggered by low self-esteem, stress, and depression. After these binge-eating episodes, people purge (self-induced vomiting). People with this disorder are typically of normal weight or overweight.
Binge eating disorder is similar to bulimia nervosa in the way that people engage in binge-eating episodes, however they do not purge after as patients with bulimia nervosa do.
Body dysmorphic disorder is an obsessive-compulsive related disorder in which a person constantly thinks negatively about their body and engages in repetitive behaviors such as comparing themselves to others. These thoughts and behaviors are unwanted, time-consuming, and intrusive, causing significant distress. The disorder affects both men and women, typically starting in adolescence. It may cause an individual to avoid social situations and isolate themselves from others. People with this disorder often seek treatments such as surgery or skin procedures to improve their appearance.
All of these disorders are rooted in feelings of shame and a desire to have control over one's body (Giddens, Duneier, Appelbaum, and Carr 2009). The individual feels inadequate and imperfect, and may have anxieties about how others perceive them which become focused on feelings about their body (Giddens, Duneier, Appelbaum, and Carr 2009).
Obesity is defined as abnormal or excessive fat accumulation that presents a risk to health, according to the World Health Organization. A BMI (body mass index) of over 25 is considered overweight, and over 30 obese. According to the Global Burden of Disease study, obesity has become an epidemic, with around 4 million deaths attributed to it in 2017.
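As a minimal illustration of the arithmetic behind these thresholds (the example weight and height below are hypothetical), BMI is computed as weight in kilograms divided by the square of height in metres:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

def category(bmi_value: float) -> str:
    """Classification using the thresholds cited above: over 25 overweight, over 30 obese."""
    if bmi_value >= 30:
        return "obese"
    if bmi_value >= 25:
        return "overweight"
    return "not overweight"

value = bmi(95, 1.75)                      # hypothetical person: 95 kg, 1.75 m tall
print(round(value, 1), category(value))    # 31.0 obese
```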
The reasons thought to lie behind obesity vary widely and are often debated. Some believe the reported rise in the population is a statistical artifact, some believe that childhood obesity is due to compositional factors, and others believe the problem is due to something called an "obesogenic environment". Sociologists are mostly perplexed by the persistence of negative attitudes towards overweight and obese individuals.
From a sociological perspective, persons of obesity are more likely to experience employment discrimination, discrimination by health care providers, and the daily experience of teasing, insult, and shame (Giddens, Duneier, Appelbaum, and Carr 2009).
People in or who influenced sociology of the body
Philippe Ariès
Alexandre Baril
Susan Bordo
Pierre Bourdieu
Judith Butler
Peter Conrad
Mary Douglas
Norbert Elias
Frantz Fanon
Mike Featherstone
Michel Foucault
Erving Goffman
Donna Haraway
Julia Kristeva
Thomas W. Laqueur
Deborah Lupton
Maurice Merleau-Ponty
Ann Oakley
John O'Neill
Susie Orbach
Nikolas Rose
Barbara Katz Rothman
Dorothy E. Smith
Bryan Turner
See also
embodiment theory in anthropology
sociology of health and illness, a subfield of sociology characterized in part by interacting with the sociology of the body more than medical sociology does
References
Body
Human body | Sociology of the body | [
"Physics"
] | 1,648 | [
"Human body",
"Physical objects",
"Matter"
] |
3,077,731 | https://en.wikipedia.org/wiki/Interdimensional | Interdimensional may refer to:
Interdimensional hypothesis
Interdimensional doorway
Interdimensional travel
Dimension | Interdimensional | [
"Physics"
] | 26 | [
"Geometric measurement",
"Dimension",
"Physical quantities",
"Theory of relativity"
] |
3,077,732 | https://en.wikipedia.org/wiki/Fenchel%27s%20theorem | In differential geometry, Fenchel's theorem is an inequality on the total absolute curvature of a closed smooth space curve, stating that it is always at least . Equivalently, the average curvature is at least , where is the length of the curve. The only curves of this type whose total absolute curvature equals and whose average curvature equals are the plane convex curves. The theorem is named after Werner Fenchel, who published it in 1929.
The Fenchel theorem is enhanced by the Fáry–Milnor theorem, which says that if a closed smooth simple space curve is nontrivially knotted, then the total absolute curvature is greater than .
Proof
Given a closed smooth curve $\gamma : [0, L] \to \mathbb{R}^3$ with unit speed, the velocity $\dot\gamma : [0, L] \to \mathbb{S}^2$ is also a closed smooth curve (called the tangent indicatrix). The total absolute curvature is its length $l(\dot\gamma)$.
The curve $\dot\gamma$ does not lie in an open hemisphere. If it did, there would be a unit vector $v$ such that $\dot\gamma \cdot v > 0$, so $0 = (\gamma(L) - \gamma(0)) \cdot v = \int_0^L \dot\gamma(s) \cdot v \, ds > 0$, a contradiction. This also shows that if $\dot\gamma$ lies in a closed hemisphere, then $\dot\gamma \cdot v \equiv 0$, so $\gamma$ is a plane curve.
Consider a point $\dot\gamma(s_0)$ such that the curves $\dot\gamma([0, s_0])$ and $\dot\gamma([s_0, L])$ have the same length. By rotating the sphere, we may assume $\dot\gamma(0)$ and $\dot\gamma(s_0)$ are symmetric about the axis through the poles. By the previous paragraph, at least one of the two curves $\dot\gamma([0, s_0])$ and $\dot\gamma([s_0, L])$ intersects the equator at some point $p$. We denote this curve by $\dot\gamma_0$. Then $l(\dot\gamma_0) = l(\dot\gamma)/2$.
We reflect $\dot\gamma_0$ across the plane through $\dot\gamma(0)$, $\dot\gamma(s_0)$, and the north pole, forming a closed curve containing the antipodal points $\pm p$, with length $2\,l(\dot\gamma_0)$. A curve connecting $p$ and $-p$ has length at least $\pi$, which is the length of the great semicircle between them. So $2\,l(\dot\gamma_0) \ge 2\pi$, and if equality holds then $\dot\gamma_0$ does not cross the equator.
Therefore $l(\dot\gamma) = 2\,l(\dot\gamma_0) \ge 2\pi$, and if equality holds then $\dot\gamma$ lies in a closed hemisphere, and thus $\gamma$ is a plane curve.
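As a numerical illustration (added here, not part of the statement or proof above), the total absolute curvature of a closed polygonal approximation of a curve is the sum of its exterior turning angles, which converges to the smooth quantity as the polygon is refined. The sketch below, in Python with NumPy, checks the bound for a plane circle (equality) and for a standard trefoil parametrization, which stays well above $4\pi$ as the Fáry–Milnor theorem predicts for a knot:

```python
import numpy as np

def total_absolute_curvature(points: np.ndarray) -> float:
    """Total turning angle of a closed polygon: sum of angles between successive edges.
    For a fine polygonal approximation this approaches the integral of |curvature| ds."""
    edges = np.diff(np.vstack([points, points[:1]]), axis=0)        # includes the closing edge
    unit = edges / np.linalg.norm(edges, axis=1, keepdims=True)     # samples of the tangent indicatrix
    cos_angles = np.clip(np.einsum("ij,ij->i", unit, np.roll(unit, -1, axis=0)), -1.0, 1.0)
    return float(np.sum(np.arccos(cos_angles)))

t = np.linspace(0, 2 * np.pi, 2000, endpoint=False)

circle = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
print(total_absolute_curvature(circle) / np.pi)     # ~2.0: equality case, a plane convex curve

trefoil = np.stack([np.sin(t) + 2 * np.sin(2 * t),
                    np.cos(t) - 2 * np.cos(2 * t),
                    -np.sin(3 * t)], axis=1)
print(total_absolute_curvature(trefoil) / np.pi)    # > 4: consistent with Fáry–Milnor for a knotted curve
```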
References
; see especially equation 13, page 49
Theorems in differential geometry
Theorems in plane geometry
Theorems about curves
Curvature (mathematics) | Fenchel's theorem | [
"Physics",
"Mathematics"
] | 373 | [
"Geometric measurement",
"Theorems in differential geometry",
"Physical quantities",
"Theorems in plane geometry",
"Theorems about curves",
"Theorems in geometry",
"Curvature (mathematics)"
] |
3,077,734 | https://en.wikipedia.org/wiki/Fenchel%27s%20duality%20theorem | In mathematics, Fenchel's duality theorem is a result in the theory of convex functions named after Werner Fenchel.
Let $f$ be a proper convex function on $\mathbb{R}^n$ and let $g$ be a proper concave function on $\mathbb{R}^n$. Then, if regularity conditions are satisfied,

$$\min_{x \in \mathbb{R}^n} \big( f(x) - g(x) \big) \;=\; \max_{p \in \mathbb{R}^n} \big( g^*(p) - f^*(p) \big),$$

where $f^*$ is the convex conjugate of $f$ (also referred to as the Fenchel–Legendre transform) and $g^*$ is the concave conjugate of $g$. That is,

$$f^*(p) = \sup_{x \in \mathbb{R}^n} \big\{ \langle x, p \rangle - f(x) \big\}, \qquad g^*(p) = \inf_{x \in \mathbb{R}^n} \big\{ \langle x, p \rangle - g(x) \big\}.$$
Mathematical theorem
Let $X$ and $Y$ be Banach spaces, let $f : X \to \mathbb{R} \cup \{+\infty\}$ and $g : Y \to \mathbb{R} \cup \{+\infty\}$ be convex functions and let $A : X \to Y$ be a bounded linear map. Then the Fenchel problems:

$$p^* = \inf_{x \in X} \{ f(x) + g(Ax) \}, \qquad d^* = \sup_{y^* \in Y^*} \{ -f^*(A^* y^*) - g^*(-y^*) \}$$

satisfy weak duality, i.e. $p^* \ge d^*$. Note that $f^*, g^*$ are the convex conjugates of $f, g$ respectively, and $A^*$ is the adjoint operator. The perturbation function for this dual problem is given by $F(x, y) = f(x) + g(Ax - y)$.
Suppose that $f$, $g$, and $A$ satisfy either
$f$ and $g$ are lower semi-continuous and $0 \in \operatorname{core}(\operatorname{dom} g - A \operatorname{dom} f)$, where $\operatorname{core}$ is the algebraic interior and $\operatorname{dom} h$, where $h$ is some function, is the set $\{ z : h(z) < +\infty \}$, or
$A \operatorname{dom} f \cap \operatorname{cont} g \ne \emptyset$, where $\operatorname{cont} g$ denotes the points where the function $g$ is continuous.
Then strong duality holds, i.e. $p^* = d^*$. If $d^* \in \mathbb{R}$ then the supremum is attained.
One-dimensional illustration
In the following figure, the minimization problem on the left side of the equation is illustrated. One seeks to vary x such that the vertical distance between the convex and concave curves at x is as small as possible. The position of the vertical line in the figure is the (approximate) optimum.
The next figure illustrates the maximization problem on the right hand side of the above equation. Tangents are drawn to each of the two curves such that both tangents have the same slope p. The problem is to adjust p in such a way that the two tangents are as far away from each other as possible (more precisely, such that the points where they intersect the y-axis are as far from each other as possible). Imagine the two tangents as metal bars with vertical springs between them that push them apart and against the two parabolas that are fixed in place.
Fenchel's theorem states that the two problems have the same solution. The points having the minimum vertical separation are also the tangency points for the maximally separated parallel tangents.
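As an added numerical sketch (not part of the original illustration), both sides of the duality can be approximated by brute force for a hypothetical one-dimensional pair such as $f(x) = x^2$ and $g(x) = -(x-1)^2$, for which the common optimal value is $1/2$:

```python
import numpy as np

# Hypothetical one-dimensional test case: f convex, g concave.
f = lambda x: x ** 2
g = lambda x: -(x - 1.0) ** 2

x = np.linspace(-5.0, 5.0, 20001)   # grid for the inner optimisations
p = np.linspace(-5.0, 5.0, 2001)    # grid of slopes for the dual problem

# Convex conjugate of f and concave conjugate of g, evaluated by brute force on the grid.
f_star = np.array([np.max(pi * x - f(x)) for pi in p])
g_star = np.array([np.min(pi * x - g(x)) for pi in p])

primal = np.min(f(x) - g(x))        # smallest vertical gap between the two curves
dual = np.max(g_star - f_star)      # largest gap between intercepts of parallel tangents

print(primal, dual)                 # both approximately 0.5 for this choice of f and g
```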
See also
Legendre transformation
Convex conjugate
Moreau's theorem
Wolfe duality
Werner Fenchel
References
Theorems in analysis
Convex optimization | Fenchel's duality theorem | [
"Mathematics"
] | 485 | [
"Mathematical analysis",
"Theorems in mathematical analysis",
"Mathematical theorems",
"Mathematical problems"
] |
3,077,796 | https://en.wikipedia.org/wiki/Fluorodeoxyglucose%20%2818F%29 |
[18F]Fluorodeoxyglucose (INN), or fluorodeoxyglucose F 18 (USAN and USP), also commonly called fluorodeoxyglucose and abbreviated [18F]FDG, 2-[18F]FDG or FDG, is a radiopharmaceutical, specifically a radiotracer, used in the medical imaging modality positron emission tomography (PET). Chemically, it is 2-deoxy-2-[18F]fluoro-D-glucose, a glucose analog, with the positron-emitting radionuclide fluorine-18 substituted for the normal hydroxyl group at the C-2 position in the glucose molecule.
The uptake of [18F]FDG by tissues is a marker for the tissue uptake of glucose, which in turn is closely correlated with certain types of tissue metabolism. After [18F]FDG is injected into a patient, a PET scanner can form two-dimensional or three-dimensional images of the distribution of [18F]FDG within the body.
Since its development in 1976, [18F]FDG had a profound influence on research in the neurosciences. The subsequent discovery in 1980 that [18F]FDG accumulates in tumors underpins the evolution of PET as a major clinical tool in cancer diagnosis. [18F]FDG is now the standard radiotracer used for PET neuroimaging and cancer patient management.
The images can be assessed by a nuclear medicine physician or radiologist to provide diagnoses of various medical conditions.
History
In 1968, Dr. Josef Pacák, Zdeněk Točík and Miloslav Černý at the Department of Organic Chemistry, Charles University, Czechoslovakia were the first to describe the synthesis of FDG. Later, in the 1970s, Tatsuo Ido and Al Wolf at the Brookhaven National Laboratory were the first to describe the synthesis of FDG labeled with fluorine-18. The compound was first administered to two normal human volunteers by Abass Alavi in August, 1976 at the University of Pennsylvania. Brain images obtained with an ordinary (non-PET) nuclear scanner demonstrated the concentration of [18F]FDG in that organ (see history reference below).
Beginning in August 1990, and continuing throughout 1991, a shortage of oxygen-18, a raw material for FDG, made it necessary to ration isotope supplies. Israel's oxygen-18 facility had shut down due to the Gulf War, and the U.S. government had shut down its isotopes of carbon, oxygen and nitrogen facility at Los Alamos National Laboratory, leaving Isotec as the main supplier.
Synthesis
[18F]FDG was first synthesized via electrophilic fluorination with [18F]F2. Subsequently, a "nucleophilic synthesis" was devised with the same radioisotope.
As with all radioactive 18F-labeled radioligands, the fluorine-18 must be made initially as the fluoride anion in a cyclotron. Synthesis of complete [18F]FDG radioactive tracer begins with synthesis of the unattached fluoride radiotracer, since cyclotron bombardment destroys organic molecules of the type usually used for ligands, and in particular, would destroy glucose.
Cyclotron production of fluorine-18 may be accomplished by bombardment of neon-20 with deuterons, but usually is done by proton bombardment of 18O-enriched water, causing a (p,n) reaction (sometimes called a "knockout reaction", a common type of nuclear reaction with high probability, in which an incoming proton "knocks out" a neutron) in the 18O. This produces "carrier-free" dissolved [18F]fluoride ([18F]F−) ions in the water. The 109.8-minute half-life of fluorine-18 makes rapid and automated chemistry necessary after this point.
Anhydrous fluoride salts, which are easier to handle than fluorine gas, can be produced in a cyclotron. To achieve this chemistry, the [18F]F− is separated from the aqueous solvent by trapping it on an ion-exchange column, and eluted with an acetonitrile solution of 2,2,2-cryptand and potassium carbonate. Evaporation of the eluate gives [(crypt-222)K]+ [18F]F− (2) .
The fluoride anion is nucleophilic, so anhydrous conditions are required to avoid competing reactions involving hydroxide, which is also a good nucleophile. The use of the cryptand to sequester the potassium ions avoids ion-pairing between free potassium and fluoride ions, rendering the fluoride anion more reactive.
Intermediate 2 is treated with the protected mannose triflate (1); the fluoride anion displaces the triflate leaving group in an SN2 reaction, giving the protected fluorinated deoxyglucose (3). Base hydrolysis removes the acetyl protecting groups, giving the desired product (4) after removing the cryptand via ion-exchange.
Mechanism of action, metabolic end-products, and metabolic rate
[18F]FDG, as a glucose analog, is taken up by high-glucose-using cells such as brain, brown adipocytes, kidney, and cancer cells, where phosphorylation prevents the glucose from being released again from the cell, once it has been absorbed. The 2-hydroxyl group (–OH) in normal glucose is needed for further glycolysis (metabolism of glucose by splitting it), but [18F]FDG is missing this 2-hydroxyl. Thus, in common with its sister molecule 2-deoxy-D-glucose, FDG cannot be further metabolized in cells. The [18F]FDG-6-phosphate formed when [18F]FDG enters the cell cannot exit the cell before radioactive decay. As a result, the distribution of [18F]FDG is a good reflection of the distribution of glucose uptake and phosphorylation by cells in the body.
The fluorine in [18F]FDG decays radioactively via positron emission (β+ decay) to 18O−. After picking up a proton H+ from a hydronium ion in its aqueous environment, the molecule becomes glucose-6-phosphate labeled with harmless nonradioactive "heavy oxygen" in the hydroxyl at the C-2 position. The new presence of a 2-hydroxyl now allows it to be metabolized normally in the same way as ordinary glucose, producing non-radioactive end-products.
Although in theory all [18F]FDG is metabolized as above with a radioactivity elimination half-life of 110 minutes (the same as that of fluorine-18), clinical studies have shown that the radioactivity of [18F]FDG partitions into two major fractions. About 75% of the fluorine-18 activity remains in tissues and is eliminated with a half-life of 110 minutes, presumably by decaying in place to O-18 to form [18O]O-glucose-6-phosphate, which is non-radioactive (this molecule can soon be metabolized to carbon dioxide and water, after nuclear transmutation of the fluorine to oxygen ceases to prevent metabolism). Another fraction of [18F]FDG, representing about 20% of the total fluorine-18 activity of an injection, is excreted renally by two hours after a dose of [18F]FDG, with a rapid half-life of about 16 minutes (this portion makes the renal-collecting system and bladder prominent in a normal PET scan). This short biological half-life indicates that this 20% portion of the total fluorine-18 tracer activity is eliminated renally much more quickly than the isotope itself can decay. Unlike normal glucose, FDG is not fully reabsorbed by the kidney. Because of this rapidly excreted urine 18F, the urine of a patient undergoing a PET scan may therefore be especially radioactive for several hours after administration of the isotope.
All radioactivity of [18F]FDG, both the 20% which is rapidly excreted in the urine made during the first several hours after the exam, and the 80% which remains in the patient, decays with a half-life of 110 minutes (just under two hours). Thus, within 24 hours (13 half-lives after the injection), the radioactivity in the patient and in any initially voided urine which may have contaminated bedding or objects after the PET exam will have decayed to 2−13 = 1/8192 of the initial radioactivity of the dose. In practice, patients who have been injected with [18F]FDG are told to avoid the close vicinity of especially radiation-sensitive persons, such as infants, children and pregnant women, for at least 12 hours (roughly 7 half-lives, or decay to about 1/128 of the initial radioactive dose).
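The decay arithmetic above can be reproduced with a short calculation. The sketch below (a Python illustration added here, not a dosimetry tool) uses only the 109.8-minute physical half-life quoted earlier:

```python
# Fraction of fluorine-18 activity remaining after a given time,
# using the 109.8-minute physical half-life quoted above.
def remaining_fraction(minutes: float, half_life_min: float = 109.8) -> float:
    return 0.5 ** (minutes / half_life_min)

for hours in (2, 12, 24):
    print(f"{hours:>2} h: {remaining_fraction(hours * 60):.5f} of the injected activity remains")
# ~0.47 after 2 h, ~0.011 after 12 h (roughly 7 half-lives, i.e. about 1/128),
# and ~0.00011 after 24 h (about 13 half-lives, i.e. roughly 1/8192).
```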
Production
Alliance Medical and Siemens Healthcare are the only producers in the United Kingdom. A dose of FDG in England costs about £130. In Northern Ireland, where there is a single supplier, doses cost up to £450. IBA Molecular North America and Zevacor Molecular, both of which are owned by Illinois Health and Science (IBAM having been purchased as of 1 August 2015), Siemens' PETNET Solutions (a subsidiary of Siemens Healthcare), and Cardinal Health are producers in the U.S.
Distribution
The labeled [18F]FDG compound has a relatively short shelf life which is dominated by the physical decay of fluorine-18 with a half-life of 109.8 minutes, or slightly less than two hours. Still, this half life is sufficiently long to allow shipping the compound to remote PET scanning facilities, in contrast to other medical radioisotopes like carbon-11. Due to transport regulations for radioactive compounds, delivery is normally done by specially licensed road transport, but means of transport may also include dedicated small commercial jet services. Transport by air allows expanding the distribution area around a [18F]FDG production site to deliver the compound to PET scanning centres even hundreds of miles away.
Recently, on-site cyclotrons with integral shielding and portable chemistry stations for making [18F]FDG have accompanied PET scanners to remote hospitals. This technology holds some promise in the future, for replacing some of the scramble to transport [18F]FDG from site of manufacture to site of use.
Applications
In PET imaging, [18F]FDG is primarily used for imaging tumors in oncology, where a static [18F]FDG PET scan is performed and the tumor [18F]FDG uptake is analyzed in terms of Standardized Uptake Value (SUV). FDG PET/CT can be used for the assessment of glucose metabolism in the heart and the brain. [18F]FDG is taken up by cells, and subsequently phosphorylated by hexokinase (whose mitochondrial form is greatly elevated in rapidly growing malignant tumours). Phosphorylated [18F]FDG cannot be further metabolised and is thus retained by tissues with high metabolic activity, such as most types of malignant tumours. As a result, FDG-PET can be used for diagnosis, staging, and monitoring treatment of cancers, particularly in Hodgkin's disease, non-Hodgkin lymphoma, colorectal cancer, breast cancer, melanoma, and lung cancer. It has also been approved for use in diagnosing Alzheimer's disease.
In body-scanning applications in searching for tumor or metastatic disease, a dose of [18F]-FDG in solution (typically 5 to 10 millicuries or 200 to 400 MBq) is typically injected rapidly into a saline drip running into a vein, in a patient who has been fasting for at least six hours, and who has a suitably low blood sugar. (This is a problem for some diabetics; usually PET scanning centers will not administer the isotope to patients with blood glucose levels over about 180 mg/dL = 10 mmol/L, and such patients must be rescheduled). The patient must then wait about an hour for the sugar to distribute and be taken up into organs which use glucose – a time during which physical activity must be kept to a minimum, in order to minimize uptake of the radioactive sugar into muscles (this causes unwanted artifacts in the scan, interfering with reading especially when the organs of interest are inside the body vs. inside the skull). Then, the patient is placed in the PET scanner for a series of one or more scans which may take from 20 minutes to as long as an hour (often, only about one-quarter of the body length may be imaged at a time).
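For the SUV analysis mentioned above, a commonly used convention (assumed here, since definitions vary between sites) is the tissue activity concentration divided by the injected activity per unit of body weight, with the injected activity decay-corrected to scan time. A minimal sketch with purely hypothetical numbers:

```python
# Simple decay-corrected SUV (standardized uptake value) sketch.
# Assumed convention: SUV = tissue activity concentration / (injected activity / body weight).

def decay_correct(activity: float, minutes_elapsed: float, half_life_min: float = 109.8) -> float:
    return activity * 0.5 ** (minutes_elapsed / half_life_min)

def suv(tissue_kbq_per_ml: float, injected_mbq: float, minutes_to_scan: float, weight_kg: float) -> float:
    injected_at_scan_kbq = decay_correct(injected_mbq * 1000.0, minutes_to_scan)   # MBq -> kBq
    weight_g = weight_kg * 1000.0
    return tissue_kbq_per_ml / (injected_at_scan_kbq / weight_g)   # assumes ~1 g per mL of tissue

# Hypothetical case: 300 MBq injected, scan 60 min later, 70 kg patient, lesion at 15 kBq/mL.
print(round(suv(15.0, 300.0, 60.0, 70.0), 2))   # roughly 5.1
```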
References
Aldoses
Deoxy sugars
Medicinal radiochemistry
Neuroimaging
Organofluorides
PET radiotracers
Pyranoses
Radiopharmaceuticals | Fluorodeoxyglucose (18F) | [
"Chemistry"
] | 2,839 | [
"Carbohydrates",
"Medicinal radiochemistry",
"Deoxy sugars",
"PET radiotracers",
"Radiopharmaceuticals",
"Medicinal chemistry",
"Chemicals in medicine"
] |
3,078,180 | https://en.wikipedia.org/wiki/Kiss%20%28cryptanalysis%29 | In cryptanalysis, a kiss is a pair of identical messages sent using different ciphers, one of which has been broken. The term was used at Bletchley Park during World War II. A deciphered message in the breakable system provided a "crib" (piece of known plaintext) which could then be used to read the unbroken messages. One example was where messages read in a German meteorological cipher could be used to provide cribs for reading the difficult 4-wheel Naval Enigma cipher.
cribs from re-encipherments ... were known as 'kisses' in Bletchley Park parlance because the relevant signals were marked with 'xx'
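As a toy illustration (added here; the real systems at Bletchley were of course far more complex), suppose the same message is sent under an already-broken Caesar shift and under a repeating-key XOR cipher standing in for the unbroken system. Reading the weak traffic yields a crib, and lining the crib up against the other ciphertext exposes the keystream. All names and values below are made up:

```python
def caesar_decrypt(ciphertext: str, shift: int) -> str:
    return "".join(chr((ord(c) - ord("A") - shift) % 26 + ord("A")) for c in ciphertext)

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = "WEATHERREPORTCALMSEAS"            # the same message sent on both networks

# Weak system (already broken): a Caesar shift of 3.
weak_ct = "".join(chr((ord(c) - ord("A") + 3) % 26 + ord("A")) for c in plaintext)
# Strong system (not yet broken): repeating-key XOR with an unknown key.
secret_key = b"ENIGMA"
strong_ct = xor_bytes(plaintext.encode(), secret_key)

# The "kiss": reading the weak system yields a crib for the strong one,
# and XORing the crib against the matching ciphertext exposes the keystream.
crib = caesar_decrypt(weak_ct, 3)
recovered_keystream = xor_bytes(strong_ct, crib.encode())
print(crib)                                     # WEATHERREPORTCALMSEAS
print(recovered_keystream[:6])                  # b'ENIGMA' -- the key falls out
```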
See also
Cryptanalysis of the Enigma
Known-plaintext attack
References
Smith, Michael and Erskine, Ralph (editors): Action this Day (2001, Bantam London)
Bletchley Park
Classical cryptography
Cryptographic attacks | Kiss (cryptanalysis) | [
"Technology"
] | 192 | [
"Cryptographic attacks",
"Computer security exploits"
] |
3,078,210 | https://en.wikipedia.org/wiki/Radio%20teleswitch | A radio teleswitch is a device used in the United Kingdom primarily to allow electricity suppliers to switch large numbers of electricity meters between different tariff rates, by broadcasting an embedded signal in broadcast radio signals. Radio teleswitches are also used to switch on/off consumer appliances to make use of cheaper differential tariffs such as Economy 7.
Service role
The typical use of a teleswitch is to manage the start and end times of off-peak charging periods associated with tariffs such as Economy 7 and Economy 10. This includes switching between 'peak' and 'off-peak' meter registers as well as controlling the supply to dedicated off-peak loads such as night storage heating. The use of dynamic switching instead of a fixed timer allows some additional demand management, such as by flexing start and finish times for electric heating loads according to prevailing overall demand levels.
Some suppliers also offer more sophisticated heating control using the radio teleswitch network. For example, Scottish Power 'Weathercall' and SSE's 'Total Heat Total Control' both dynamically vary the length of time storage heating is energised each night depending on the forecast temperature for the following day to help maintain a consistent household temperature.
Teleswitching has also been used to help level out demand in areas where the supply network is close to capacity. In the 1990s, Manweb used such a system to provide different households with different off-peak periods on a weekly alternating basis. By spreading out the high peak demand associated with electric storage heating in Mid Wales, the company avoided upgrading costs of over a million pounds, and £200,000 a year in reduced use-of-system charges.
In the north of Scotland, the radio teleswitch service is also used to help control the local electricity distribution network for resilience purposes.
Operation
Each of the user companies (the RTS Users, or Service Providers) has its own database on the Central Teleswitch Control Unit (CTCU), which is an HPE Integrity computer running OpenVMS on IA-64 for reliability and clustering technology to minimise downtime.
The database defines how each group of teleswitches belonging to the user-company will control the loads and meter registers connected to it. The CTCU uses the database and certain rules to generate and control a continuous string of messages, which is forwarded to the BBC for transmission.
Although each message will be received by all installed teleswitches, the unique user and group-codes carried by the message ensure that only teleswitches carrying the same combination of codes will act on it.
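The addressing idea can be sketched in a few lines of code. The message fields and numeric codes below are hypothetical and purely illustrative; the real over-air format is defined by BS 7647 and the BBC phase-modulation specification listed in the references. Every switch hears every broadcast message but acts only when both codes match its own programming:

```python
from dataclasses import dataclass, field

@dataclass
class TeleswitchMessage:
    # Hypothetical fields for illustration only; not the actual over-air format.
    user_code: int       # identifies the electricity supplier / RTS user
    group_code: int      # identifies a group of teleswitches belonging to that user
    commands: dict       # e.g. {"register": "off-peak", "storage_heating": True}

@dataclass
class Teleswitch:
    user_code: int
    group_code: int
    state: dict = field(default_factory=dict)

    def receive(self, msg: TeleswitchMessage) -> None:
        # Every switch hears every broadcast, but only acts on a matching code pair.
        if (msg.user_code, msg.group_code) == (self.user_code, self.group_code):
            self.state.update(msg.commands)

switch = Teleswitch(user_code=7, group_code=42)
switch.receive(TeleswitchMessage(7, 42, {"register": "off-peak", "storage_heating": True}))
switch.receive(TeleswitchMessage(7, 3, {"register": "peak"}))   # ignored: different group
print(switch.state)   # {'register': 'off-peak', 'storage_heating': True}
```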
History
The Radio Teleswitch Service (RTS) has its origins in the energy management projects initiated in the United Kingdom by the Electricity Council in the early 1980s. Three projects investigated the feasibility of using the telephone network, the distribution network and national radio for large scale energy management purposes. The radio teleswitch project was chaired by Walter Waring, deputy chairman of Eastern Electricity, and supported by the BBC.
The idea of phase modulating control and data signals onto the low frequency carrier wave used for broadcasting the BBC Radio 4 programmes was tested. The BBC was satisfied that there was no discernible distortion of its broadcast service and no infringement of its Royal Charter.
The technique won the Queen's Award for Technology, while its application for controlling consumer tariffs and loads was approved by the Home Office. The project was funded by the CEGB and the mainland electricity boards, which were each allocated one of sixteen message channels. One channel was reserved for testing and the final one was allocated to Northern Ireland when it joined the project.
The Central Teleswitch Control Unit (CTCU) system was updated in 2007 to replace the obsolete computing hardware with brand new, modern, fully supported equipment. The old DEC MicroVAX machines were replaced with HP Integrity 2660s. The operating system has also been upgraded to OpenVMS 8.4. No effort has been made to upgrade the obsolete transmitter hardware. All communications lines have also been updated and Internet access has been introduced in addition to the dial-in access by modem.
The new system went live at the end of January 2008 and the update has markedly improved the performance and stability of the system.
In 2016, it was reported that over 1.6 million teleswitches were in use. Around 190,000 were reported as being used dynamically to control loads such as heating on a day-to-day basis, with the remainder following generally fixed switching times that were updated less frequently.
In early 2020, 1.4 million electricity meter 'MPAN' locations were thought to still be using radio teleswitching - although the actual number of individual premises may be lower as some single-household installations use two MPAN references for historical technical reasons.
Formal agreements
The Electricity Association (EA), which was previously known as the Electricity Council, entered into a renewed formal agreement with the BBC in 1996 as an agent of the users. The EA had also negotiated an agreement with the National Grid Company (NGC) concerning the servicing of the CTCU. Since 2004 the functions of EA regarding this contract have been taken over by the Energy Networks Association.
Transmitter and service obsolescence
The Radio Teleswitch Service is broadcast alongside the longwave output of BBC Radio 4 from the Droitwich Transmitting Station.
In October 2011, the BBC stated that the Droitwich transmitter, including Radio 4's longwave service and the Radio Teleswitch signal, would cease to operate when one of the last two valves broke, and that no effort would be made to manufacture more valves or to install a replacement longwave transmitter.
The BBC stated that its plan was simply to cease broadcasting on longwave permanently once the Droitwich transmitter failed. It has been reported that the BBC estimated that fewer than ten spare compatible valves existed in the world, and that each valve had a working life of between one and ten years.
A 50 kW longwave transmitter transmitting a 1000 Hz wide signal could take over the teleswitch role with adequate nationwide coverage—but some legal provision would probably have to be made to make this possible.
In 2016, the Energy Networks Association consulted on the future of the service, highlighting that no agreement for its operation past the end of 2017 existed. While most suppliers acknowledged the system could ultimately be replaced by smart meter functionality, there was general agreement that the system would need to continue until at least 2020 to avoid significant inconvenience to customers and agreement was secured for the service to continue until the end of March 2020.
Several further extensions have been negotiated between the Energy Networks Association and the BBC from 2019 onwards, due to the lack of progress in developing suitable smart meter alternatives. In late 2020 SSE, one of the major users of the service, began informing its customers that teleswitched meters would be withdrawn and replaced with alternative solutions.
The availability of radio teleswitching has been progressively extended, and as of early 2021 the service was due to remain in operation until at least March 2024; the longwave radio signal carrying it (in England) was, however, expected to be switched off in summer 2025.
See also
Economy 7
Economy 10
References and more information
Adapted from: "An introduction to the Radio Teleswitch Service". Shau Sumar - EA internal document, 2003.
D.T. Wright: L.F. radio-data: specification of BBC phase-modulated transmissions on long-waves, Report1984-19, BBC Research & Development, January 1984
British Standards: BS 7647:1993 "Radio teleswitches for tariff and load control"
Energy Networks Association
Electric power in the United Kingdom
Radio technology
British inventions | Radio teleswitch | [
"Technology",
"Engineering"
] | 1,571 | [
"Information and communications technology",
"Telecommunications engineering",
"Radio technology"
] |
3,078,218 | https://en.wikipedia.org/wiki/Castellane | Castellane (; Provençal Occitan: Castelana) is a commune in the Alpes-de-Haute-Provence department in the Provence-Alpes-Côte d'Azur region in Southeastern France. With a population of 1,470 (2019), it has the distinction of being France's least populated subprefecture, ahead of Largentière in Ardèche.
Its inhabitants are referred to as Castellanais (masculine) and Castellanaises (feminine).
Geography
Castellane is a very old city located upstream of the Gorges du Verdon. The city is above sea level.
The Roc, or the Roc of Notre-Dame, overlooks the city from above. It has been occupied since the High Middle Ages and is a registered historical site. It can be accessed from the centre of town behind the old Church of St. Andrew. The walk takes about 25 minutes.
Two reservoirs are located in the territory of Castellane:
Lake Castillon
Lake Chaudanne, created by the dam of the same name.
The area has two water gaps:
the clue de Taulanne containing the Asse de Blieux river and the Route Napoléon along its banks.
the clue de Chasteuil, which contains the Verdon valley.
The GR 4 hiking trail crosses through the town. The neighboring municipalities are:
Climate
Castellane features a warm-summer mediterranean climate (Köppen: Csb), bordering on a mediterranean continental climate (Köppen: Dsb). Summers are warm to hot and dry, while winters are cold and snowy.
Geology
The commune is part of the Jurassic limestone area of the French Prealps in Provence, formed by the tectonic upheaval of the Alps during the Tertiary. Limestone deposits run the length of the Verdon river, giving rise to spectacular gorges formed through karst erosion. Around Castellane older formations surface, such as gypsum and Triassic black marl.
Neighboring mountains and passes
Castellard ()
Pré Chauvin ()
, the route taken by Napoleon
Blaches Pass, on the road to Saint-André-les-Alpes
Environment
The commune contains of wood and forests (about 59% of its surface area). The extends over the western part of the commune (former communes of Taulanne, Chasteuil and Villars-Brandis).
History of place names
Castellane's name appeared in texts for the first time circa 965–977 as Petra Castellana. The name breaks down into three Occitan terms, pèira, castel and the suffix -ana, which means fortified rock and village, and could be translated as "Castellane rock", in other words, the rock that has a fortified village, or simply the stronghouse or stronghold. Castellane is called Castelana in the Provençal dialect in the classical norm, or Castelano in the Mistralian.
The former commune of Castillon, now beneath the lake, appeared around 1300 as de Castilhone, an Occitan word for a small castle.
The first part of the name "Chasteuil" is obscure, but the second, -ialo, is a Celtic suffix for "clearing".
The village of Robion shares its name with the river that flows through it, which takes its source in the Massif du Robion to the east of the village. The name in Rubione, which first appeared for it in 1045, is derived from the vulgar Latin robigonem, a distortion of the classical Latin robiginem for rust, according to Ernest Nègre. Charles Rostaing, on the other hand, believed that the name might predate the Gauls and designate a steep-sided ravine.
Taloire was first mentioned in 1095, when its château was given to the Abbey of St. Victor, Marseille, which became the lay seigneur of the fief. It derived its name from the Occitan talador, meaning soldiers specially recruited to devastate the land of an adversary. Adding the -ia suffix designates either a land inhabited by these devastators, or a land devastated by the taladors. Rostaing thought this name also probably predated the Gauls, and a tautology has been noted: *Tal- and *Tor-, the two roots contained in the name "Taloire", both designate a mountain.
Economy
In 2017, the unemployment rate of the Castellane population was 20.1%, up from 12.4% in 2007. The largest sector of activity in December 2015 was the public sector (administration, education, health, social work) with 45 businesses and 280 paid workers.
History
Prehistory and antiquity
The inhabitants of Castellane are known back to a very early date. Neolithic nomads came through the area; the oldest traces date back to 6000 BC. A grotto with cave paintings exists in the commune but its location is kept confidential to protect the artwork; Bronze Age tombs have also been discovered in a cave in Castillon. Ligurian tribes occupied the territory. The Suetrii or Suètres later created an oppidum named Ducelia, near the Roc. They mined salt in the area and sold it. Most of the communes attached to Castellane today were peopled by the Suetrii. Taulanne was the exception, inhabited by the people who had their capital in Senez. (Their name is uncertain and Roman historians differ on the subject.)
The region was conquered by Augustus in 14 BC. Castellane was attached to the Roman province of Alpes-Maritimes and began to grow. Homes were established in the plain, and the city was named Civitas Saliniensum (city of salt merchants). The name of the town later became Salinae.
Several roads left from or passed through the town:
Via Salinaria, going west towards Durance and the current Château-Arnoux
Via Ventiana, from Cimiez to Sisteron by way of Vence; a carved milestone from the beginning of the third century was found on this road in the Saint-Pierre pass six miles from Castellane
a fork towards Via Aurelia and Via Domitia
a road towards Entrevaux (Glandèves) by way of Briançonnet
Residents first settled on the bank of the Verdon to mine the saline sources which are still visible today. A treasure of antiquity, 34 gold coins issued by Arcadius and Honorius, were discovered in 1797 in Taloire. A limestone funerary stele for one Julius Trofimus, dating back to Roman times, was discovered near the old chapel of Notre-Dame-du-Plan. Until 1942 it was used in a retaining wall, and can be found today in the public garden of the savings bank. The inscription is included in the (ILGN) collection of Gallia Narbonensis Roman inscriptions and is listed on the historic register.
A diocese was founded in the fifth century; its seat was transferred to Senez before the 6th century, however, and despite all attempts to have it return to Castellane it remained there until it was closed in the French Revolution.
Middle Ages
In the early ninth century, the area around the current town of Castellane was inhabited by only 84 people. In 812 the area was invaded by Moors, also sometimes called Saracens; they destroyed Salines, the early settlement near the salt marshes. The inhabitants of Salines took refuge on the summit of the Roc and built a stronghold there, building the first Notre-Dame there, inaugurated in 852, in thanks for the refuge. Some vestiges of this site, which was named Sinaca in 813 and Petra Castellana in 965, are still visible at the place now known as Le Signal. People later also settled at the foot of the Roc in the valley bottom.
In 852 a lord of Castellane, possibly named Guillaume won a victory against the Moors and put together a barony of 46 village communities stretching from Cotignac in Var to the south, to Thorame-Haute in the north, and from Soleilhas to Esparron-de-Verdon. The Barony was considered a small sovereign state ruled by hereditary sovereign barons. Over time Castellane came to have three co-existing sites:
the Rupes, on top of the Rock, was soon entirely occupied by the castle built in 977 by Pons-Arbaud and Aldebert
the Castrum, halfway up, on a larger site but easy to defend;
the Burgum, current site of Castellane, easily accessible, facilitating trade.
In 1189, Baron de Castellane Boniface III was attacked by Alfonso I of Provence. He had refused to do homage, explaining that he was a vassal of the Holy Roman Empire. But in the face of brute force he was forced to bend the knee. Another war broke out in 1227 between Provence and Boniface of Castellane, presumably the son. In 1257 Charles II, then still just Prince of Salerno, gave the castle to the Augustinian monks. In 1262, Charles I of Anjou defeated Boniface of Castellane and made Castellane the seat of a baile. In the thirteenth century, the family of Castellane lost possession of the city to the Counts of Provence. To protect themselves from attack, in addition to the protections for the city, Castellane built a series of fortified outposts at Demandolx, La Garde, Rougon, and perhaps Taloire. In 1300 a small Jewish community of eight households was established in the area.
The Black Death reached Castellane in 1348, and was followed by a devastating flood of the Verdon River. The capture and death of Queen Joanna I of Naples created a succession issue in the county of Provence, the cities of the Union of Aix (1382–1387) supporting Charles de Duras against Louis I of Anjou.
Lord of Castellane Louis d'Anduse, also often known as Lord of La Voulte, sided with the Duke of Anjou from the spring of 1382, supporting him on condition he participate in an expedition to rescue the queen. Castellane itself initially also backed the Duke, but changed allegiance in February 1386 after the Duke died, and rallied to the cause of the queen-regent, Marie de Blois. She negotiated with them, hoping to set off a chain of similar declarations of support. Guillaume de Forcalquier and his son Jean Raynaut, lords of Eoulx, submitted to the Duchess in July 1386. In 1390, the surrounding territory and the village of Taulanne were ravaged; the attackers failed to take the city, but destroyed the wooden bridge over the Verdon River.
The wooden bridge over the Verdon was rebuilt in stone in the 15th century. A monastery took care of its maintenance. The bridge on the Place Castellane put Castellane on the frequently travelled routes between the Mediterranean and the bridge over the Durance river at Sisteron. The bridge toll for the Verdon and the fair began at the end of the Middle Ages. The fair continued until the end of the Ancien Régime, assuring the town relative prosperity.
In the fifteenth century, a community settled on the present site of Taloire. In the middle of the fifteenth century, the upper village was completely abandoned in favor of the lowland site.
Renaissance
At the end of the Middle Ages, the transhumance system developed enormously, herds of sheep from the coast going up into the high Alpine valleys in the summer. Some drailles (herding routes) crossed the Castellane bridge, where a toll was instituted. At the beginning of the 16th century, between 78,000 and 120,000 head were crossing each year during May and June.
The imperial army of Charles V pillaged the town en 1536.
Religious unrest broke out in 1559. Brun de Caille had converted some of the townspeople of Castellane, who gathered at his home for services. A sectarian skirmish took place at his home. A Huguenot captain from another rich Protestant family sacked the town in the summer of 1560, then established himself there after reaching an armistice with the governor of Provence, the count of Tende, Claude of Savoy. The town was attacked by Protestants on 4 October 1574, but the residents of Castellane and its surroundings chased them off, pursuing them as far as the clue de Taulanne. On 30 January 1586, the Baron d'Allemagne and the Duke of Lesdiguières tried to surprise the town. The sneak attack was repulsed and the Baron d'Allemagne was injured by a bullet in his back, which caused the assailants to retreat. The Baron was killed in September of that year trying to lift the siege of his own castle, shot in the head with an arquebus. The end of the siege of Castellane has since been celebrated every year in the last weekend of January with the Pétardiers ceremony reenacting the attack, and notably the episode of Judith André or Andrau, the goodwife of Barrême, who killed pétardier captain Jean Motte by pouring a kettle of boiling peas over him from the top of the porte de l'Annonciade, reputed to be the weak point in the defenses.
17th and 18th century
The plague struck the town again in 1630.
The Jansenist bishop Jean Soanen tried to make the celebrations of Saint-Sacrement, Saint-Jean and Saint-Éloi more sedate and less unbridled, the youth of the town having a tradition of celebrating with drums, music and gunshots. The youth refused, resisted, made even more noise and even revolted, preventing the procession of the octave du Saint-Sacrement from leaving the church on 22 June 1710. In 1726 the youth of Robion, whom the priest wanted to prevent from dancing on Sunday, also revolted.
The Austrian-Sardinian army briefly occupied the town in 1746 during the war of the Austrian Succession. In December 1746, Provence was invaded by an Austrian-Sardinian army. A troop of 2,000 men took Castellane, then the surrounding villages, as far as the château of Trigance. After some difficulty, the Spanish and French armies coordinated a counteroffensive, which began at the start of January when French soldiers under the orders of the Count of Maulévrier took an Austrian outpost in Chasteuil. The Austrian commander, Maximilian Ulysses Browne, reinforced his right wing with four battalions garrisoned at Castellane, and six on the south bank of the Verdon. Seven other battalions formed a second available squadron. Nine battalions and ten squads of Spaniards were stationed in Riez and 2,500 Swiss paid by Spain were stationed at Senez. On 21 January Hispano-French troops went on the offensive, commanded by the Frenchman Maulévrier and the Spanish Marquis of Taubin. The Spanish left their quarters by night and advanced on Castellane through the clue de Taulanne, while the French passing through the Vodon gorge. The difficult marches necessary to approach in this way nonetheless allowed a coordinated attack around 7am.
The first outposts were taken without difficulty, which allowed Maulévrier to connect on his left with Taubin, and to send a column of dragoons onto the right bank to cut the Austrians' retreat. This assault took the Austrian-Sardinian fortifications without difficulty; the Franco-Spanish entered the town and prevented the last Austrians from retreating. In all, they took 287 Austrians prisoner, including the baron de Neuhaus, the lieutenant-general in command. The Austrian-Sardinians also had a hundred-odd dead, versus twenty Franco-Spanish soldiers. The villages of La Garde, Eoulx, Robion, Taloire, Trigance and Comps were evacuated on 22 January.
In 1760, a tax imposed by the king of Piedmont-Sardinia on sales of cloth brought a large reduction in the town's textile production. Production of a local form of wool cloth, and of cordeillat, a coarse woolen fabric, continued until the Revolution and was used by the local residents.
Until the Revolution salt was produced from two local salt marshes.
On the eve of the French Revolution, several fiefs existed on the actual territory of the commune: Éoulx, Le Castellet-de-Robion (which became a barony in 1755), Chasteuil, Taulanne and Castillon, plus Castellane. On the same territory there were nine parishes: Castillon, La Baume, Taulanne, La Palud, Chasteuil, Taloire, Villars-Brandis, Robion, et Castellane. The parish of Éoulx overlapped the community of La Garde.
The city of Castellane alone paid more tax than Digne; it was an important rural town, both for its judicial functions (with eight lawyers and five prosecutors) and for its production, with twelve factories, among which were six hat shops, two wax factories, one faïence works, one tile factory and one silk works; the leather industry was also represented. A royal post office was also installed in Castellane near the end of the Ancien Régime.
19th century
The cloth industry, already well established in the preceding century, prospered in the first half of the 19th century. But cottage industries were replaced by the Barneaud factory, built in the late 1830s on the model of the Honnorat factory in Saint-André-de-Méouilles. It employed nine workers in 1872, then disappeared in 1878.
The Revolution and the Empire brought social reforms, including proportional taxation based on the assets. In order to put this in place on a precise basis, a land registry was drawn up. The loi de finances du 15 septembre 1807 specified its methods, but its accomplishment took time to get started, since the officials conducting the cadastre dealt with the communes in successive geographic groups. Not until 1834–1835 was the land registry known as the Napoleonic cadastre of Castellane and its associated communes finished.
The coup d'état of 2 December 1851 committed by Louis-Napoléon Bonaparte against the Second Republic provoked an armed uprising in the Basses-Alpes in defense of the Constitution. Insurgent republicans took a number of cities in the center and south of France, including Digne, the prefecture of the Basses-Alpes, and held them for several days. The local fighters held out the longest, almost three weeks. But this allowed Bonaparte to portray himself as the protector of France, and many participants were sent to penal colonies in Lambesa and Cayenne, banished or less permanently exiled. Eight inhabitants of Castellane were brought before the commission mixte; their most common penalty was deportation to Algeria.
20th century
On 10 September 1926, the sous-préfecture was eliminated in the economic plan of Raymond Poincaré, then re-established by the Vichy government in June 1942.
An internment camp was built in Chaudanne during World War II. Seventeen Jews were arrested in Castellane and deported. On 9 December 1943, the French armée secrète (AS) and the Francs-tireurs et partisans (FTP) attacked the construction site at the Castillon dam and seized five tonnes of explosives.
The commune was liberated on 18 August 1944 by the 36th Infantry Division of the United States Army.
In the mid-20th century wine growing for local consumption ended.
Demographics
Sites and monuments
The Rock which dominates the city, rising to (over above the Verdon), is a listed historical site.
The oldest monument in the territory of the commune is the dolmen of Pierres Blanches, dating from the Neolithic-Chalcolithic, a registered historical site on private property. The Roc towers above the community of Castellane.
The Musée des sirènes et fossiles and the museum of the Moyen Verdon are networked with other museums of the Gorges du Verdon, including the house of Pauline at Gréoux-les-Bains, the museum of the life of yesteryear at Esparron-de-Verdon, the house of the Gorges du Verdon at La Palud-sur-Verdon and the Museum of Prehistory of the Gorges du Verdon at Quinson.
Architecture
The bridge of the Roc, which carries the Sisteron-Vence road across the river, was built in the first decade of the fifteenth century, replacing a succession of several wooden bridges, the last spanning the Verdon in 1300 and destroyed by in 1390. The construction of the new bridge parallels that of Nyons (built in 1401, long), Pont de Claix (built in 1607–13, long), Tournon (built in the sixteenth century, long) and Entrechaux ( long). Pope Benedict XIII granted indulgences to anyone who gave alms to finance its construction. The rearguard of the Austrian-Sardinian army was caught there by a sortie from the garrison.
The tympana of the Roc bridge have been restored several times. Metal tie rods were laid in 1697–99. The bridge as a whole was restored in 2008 and closed to traffic. It was decommissioned in 1967 and delisted in 1982. The bridge and its approaches have been a registered historical site since 1940.
The library is in the former convent of the Visitation, founded in 1644. The eighteenth-century castle at Éoulx is richly decorated with plasterwork, including the first floor ceilings, the panels surrounding the doors, the rosette in the second-floor ceilings. Externally, it has two towers, with arched openings. The town hall is housed in the building that used to be the savings bank. It resembles a villa: balconies supported by large corbels and thick balusters, and a façade adorned with a pediment.
Castellane's largest fountain, in the main square, features a pyramid on which is carved a compass crossed by a carpenter's square, two chisels and a mallet, emblems of the Freemasons. At the top of the pyramid is a pedestal with a ball. On National Street, two doors have transoms or capitals with volutes, and one lintel is decorated with carved foliage. In the town, several buildings, mostly of dry stone, have been recorded in the topographic inventory of the DRAC. One of them, in Rayaup, dates from the eighteenth century (the inscription that says 1586 is very recent).
Notre-Dame du Roc
The Chapel of Our Lady of the Rock, dating from the High Middle Ages, dominates the city from atop the Rock and belongs to the former Convent of Mercy. But only the wall and the south façade date from the twelfth century; the building was half demolished during the wars of religion and rebuilt in 1590.
Crumbling by 1703, the chapel was again rebuilt in the early eighteenth century and once more in 1860. A capital adorned with foliage and scrolls dates from the Renaissance.
The furnishings include:
a statue of the Virgin, in marble, possibly 16th century
two paintings, St. Charles Borromeo and St. Francis and St. Jeanne de Chantal, on the historic register for both the paintings and their gilded frames, bearing the arms of the Bishop of Senez Duchaîne and dating from the seventeenth century.
It received numerous votive offerings in the 19th and 20th centuries, including:
traditional engraved plates (136 total);
bridal bouquets (21 total);
a painting given after a vow to Our Lady, dating from 1757, and registered;
a painting given after the cholera epidemic in 1835, registered;
a painting given by a released prisoner, dated 1875 (registered);
a painting given in thanks after a smallpox epidemic, dated 1870, registered;
a painting given by a person who escaped a shipwreck in 1896, registered.
Saint-Victor
Parts of the old parish church of Saint-Victor date from the mid-11th century. It is listed as an historical building. It was constructed in a similar manner and on the same plan as the Church of St. Andrew in the old village above the modern-day town, and was formerly the seat of a priory of the Abbey of St. Victor in Marseille.
The apse is decorated with Lombard bands, which have been described as remarkable, each arch carved from a single stone. Unusually for the region, it has a Romanesque nave, with arches rebuilt in the 17th century. The base of the tower dates from 1445, but its top was rebuilt in the 18th century following damage by Protestants in 1560.
The altar dates from 1724. The choir is adorned with paintings framed in wood, and an Annunciation carved in relief from gilded wood (18th century, on historical register). The wooden furniture, the stalls, the pulpit and the lectern with its hexagonal base, form an interesting 18th and 19th-century set, some of which is on the historic register. The furnishings also include an early 17th-century silver chalice with an unusual multilobed foot, also on the historic register.
Other historic churches
The Church of the Sacred Heart, now a parish church, was built in 1868–1873 by Father Pougnet and dedicated to Our Lady. It was widened by side aisles in 1896. The first transept is occupied by a platform. The interior is Gothic, with the tower built against the façade. The furnishings include some registered historical items: two silver custodes (cases for carrying communion hosts), one dating from around 1650 and one from the 18th century; a gilded wooden 18th-century cross; and a 16th-century silver chalice
the Chapel of St. Joseph is part of the Augustinian church rebuilt to replace the chapel of the Blue Penitents. It was partly demolished to widen the boulevard Saint-Michel
the 12th-century church near Robion was restored in 1942 and declared a historical monument in 1944. It is the earliest Romanesque (11th and 12th centuries) church in the region.
Saint-Pons in Eoulx, unchanged since its construction and unvaulted, has preserved the original cornices from the middle or late twelfth century, or the thirteenth century according to the Direction régionale des affaires culturelles (DRAC). This building is on the historic register. It has a flat copper collection plate believed to be 17th-century
ruins of the Church of St. André – 13th century, in ruins since the 18th century (site of Petra Castellana)
The 12th-century church of Notre-Dame-du-Plan, a former priory in Castellane
Church of St. Sebastian Chasteuil (16th century)
Saint-Pons – 16th century, with bell dated 1436, in Robion
Saint-Jean in Taloire, may have been built as early as the 13th or 14th century, but the 15th century seems more likely. It was damaged by the earthquake of 1951.
St. Pierre in Taulanne
Saint-Jean-Baptiste in Villars-Brandis has an exceptional late 15th-century copper thurible (a type of censer) with bilevel windows.
Chapelle Sainte-Victoire in a place called Angles: late 19th century at the earliest
chapels of Saint-Pons Blaron (ex-Castillon), Saint-Antoine and Notre Dame (ruined) in Eoulx,
St. Trophimus, built into the mountain above Petit Robion after a chapel built below was repeatedly damaged by rocks falling onto its roof, has a 17th-century silver chalice and a flat copper 16th-century collection plate, both registered historical items.
St. Stephen, on high ground in Taloire
St. John, in Villars
the cemetery Notre-Dame-du-Plan includes several funeral chapels
Military architecture
The outline of the walls of Petra Castellana, the ancient city beneath the current one, is still visible, and in places they reach seven meters in height. The walls are thought to date from the 12th century, although a June 2016 archeological excavation sought to date them more precisely. Only one tower survives of the fourteen that originally reinforced these walls: the 14th-century pentagonal keep. This keep, which dominates the town center, is on private property but was declared a historic monument in 1921.
Construction on the wall enclosing the lower town began in 1359, with the permission of the Count of Provence, Louis I of Naples. Traces of this wall are still visible in the square towers on the front of the houses on the square. Corbels, which could support defenses (brattices or simple parapets with battlements) are visible on their façades.
Two of the original gates in the wall remain:
the gate of the Annunciation or of the pétardiers, flanked by two towers, an important site in the resistance of 1586;
the gate of the clock, also called St. Augustine's gate, set in a square tower. A passage crosses under the tower through an arch. The tower is listed on the register of historic places.
One of the towers in the Saint-Michel neighborhood has been home to a dovecote since 1585.
International relations
Castellane is twinned with Pescasseroli, Italy.
See also
Route Napoléon
House of Castellane
Communes of the Alpes-de-Haute-Provence department
Notes
References
External links
Official website
Communes of Alpes-de-Haute-Provence
Subprefectures in France
Neolithic sites
Gaul
War of the Austrian Succession
Architectural history
Geologic formations of Europe
History of France by location
Archaeological sites in France
Castellane | Castellane | [
"Engineering"
] | 6,079 | [
"Architectural history",
"Architecture"
] |
3,078,303 | https://en.wikipedia.org/wiki/Ramond%E2%80%93Ramond%20field | In theoretical physics, Ramond–Ramond fields are differential form fields in the 10-dimensional spacetime of type II supergravity theories, which are the classical limits of type II string theory. The ranks of the fields depend on which type II theory is considered. As Joseph Polchinski argued in 1995, D-branes are the charged objects that act as sources for these fields, according to the rules of p-form electrodynamics. It has been conjectured that quantum RR fields are not differential forms, but instead are classified by twisted K-theory.
The adjective "Ramond–Ramond" reflects the fact that in the RNS formalism, these fields appear in the Ramond–Ramond sector in which all vector fermions are periodic. Both uses of the word "Ramond" refer to Pierre Ramond, who studied such boundary conditions (the so-called Ramond boundary conditions) and the fields that satisfy them in 1971.
Defining the fields
The fields in each theory
As in Maxwell's theory of electromagnetism and its generalization, p-form electrodynamics, Ramond–Ramond (RR) fields come in pairs consisting of a p-form potential Cp and a (p + 1)-form field strength Gp+1. The field strength is, as usual, defined to be the exterior derivative of the potential, Gp+1 = dCp.
As is usual in such theories, if one allows topologically nontrivial configurations or charged matter (D-branes) then the connections are only defined on each coordinate patch of spacetime, and the values on various patches are glued using transition functions. Unlike the case of electromagnetism, in the presence of a nontrivial Neveu–Schwarz 3-form field strength the field strength defined above is no longer gauge invariant and so also needs to be defined patchwise with the Dirac string off of a given patch interpreted itself as a D-brane. This extra complication is responsible for some of the more interesting phenomena in string theory, such as the Hanany–Witten transition.
The choices of allowed values of p depend on the theory. In type IIA supergravity, fields exist for p = 1 and p = 3. In type IIB supergravity, on the other hand, there are fields for p = 0, p = 2 and p = 4, although the p = 4 field is constrained to satisfy the self-duality condition G5 = *G5 where * is the Hodge star. The self-duality condition cannot be imposed by a Lagrangian without either introducing extra fields or ruining the manifest super-Poincaré invariance of the theory, thus type IIB supergravity is considered to be a non-Lagrangian theory. A third theory, called massive or Romans IIA supergravity, includes a field strength G0, called the Romans mass. Being a zero-form, it has no corresponding connection. Furthermore, the equations of motion impose that the Romans mass is constant. In the quantum theory Joseph Polchinski has shown that G0 is an integer, which jumps by one as one crosses a D8-brane.
The democratic formulation
It is often convenient to use the democratic formulation of type II string theories, which was introduced by Paul Townsend in p-Brane Democracy. In D-brane Wess-Zumino Actions, T-duality and the Cosmological Constant Michael Green, Chris Hull and Paul Townsend constructed the field strengths and found the gauge transformations that leave them invariant. Finally in New Formulations of D=10 Supersymmetry and D8-O8 Domain Walls the authors completed the formulation, providing a Lagrangian and explaining the role of the fermions. In this formulation one includes all of the even field strengths in IIA and all of the odd field strengths in IIB. The additional field strengths are defined by the star condition Gp=*G10−p. As a consistency check, notice that the star condition is compatible with the self-duality of G5, thus the democratic formulation contains the same number of degrees of freedom as the original formulation. Similarly to attempts to simultaneously include both electric and magnetic potentials in electromagnetism, the dual gauge potentials may not be added to the democratically formulated Lagrangian in a way that maintains the manifest locality of the theory. This is because the dual potentials are obtained from the original potentials by integrating the star condition.
Ramond–Ramond gauge transformations
The type II supergravity Lagrangians are invariant under a number of local symmetries, such as diffeomorphisms and local supersymmetry transformations. In addition the various form-fields transform under Neveu–Schwarz and Ramond–Ramond gauge transformations.
In the democratic formulation the Ramond–Ramond gauge transformations of the gauge potentials that leave the action invariant are
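In one common sign convention (sketched here; signs and normalization factors differ between references), the transformation of each potential takes the form
δCp = dΛp−1 − H ∧ Λp−3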
where H is the Neveu-Schwarz 3-form field strength and the gauge parameters Λq are q-forms. As the gauge transformations mix the various Cp's, it is necessary that each RR form be transformed simultaneously, using the same set of gauge parameters. The H-dependent terms, which have no analogue in electromagnetism, are required to preserve the contribution to the action of the Chern–Simons terms that are present in type II supergravity theories.
Notice that there are multiple gauge parameters corresponding to the same gauge transformation; in particular, we may add any (d + H)-closed form to Λ. Thus in the quantum theory we must also gauge the gauge transformations, and then gauge those, and so on until the dimensions are sufficiently low. In the Faddeev–Popov quantization this corresponds to adding a tower of ghosts. Mathematically, in the case in which H vanishes, the resulting structure is the Deligne cohomology of the spacetime. For nontrivial H, after including the Dirac quantization condition, it has been conjectured to correspond instead to differential K-theory.
Notice that, thanks to the H terms in the gauge transformations, the field strengths also transform nontrivially
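With the convention sketched above, a short computation (using dH = 0 and H ∧ H = 0) gives, schematically,
δGp+1 = d(δCp) = H ∧ dΛp−3
so the naive field strengths Gp+1 = dCp fail to be invariant whenever H is nonzero.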
The improved field strengths
One often introduces improved field strengths
that are gauge-invariant.
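A common choice (again a sketch; conventions vary between references) is
Fp+1 = dCp − H ∧ Cp−2
or, in terms of the formal sums F = ΣFp and C = ΣCp, simply F = (d − H∧)C. With the transformation law sketched above, the H-dependent pieces cancel and each Fp+1 is invariant.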
Although they are gauge-invariant, the improved field strengths are neither closed nor quantized; instead they are only twisted-closed. This means that they satisfy dFp+1 = H ∧ Fp−1, which is just the Bianchi identity dGp+1 = 0 rewritten in terms of the improved field strengths. They are also "twisted-quantized" in the sense that one can transform back to the original field strength whose integrals over compact cycles are quantized. It is the original field strengths that are sourced by D-brane charge, in the sense that the integral of the original p-form field strength Gp over any contractible p-cycle is equal to the D(8-p)-brane charge linked by that cycle.
Since D-brane charge is quantized, Gp, and not the improved field strength, is quantized.
Field equations
Equations and Bianchi identities
As usual in p-form gauge theories, the form fields must obey the classical field equations and Bianchi identities. The former express the condition that variations of the action with respect to the various fields must be trivial. We will now restrict our attention to those field equations that come from the variation of the Ramond–Ramond (RR) fields, but in practice these need to be supplemented with the field equations coming from the variations of the Neveu–Schwarz B-field, the graviton, the dilaton and their superpartners the gravitinos and the dilatino.
In the democratic formulation, the Bianchi identity for the field strength Gp+1 is the classical field equation for its Hodge dual G9−p, and so it will suffice to impose the Bianchi identities for each RR field. These are just the conditions that the RR potentials Cp are locally defined, and that therefore the exterior derivative acting on them is nilpotent
D-branes are sources for RR fields
In many applications one wishes to add sources for the RR fields. These sources are called D-branes. As in classical electromagnetism, one may add sources by including in the Lagrangian density a coupling of the p-form potential Cp to a (10-p)-form current. The usual convention in the string theory literature appears to be to not write this term explicitly in the action.
The current modifies the equation of motion that comes from the variation of Cp. As is the case with magnetic monopoles in electromagnetism, this source also invalidates the dual Bianchi identity, as it is a point at which the dual field is not defined. In the modified equation of motion the current appears on the left-hand side instead of zero. For future simplicity, we will also interchange p and 7 − p; the equation of motion in the presence of a source is then
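(schematically, with normalization constants suppressed; their precise values depend on conventions for the brane charge)
dG8−p = j9−p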
The (9-p)-form is the Dp-brane current, which means that it is Poincaré dual to the worldvolume of a (p + 1)-dimensional extended object called a Dp-brane. The discrepancy of one in the naming scheme is historical and comes from the fact that one of the p + 1 directions spanned by the Dp-brane is often timelike, leaving p spatial directions.
The above Bianchi identity is interpreted to mean that the Dp-brane is, in analogy with magnetic monopoles in electromagnetism, magnetically charged under the RR p-form C7−p. If instead one considers this Bianchi identity to be a field equation for Cp+1, then one says that the Dp-brane is electrically charged under the (p + 1)-form Cp+1.
The above equation of motion implies that there are two ways to derive the Dp-brane charge from the ambient fluxes. First, one may integrate dG8−p over a surface, which will give the Dp-brane charge intersected by that surface. The second method is related to the first by Stokes' theorem: one may integrate G8−p over a cycle, which will yield the Dp-brane charge linked by that cycle. The quantization of Dp-brane charge in the quantum theory then implies the quantization of the field strengths G, but not of the improved field strengths F.
Twisted K-theory interpretation
It has been conjectured that RR fields, as well as D-branes, are classified by twisted K-theory. In this framework, the above equations of motion have natural interpretations. The source-free equations of motion for the improved field strengths F imply that the formal sum of all of the Fp's is an element of the H-twisted de Rham cohomology. This is a version of de Rham cohomology in which the differential is not the exterior derivative d, but instead (d+H) where H is the Neveu-Schwarz 3-form. Notice that (d+H), as is necessary for the cohomology to be well-defined, squares to zero.
The improved field strengths F live in the classical theory, where the transition from quantum to classical is interpreted as tensoring by the rationals. So the F's must be some rational version of twisted K-theory. Such a rational version, in fact a characteristic class of twisted K-theory, is already known. It is the twisted Chern class defined in Twisted K-theory and the K-theory of Bundle Gerbes by Peter Bouwknegt, Alan L. Carey, Varghese Mathai, Michael K. Murray and Danny Stevenson and extended in Chern character in twisted K-Theory: Equivariant and holomorphic cases. The authors have shown that twisted Chern characters are always elements of the H-twisted de Rham cohomology.
Unlike the improved field strengths, the original field strengths G's are untwisted, integral cohomology classes. In addition the G's are not gauge-invariant, which means that they are not uniquely defined but instead may only be defined as equivalence classes. These correspond to the cohomology classes in the Atiyah–Hirzebruch spectral sequence construction of twisted K-theory, which are only defined up to terms which are closed under any of a series of differential operators.
The source terms appear to be obstructions to the existence of a K-theory class. The other equations of motion, such as those obtained by varying the NS B-field, do not have K-theory interpretations. The incorporation of these corrections in the K-theory framework is an open problem.
See also
Kalb–Ramond field
Notes
References
A good introduction to the various field strengths in theories with Chern–Simons terms is Chern-Simons terms and the Three Notions of Charge by Donald Marolf.
The democratic formulation of 10-dimensional supergravities can be found in New Formulations of D=10 Supersymmetry and D8-O8 Domain Walls by Eric Bergshoeff, Renata Kallosh, Tomás Ortín, Diederik Roest and Antoine Van Proeyen. It includes many details absent in Townsend's original paper, but restricts attention to a topologically trivial Neveu-Schwarz 3-form.
String theory | Ramond–Ramond field | [
"Astronomy"
] | 2,787 | [
"String theory",
"Astronomical hypotheses"
] |
3,079,185 | https://en.wikipedia.org/wiki/Cartesian%20Perceptual%20Compression | Cartesian Perceptual Compression (abbreviated CPC, with filename extension .cpc) is a proprietary image file format. It was designed for high compression of black-and-white raster Document Imaging for archival scans.
CPC is lossy, has no lossless mode, and is restricted to bi-tonal images. The company which controls the patented format claims it is highly effective in the compression of text, black-and-white (halftone) photographs, and line art. The format is intended for use in the web distribution of legal documents, design plans, and geographical plot maps.
Viewing and converting documents in the CPC format currently requires the download of proprietary software. Although viewing CPC documents is free, as is converting CPC images to other formats, conversion to CPC format requires a purchase.
JSTOR, a United States–based online system for archiving academic journals, converted its online archives to CPC in 1997. The CPC files are used to reduce storage requirements for its online collection, but are temporarily converted on their servers to GIF for display, and to PDF for printing. JSTOR still scans to TIFF G4 and considers those files its preservation masters.
See also
Image file formats
References
External links
The company's website – additional technical information about the format
Library of Congress – analysis of the format by the LoC
RLG DigiNews Vol. 7 No. 2 – comparison of rivals to TIFF for document imaging purposes
Graphics file formats | Cartesian Perceptual Compression | [
"Technology"
] | 293 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
3,079,231 | https://en.wikipedia.org/wiki/Carothers%20equation | In step-growth polymerization, the Carothers equation (or Carothers' equation) gives the degree of polymerization, , for a given fractional monomer conversion, .
There are several versions of this equation, proposed by Wallace Carothers, who invented nylon in 1935.
Linear polymers: two monomers in equimolar quantities
The simplest case refers to the formation of a strictly linear polymer by the reaction (usually by condensation) of two monomers in equimolar quantities. An example is the synthesis of nylon-6,6 whose formula is
from one mole of hexamethylenediamine, , and one mole of adipic acid, . For this case
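the relation takes its simplest and best-known form, in the notation spelled out immediately below:
Xn = 1/(1 − p)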
In this equation
Xn is the number-average value of the degree of polymerization, equal to the average number of monomer units in a polymer molecule. For the example of nylon-6,6 ( diamine units and diacid units).
p is the extent of reaction (or conversion to polymer), defined by p = (N0 − N)/N0, where
N0 is the number of molecules present initially as monomer
N is the number of molecules present after time t. The total includes all degrees of polymerization: monomers, oligomers and polymers.
This equation shows that a high monomer conversion is required to achieve a high degree of polymerization. For example, a monomer conversion p of 98% is required for Xn = 50, and p = 99% is required for Xn = 100.
Linear polymers: one monomer in excess
If one monomer is present in stoichiometric excess, then the equation becomes
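in its standard form, with the imbalance parameter r defined below:
Xn = (1 + r)/(1 + r − 2rp)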
r is the stoichiometric ratio of reactants; the excess reactant is conventionally placed in the denominator so that r < 1. If neither monomer is in excess, then r = 1 and the equation reduces to the equimolar case above.
The effect of the excess reactant is to reduce the degree of polymerization for a given value of p. In the limit of complete conversion of the limiting reagent monomer, p → 1 and
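the degree of polymerization approaches its limiting value
Xn = (1 + r)/(1 − r)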
Thus for a 1% excess of one monomer, r = 0.99 and the limiting degree of polymerization is 199, compared to infinity for the equimolar case. An excess of one reactant can be used to control the degree of polymerization.
Branched polymers: multifunctional monomers
The functionality of a monomer molecule is the number of functional groups which participate in the polymerization. Monomers with functionality greater than two will introduce branching into a polymer, and the degree of polymerization will depend on the average functionality fav per monomer unit. For a system containing N0 molecules initially and equivalent numbers of two functional groups A and B, the total number of functional groups is N0fav.
And the modified Carothers equation is
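in its usual textbook form
Xn = 2/(2 − pfav)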
where p, now taken to be the fraction of all functional groups that have reacted, is given by p = 2(N0 − N)/(N0fav).
Related equations
Related to the Carothers equation are the following equations (for the simplest case of linear polymers formed from two monomers in equimolar quantities):
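For the most probable distribution obtained in this case, the standard results read
Xw = (1 + p)/(1 − p)
Mn = Mo/(1 − p)
Mw = Mo(1 + p)/(1 − p)
Đ = Mw/Mn = 1 + p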
where:
Xw is the weight average degree of polymerization,
Mn is the number average molecular weight,
Mw is the weight average molecular weight,
Mo is the molecular weight of the repeating monomer unit,
Đ is the dispersity index (formerly known as the polydispersity index, symbol PDI).
The last equation shows that the maximum value of Đ is 2, which occurs at a monomer conversion of 100% (or p = 1). This is true for the step-growth polymerization of linear polymers. For chain-growth polymerization or for branched polymers, Đ can be much higher.
In practice the average length of the polymer chain is limited by such things as the purity of the reactants, the absence of any side reactions (i.e. high yield), and the viscosity of the medium.
References
Polymer chemistry
Equations | Carothers equation | [
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 783 | [
"Equations",
"Mathematical objects",
"Materials science",
"Polymer chemistry"
] |
3,079,947 | https://en.wikipedia.org/wiki/Hilario%20Fern%C3%A1ndez%20Long | Hilario Fernández Long (12 September 1918 – 23 December 2002) was an Argentine structural engineer and educator.
He was born in Bahía Blanca and was of Spanish and Volga German descent. He graduated as a Civil Engineer from the University of Buenos Aires in 1941 and his professional life was centered on structural engineering. He participated in, among other projects, the construction of the Argentine National Library, the Buenos Aires IBM Building and the Zárate–Brazo Largo and Chaco-Corrientes bridges. He pioneered the use of computer tools in his discipline.
He coauthored many technical articles, and published an Introduction to Go and a Go Manual that were instrumental in the introduction of the game in Argentina.
Besides his teaching activities in several universities, he was the dean of the Engineering School and Rector (i.e. President) of the University of Buenos Aires. After Juan Carlos Onganía's coup d'état in 1966, he resigned when university autonomy was violated in the Noche de los Bastones Largos (Night of the Long Batons), when the police forcibly entered the campuses, beating students and professors and effectively ending the so-called "Golden Age" of the University of Buenos Aires. His political commitment was reaffirmed when President Raúl Alfonsín nominated him as a member of the CONADEP commission that investigated the fate of the desaparecidos.
After retirement, he was distinguished as Emeritus Professor of both the University of Buenos Aires and the Pontifical Catholic University of Argentina and Doctor Honoris Causa of the University of Buenos Aires. He died in Necochea.
He belonged to several organizations:
Structural Engineer Association (honor member)
National Academy of Exact, Physical and Natural Sciences
National Education Academy
American Society of Civil Engineers (life member)
Argentine Go Association (founding member)
Argentine Center of Engineers
Asamblea Permanente pro Derechos Humanos – APDH (Permanent Assembly for Human Rights – member of the Presidency council)
External links
Engineering of a Golden Age – interview with Hilario Fernández Long (in Spanish)
The first version of this article was translated from Spanish from Enciclopedia Libre Universal under GFDL.
1918 births
2002 deaths
20th-century Argentine engineers
Structural engineers
People from Bahía Blanca
Argentine people of Volga German descent
Rectors of the University of Buenos Aires
Argentine civil engineers
University of Buenos Aires alumni | Hilario Fernández Long | [
"Engineering"
] | 475 | [
"Structural engineering",
"Structural engineers"
] |
3,080,073 | https://en.wikipedia.org/wiki/Penicillium%20camemberti | Penicillium camemberti is a species of fungus in the genus Penicillium. It is used in the production of Camembert, Brie, Langres, Coulommiers, and Cambozola cheeses, on which colonies of P. camemberti form a hard, white crust. It is responsible for giving these cheeses their distinctive flavors. An allergy to the antibiotic penicillin does not necessarily imply an allergy to cheeses made using P. camemberti.
When making soft cheese that involves P. camemberti, the mold may be mixed into the ingredients before being placed in the molds, or it may be added to the outside of the cheese after it is removed from the cheese molds. P. camemberti is responsible for the soft, buttery texture of Brie and Camembert, but a too high concentration may lead to an undesirable bitter taste.
Using PCR techniques, cheese manufacturers can control cheesemaking by monitoring the mycelial growth of P. camemberti. This is particularly significant, as controlling the growth is important to maintain desirable levels of compounds for flavor and to keep toxicity at a safe level.
History
The fungus was first described by Dr. Charles Thom in 1906. It is considered to be a great subject for experiments and tests, as the fungus thrives well in artificial situations, creates dense, enzymatic mycelia, and is readily available in markets from cheeses. P. camemberti is also important economically for the cheese industry.
The fungus originated through artificial selection of P. biforme around 1900, which in turn was a result of artificial selection among P. fuscoglaucum.
The complete genome sequence of P. camemberti was published in 2014.
In 2024, the French National Centre for Scientific Research warned that the spore-producing ability of albino strains of P. camemberti has declined due to prolonged vegetative reproduction. The Norman cheese industry now struggles to find enough spores to inoculate their cheese with.
Taxonomy
Twenty-four isolates of Penicillium species are known, resulting in “considerable taxonomic confusion”. However, these strains are only antigenically related, having similarities in micromorphology, growth rates, toxin production, and the ability to grow in water and at low temperatures. These isolates can be grouped into nine subdivisions below the species level.
There is some degree of disagreement on how to delimit P. camemberti from closely related species, namely P. biforme, P. fuscoglaucum, and P. caseifulvum.
In a traditional "lumping" scheme, P. biforme and P. fuscoglaucum are united in P. commune.
In the MycoBank scheme, as of February 2024, P. commune (which includes P. fuscoglaucum), P. biforme, P. camemberti, and P. caseifulvum are each "current".
Ropars et al. (2020) recognize P. fuscoglaucum, P. biforme, and P. camemberti. They list two varieties under P. camemberti:
P. camemberti var. "camemberti", the lineage found in Camembert and Brie. White colonies, slow radial growth, fluffy mycelia. Produces cyclopiazonic acid (CPA), a mycotoxin.
P. camemberti var. "caseifulvum", the lineage found in cheeses other than Camembert, such as St. Marcellin and Rigotte de Condrieu. Grey-green colonies, faster rate of growth on cheese (comparable to P. biforme), unable to produce CPA.
Toxic properties
As a fungus, P. camemberti can produce toxins, in this case cyclopiazonic acid. The amount of the mycotoxin produced depends on the strain of P. camemberti, as well as the temperature at which the culture is grown. Additionally, the toxin is typically more concentrated in the crust than in the inner part. In regard to safety, consumers would generally receive a dose of less than 4 μg of cyclopiazonic acid. Still, using weaker strains of the fungus is advised, since secretion of the toxin appears to be natural and necessary for the fungus, but unhealthy for cheese consumers.
Use in other foods
Since P. camemberti is responsible for the main flavor and odor of popular cheeses, the fungus can be used for the flavoring of other foods, such as dry, fermented sausages. José M. Bruna and his team saw that the flavor comes from compounds produced by the fungus, such as ammonia, methyl ketones, primary and secondary alcohols, esters, and aldehydes, and decided to superficially inoculate P. camemberti on dry, fermented sausages to improve its sensory properties. P. camemberti promotes proteolysis and lipolysis, which is the breakdown of proteins and lipids, resulting in free amino acids, free fatty acids, and volatile compounds that allow for the ripened flavor. The fungus created a mycelium, protecting the lipids within, allowing for better flavor and odor of sausages. This is a potential starter culture for dry, fermented sausages.
See also
List of Penicillium species
References
camemberti
Cheese
Fungi described in 1906
Molds used in food production
Taxa named by Charles Thom
Fungi in cultivation
Fungus species | Penicillium camemberti | [
"Biology"
] | 1,165 | [
"Fungi",
"Fungus species"
] |
3,080,510 | https://en.wikipedia.org/wiki/High-level%20design | High-level design (HLD) explains the architecture that would be used to develop a system. The architecture diagram provides an overview of an entire system, identifying the main components that would be developed for the product and their interfaces.
The HLD can use non-technical to mildly technical terms which should be understandable to the administrators of the system. In contrast, low-level design further exposes the logical detailed design of each of these elements for use by engineers and programmers. HLD documentation should cover the planned implementation of both software and hardware.
Purpose
Preliminary design: In the preliminary stages of system development, the need is to size the project and to identify those parts which might be risky or time-consuming.
Design overview: As the project proceeds, the need is to provide an overview of how the various sub-systems and components of the system fit together.
In both cases, the high-level design should be a complete view of the entire system, breaking it down into smaller parts that are more easily understood. To minimize the maintenance overhead as construction proceeds and the lower-level design is done, it is best that the high-level design is elaborated only to the degree needed to satisfy these needs.
High-level design document
A high-level design document or HLDD adds the necessary details to the current project description to represent a suitable model for building. This document includes a high-level architecture diagram depicting the structure of the system, such as the hardware, database architecture, application architecture (layers), application flow (navigation), security architecture and technology architecture.
Design overview
A high-level design provides an overview of a system, product, service, or process.
Such an overview helps the supporting components remain compatible with the others.
The highest-level design should briefly describe all platforms, systems, products, services, and processes that it depends on, and include any important changes that need to be made to them.
In addition, there should be brief consideration of all significant commercial, legal, environmental, security, safety, and technical risks, along with any issues and assumptions.
The idea is to mention every work area briefly, clearly delegating the ownership of more detailed design activity whilst also encouraging effective collaboration between the various project teams.
Today, most high-level designs require contributions from a number of experts, representing many distinct professional disciplines.
Finally, every type of end-user should be identified in the high-level design and each contributing design should give due consideration to customer experience.
See also
Software development process
Systems development life cycle
References
External links
High Level Design Document sample format
Design
Software development
Software design | High-level design | [
"Technology",
"Engineering"
] | 524 | [
"Computer occupations",
"Software engineering",
"Design",
"Software design",
"Software development"
] |
3,080,690 | https://en.wikipedia.org/wiki/Processing%20medium | In industrial engineering, a processing medium is a gaseous, vaporous, fluid or shapeless solid material that plays an active role in manufacturing processes - comparable to that of a tool.
Examples
A processing medium for washing is a soap solution, a processing medium for steel melting is a plasma, and a processing medium for steam drying is superheated steam.
Synonyms
Operating medium
Working medium.
Engineering concepts | Processing medium | [
"Engineering"
] | 82 | [
"Mechanical engineering stubs",
"Mechanical engineering",
"nan"
] |
3,080,819 | https://en.wikipedia.org/wiki/Twin-boom%20aircraft | A twin-boom aircraft has two longitudinal auxiliary booms. These may contain ancillary items such as fuel tanks and/or provide a supporting structure for other items. Typically, twin tailbooms support the tail surfaces, although on some types such as the Rutan Model 72 Grizzly the booms run forward of the wing. The twin-boom configuration is distinct from twin-fuselage designs in that it retains a central fuselage.
Design
The twin-boom configuration is distinct from the twin fuselage type in having a separate, short fuselage housing the pilot and payload. It has been adopted to resolve various design problems with the conventional empennage for aircraft in different roles.
Engine mounting
For a single engine with a propeller in the pusher configuration or a jet engine, a conventional tail requires the propeller or exhaust to be moved far aft, requiring either a very long driveshaft or jet pipe and thus reducing propulsive efficiency. The twin-boom configuration allows a much shorter and more efficient installation. The Saab 21 was originally built as a pusher type and was later adapted to jet power as the 21R.
In these designs, the tailplane (horizontal stabilizer) is typically high-mounted on twin tail fins to keep it clear of the engine wake. The Scaled Composites SpaceShipOne and SpaceShipTwo sub-orbital spaceplanes adopted twin booms with outboard tails or outboard horizontal stabilizers (OHS) to keep the airframe clear of the more widely-spreading rocket engine exhaust.
Twin booms have also been adopted for twin-engined designs where the engine system includes bulky additional items such as turbochargers and heat exchangers, taking up a large volume of space. Examples include the Lockheed P-38 Lightning.
Field of view
For a rear observation or gunnery position to have an unobstructed field of view, placing it at the rear of a conventional tail moves it so far aft that problems arise with the centre of mass and balancing the aircraft. Getting rid of the conventional empennage allows the rear position to be located more forward, resolving the balance problem. An example is provided by the Focke-Wulf Fw 189.
However the twin booms and bridging tailplane still obstruct the field of view to some extent and guns in this position are especially restricted in firing to the side.
Transport access
Loading and unloading large freight or cargo items such as vehicles and containers requires large access doors. In conventional designs these doors must be located at the nose or side of the fuselage, necessitating heavy reinforcement of the main structure. Side doors limit the length of an item to the width of the door and access may also be obstructed by engines or undercarriage. The twin-boom configuration allows a large door to be placed at the rear of the fuselage, free from obstruction by the tail assembly, as on the Armstrong Whitworth AW.660 Argosy.
However access to the rear door remains limited, especially for trucks backing up to it, and a high-mounted conventional rear fuselage is often preferred.
Efficiency
Twin booms typically offer greater drag than a conventional arrangement. They are also typically shallower than the fuselage and thus inherently less stiff, requiring additional reinforcement to maintain a rigid tail position in pitch. On the other hand, tip effects on the tailplane are avoided and it is supported at both ends, allowing it to be made smaller and lighter. Moreover, span loading along the wing can reduce the structural forces between the booms and thus overall weight.
Some modern high-efficiency designs have twin booms which distribute the load along the wing span and/or stiffen the overall structure. Capable of flying non-stop round the world, the Rutan Voyager was a canard design with tractor propeller, in which the twin booms extended forwards to brace the foreplane as well as aft to support twin fins. The later Virgin Atlantic GlobalFlyer was jet propelled but with a similar range, still with large twin booms to accommodate the jet fuel in a lightweight span-loaded structure, but with a small conventional tail on each boom.
History
Twin boom designs can trace their history back to the lattices of booms used on many early boxkite aircraft. With the recognition of the tremendous drag these imposed, more compact structures covered in fabric were developed during World War I. Prime examples include the Caproni series of trimotor bombers.
Around the same time, the first wooden monocoque fuselages appeared, and it wasn't long before this technique was applied to provide twin booms. Possibly the first of these was the pre-war Nieuport pusher, which used paper impregnated with Bakelite; however, the most successful were the AGO C.I and C.II, which used a more conventional wooden shell built up from strips of wood glued over a form.
With the development of aluminium stressed skin monocoques later in World War I, the same technique was extended to twin boom designs, beginning in the 1920s.
Most of the early designs used twin booms to clear a rear-mounted propeller; however, even in World War I, several larger aircraft used them to give a gunner the ability to cover the underside of the tail without having to place the weight at the very extreme end of the aircraft, where it posed balance and control problems.
Only in World War II, with the increasing prevalence of transporting bulky items and vehicles by air, was the utility of a rear door in line with the cabin realized, and with it the utility of moving the rear fuselage structure to the sides to avoid excessive height in the rear fuselage, as on the Gotha Go 242 glider.
With the beginning of the jet age, the need for clearance for the propeller was replaced with the need to provide a clear path for hot exhaust gases. Jet engine efficiency was hampered by long intake and exhaust trunks, as were used on many early designs, and one solution was to use twin booms to shorten the exhaust trunking to the minimum, such as de Havilland used on their successful Vampire and Venom jet fighters.
A small number of designs used twin booms for other reasons, most notable being the Lockheed P-38 Lightning, whose booms contained the overly lengthy engine turbo-superchargers, which would have made for an unusually long nacelle. The final use for a twin boom to be developed was in tying together very high aspect ratio wings and canards as on the Rutan Voyager, to reduce flexing, and the weight needed to otherwise constrain it. Also, by having the mass from most of the fuel mid-span, it reduces the forces on the wings considerably, much in the same manner mounting the engines mid-span on most jet transports does.
Despite these anticipated benefits, twin booms remain unusual. For most cases, the booms are less efficient structurally in providing pitch stiffness, and produce more drag. In the case of those using twin booms to improve the field of fire downwards, it severely reduces it laterally, and often directly astern. For transports, the booms may facilitate access to the fuselage, but trucks then have to be extremely careful to not hit parts of the aircraft that they are then getting closer to. As a result, the C-119 remained an anomaly, and most successful post-war transports, such as the C-130 Hercules, reverted to a single rear fuselage.
List of twin-boom aircraft
|Aircraft||Country||Propulsion||Role||First flight||Status||Number built||Notes
|-
|AAI RQ-7 Shadow||US||UAV||UAV||1991|| || ||
|-
|Abrams P-1 Explorer||US||Propeller||Survey||1937||Prototype||1||
|-
|AD Seaplane Type 1000||UK||Propeller||Bomber||1916||Prototype||2||
|-
|Adam A500||US||Propeller||Transport||2002||Prototype||7||
|-
|Adam A700||US||Jet||Transport||2003 ||Prototype||2||
|-
|ADI Condor||US||Propeller||Motor glider||1981 ||Prototype||1||
|-
|AeroRIK Dingo||Russia||Propeller||Utility||1997||Prototype||1-5||
|-
|AGO C.I||Germany||Propeller||Reconnaissance||1915 ||Production||64||
|-
|AGO C.II||Germany||Propeller||Reconnaissance||1915 ||Production||15||
|-
|AHRLAC Holdings Ahrlac||South Africa||Propeller||Attack||2014||Prototype||1||
|-
|Air Utility AU-18||US||Propeller||Transport||1945||Prototype||1||
|-
|Airmaster Avalon 680||US||Propeller||Transport||1983||Prototype||1||
|-
|Airsport Song||Czech Republic ||Propeller||Ultralight||2009 ||Production|| ||
|-
|AISA GN||Spain||Autogyro||Utility||1982||Prototype||1||
|-
|Akaflieg Stuttgart fs28 Avispa||Germany||Propeller||Utility||1972||Prototype||1||
|-
|Alaparma Baldo||Italy||Propeller||Utility||1949||Production||35 ca.||
|-
|Alenia Aermacchi Sky-Y||Italy||UAV||UAV||2007|| || ||
|-
|American Gyro AG-4 Crusader||US||Propeller||Utility||1935||Prototype||1||
|-
|Antonov LEM-2/OKA-33||USSR||Propeller||Transport||1937||Prototype||1||
|-
|Anderson Greenwood AG-14||US||Propeller||Utility||1947||Prototype||5||
|-
|ANTEX-M||Portugal||UAV||UAV||2002|| || ||
|-
|Antonov A-40||USSR||Glider||Transport||1942||Prototype||1||Air towed gliding tank
|-
|Arado E.340||Germany||Propeller||Bomber||n/a||Project||0||
|-
|Armstechno NITI||Bulgaria||UAV||UAV||2006|| || ||
|-
|Armstrong Whitworth AW.660 Argosy||UK||Propeller||Transport||1959||Production||74||
|-
|Arpin A-1||UK||Propeller||Utility||1938||Prototype||1||
|-
|AVE Mizar||US||Propeller||Flying car||1973||Prototype||2||
|-
|BAE Systems Phoenix||UK||UAV||UAV||1986|| || ||
|-
|BAE Systems SkyEye||UK||UAV||UAV||1973|| || ||
|-
|BAT Crow||UK||Propeller||Ultralight||1920||Prototype||1||
|-
|Baykar Bayraktar TB2||Turkey||UAV||UAV||2014|| || ||
|-
|Baykar Bayraktar TB3||Turkey||UAV||UAV||2022||Project|| ||
|-
|Bell XP-52||US||Propeller||Fighter||1940||Project||0||
|-
|Belyayev EOI||USSR||Propeller||Fighter||1939||Project||0||
|-
|Bendix 51 & 51A||US||Propeller||Utility||1945||Prototype||2||
|-
|Bestetti BN.1||Italy||Propeller||Utility||1940||Prototype||1||
|-
|Blériot 125||France||Propeller||Transport||1931||Prototype||1||
|-
|Blohm & Voss BV 138||Germany||Propeller||Reconnaissance||1937||Production||297||
|-
|Boeing Insitu RQ-21 Blackjack||US||UAV||UAV||2012|| || ||
|-
|Bryan Autoplane||US||Propeller||Flying car||1953||Prototype||2||
|-
|Bryant Dole Racer Angel of Los Angeles||US||Propeller||Racer||1927||Prototype||1||
|-
|Burnelli CBY-3||Canada||Propeller||Transport||1944||Prototype||1||
|-
|Burnelli GX-3||US||Propeller||Experimental||1929||Prototype||1||
|-
|Burnelli UB-14||US||Propeller||Transport||1934||Prototype||2||
|-
|Burnelli UB-20||US||Propeller||Transport||1930||Prototype||1||
|-
|Buscaylet-de Monge 7-4||France||Propeller||Experimental||1923||Prototype||1||
|-
|Buscaylet-de Monge 7-5||France||Propeller||Transport||1925||Prototype||1||
|-
|Campbell Model F||US||Propeller||Utility||1935||Prototype||1||
|-
|Canaero Toucan||Canada||Propeller||Ultralight||1983||Production||41+||
|-
|Caproni Ca.1||Italy||Propeller||Bomber||1914||Production||162||
|-
|Caproni Ca.2||Italy||Propeller||Bomber||1915||Production||9||
|-
|Caproni Ca.3||Italy||Propeller||Bomber||1916||Production||269-383||
|-
|Caproni Ca.4||Italy||Propeller||Bomber||1917||Production||44-53||
|-
|Caproni Ca.5||Italy||Propeller||Bomber||1917||Production||662||
|-
|Caproni Ca.37||Italy||Propeller||Attack||1916||Prototype||1||
|-
|Caproni Ca.61||Italy||Propeller||Bomber||1922||Prototype||1-2||
|-
|CarterCopter||US||Autogyro||Transport||1998||Prototype||1||
|-
|Celier Xenon 2||Poland||Autogyro||Utility||2005||Production||100+||
|-
|Cessna Skymaster||US||Propeller||Transport||1961||Production||2,993||
|-
|Cessna XMC||US||Propeller||Experimental||1971||Prototype||1||
|-
|Commuter Craft Innovator||US||Propeller||Transport||2015||Prototype||1||
|-
|Conroy Stolifter||US||Propeller||Utility||1968||Prototype||1||
|-
|Continental KB-1||US||Propeller||Reconnaissance||1916||Prototype||1||
|-
|Convair 106 Skycoach||US||Propeller||Utility||1946||Prototype||1||
|-
|Convair Model 48 Charger||US||Propeller||Attack||1964||Prototype||1||
|-
|Creative Flight Aerocat||Canada||Propeller||Transport||2001||Prototype||1||
|-
|Cunliffe-Owen OA-1||UK||Propeller||Transport||1939||Prototype||1||
|-
|Curtis Wright 21||US||Propeller||Utility||1947||Prototype||1||
|-
|Curtiss Autoplane||US||Propeller||Flying car||1917||Project||1||
|-
|Curtiss CT||US||Propeller||Bomber||1921||Prototype||1||
|-
|De Havilland Sea Vixen||UK||Jet||Fighter||1951||Production||145||
|-
|De Havilland Vampire||UK||Jet||Fighter||1943||Production||3,268||
|-
|De Havilland Venom & Sea Venom||UK||Jet||Fighter||1952||Production||1,431||
|-
|De Schelde S.20||Netherlands||Propeller||Trainer||1940||Prototype||1||
|-
|De Schelde S.21||Netherlands||Propeller||Fighter||1940||Project||1||
|-
|Diemert Defender||Canada||Propeller||Counter-insurgency||1989||Prototype||1||
|-
|Difoga 421||Netherlands||Propeller||Utility||1946||Prototype||1||
|-
|Dyle et Bacalan DB-70||France||Propeller||Transport||1929||Prototype||1||
|-
|Doblhoff WNF 342||Germany||Helicopter||Reconnaissance||1943||Prototype||3||
|-
|DRDO Nishant||India||UAV||UAV||1996|| || ||
|-
|Edgley Optica||UK||Propeller||Reconnaissance||1979||Production||22||
|-
|Eldred Flyer's Dream||US||Propeller||Utility||1946||Prototype||1||
|-
|Emsco B-8 Flying Wing||US||Propeller||Record||1930||Prototype||1||
|-
|Fairchild C-82 Packet||US||Propeller||Transport||1944||Production||223||
|-
|Fairchild C-119 Flying Boxcar||US||Propeller||Transport||1947||Production||1,183||
|-
|Fairchild XC-120 Packplane||US||Propeller||Transport||1950||Prototype||1||
|-
|Fieseler Fi 168||Germany||Propeller||Attack||1938||Project|| ||
|-
|Focke-Wulf Fw 189||Germany||Propeller||Reconnaissance||1938||Production||864||
|-
|Focke-Wulf Flitzer||Germany||Jet||Fighter||1944||Project||0||
|-
|Focke-Wulf Project VIII||Germany||Propeller||Fighter||n/a||Project||0||
|-
|Fokker D.XXIII||Netherlands||Propeller||Fighter||1939||Prototype||1||
|-
|Fokker F.25||Netherlands||Propeller||Utility||1946||Production||20||
|-
|Fokker G.I||Netherlands||Propeller||Fighter||1937||Production||63||
|-
|Friedrichshafen FF.34||Germany||Propeller||Bomber||1916||Prototype||1||
|-
|General Airborne XCG-16||US||Glider||Transport||1943||Prototype||2||
|-
|General Aircraft Cagnet||UK||Propeller||Trainer||1939||Prototype||1||
|-
|General Aircraft GAL.47||UK||Propeller||Reconnaissance||1940||Prototype||1||
|-
|Ghods Mohajer||Iran||UAV||UAV||1981|| || ||
|-
|Gotha Go 242||Germany||Glider||Transport||1941||Production||1,528||
|-
|Gotha Go 244||Germany||Propeller||Transport||1942||Production||174||
|-
|Gotha WD.3||Germany||Propeller||Reconnaissance||1915||Prototype||1||
|-
|Grahame-White Ganymede||UK||Propeller||Bomber||1918||Prototype||1||
|-
|Groen Hawk 4||US||Autogyro||Utility||1997||Prototype||3||
|-
|Grokhovsky G-37||USSR||Propeller||Transport||1934||Prototype||1||
|-
|Grokhovsky G-38||USSR||Propeller||Fighter-bomber||1934||Project||0||
|-
|Grokhovsky G-39 Cucaracha||USSR||Propeller||Fighter||1935||Prototype||1||
|-
|Häfeli DH-1||Switzerland||Propeller||Reconnaissance||1916||Production||6||
|-
|Hanriot H.110 & H.115||France||Propeller||Fighter||1933||Prototype||1||
|-
|Harbin BZK-005||China||UAV||UAV||2006||Production||100+||
|-
|Henderson H.S.F.1||UK||Propeller||Transport||1929||Prototype||1||
|-
|Heston JC.6||UK||Propeller||Reconnaissance||1947||Prototype||2||
|-
|Hughes D-2||US||Propeller||Fighter-bomber||1942||Prototype||1||
|-
|Hughes XF-11||US||Propeller||Reconnaissance||1946||Prototype||2||
|-
|HWL Pegaz||Poland||Propeller||Motor glider||1949||Prototype||1||
|-
|Hydra Technologies Ehécatl||Mexico||UAV||UAV||2006|| || ||
|-
|IAI Arava||Israel||Propeller||Transport||1969||Production||103||
|-
|IAI Heron||Israel||UAV||UAV||1994|| || ||
|-
|IAI Scout||Israel||UAV||UAV||1981|| || ||
|-
|IAI Searcher||Israel||UAV||UAV||1992|| || ||
|-
|Ikarus 452M||Yugoslavia||Jet||Experimental||1953||Prototype||2||
|-
|Ion Aircraft Ion||US||Propeller||Utility||2007||Prototype||1||
|-
|I.S.T. XL-15 Tagak||Philippines||Propeller||Utility||1954||Prototype||1||
|-
|Johns Multiplane||US||Propeller||Bomber||1919||Prototype||1||
|-
|Kalinin K-7||USSR||Propeller||Experimental||1933||Prototype||1||
|-
|Kaman HH-43 Huskie||US||Helicopter||Utility||1947||Production||193||
|-
|Kamov Ka-26||USSR||Helicopter||Utility||1965||Production||816||
|-
|Kamov Ka-126||USSR||Helicopter||Utility||1988||Production||17||
|-
|Kamov Ka-226||Russia||Helicopter||Utility||1997||Production||69||
|-
|Kingsford Smith PL.7||Australia||Propeller||Agricultural||1956||Prototype||1||
|-
|Kokusai Ki-105 Otori||Japan||Propeller||Transport||1945||Prototype||9||
|-
|Kokusai Ku-7||Japan||Glider||Transport||1942||Prototype||2||
|-
|Kortenbach & Rauh Kora 1||Germany||Propeller||Motor glider||1973||Prototype||2||
|-
|Larkin Skylark||US||Propeller||Utility||1973||Prototype||1||
|-
|Lawrence Special||US||Propeller||Racer||1949||Prototype||1||
|-
|Levasseur PL.200/201||France||Propeller||Reconnaissance||1935||Prototype||1||
|-
|Lockheed P-38 Lightning||US||Propeller||Fighter||1939||Production||10,037||
|-
|Lockheed XP-49||US||Propeller||Fighter||1942||Prototype||1||
|-
|Lockheed XP-58 Chain Lightning||US||Propeller||Fighter||1944||Prototype||1||
|-
|Lloyd 40.08 Luftkreuzer||Germany||Propeller||Bomber||1916||Prototype||1||
|-
|LWF model H Owl||US||Propeller||Transport||1919||Prototype||1||
|-
|Maeda Ku-1||Japan||Glider||Trainer||1941||Production||100||
|-
|Macchi M.12||Italy||Propeller||Bomber||1918||Production||10 ca.||
|-
|Mansyū Ki-98||Japan||Propeller||Attack||1945||Prototype||1||
|-
|McCulloch J-2||US||Autogyro||Utility||1962||Production||83+||
|-
|McDonnell XV-1||US||Autogyro||Experimental||1954||Prototype||2||
|-
|McGaffey Aviate||US||Propeller||Utility||1935||Prototype||1||
|-
|Millet Lagarde ML-10||France||Propeller||Utility||1949||Prototype||2||
|-
|Mikoyan MiG-110||Russia||Propeller||Transport||1995||Project||0||
|-
|Mirach 26||Italy||UAV||UAV||1992|| || ||
|-
|Mitsubishi J4M||Japan||Propeller||Fighter||n/a||Project||0||
|-
|Moskalyev SAM-13||USSR||Propeller||Fighter||1940||Prototype||1||
|-
|Moskalyev SAM-23||USSR||Propeller||Fighter||1943||Project||0||
|-
|Myasishchev M-17 and M-55||USSR||Jet||Reconnaissance||1978||Production||8+||
|-
|NASA Mini-Sniffer (II and III)||US||Propeller||UAV||n/a||Prototype||2||
|-
|Nieuport seaplane pusher||France||Propeller||Reconnaissance||1913||Prototype||1||
|-
|Nord Noratlas||France||Propeller||Transport||1949||Production||425||
|-
|North American OV-10 Bronco||US||Propeller||Attack||1965||Production||360||
|-
|Northrop F-15 Reporter||US||Propeller||Reconnaissance||1945||Production||36||
|-
|Northrop Flying Wing||US||Propeller||Experimental||1929||Prototype||1||
|-
|Northrop Grumman Firebird||US||Propeller||Reconnaissance||2010||Prototype||1||
|-
|Northrop P-61 Black Widow||US||Propeller||Fighter||1942||Production||706||
|-
|NPO Molniya Molniya-1||Russia||Propeller||Utility||1992||Prototype||2||
|-
|OMA SUD Skycar||Italy||Propeller||Utility||2007||Prototype||1||
|-
|Otto C.I||Germany||Propeller||Reconnaissance||1915||Production||25||
|-
|PAL-V||Netherlands||Helicopter||Flying car||2012||Prototype||1||
|-
|Piper PA-7 Skycoupe||US||Propeller||Utility||1944||Prototype||1||
|-
|Pitcairn XO-61||US||Autogyro||Reconnaissance||1943||Prototype||2||
|-
|Pocino PJ.1A||France||Propeller||Ultralight||1989||Prototype||1||
|-
|Portsmouth Aerocar||UK||Propeller||Utility||1947||Prototype||1||
|-
|Potez 75||France||Propeller||Attack||1953||Prototype||1||
|-
|Praga E-51||Czechoslovakia||Propeller||Reconnaissance||1938||Prototype||1||
|-
|Puget Pacific Wheelair III-A||US||Propeller||Utility||1947||Prototype||1||
|-
|PZL M-15 Belphegor||Poland||Jet||Agricultural||1973||Production||175||
|-
|PZL M-17||Poland||Propeller||Trainer||1973||Prototype||1||
|-
|Raybird-3||Ukraine||UAV||UAV||2014||In service|| ||
|-
|Rice Knowlton Volante||US||Propeller||Flying car||1981||Prototype||1||
|-
|Rutan Grizzly||US||Propeller||Experimental||1982||Prototype||1||
|-
|Rocheville Arctic Tern||US||Propeller||Record||1932||Prototype||1||
|-
|Rotor Flight Dynamics LFINO||US||Autogyro||Experimental||2006||Prototype||1||
|-
|RTAF-5||Thailand||Propeller||Trainer||1984||Prototype||1||
|-
|RUAG Ranger||Switzerland / Israel||UAV||UAV||1999|| || ||
|-
|Rutan Voyager||US||Propeller||Record||1984||Production||1||
|-
|S-TEC Sentry||US||UAV||UAV||1986|| || ||
|-
|Saab 21||Sweden||Propeller||Fighter||1943||Production||298||
|-
|Saab 21R||Sweden||Jet||Fighter||1947||Production||64||
|-
|SAB AB-20 & 21||France||Propeller||Bomber||1932||Prototype||2||
|-
|Sadler Vampire||US||Propeller||Ultralight||1982||Production|| ||
|-
|SAIMAN LB.2||Italy||Propeller||Utility||1937||Prototype||1||
|-
|Savoia-Marchetti S.64||Italy||Propeller||Record||1928||Prototype||2||
|-
|Savoia-Marchetti S.65||Italy||Propeller||Racer||1929||Prototype||1||
|-
|Savoia-Marchetti SM.88||Italy||Propeller||Fighter||1939||Prototype||1||
|-
|Savoia-Marchetti SM.91||Italy||Propeller||Fighter-bomber||1943||Prototype||1||
|-
|Scaled Composites ARES||US||Jet||Attack||1990||Prototype||1||
|-
|Scaled Composites ATTT||US||Propeller||Transport||1986||Prototype||1||
|-
|Scaled Composites Pond Racer||US||Propeller||Racer||1991||Prototype||1||
|-
|Scaled Composites Proteus||US||Jet||Experimental||1991||Prototype||1||
|-
|Scaled Composites SpaceShipOne||US||Rocket||Spaceplane||2003||Prototype||1||
|-
|Scaled Composites SpaceShipTwo||US||Rocket||Spaceplane||2010||Prototype||2||
|-
|Scaled Composites White Knight||US||Jet||Transport||2002||Prototype||1||
|-
|Schneider Sch-10M||France||Propeller||Bomber||1925||Prototype||1||
|-
|Schwade Kampfeinsitzer Nr 2||Germany||Propeller||Fighter||1916||Prototype||1||
|-
|Schweizer RU-38 Twin Condor||US||Propeller||Reconnaissance||1995||Prototype||5||
|-
|SECAN Courlis||France||Propeller||Utility||1946||Production||144||
|-
|Selex ES Falco||Italy||UAV||UAV||2003|| || ||
|-
|SIAI-Marchetti FN.333 Riviera||Italy||Propeller||Utility||1952||Production||29||
|-
|Siemens-Schuckert L.I||Germany||Propeller||Bomber||1918||Prototype||3||
|-
|Siemens-Schuckert R.I||Germany||Propeller||Bomber||1915||Prototype||1||
|-
|Sikorsky S-38||US||Propeller||Transport||1928||Production||101||
|-
|Sikorsky S-39||US||Propeller||Transport||1929||Production||23+||
|-
|Sikorsky S-40||US||Propeller||Transport||1931||Production||3||
|-
|Sikorsky S-41||US||Propeller||Transport||1930||Production||7||
|-
|SIPA S.200 Minijet||France||Jet||Trainer||1952||Prototype||7||
|-
|Škoda Kauba Sk V6||Czechoslovakia||Propeller||Experimental||1944||Prototype||1||
|-
|SNCAC NC.1070||France||Propeller||Attack||1947||Prototype||1||
|-
|SNCAC NC.1071||France||Jet||Attack||1948||Prototype||1||
|-
|SNCASO SO.8000 Narval||France||Propeller||Fighter||1949||Prototype||2||
|-
|SPCA 30||France||Propeller||Bomber||1931||Prototype||2||
|-
|Spectrum SA-550||US||Propeller||Utility||1983||Prototype||2+||
|-
|Stearman-Hammond Y-1||US||Propeller||Utility||1931||Production||20 ca.||
|-
|Stout Skycar||US||Propeller||Transport||1941||Prototype||4||
|-
|Sukhoi Su-12||USSR||Propeller||Reconnaissance||1947||Prototype||1||
|-
|Sukhoi Su-80||Russia||Propeller||Transport||2001||Prototype||8||
|-
|Tachikawa Ki-94-I||Japan||Propeller||Fighter||n/a||Project||0||
|-
|TAI Baykuş||Turkey||UAV||UAV||2003|| || ||
|-
|Teledyne Ryan Model 410||US||UAV||UAV||1988|| || ||
|-
|Tengden TB-001||China||UAV||UAV||2017||Production|| ||
|-
|Terrafugia Transition||US||Propeller||Flying car||2009||Prototype||2||
|-
|THK-11||Turkey||Propeller||Utility||1947||Prototype||1||
|-
|Thomas-Morse MB-4||US||Propeller||Transport||1920||Prototype||4||
|-
|Transavia PL-12 Airtruk||Australia||Propeller||Agricultural||1965||Production||118||
|-
|Trella T-106||US||Propeller||Utility||1949||Prototype||1||
|-
|Trella T-107||US||Propeller||Transport||1954||Project||0||
|-
|Tupolev I-12/ANT-23||USSR||Propeller||Fighter||1931||Prototype||1||
|-
|Vance Viking||US||Propeller||Racer||1932||Prototype||1||
|-
|Virgin Atlantic GlobalFlyer||US||Jet||Record||2005||Production||1||
|-
|Voisin E.28||France||Propeller||Bomber||1919||Prototype||1||
|-
|Voisin Triplane||France||Propeller||Bomber||1915||Prototype||1||
|-
|Vultee XP-54||US||Propeller||Fighter||1943||Prototype||2||
|-
|Vultee XP-68 Tornado||US||Propeller||Fighter||n/a||Project||0||
|-
|Wagner Aerocar||Germany||Helicopter||Flying car||1965||Prototype||1||
|-
|Weick W-1||US||Propeller||Experimental||1934||Prototype||1||
|-
|Weymann 66||France||Propeller||Transport||1933||Prototype||1||
|-
|Willoughby Delta 8||UK||Propeller||Experimental||1939||Prototype||1||
|-
|Willoughby Delta 9||UK||Propeller||Transport||1939||Project||0||
|-
|WLT Sparrow||Czech Republic||Propeller||Ultralight||2010||Production||13||
|-
|WNF Wn 16||Austria||Propeller||Experimental||1939||Prototype||1||
|-
|Yakovlev Yak-58||Russia||Propeller||Utility||1993||Prototype||7||
|-
|Yakovlev Yak-141||Russia||Jet||Fighter||1987||Prototype||4||
|}
See also
Twin tail
References
Citations
Bibliography
Green, W. and Swanborough, G.; The complete book of fighters, Salamander, 1994.
Aircraft configurations | Twin-boom aircraft | [
"Engineering"
] | 8,791 | [
"Aircraft configurations",
"Aerospace engineering"
] |
3,080,821 | https://en.wikipedia.org/wiki/Alcan%20Lynemouth%20Aluminium%20Smelter | The Alcan Lynemouth Aluminium Smelter was an industrial facility near Ashington, Northumberland, on the coast of North East England, south of the village of Lynemouth. The smelter was owned by the Canadian aluminium company Alcan, which is part of Rio Tinto. The smelter opened in 1974 at a cost of $156 million, exceeding its budgeted estimate of £54 million. The plant ceased production in March 2012, and demolition of the facility was completed in March 2018.
Factors determining the smelter's site
A variety of factors determined the smelter's position:
The first was a source of electric power to smelt the aluminium. One tonne of aluminium requires the same amount of electricity that an average family uses in 20 years, so cheap power was needed. In 1972, Alcan commissioned Lynemouth Power Station, a short distance from the smelter's site, to fulfil its power needs. The station's site was convenient for access to the Ellington and Lynemouth coal mines nearby, which were also the fundamental reason for the nearby village's creation. The power station has a 420 megawatt (MW) capacity, more than enough to meet the load requirements of the smelter. The spare electricity is sold to the National Grid.
Another factor was finding a labour force. Many coal mines in the area had shut down, leaving thousands of people there unemployed. Aluminium smelting is very labour-intensive, but the workforce in the local area was used to heavy work because of working in the mines. The British government also granted £28 million to the company to help reduce unemployment in the area.
Transport was another major factor as bauxite could not be found in the United Kingdom, only in places such as Jamaica and Australia. The smelter's location had to be near a port with good transport links to the site. The town of Blyth, which is south of the smelter, already had a deep sea port. There was also a railway link from the port going directly to the power station, which was connected to the Alcan facility. The site also has good road links.
Facts
The smelter had two of the most efficient ring burners in the world, costing around £17 million each.
The smelter was the only aluminium smelting site in Europe which rebuilt the smelter whilst still in production. It was a 100-day process which took place every seven years.
The smelter was provided with alumina by two trains a day from Blyth, each consisting of 21 wagons. The alumina was shipped to Blyth from Limerick in the Republic of Ireland.
Coke was shipped to Blyth from Louisiana in the U.S. and was transported to the smelter by heavy goods vehicles.
Worries
When work started on the site, local farmers were worried that pollution from the smelter would ruin their crops and harm their livestock. To address their concerns, Alcan decided to buy the land from them. Alcan now owns a substantial area of land in the local area and employs a farming director. The land is still used to grow crops and raise livestock.
In early 2005, residents of nearby villages were worried about the fate of the smelter when the only remaining local coal mine, situated at Ellington, closed. However, the smelter did not close and imports its coal from overseas or from mines in other parts of the country.
The emissions of the power plant connected to the smelter were another concern for the environment. In April 2010, the European Court of Justice decided that, contrary to the claim of the UK government, the power plant was subject to the emission limit values laid down in the 2001 Large Combustion Plant Directive. As a consequence, emissions of air polluting substances of the plant had to be reduced.
Closure
Production at the Lynemouth Smelter ended at 14:00 on 29 March 2012, following a 90-day consultation period. It closed in May 2012 putting 515 people out of work and causing a knock-on effect in its local supply chain. Alcan cited rising energy costs due to emerging European environmental legislation as the reason.
The 420MW coal power station continues to operate under new ownership.
In 2015, the site was sold by Rio Tinto to Harworth Estates, who plan to turn it into an 'employment park'. By June 2016, all eight chimneys at the site had been demolished and the site decommissioned. Demolition of the former smelter was completed in March 2018. The remaining buildings are now rented to other businesses, the adjacent power station has been repurposed as a biomass power plant, and part of the area is now a wind farm site.
See also
Anglesey Aluminium
List of aluminium smelters
References
External links
UK Business Park News on Alcan
AME Research
Brief description
Company invests in a safe staff future
Aluminium smelters
Non-ferrous metallurgical works in the United Kingdom
Former Rio Tinto (corporation) subsidiaries
Alcan
Newbiggin-by-the-Sea | Alcan Lynemouth Aluminium Smelter | [
"Chemistry"
] | 1,043 | [
"Non-ferrous metallurgical works in the United Kingdom",
"Metallurgical facilities"
] |
3,080,885 | https://en.wikipedia.org/wiki/Engaged%20column | An engaged column is an architectural element in which a column is embedded in a wall and partly projecting from the surface of the wall, which may or may not carry a partial structural load. Sometimes defined as semi- or three-quarter detached, engaged columns are rarely found in classical Greek architecture, and then only in exceptional cases, but in Roman architecture they exist in abundance, most commonly embedded in the cella walls of pseudoperipteral Roman temples and other buildings.
In the temples it is attached to the cella walls, repeating the columns of the peristyle, and in the theatres and amphitheatres, where they subdivided the arched openings: in all these cases engaged columns are utilized as a decorative feature, and as a rule the same proportions are maintained as if they had been isolated columns. In Romanesque work the classic proportions were no longer adhered to; the engaged column, attached to the piers, has always a special function to perform, either to support subsidiary arches, or, raised to the vault, to carry its transverse or diagonal ribs. The same constructional object is followed in the earlier Gothic styles, in which they become merged into the mouldings. Being virtually always ready made, so far as their design is concerned, they were much affected by the Italian revivalists.
Gallery
See also
Hexastyle
Hypostyle
Octastyle
Peristyle
Pilaster
Portico
Tetrastyle
References
Stierlin, Henri The Roman Empire: From the Etruscans to the Decline of the Roman Empire, TASCHEN, 2002
Columns and entablature | Engaged column | [
"Technology"
] | 321 | [
"Structural system",
"Columns and entablature"
] |
3,080,986 | https://en.wikipedia.org/wiki/The%20Apprentice%20%28Libby%20novel%29 | The Apprentice is a novel by Lewis Libby, former Chief of Staff to United States Vice President Dick Cheney, first published in hardback in 1996, reprinted in trade paperback in 2002, and reissued in mass market paperback in 2005 after Libby's indictment in the CIA leak grand jury investigation. It is set in northern Japan in winter 1903, and centers on a group of travelers stranded at a remote inn due to a smallpox epidemic. It has been described as "a thriller ... that includes references to bestiality, pedophilia and rape." It is the first and only novel that Libby has written.
Publication history
After being published in hardback by Graywolf Press (St. Paul, Minnesota) in August 1996 (now out of print), it was published as a trade paperback by St. Martin's Thomas Dunne Books in February 2002, and then reissued as a mass market paperback reprint of 25,000 copies by St. Martin's Griffin imprint in December 2005, after Libby's indictment that October, as a result of the CIA leak grand jury investigation.
First edition publicity
In 2002, during an interview on Larry King Live promoting his novel's first publication in paperback, King asked Libby: "Are you a novelist working part-time for the vice president?" Libby told King, "Well, I've never quite figured that out. ... I'm a great fan of the Vice President. I think he's one of the smartest, most honorable people I've ever met. So, I'd like to consider myself fully on his team, but there's always a novel kicking around in the back somewhere." After hearing a brief plot summary, King wondered why Libby had set the novel in Japan, and Libby responded:
At that time, Libby also appeared on The Diane Rehm Show on National Public Radio to talk about the novel. Libby said that he had chosen to set the novel in Japan in 1903, because it was a pivotal time in its history that had intrigued him.
Plot summary
According to the description of the book by St. Martin's Press:
Reprint publicity
Following his indictment on October 28, 2005, for obstruction of justice, perjury, and making false statements to federal investigators in Special Counsel Patrick Fitzgerald's CIA leak grand jury investigation, relating to the Plame affair, after the novel was reissued and promoted by its publisher and Libby in media interviews and the subject of subsequent reviews, it gained renewed attention.
Notably, The Apprentice and Lewis Libby were the focus of the following week's New Yorker "Talk of the Town" column, by Lauren Collins, entitled "Scooter's Sex Shocker". Observing that "Libby has a lot to live up to as a conservative author of erotic fiction," Collins compares the novel to other so-called "sex shockers" written by conservative politicians and pundits and discusses themes of homoeroticism and incest in The Apprentice. She documents her view that "Like his predecessors, Libby does not shy from the scatological" with quotations from the book, regarding it as "Libby’s 1996 entry in the long and distinguished annals of the right-wing dirty novel."
Reception
Although the sexual passages and references make up only a few pages of the novel, one passage in particular — combining bestiality, pedophilia, prostitution, biastophilia, and voyeurism in just three sentences—has received wide attention:
Another sentence in the book introduces necrophilia in addition to bestiality, as a hunter copulates with a freshly killed deer: "The man called out to the others that the deer was still warm. He asked if they should fuck the deer" (127).
In his June 7, 2007 Wall Street Journal op-ed calling for Presidential pardon of Scooter Libby, conservative academic Fouad Ajami praised The Apprentice as a "remarkably lyrical novel ... [which] bears witness to an eye for human folly and disappointment."
References
External links
The Apprentice at Google Book Search. ("Preview" includes scans of front and back covers; title page; copyright page (back of title page); and selected pages from chapters 2, 4 and 5, with excluded pages identified. This preview includes pages cited above.)
1996 American novels
George W. Bush administration controversies
Novels set in Japan
Human–animal interaction
Fiction set in 1903
Graywolf Press books
1996 debut novels
Japan in non-Japanese culture | The Apprentice (Libby novel) | [
"Biology"
] | 916 | [
"Human–animal interaction",
"Animals",
"Humans and other species"
] |
3,081,101 | https://en.wikipedia.org/wiki/Chitobiose | Chitobioses are a group of related disaccharides of β-1,4-linked glucosamine units. The term chitobiose is sometimes used to refer to different members of the group, depending on the method by which it was first isolated, resulting in some ambiguity as to which chemical compound the name is referring to.
Chitobiose is the condensed form of 4-O-(2-amino-2-deoxy-β-D-glucopyranosyl)-2-amino-2-deoxy-D-glucose; in its N-acetylated form it is the repeating disaccharide of chitin. It is formed by the depolymerization of chitin, either enzymatically or chemically.
Chitobiose is utilized by Borrelia burgdorferi to produce N-acetylglucosamine, a component of the bacterial cell wall, and this process is regulated by the response regulator rrp1. An rrp1 mutant strain of Borrelia burgdorferi has been found to show growth deficits.
In Escherichia coli, the chb operon is involved in the utilization of the β-glucosides cellobiose and chitobiose. The chbG gene of the chb operon encodes a chitooligosaccharide deacetylase. The operon also encodes the ChbR regulator, which activates transcription when chitobiose is available.
See also
Glucosamine
Lysozyme
References
Amino sugars
Disaccharides | Chitobiose | [
"Chemistry"
] | 339 | [
"Amino sugars",
"Carbohydrates"
] |
3,081,452 | https://en.wikipedia.org/wiki/Durrell%20Institute%20of%20Conservation%20and%20Ecology | The Durrell Institute of Conservation and Ecology (DICE) is a subdivision of the University of Kent, started in 1989 and named in honour of the famous British naturalist Gerald Durrell. It was the first institute in the United Kingdom to award undergraduate and postgraduate degrees and diplomas in the fields of conservation biology, ecotourism, and biodiversity management. It comprises 22 academic staff and an advisory board of 14 conservationists from the government, business, and the NGO sectors.
History
DICE's graduate degree programme began in 1991 with a class of seven international students. Since then, it has trained over 1,200 people from 101 countries, including 322 people from Lower- and Middle-Income countries in Africa, Asia, Oceania, and South America. The founder of DICE is Professor Ian Swingland, who retired from the University of Kent in 1999, and the first Director was Dr. Mike Walkey, who retired in 2002.
Awards
In 2019, DICE was awarded a Queen's Anniversary Prize for "pioneering education, capacity building and research in global nature conservation to protect species and ecosystems and benefit people".
Alumni
Notable alumni include:
Bahar Dutt, Indian television journalist and environmental editor
Sanjay Gubbi, Indian conservation biologist
Rachel Ikemeh, Nigerian conservationist
Winnie Kiiru, Kenyan biologist and elephant conservationist
Patricia Medici, Brazilian conservation biologist
Jeanneney Rabearivony, Malagasy ecologist and herpetologist
Rajeev Raghavan, Indian conservation biologist
Alexandra Zimmermann, wildlife conservationist
References
Ecology organizations
Environmental research institutes
Research institutes in Kent
University of Kent | Durrell Institute of Conservation and Ecology | [
"Environmental_science"
] | 316 | [
"Environmental research institutes",
"Environmental research"
] |
3,081,780 | https://en.wikipedia.org/wiki/Peroxisomal%20targeting%20signal | In biochemical protein targeting, a peroxisomal targeting signal (PTS) is a region of the peroxisomal protein that receptors recognize and bind to. It is responsible for specifying that proteins containing this motif are localised to the peroxisome.
Overview
All peroxisomal proteins are synthesized in the cytoplasm and must be directed to the peroxisome. The first step in this process is the binding of the protein to a receptor. The receptor then directs the complex to the peroxisome. Receptors recognize and bind to a region of the peroxisomal protein called a peroxisomal targeting signal, or PTS.
Peroxisomes consist of a matrix surrounded by a specific membrane. Most peroxisomal matrix proteins contain a short sequence, usually three amino acids at the extreme carboxy tail of the protein, that serves as the PTS. The prototypic sequence (many variations exist) is serine-lysine-leucine (-SKL in the one-letter amino acid code). This motif, and its variations, is known as the PTS1, and the receptor is termed the PTS1 receptor.
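As a rough illustration of how such a C-terminal tripeptide can be screened for computationally, the following Python sketch checks a protein sequence against a relaxed PTS1-like consensus; the pattern [SAC][KRH][LM] and the function name are illustrative assumptions for this example, not an established predictor.

```python
import re

# Simplified, relaxed PTS1-like consensus: (S/A/C)(K/R/H)(L/M) at the C-terminus.
# Real predictors use more sequence context and scoring, so treat this as a toy filter.
PTS1_PATTERN = re.compile(r"[SAC][KRH][LM]$")

def has_pts1_like_tail(protein_sequence: str) -> bool:
    """Return True if the last three residues match the relaxed PTS1-like consensus."""
    return bool(PTS1_PATTERN.search(protein_sequence.upper()))

# Example: an invented sequence ending in the prototypic -SKL tripeptide.
print(has_pts1_like_tail("MSTADLVKSGSKL"))   # True
print(has_pts1_like_tail("MSTADLVKSGAAA"))   # False
```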
It was found that the PTS1 receptor is encoded by the PEX5 gene. PEX5 imports folded proteins into the peroxisome, shuttling between the peroxisome and cytosol. PEX5 interacts with a large number of other proteins, including Pex8p, 10p, 12p, 13p, 14p.
A few peroxisomal matrix proteins have a different, and less conserved sequence, at their amino termini. This PTS2 signal is recognized by the PTS2 receptor, encoded by the PEX7 gene.
"PEX" refers to a group of genes that were identified as being important for peroxisomal synthesis. The numerical attributions, such as PEX5, generally refer to the order in which they were first discovered.
A distinct motif, called the "mPTS", is used for proteins destined for the peroxisomal membrane; it is more poorly defined and may consist of discontinuous subdomains. One of these is usually a cluster of basic amino acids (arginines and lysines) within a loop of protein (i.e., between membrane spans) that will face the matrix. The mPTS receptor is the product of PEX19.
References
External links
Protein targeting
Signal transduction
Short linear motifs | Peroxisomal targeting signal | [
"Chemistry",
"Biology"
] | 511 | [
"Biotechnology stubs",
"Protein targeting",
"Signal transduction",
"Biochemistry stubs",
"Cellular processes",
"Biochemistry",
"Neurochemistry"
] |
3,082,038 | https://en.wikipedia.org/wiki/P%E2%80%93n%20junction%20isolation | p–n junction isolation is a method used to electrically isolate electronic components, such as transistors, on an integrated circuit (IC) by surrounding the components with reverse biased p–n junctions.
Introduction
By surrounding a transistor, resistor, capacitor or other component on an IC with semiconductor material doped with the opposite type of dopant to the substrate, and connecting this surrounding material to a voltage that reverse-biases the p–n junction that forms, it is possible to create an electrically isolated "well" around the component.
Operation
Assume that the semiconductor wafer is p-type material. Also assume a ring of n-type material is placed around a transistor, and placed beneath the transistor. If the p-type material within the n-type ring is now connected to the negative terminal of the power supply and the n-type ring is connected to the positive terminal, the 'holes' in the p-type region are pulled away from the p–n junction, causing the width of the nonconducting depletion region to increase. Similarly, because the n-type region is connected to the positive terminal, the electrons will also be pulled away from the junction.
This effectively increases the potential barrier and greatly increases the electrical resistance against the flow of charge carriers. For this reason there will be no (or minimal) electric current across the junction.
At the middle of the junction of the p–n material, a depletion region is created to stand-off the reverse voltage. The width of the depletion region grows larger with higher voltage. The electric field grows as the reverse voltage increases. When the electric field increases beyond a critical level, the junction breaks down and current begins to flow by avalanche breakdown. Therefore, care must be taken that circuit voltages do not exceed the breakdown voltage or electrical isolation ceases.
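The voltage dependence of the depletion width can be illustrated with the standard abrupt-junction approximation. The Python sketch below uses silicon constants and purely illustrative doping levels, not values from any particular isolation process.

```python
import math

Q = 1.602e-19                 # elementary charge [C]
EPS_SI = 11.7 * 8.854e-12     # permittivity of silicon [F/m]
KT_Q = 0.0259                 # thermal voltage at ~300 K [V]
NI = 1.5e16                   # approximate intrinsic carrier density of Si [m^-3]

def depletion_width(na, nd, v_reverse):
    """Abrupt-junction depletion width [m] for doping na, nd [m^-3] and reverse bias [V]."""
    v_bi = KT_Q * math.log(na * nd / NI**2)          # built-in potential
    return math.sqrt(2 * EPS_SI / Q * (1 / na + 1 / nd) * (v_bi + v_reverse))

# Illustrative isolation well: p-substrate 1e21 m^-3, n-ring 1e22 m^-3.
for vr in (0.0, 5.0, 20.0):
    w = depletion_width(1e21, 1e22, vr)
    print(f"{vr:5.1f} V reverse bias -> depletion width ~ {w * 1e6:.2f} um")
```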
History
In an article entitled "Microelectronics", published in Scientific American, September 1977 Volume 23, Number 3, pp. 63–9, Robert Noyce wrote:
"The integrated circuit, as we conceived and developed it at Fairchild Semiconductor in 1959, accomplishes the separation and interconnection of transistors and other circuit elements electrically rather than physically. The separation is accomplished by introducing pn diodes, or rectifiers, which allow current to flow in only one direction. The technique was patented by Kurt Lehovec at the Sprague Electric Company".Sprague Electric Company engineer Kurt Lehovec filed for p–n junction isolation in 1959, and was granted the patent in 1962. He is reported (during his lectures on semiconductor memory cells) to have said "I never got a dime out of it [the patent]." However, I T History states he was paid (pro forma) at least one dollar for what is possibly the most important invention in history, as it also was instrumental in the invention of the LED and the solar cell, both of which Lau Wai Shing says Lehovec also pioneered the research of.
When Robert Noyce invented the monolithic integrated circuit in 1959, his idea of p–n junction isolation was based on Hoerni's planar process. In 1976, Noyce stated that, in January 1959, he did not know about the work of Lehovec.
See also
LOCOS
Shallow trench isolation
References
Semiconductor structures
Integrated circuits
Czech inventions | P–n junction isolation | [
"Technology",
"Engineering"
] | 705 | [
"Computer engineering",
"Integrated circuits"
] |
3,082,158 | https://en.wikipedia.org/wiki/Multileaf%20collimator | A multileaf collimator (MLC) is a collimator, or beam-limiting device, made of individual "leaves" of a high-atomic-number material, usually tungsten, that can move independently in and out of the path of a radiotherapy beam in order to shape it and vary its intensity.
MLCs are used in external beam radiotherapy to provide conformal shaping of beams. Specifically, conformal radiotherapy and Intensity Modulated Radiation Therapy (IMRT) can be delivered using MLCs.
The MLC has developed rapidly since its inception, from the first use of leaves to shape fields in 1965 to modern day operation and use. MLCs are now widely used and have become an integral part of any radiotherapy department. MLCs were primarily used for conformal radiotherapy, allowing the cost-effective implementation of conformal treatment with significant time savings, and have also been adapted for use in IMRT treatments. For conformal radiotherapy the MLC allows conformal shaping of the beam to match the borders of the target tumour. For intensity modulated treatments the leaves of an MLC can be moved across the field to create IMRT distributions (MLCs really provide fluence modulation rather than intensity modulation).
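A minimal sketch of the idea, assuming a one-dimensional integer fluence profile and ignoring real leaf-motion and dosimetric constraints, is the naive "peeling" segmentation below. It is not a vendor leaf-sequencing algorithm, only an illustration of how a set of leaf-defined apertures can add up to a modulated fluence.

```python
def peel_segments(fluence):
    """Decompose an integer fluence profile into unit-intensity MLC apertures.

    Each aperture is a (left, right) pair of leaf positions covering a contiguous
    open run of bixels; delivering every aperture once reproduces the profile.
    """
    remaining = list(fluence)
    apertures = []
    while any(v > 0 for v in remaining):
        i, n = 0, len(remaining)
        while i < n:
            if remaining[i] > 0:
                j = i
                while j < n and remaining[j] > 0:
                    remaining[j] -= 1
                    j += 1
                apertures.append((i, j))   # leaves open over bixels [i, j)
                i = j
            else:
                i += 1
    return apertures

profile = [0, 1, 3, 2, 2, 1, 0]            # invented example profile
print(peel_segments(profile))              # [(1, 6), (2, 5), (2, 3)]
```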
The MLC is an important tool for radiation therapy dose delivery. It was originally used as a surrogate for alloy block field shaping and is now widely used for IMRT. As with any tool used in radiotherapy, the MLC must undergo commissioning and quality assurance. Additional commissioning measurements are completed to model an MLC for treatment planning. MLCs from different vendors have their own design features, and these differences are quite significant.
References
Medical equipment
Medical physics
Particle accelerators
Radiation therapy | Multileaf collimator | [
"Physics",
"Biology"
] | 359 | [
"Medical equipment",
"Applied and interdisciplinary physics",
"Medical physics",
"Medical technology"
] |
3,082,309 | https://en.wikipedia.org/wiki/Earthquake%20Baroque | Earthquake Baroque, or Seismic Baroque, is a style of Baroque architecture found in the former Spanish East Indies (now the Philippines and some nearby Pacific islands) and in Guatemala, which were Spanish-ruled territories that suffered destructive earthquakes during the 17th and the 18th centuries. Large public buildings, such as churches, were then rebuilt in a Baroque style during the Spanish colonial periods in those countries.
Similar events led to the Pombaline architecture in Lisbon following the 1755 Lisbon earthquake and Sicilian Baroque in Sicily following the 1693 earthquake.
Characteristics
In the Spanish East Indies, the destruction of earlier churches by frequent earthquakes made church proportions lower and wider; side walls were made thicker and heavily buttressed for stability during shaking. The upper structures were built with lighter materials or, where lighter materials were not used, with walls that decrease progressively in thickness towards the topmost levels.
Bell towers are usually lower and stouter compared to towers in less seismically active regions of the world. Towers are thicker in the lower levels, progressively narrowing to the topmost level. In the Philippines, aside from functioning as watchtowers against pirates, some bell towers are detached from the main church building so that, if a tower falls during an earthquake, it does not damage the church.
See also
Church architecture
Spanish Colonial architecture
Churrigueresque
Plateresque
References
External links
"Earthquake Baroque: Paoay Church in the Ilocos" from the Heritage Conservation Society
San Pedro de las Huertas, an Earthquake Baroque church in Guatemala
Earthquake baroque churches of the Philippines
Baroque architectural styles
Baroque architecture in the Philippines
Earthquake engineering | Earthquake Baroque | [
"Engineering"
] | 316 | [
"Earthquake engineering",
"Civil engineering",
"Structural engineering"
] |
3,082,931 | https://en.wikipedia.org/wiki/Origination%20of%20Organismal%20Form | Origination of Organismal Form: Beyond the Gene in Developmental and Evolutionary Biology is an anthology published in 2003 edited by Gerd B. Müller and Stuart A. Newman. The book is the outcome of the 4th Altenberg Workshop in Theoretical Biology on "Origins of Organismal Form: Beyond the Gene Paradigm", hosted in 1999 at the Konrad Lorenz Institute for Evolution and Cognition Research. It has been cited over 200 times and has a major influence on extended evolutionary synthesis research.
Description of the book
The book explores the multiple factors that may have been responsible for the origination of biological form in multicellular life. These biological forms include limbs, segmented structures, and different body symmetries.
It explores why the basic body plans of nearly all multicellular life arose in the relatively short time span of the Cambrian Explosion. The authors focus on physical factors (structuralism) other than changes in an organism's genome that may have caused multicellular life to form new structures. These physical factors include differential adhesion of cells and feedback oscillations between cells.
The book also presents recent experimental results that examine how the same embryonic tissues or tumor cells can be coaxed into forming dramatically different structures under different environmental conditions.
One of the goals of the book is to stimulate research that may lead to a more comprehensive theory of evolution. It is frequently cited as foundational to the development of the extended evolutionary synthesis.
List of contributions
Origination of Organismal Form: The Forgotten Cause in Evolutionary Theory, Gerd B. Müller and Stuart A. Newman
The Cambrian "Explosion" of Metazoans, Simon Conway Morris
Convergence and Homoplasy in the Evolution of Organismal Form, Pat Willmer
Homology:The Evolution of Morphological Organization, Gerd B. Müller
Only Details Determine, Roy J. Britten
The Reactive Genome, Scott F. Gilbert
Tissue Specificity: Structural Cues Allow Diverse Phenotypes from a Constant Genotype, Mina J. Bissell, I. Saira Mian, Derek Radisky and Eva Turley
Genes, Cell Behavior, and the Evolution of Form, Ellen Larsen
Cell Adhesive Interactions and Tissue Self-Organization, Malcolm Steinberg
Gradients, Diffusion, and Genes in Pattern Formation, H. Frederik Nijhout
A Biochemical Oscillator Linked to Vertebrate Segmentation, Olivier Pourquié
Organization through Intra-Inter Dynamics, Kunihiko Kaneko
From Physics to Development: The Evolution of Morphogenetic Mechanisms, Stuart A. Newman
Phenotypic Plasticity and Evolution by Genetic Assimilation, Vidyanand Nanjundiah
Genetic and Epigenetic Factors in the Origin of the Tetrapod Limb, Günter P. Wagner and Chi-hua Chiu
Epigenesis and Evolution of Brains: From Embryonic Divisions to Functional Systems, Georg F. Striedter
Boundary Constraints for the Emergence of Form, Diego Rasskin-Gutman
References
2003 non-fiction books
Biology books
Books about evolution
Extended evolutionary synthesis
Developmental biology
Evolutionary developmental biology | Origination of Organismal Form | [
"Biology"
] | 605 | [
"Behavior",
"Developmental biology",
"Reproduction"
] |
3,082,957 | https://en.wikipedia.org/wiki/Coaxial-rotor%20aircraft | A coaxial-rotor aircraft is an aircraft whose rotors are mounted one above the other on concentric shafts, with the same axis of rotation, but turning in opposite directions (contra-rotating).
This rotor configuration is a feature of helicopters produced by the Russian Kamov helicopter design bureau.
History
The idea of coaxial rotors originates with Mikhail Lomonosov. He had developed a small helicopter model with coaxial rotors in July 1754 and demonstrated it to the Russian Academy of Sciences.
In 1859, the British Patent Office awarded the first helicopter patent to Henry Bright for his coaxial design. From this point, coaxial helicopters developed into fully operational machines as we know them today.
Two pioneering helicopters, the Corradino D'Ascanio-built "D'AT3" of 1930, and the generally more successful French mid-1930s Gyroplane Laboratoire, both used coaxial rotor systems for flight.
Design considerations
Having two coaxial sets of rotors provides symmetry of forces around the central axis for lifting the vehicle and laterally when flying in any direction. Because of the mechanical complexity, many helicopter designs use alternative configurations to avoid problems that arise when only one main rotor is used. Common alternatives are single-rotor helicopters or tandem rotor arrangements.
Torque
One of the problems with any single set of rotor blades is the torque (rotational force) exerted on the helicopter fuselage in the direction opposite to the rotor blades. This torque causes the fuselage to rotate in the direction opposite to the rotor blades. In single rotor helicopters, the antitorque rotor or tail rotor counteracts the main rotor torque and controls the fuselage rotation.
Coaxial rotors solve the problem of main rotor torque by turning each set of rotors in opposite directions. The opposite torques from the rotors cancel each other out. Rotational maneuvering, yaw control, is accomplished by increasing the collective pitch of one rotor and decreasing the collective pitch on the other. This causes a controlled dissymmetry of torque.
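A minimal sketch of this torque bookkeeping, with purely illustrative numbers, is shown below.

```python
def yaw_moment(torque_upper, torque_lower):
    """Net reaction torque on the fuselage from two contra-rotating rotors.

    The rotors spin in opposite senses, so their reaction torques oppose each
    other: equal torques cancel, while a deliberate imbalance yaws the aircraft.
    """
    return torque_upper - torque_lower

# Hover: matched collective pitch, matched torque -> no net yaw.
print(yaw_moment(5000.0, 5000.0))   # 0.0 N*m

# Yaw input: raise collective (and hence torque) on one rotor, lower it on the other.
print(yaw_moment(5500.0, 4500.0))   # 1000.0 N*m, fuselage yaws
```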
Dissymmetry of lift
Dissymmetry of lift is an aerodynamic phenomenon caused by the rotation of a helicopter's rotors in forward flight. Rotor blades provide lift proportional to the amount of air flowing over them. When viewed from above, the rotor blades move in the direction of flight for half of the rotation (advancing half), and then move in the opposite direction for the remainder of the rotation (retreating half). A rotor blade produces more lift in the advancing half. As a blade moves toward the direction of flight, the forward motion of the aircraft increases the speed of the air flowing around the blade until it reaches a maximum when the blade is perpendicular to the relative wind. At the same time, a rotor blade in the retreating half produces less lift. As a blade moves away from the direction of flight, the speed of the airflow over the rotor blade is reduced by an amount equal to the forward speed of the aircraft, reaching its maximum effect when the rotor blade is again perpendicular to the relative wind.
Coaxial rotors avoid the effects of dissymmetry of lift through the use of two rotors turning in opposite directions, causing blades to advance on either side at the same time.
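A small Python sketch, using the standard simplified expression for blade-tip tangential airspeed (rotational speed plus the sine component of forward speed) and illustrative values, shows how the advancing/retreating asymmetry of one rotor is mirrored by the counter-rotating rotor.

```python
import math

def blade_tip_airspeed(omega, radius, forward_speed, azimuth_deg, clockwise=False):
    """Tangential airspeed seen by a blade tip at a given azimuth (simplified).

    For a counter-clockwise rotor the advancing side is near 90 degrees of
    azimuth; a clockwise rotor sees the mirror image, so the forward-speed
    term flips sign.
    """
    sign = -1.0 if clockwise else 1.0
    return omega * radius + sign * forward_speed * math.sin(math.radians(azimuth_deg))

OMEGA, R, V = 30.0, 6.0, 50.0   # rad/s, m, m/s; illustrative values only
for psi in (90, 270):
    upper = blade_tip_airspeed(OMEGA, R, V, psi, clockwise=False)
    lower = blade_tip_airspeed(OMEGA, R, V, psi, clockwise=True)
    print(psi, upper, lower, upper + lower)   # the combined value is the same on both sides
```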
Other benefits
Other benefits arising from a coaxial design include increased payload for the same engine power; a tail rotor typically wastes some of the available engine power that would be fully devoted to lift and thrust with a coaxial design. Reduced noise is a further advantage of the configuration; some of the loud "spanking" sound associated with conventional helicopters arises from interaction between the airflows from the main and tail rotors, which in some designs can be severe. Also, helicopters using coaxial rotors tend to be more compact (with a smaller footprint on the ground), though at the price of increased height, and consequently have uses in areas where space is at a premium; several Kamov designs are used in naval roles, being capable of operating from confined spaces on the decks of ships, including ships other than aircraft carriers (an example being the Kara-class cruisers of the Russian navy, which carry a Ka-25 'Hormone' helicopter as part of their standard equipment). Another benefit is increased safety on the ground; the absence of a tail rotor eliminates the major source of injuries and fatalities to ground crews and bystanders.
Disadvantages
The rotor hub is mechanically more complex: the linkages and swashplates for two rotor systems must be assembled atop the mast, which must also drive the two rotors in opposite directions. Because of the greater number of moving parts and the added complexity, the coaxial rotor system is more prone to mechanical faults and possible failure. According to critics, coaxial helicopters are also more prone to blade "whipping" and blade self-collision.
Coaxial models
The system's inherent stability and quick control response make it suitable for use in small radio-controlled helicopters. These benefits come at the cost of a limited forward speed, and higher sensitivity to wind. These two factors are especially limiting in outdoor use. Such models are usually fixed-pitch (i.e., the blades cannot be rotated on their axes for different angles of attack), simplifying the model but eliminating the ability to compensate with collective input. Compensating for even the slightest breeze causes the model to climb rather than to fly forward even with full application of cyclic.
Coaxial multirotors
Multirotor-type unmanned aerial vehicles exist in numerous configurations, including duocopter, tricopter, quadcopter, hexacopter and octocopter. All of them can be upgraded to a coaxial configuration to gain stability and flight time while carrying much more payload without gaining too much weight. Coaxial multirotors are built by having each arm carry two motors mounted one above the other and spinning in opposite directions. It is therefore possible to have a quad-axis, octorotor airframe thanks to the coaxial configuration. Duocopters are characterised by two motors aligned on a vertical axis; control is performed by appropriately accelerating a rotor at specific points of its revolution to generate thrust in the desired direction. The greater lifting power for a given footprint explains why coaxial multirotors are preferred for nearly all large-payload commercial applications of UAS.
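A simplified motor-mixing sketch for a coaxial "X8" airframe is shown below. The arm names, sign conventions and gains are illustrative assumptions, but it shows how the two motors on one arm share roll and pitch commands while taking opposite yaw corrections because they spin in opposite directions.

```python
# Arms of an "X8" coaxial quad; each arm carries a top motor and a bottom motor
# spinning in opposite directions. Sign conventions here are illustrative only.
ARMS = {
    "front_right": (+1, -1),   # (roll sign, pitch sign) contributions
    "rear_right":  (+1, +1),
    "rear_left":   (-1, +1),
    "front_left":  (-1, -1),
}

def mix(thrust, roll, pitch, yaw):
    """Very simplified motor mixing for a coaxial octorotor (unitless commands)."""
    commands = {}
    for arm, (r_sign, p_sign) in ARMS.items():
        base = thrust + r_sign * roll + p_sign * pitch
        # Top and bottom motors on the same arm share roll/pitch but take
        # opposite yaw corrections because they spin in opposite directions.
        commands[arm + "_top"] = base + yaw
        commands[arm + "_bottom"] = base - yaw
    return commands

print(mix(thrust=0.6, roll=0.0, pitch=0.1, yaw=0.05))
```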
Reduced hazards of flight
The U.S. Department of Transportation has published a “Basic Helicopter Handbook”. One of the chapters in it is titled “Some Hazards of Helicopter Flight”. Ten hazards have been listed to indicate what a typical single rotor helicopter has to deal with. The coaxial rotor design either reduces or completely eliminates many of these hazards. The following list indicates which:
Settling with power — Reduced
Retreating blade stall — Reduced
Medium frequency vibrations — Reduced
High frequency vibrations — None
Anti torque system failure in forward flight — Eliminated
Anti torque system failure while hovering — Eliminated
The reduction and elimination of these hazards are the strong points for the safety of coaxial rotor design.
List of coaxial rotor helicopters
Bendix Model K (1945)
Brantly B-1 (1946)
Bendix Model J (1946)
Bréguet G.111 (1949)
Bréguet-Dorand Gyroplane Laboratoire (1936)
Cierva CR Twin (1969)
Cranfield Vertigo (1987)
Eagle's Perch (1998)
EDM Aerotec CoAX 2D/2R
Gyrodyne QH-50 DASH
Hiller XH-44 (1944)
Kamov Ka-8 (1947)
Kamov Ka-10 (1949)
Kamov Ka-15 (1953)
Kamov Ka-18 (1955)
Kamov Ka-25 (1963)
Kamov Ka-26 (1965)
Kamov Ka-27 (1974)
Kamov Ka-50 (1982 and 1997 for Ka-52)
Kamov Ka-92
Kamov Ka-126 (1988)
Kamov Ka-226 (1997)
Manzolini Libellula (1952)
Phoenix Skyblazer (2011)
Sikorsky S-69 (1973)
Sikorsky X2 (2008)
Sikorsky S-97
Sikorsky/Boeing SB-1 Defiant
Wagner Aerocar
Mars helicopter Ingenuity (2021)
See also
Intermeshing rotors
Contra-rotating propellers
Tandem rotors
Transverse rotors
Mars helicopter Ingenuity
References
External links
Helicopter History Site
Aerodynamics
Coaxial rotor helicopters
Helicopter components
Rotorcraft | Coaxial-rotor aircraft | [
"Chemistry",
"Engineering"
] | 1,744 | [
"Aerospace engineering",
"Aerodynamics",
"Fluid dynamics"
] |
17,460,357 | https://en.wikipedia.org/wiki/Co-occurrence%20network | Co-occurrence network, sometimes referred to as a semantic network, is a method to analyze text that includes a graphic visualization of potential relationships between people, organizations, concepts, biological organisms like bacteria or other entities represented within written material. The generation and visualization of co-occurrence networks has become practical with the advent of electronically stored text amenable to text mining.
By way of definition, co-occurrence networks are the collective interconnection of terms based on their paired presence within a specified unit of text. Networks are generated by connecting pairs of terms using a set of criteria defining co-occurrence. For example, terms A and B may be said to “co-occur” if they both appear in a particular article. Another article may contain terms B and C. Linking A to B and B to C creates a co-occurrence network of these three terms. Rules to define co-occurrence within a text corpus can be set according to desired criteria. For example, a more stringent criteria for co-occurrence may require a pair of terms to appear in the same sentence. Co-occurrence networks were found to be particularly useful to analyze large text and big data, when identifying the main themes and topics (such as in a large number of social media posts), revealing biases in the text (such as biases in news coverage), or even mapping an entire research field.
Methods and development
The process of constructing co-occurrence networks includes identifying keywords in the text, calculating the frequencies of co-occurrences, and analyzing the networks to find central words and clusters of themes in the network.
Co-occurrence networks can be created for any given list of terms (any dictionary) in relation to any collection of texts (any text corpus). Co-occurring pairs of terms can be called “neighbors” and these often group into “neighborhoods” based on their interconnections. Individual terms may have several neighbors. Neighborhoods may connect to one another through at least one individual term or may remain unconnected.
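A minimal Python sketch of this construction, using whole documents as the co-occurrence unit, is given below; the dictionary, documents and threshold are invented examples, and a sentence-level criterion would simply mean passing a list of sentences instead.

```python
from collections import Counter
from itertools import combinations

def build_cooccurrence(documents, dictionary, min_count=1):
    """Count how often pairs of dictionary terms appear in the same document.

    Returns the edges of the co-occurrence network: term pairs whose count
    meets the chosen threshold. The neighbors of a term are the terms it is
    paired with in at least one edge.
    """
    pair_counts = Counter()
    for doc in documents:
        present = sorted({t for t in dictionary if t.lower() in doc.lower()})
        for a, b in combinations(present, 2):
            pair_counts[(a, b)] += 1
    return {pair: n for pair, n in pair_counts.items() if n >= min_count}

docs = [
    "Term A and term B were discussed together.",
    "Term B was studied alongside term C.",
]
edges = build_cooccurrence(docs, dictionary=["term A", "term B", "term C"])
print(edges)   # {('term A', 'term B'): 1, ('term B', 'term C'): 1} -> an A-B-C network
```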
Individual terms are, within the context of text mining, symbolically represented as text strings. In the real world, the entity identified by a term normally has several symbolic representations. It is therefore useful to consider terms as being represented by one primary symbol and up to several synonymous alternative symbols. Occurrence of an individual term is established by searching for each known symbolic representations of the term. The process can be augmented through NLP (natural language processing) algorithms that interrogate segments of text for possible alternatives such as word order, spacing and hyphenation. NLP can also be used to identify sentence structure and categorize text strings according to grammar (for example, categorizing a string of text as a noun based on a preceding string of text known to be an article).
Graphic representation of co-occurrence networks allow them to be visualized and inferences drawn regarding relationships between entities in the domain represented by the dictionary of terms applied to the text corpus. Meaningful visualization normally requires simplifications of the network. For example, networks may be drawn such that the number of neighbors connecting to each term is limited. The criteria for limiting neighbors might be based on the absolute number of co-occurrences or more subtle criteria such as “probability” of co-occurrence or the presence of an intervening descriptive term.
Quantitative aspects of the underlying structure of a co-occurrence network might also be informative, such as the overall number of connections between entities, clustering of entities representing sub-domains, detecting synonyms, etc.
Applications and use
Some working applications of the co-occurrence approach are available to the public through the internet. PubGene is an example of an application that addresses the interests of the biomedical community by presenting networks based on the co-occurrence of genetics-related terms as these appear in MEDLINE records. PubGene's CoreMine Medical has been used in studies relating genes/proteins to potentially effective drugs and drug candidates in multiple sclerosis, fibrosis, and hepatitis. CoreMine Medical was also used in a study of genes implicated in post-traumatic stress disorder.
The website NameBase is an example of how human relationships can be inferred by examining networks constructed from the co-occurrence of personal names in newspapers and other texts (as in Ozgur et al.).
Networks of information are also used to facilitate efforts to organize and focus publicly available information for law enforcement and intelligence purposes (so called "open source intelligence" or OSINT). Related techniques include co-citation networks as well as the analysis of hyperlink and content structure on the internet (such as in the analysis of web sites connected to terrorism).
See also
Topic spotting
Social network analysis
References
Biological databases
Computational linguistics
Data mining
Medical search engines
Intelligence gathering disciplines
Open-source intelligence | Co-occurrence network | [
"Technology",
"Biology"
] | 966 | [
"Bioinformatics",
"Natural language and computing",
"Computational linguistics",
"Biological databases"
] |
17,460,900 | https://en.wikipedia.org/wiki/NOD-like%20receptor | The nucleotide-binding oligomerization domain-like receptors, or NOD-like receptors (NLRs) (also known as nucleotide-binding leucine-rich repeat receptors), are intracellular sensors of pathogen-associated molecular patterns (PAMPs) that enter the cell via phagocytosis or pores, and damage-associated molecular patterns (DAMPs) that are associated with cell stress. They are types of pattern recognition receptors (PRRs), and play key roles in the regulation of innate immune response. NLRs can cooperate with toll-like receptors (TLRs) and regulate inflammatory and apoptotic response.
NLRs primarily recognize Gram-positive bacteria, whereas TLRs primarily recognize Gram-negative bacteria. They are found in lymphocytes, macrophages, dendritic cells and also in non-immune cells, for example in epithelium. NLRs are highly conserved through evolution. Their homologs have been discovered in many different animal species (APAF1) and also in the plant kingdom (disease-resistance R protein).
Structure
NLRs contain three domains: a central NACHT (NOD or NBD, nucleotide-binding) domain, which is common to all NLRs; a C-terminal leucine-rich repeat (LRR), present in most NLRs; and a variable N-terminal interaction domain. The NACHT domain mediates ATP-dependent self-oligomerization, while the LRR senses the presence of ligand. The N-terminal domain is responsible for homotypic protein–protein interactions and can consist of a caspase recruitment domain (CARD), a pyrin domain (PYD), an acidic transactivating domain or baculovirus inhibitor repeats (BIRs).
Nomenclature and system
Names such as CATERPILLER, NOD, NALP, PAN, NACHT and PYPAF were used to describe the NLR family. The nomenclature was unified by the HUGO Gene Nomenclature Committee in 2008. The family was designated NLR to describe its defining features: NLR stands for nucleotide-binding domain and leucine-rich repeat containing gene family.
This system divides NLRs into 4 subfamilies based on the type of N-terminal domain:
NLRA (A for acidic transactivating domain): CIITA
NLRB (B for BIRs): NAIP
NLRC (C for CARD): NOD1, NOD2, NLRC3, NLRC4, NLRC5
NLRP (P for PYD): NLRP1, NLRP2, NLRP3, NLRP4, NLRP5, NLRP6, NLRP7, NLRP8, NLRP9, NLRP10, NLRP11, NLRP12, NLRP13, NLRP14
There is also an additional subfamily NLRX which doesn't have significant homology to any N-terminal domain. A member of this subfamily is NLRX1.
On the other hand, NLRs can be divided into 3 subfamilies with regard to their phylogenetic relationships:
NODs: NOD1, NOD2, NOD3 (NLRC3), NOD4 (NLRC5), NOD5 (NLRX1), CIITA
NLRPs (also called NALPs): NLRP1, NLRP2, NLRP3, NLRP4, NLRP5, NLRP6, NLRP7, NLRP8, NLRP9, NLRP10, NLRP11, NLRP12, NLRP13, NLRP14
IPAF: IPAF (NLRC4), NAIP
Subfamily NODs
NODs subfamily consists of NOD1, NOD2, NOD3, NOD4 with CARD domain, CIITA containing acidic transactivator domain and NOD5 without any N-terminal domain.
Signalling
The best-described receptors are NOD1 and NOD2. Recognition of their ligands triggers oligomerization of the NACHT domain and a CARD–CARD interaction with the CARD-containing serine-threonine kinase RIP2, which leads to activation of RIP2.
RIP2 mediates the recruitment of the kinase TAK1, which phosphorylates and activates IκB kinase. The activation of IκB kinase results in the phosphorylation of the inhibitor IκB, which releases NF-κB and allows its nuclear translocation. NF-κB then activates the expression of inflammatory cytokines.
Mutations in NOD2 are associated with Crohn's disease or Blau syndrome.
Ligands
NOD1 and NOD2 recognize peptidoglycan motifs from bacterial cells, which consist of N-acetylglucosamine and N-acetylmuramic acid. These sugar chains are cross-linked by peptide chains that can be sensed by NODs. NOD1 recognizes a molecule called meso-diaminopimelic acid (meso-DAP), mostly found in Gram-negative bacteria (for example Helicobacter pylori, Pseudomonas aeruginosa). NOD2 proteins can sense intracellular muramyl dipeptide (MDP), typical for bacteria such as Streptococcus pneumoniae or Mycobacterium tuberculosis.
Subfamilies NLRPs and IPAF
NLRPs subfamily contains NLRP1-NLRP14 that are characterized by the presence of PYD domain. IPAF subfamily has two members – IPAF with CARD domain and NAIP with BIR domain.
Signalling
The NLRP and IPAF subfamilies are involved in the formation of the inflammasome. The best characterized inflammasome is NLRP3; its activation by PAMPs or DAMPs leads to oligomerization. The pyrin domain of the NLR binds to the adaptor protein ASC (PYCARD) via a PYD–PYD interaction. ASC contains both PYD and CARD domains and links the NLR to the inactive form of caspase 1 through the CARD domain.
All these protein–protein interactions form a complex called the inflammasome. The aggregation of pro-caspase-1 causes its autocleavage and the formation of the active enzyme. Caspase-1 is important for the proteolytic processing of the pro-inflammatory cytokines IL-1β and IL-18.
NLRP3 mutations are responsible for the autoinflammatory disease familial cold autoinflammatory syndrome or Muckle–Wells syndrome.
Ligands
There are three well-characterized inflammasomes – NLRP1, NLRP3 and IPAF. Formation of the NLRP3 inflammasome can be triggered by PAMPs such as microbial toxins (for example the alpha-toxin of Staphylococcus aureus) or whole pathogens, for instance Candida albicans, Saccharomyces cerevisiae, Sendai virus and influenza virus. NLRP3 also recognizes DAMPs, which indicate stress in the cell. The danger molecule can be extracellular ATP, extracellular glucose, monosodium urate (MSU) crystals, calcium pyrophosphate dihydrate (CPPD), alum, cholesterol or environmental irritants such as silica, asbestos, UV irradiation and skin irritants. The presence of these molecules causes production of ROS and K+ efflux. NLRP1 recognizes lethal toxin from Bacillus anthracis and muramyl dipeptide. IPAF senses flagellin from Salmonella typhimurium, Pseudomonas aeruginosa and Listeria monocytogenes.
See also
Toll-like receptor
Inflammasome
RIG-I-like receptor
References
External links
PTHR14074 (filter for human)
Intracellular receptors
LRR proteins
NOD-like receptors
Immune system | NOD-like receptor | [
"Biology"
] | 1,634 | [
"Immune system",
"Organ systems"
] |
17,463,475 | https://en.wikipedia.org/wiki/Micro-donation | Micro-donation or microphilanthropy is a form of charitable donation that is small in the donated amount. In the past, micro-donations have been used most effectively by companies collecting spare change at registers and checkouts. Recently, this form of philanthropy has become more popular with the advent of online and mobile donating.
In addition to the more traditional forms of donating, like giving directly from person to person, both the internet and mobile-phones have become more accepted by the public for collecting donations.
Micro-donations of $200 or less have made up an ever-larger share of nomination fundraising in the three United States presidential primary elections since 2000. (In this measurement a person who donates $190 twice to a candidate has given two micro-donations, but is not a micro-donor). Micro-donations accounted for 25% of the total donations for the United States presidential election in 2000. This figure rose to 34% in 2004 and 38.8% in 2008.
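A small Python sketch, with invented donors and the $200 threshold used above, illustrates the counting rule that separates micro-donations from micro-donors.

```python
from collections import defaultdict

MICRO_LIMIT = 200  # dollars; the threshold used in the text above

def summarize(donations):
    """Count micro-donations (each gift <= $200) and micro-donors (total <= $200)."""
    totals = defaultdict(float)
    micro_gifts = 0
    for donor, amount in donations:
        totals[donor] += amount
        if amount <= MICRO_LIMIT:
            micro_gifts += 1
    micro_donors = [d for d, total in totals.items() if total <= MICRO_LIMIT]
    return micro_gifts, micro_donors

# Two $190 gifts from the same person: two micro-donations, but not a micro-donor.
gifts = [("alice", 190.0), ("alice", 190.0), ("bob", 50.0)]
print(summarize(gifts))   # (3, ['bob'])
```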
Supporting technology
Microphilanthropy requires the ability to deal with a large number of small interactions efficiently. If a successful approach also includes implementing a fundraising drive that utilizes microphilanthropic resources connected to a specific charity, the approach must also include a structure or "middleman" technology that allows for an effective, efficient aggregation and distribution of microphilanthropic donations. For example, the US-based nonprofit Zidisha offers an eBay-style peer-to-peer microlending platform, which uses internet and mobile phone technology to deliver services between lenders and borrowers directly across international borders without local intermediaries.
See also
Microcredit
Mobile donating
References
Micropayment
Philanthropy
Activism by type
Giving
Peer-to-peer charities
Private aid programs | Micro-donation | [
"Biology"
] | 362 | [
"Philanthropy",
"Behavior",
"Altruism",
"Private aid programs"
] |
17,464,907 | https://en.wikipedia.org/wiki/Allowance%20%28engineering%29 | In engineering and machining, an allowance is a planned deviation between an exact dimension and a nominal or theoretical dimension, or between an intermediate-stage dimension and an intended final dimension. The unifying abstract concept is that a certain amount of difference allows for some known factor of compensation or interference. For example, an area of excess metal may be left because it is needed to complete subsequent machining. Common cases are listed below. An allowance, which is a planned deviation from an ideal, is contrasted with a tolerance, which accounts for expected but unplanned deviations.
Allowance is basically the size difference between components that work together. Allowance between parts that are assembled is very important. For example, the axle of a car has to be supported in a bearing otherwise it will fall to the ground. If there was no gap between the axle and the bearing then there would be a lot of friction and it would be difficult to get the car to move. If there was too much of a gap then the axle would be jumping around in the bearing. It is important to get the allowance between the axle and the bearing correct so that the axle rotates smoothly and easily without juddering.
Examples of engineering and machining allowances
Outer dimensions (such as the length of a bar) may be cut intentionally oversize, or inner dimensions (such as the diameter of a hole) may be cut intentionally undersize, to allow for a predictable dimensional change following future cutting, grinding, or heat-treating operations. For example:
The outer diameter of a pin may be ground oversize because subsequent heat-treatment of the pin is known to cause it to shrink by a predictable amount.
A hole may be drilled undersize to allow for the material that will be removed by subsequent reaming.
Outer dimensions (such as the diameter of a railroad car's axle) may be cut intentionally oversize, or inner dimensions (such as the diameter of the railroad car's wheel hub) may be cut intentionally undersize, to allow for an interference fit (press fit).
A part may be cast intentionally too big when it is desired to later machine the surface. This ensures that the roughness left by the casting process is removed and a smooth machined surface is produced. This machining allowance may be, for example, 1 mm, but it depends on the size of the part and the accuracy of the casting process.
A chain segment may be oversized so that at the end of its useful service life (around 20 years), the corroded chain is still above the minimum diameter required to meet the minimum break strength. This is called a corrosion allowance and accounts for the steel molecules lost through oxidation, erosion and wear, and microbially influenced corrosion.
Confounding of the engineering concepts of allowance and tolerance
Often the terms allowance and tolerance are used inaccurately and are improperly interchanged in engineering contexts. This is because both words generally can relate to the abstract concept of permission — that is, of a limit on what is acceptable. However, in engineering, separate meanings are enforced, as explained below.
A tolerance is the expected limit of acceptable unintended deviation from a nominal or theoretical dimension. Therefore, a pair of tolerances, upper and lower, defines a range within which an actual dimension may fall while still being acceptable.
In contrast,
an allowance is a planned deviation from the nominal or theoretical dimension. In other words, it is an intended difference between the maximum material conditions of mating parts.
Example
An example of the concept of tolerance is that a shaft for a machine is intended to be precisely 10 mm in diameter: 10 mm is the nominal dimension. The engineer designing the machine knows that in reality, the grinding operation that produces the final diameter may introduce a certain small-but-unavoidable amount of random error. Therefore, the engineer specifies a tolerance of ±0.01 mm ("plus-or-minus" 0.01 mm).
As long as the grinding machine operator can produce a shaft with actual diameter somewhere between 9.99 mm and 10.01 mm, the shaft is acceptable. Understanding how much error is predictable in a process and how much is easily avoidable; how much is unavoidable (or whose avoidance is possible but simply too expensive to justify); and how much is truly acceptable involves considerable judgment, intelligence, and experience.
An example of the concept of allowance can be shown in relation to the hole that this shaft must enter. It is evident that the above shaft cannot be certain to freely enter a hole that is also 10 mm with the same tolerance. It might, if the actual shaft diameter is 9.99 mm and the actual hole diameter is 10.01 mm, but it would not if conversely the actual shaft diameter is 10.01 mm and the actual hole diameter is 9.99 mm.
To be sure that there will be enough clearance between the shaft and its hole, taking account of the tolerance, an allowance is intentionally introduced in the dimensions specified. The hole diameter might be specified as 10.03 mm with a manufacturing tolerance of ±0.01 mm ("plus-or-minus" 0.01 mm). This means that the smallest acceptable hole diameter will be 10.02 mm while the largest acceptable shaft diameter will be 10.01 mm, leaving an "allowance" of 0.01 mm. The minimum clearance between the hole and the shaft will then be 0.01 mm. This will occur when both the shaft and the hole are at maximum material condition.
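The clearance arithmetic in this example is simple enough to sketch in code. The following Python snippet is purely illustrative: it only re-derives the numbers quoted above, and the function and variable names are invented for the example.

```python
def limits(nominal: float, tol: float) -> tuple[float, float]:
    """Return the (minimum, maximum) acceptable dimension for nominal +/- tol."""
    return nominal - tol, nominal + tol

# Shaft: 10.00 mm +/- 0.01 mm; hole: 10.03 mm +/- 0.01 mm (values from the example above).
shaft_min, shaft_max = limits(10.00, 0.01)
hole_min, hole_max = limits(10.03, 0.01)

# The allowance is the intended difference at maximum material condition:
# smallest acceptable hole minus largest acceptable shaft.
allowance = hole_min - shaft_max      # minimum clearance, 0.01 mm
max_clearance = hole_max - shaft_min  # largest possible clearance, 0.05 mm

print(f"shaft {shaft_min:.2f}-{shaft_max:.2f} mm, hole {hole_min:.2f}-{hole_max:.2f} mm")
print(f"allowance (minimum clearance): {allowance:.2f} mm, maximum clearance: {max_clearance:.2f} mm")
```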
References
Engineering concepts | Allowance (engineering) | [
"Engineering"
] | 1,127 | [
"nan"
] |
17,466,114 | https://en.wikipedia.org/wiki/NGC%207522 | NGC 7522 is an astronomical object whose identity is in question. It was originally catalogued as a faint nebula. Although SIMBAD identifies it as PGC 70742, this is unlikely, due to the faintness of the galaxy. Instead, it is possible that a nearby star was accidentally catalogued as a nebula.
References
7522
Unidentified astronomical objects
Aquarius (constellation)
70742 | NGC 7522 | [
"Astronomy"
] | 81 | [
"Constellations",
"Aquarius (constellation)"
] |
17,467,553 | https://en.wikipedia.org/wiki/Weierstrass%20transform | In mathematics, the Weierstrass transform of a function f : R → R, named after Karl Weierstrass, is a "smoothed" version F(x) of f(x) obtained by averaging the values of f, weighted with a Gaussian centered at x.
Specifically, it is the function F defined by F(x) = \frac{1}{\sqrt{4\pi}} \int_{-\infty}^{\infty} f(y)\, e^{-(x-y)^2/4}\, dy,
the convolution of f with the Gaussian function \frac{1}{\sqrt{4\pi}}\, e^{-x^2/4}.
The factor 1/\sqrt{4\pi} is chosen so that the Gaussian will have a total integral of 1, with the consequence that constant functions are not changed by the Weierstrass transform.
Instead of F(x) one also writes W[f](x). Note that F(x) need not exist for every real number x, namely when the defining integral fails to converge.
The Weierstrass transform is intimately related to the heat equation (or, equivalently, the diffusion equation with constant diffusion coefficient). If the function f describes the initial temperature at each point of an infinitely long rod that has constant thermal conductivity equal to 1, then the temperature distribution of the rod t = 1 time units later will be given by the function F. By using values of t different from 1, we can define the generalized Weierstrass transform W_t of f.
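As a rough numerical illustration (a sketch added here, not part of the source article), the generalized transform W_t can be approximated by discretizing the convolution integral; the grid resolution and the truncation of the integration range below are arbitrary choices.

```python
import numpy as np

def weierstrass_transform(f, x, t=1.0, half_width=30.0, n=4001):
    """Approximate W_t[f](x) = (4*pi*t)**-0.5 * integral of f(y)*exp(-(x - y)**2 / (4*t)) dy
    by a simple Riemann sum over y in [x - half_width, x + half_width]."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.empty_like(x)
    for i, xi in enumerate(x):
        y = np.linspace(xi - half_width, xi + half_width, n)
        kernel = np.exp(-(xi - y) ** 2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)
        out[i] = np.sum(f(y) * kernel) * (y[1] - y[0])
    return out

xs = np.array([-1.0, 0.0, 2.0])
print(weierstrass_transform(lambda y: np.ones_like(y), xs))          # constants are unchanged: ~1
print(weierstrass_transform(np.cos, xs), np.exp(-1.0) * np.cos(xs))  # W[cos](x) ~ e**-1 * cos(x) for t = 1
```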
The generalized Weierstrass transform provides a means to approximate a given integrable function arbitrarily well with analytic functions.
Names
Weierstrass used this transform in his original proof of the Weierstrass approximation theorem. It is also known as the Gauss transform or Gauss–Weierstrass transform after Carl Friedrich Gauss and as the Hille transform after Einar Carl Hille who studied it extensively. The generalization mentioned below is known in signal analysis as a Gaussian filter and in image processing (when implemented on ) as a Gaussian blur.
Transforms of some important functions
Constant Functions
Every constant function is its own Weierstrass transform.
Polynomials
The Weierstrass transform of any polynomial is a polynomial of the same degree, and in fact has the same leading coefficient (the asymptotic growth is unchanged).
Indeed, if denotes the (physicist's) Hermite polynomial of degree , then the Weierstrass transform of is simply . This can be shown by exploiting the fact that the generating function for the Hermite polynomials is closely related to the Gaussian kernel used in the definition of the Weierstrass transform.
Exponentials, Sines, and Cosines
The Weierstrass transform of the exponential function e^{ax} (where a is an arbitrary constant) is e^{a^2} e^{ax}. The function e^{ax} is thus an eigenfunction of the Weierstrass transform, with eigenvalue e^{a^2}.
Using the Weierstrass transform of e^{ax} with a = bi, where b is an arbitrary real constant and i is the imaginary unit, and applying Euler's identity, one sees that the Weierstrass transform of the function cos(bx) is e^{-b^2} cos(bx) and the Weierstrass transform of the function sin(bx) is e^{-b^2} sin(bx).
Gaussian Functions
The Weierstrass transform of the function e^{ax^2} (where a is a constant with a < 1/4) is \frac{1}{\sqrt{1-4a}}\, e^{ax^2/(1-4a)}.
Of particular note is when a is chosen to be negative. If a < 0, then e^{ax^2} is a Gaussian function and its Weierstrass transform is also a Gaussian function, but a "wider" one.
General properties
The Weierstrass transform assigns to each function a new function ; this assignment is linear. It is also translation-invariant, meaning that the transform of the function is . Both of these facts are more generally true for any integral transform defined via convolution.
If the transform exists for the real numbers and , then it also exists for all real values in between and forms an analytic function there; moreover, will exist for all complex values of with and forms a holomorphic function on that strip of the complex plane. This is the formal statement of the "smoothness" of mentioned above.
If is integrable over the whole real axis (i.e. ), then so is its Weierstrass transform , and if furthermore for all , then also for all and the integrals of and are equal. This expresses the physical fact that the total thermal energy or heat is conserved by the heat equation, or that the total amount of diffusing material is conserved by the diffusion equation.
Using the above, one can show that for and , we have and . The Weierstrass transform consequently yields a bounded operator .
If is sufficiently smooth, then the Weierstrass transform of the -th derivative of is equal to the -th derivative of the Weierstrass transform of .
There is a formula relating the Weierstrass transform W and the two-sided Laplace transform . If we define
then
Low-pass filter
We have seen above that the Weierstrass transform of cos(bx) is e^{-b^2} cos(bx), and analogously for sin(bx). In terms of signal analysis, this suggests that if the signal f contains the frequency b (i.e. contains a summand which is a combination of sin(bx) and cos(bx)), then the transformed signal F will contain the same frequency, but with an amplitude multiplied by the factor e^{-b^2}. This has the consequence that higher frequencies are reduced more than lower ones, and the Weierstrass transform thus acts as a low-pass filter. This can also be shown with the continuous Fourier transform, as follows. The Fourier transform analyzes a signal in terms of its frequencies, transforms convolutions into products, and transforms Gaussians into Gaussians. The Weierstrass transform is convolution with a Gaussian and is therefore multiplication of the Fourier transformed signal with a Gaussian, followed by application of the inverse Fourier transform. This multiplication with a Gaussian in frequency space blends out high frequencies, which is another way of describing the "smoothing" property of the Weierstrass transform.
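The filtering interpretation can be checked numerically. The sketch below is an added illustration, not from the source article; it uses the kernel convention above (so the Fourier multiplier at angular frequency ω is e^{-ω^2}), and the sample count and test frequencies are arbitrary choices.

```python
import numpy as np

# A signal with a low (b = 0.5) and a high (b = 8) angular-frequency component,
# sampled over a window that both components fit exactly (avoids spectral leakage).
n, length = 4096, 16.0 * np.pi
x = np.linspace(0.0, length, n, endpoint=False)
signal = np.cos(0.5 * x) + np.cos(8.0 * x)

# Weierstrass transform in frequency space: multiply each Fourier component at
# angular frequency omega by exp(-omega**2), the Fourier transform of the
# Gaussian kernel exp(-y**2/4)/sqrt(4*pi).
omega = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)
smoothed = np.fft.ifft(np.fft.fft(signal) * np.exp(-omega ** 2)).real

# The b = 0.5 component keeps a factor exp(-0.25) of its amplitude, while the
# b = 8 component is scaled by exp(-64) and effectively disappears.
expected = np.exp(-0.25) * np.cos(0.5 * x)
print(np.max(np.abs(smoothed - expected)))   # ~1e-15: only the low frequency survives
```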
The inverse transform
The following formula, closely related to the Laplace transform of a Gaussian function, and a real analogue to the Hubbard–Stratonovich transformation, is relatively easy to establish:
Now replace u with the formal differentiation operator D = d/dx and utilize the Lagrange shift operator
,
(a consequence of the Taylor series formula and the definition of the exponential function), to obtain
to thus obtain the following formal expression for the Weierstrass transform ,
where the operator on the right is to be understood as acting on the function f(x) as
The above formal derivation glosses over details of convergence, and the formula is thus not universally valid; there are several functions which have a well-defined Weierstrass transform, but for which cannot be meaningfully defined.
Nevertheless, the rule is still quite useful and can, for example, be used to derive the Weierstrass transforms of polynomials, exponential and trigonometric functions mentioned above.
The formal inverse of the Weierstrass transform is thus given by
Again, this formula is not universally valid but can serve as a guide. It can be shown to be correct for certain classes of functions if the right-hand side operator is properly defined.
One may, alternatively, attempt to invert the Weierstrass transform in a slightly different way: given the analytic function
apply to obtain
once more using a fundamental property of the (physicists') Hermite polynomials .
Again, this formula for is at best formal, since one didn't check whether the final series converges. But if, for instance, , then knowledge of all the derivatives of at suffices to yield the coefficients ; and to thus reconstruct as a series of Hermite polynomials.
A third method of inverting the Weierstrass transform exploits its connection to the Laplace transform mentioned above, and the well-known inversion formula for the Laplace transform. The result is stated below for distributions.
Generalizations
We can use convolution with the Gaussian kernel instead of thus defining an operator , the generalized Weierstrass transform.
For small values of , is very close to , but smooth. The larger , the more this operator averages out and changes . Physically, corresponds to following the heat (or diffusion) equation for time units, and this is additive,
corresponding to "diffusing for time units, then time units, is equivalent to diffusing for time units". One can extend this to by setting to be the identity operator (i.e. convolution with the Dirac delta function), and these then form a one-parameter semigroup of operators.
The kernel used for the generalized Weierstrass transform is sometimes called the Gauss–Weierstrass kernel, and is Green's function for the diffusion equation on .
can be computed from : given a function , define a new function then a consequence of the substitution rule.
The Weierstrass transform can also be defined for certain classes of distributions or "generalized functions". For example, the Weierstrass transform of the Dirac delta is the Gaussian
In this context, rigorous inversion formulas can be proved, e.g.,
where is any fixed real number for which exists, the integral extends over the vertical line in the complex plane with real part , and the limit is to be taken in the sense of distributions.
Furthermore, the Weierstrass transform can be defined for real- (or complex-) valued functions (or distributions) defined on . We use the same convolution formula as above but interpret the integral as extending over all of and the expression as the square of the Euclidean length of the vector ; the factor in front of the integral has to be adjusted so that the Gaussian will have a total integral of 1.
More generally, the Weierstrass transform can be defined on any Riemannian manifold: the heat equation can be formulated there (using the manifold's Laplace–Beltrami operator), and the Weierstrass transform is then given by following the solution of the heat equation for one time unit, starting with the initial "temperature distribution" .
Related transforms
If one considers convolution with the kernel instead of with a Gaussian, one obtains the Poisson transform which smoothes and averages a given function in a manner similar to the Weierstrass transform.
See also
Gaussian blur
Gaussian filter
Husimi Q representation
Heat equation#Fundamental solutions
Notes
References
Integral transforms
Mathematical physics | Weierstrass transform | [
"Physics",
"Mathematics"
] | 2,063 | [
"Applied mathematics",
"Theoretical physics",
"Mathematical physics"
] |
17,468,384 | https://en.wikipedia.org/wiki/Cleavable%20detergent | Cleavable detergents, also known as cleavable surfactants, are special surfactants (detergents) that are used in biochemistry and especially in proteomics to enhance protein denaturation and solubility. The detergent is rendered inactive by cleavage, usually under acidic conditions, in order to make the sample compatible with a following procedure or in order to selectively remove the cleavage products.
Applications for cleavable detergents include protease digestion of proteins, such as in-gel digestion with trypsin after SDS-PAGE, and peptide extraction from electrophoresis gels. Cleavable detergents are mainly used in sample preparation for mass spectrometry.
PPS
PPS, available as PPS Silent Surfactant from Expedeon, is the abbreviation for sodium 3-(4-(1,1-bis(hexyloxy)ethyl)pyridinium-1-yl)propane-1-sulfonate. This acetalic detergent is split under acidic conditions into hexanol and the zwitterionic 3-acetyl-1-(3-sulfopropyl)pyridinium.
ProteaseMAX
ProteaseMAX is the brand name used by Promega for sodium 3-((1-(furan-2-yl)undecyloxy)carbonylamino)propane-1-sulfonate. This cleavable detergent is sensitive to heat and acid and is degraded during a typical trypsin digestion into the uncharged lipophilic compound 1-(furan-2-yl)undecan-1-ol and the zwitterionic 3-aminopropane-1-sulfonic acid (homotaurine), which can be removed by C18 solid phase extraction during sample work-up.
RapiGest SF
RapiGest SF, the brand-name for sodium 3-[(2-methyl-2-undecyl-1,3-dioxolan-4-yl)methoxy]-1-propanesulfonate, is an acid-cleavable anionic detergent marketed by Waters Corporation and AOBIOUS INC.
Others
MALDI matrix compounds such as α-cyano-4-hydroxycinnamic acid have been linked to dodecanol through a linker consisting of an unsymmetrical formaldehyde acetal. This type of cleavable detergent is inherently compatible with MALDI and does not have to be removed prior to analysis.
UV light- or fluoride-cleavable surfactants have also been developed but are not in current use.
References
Sulfonic acids
Surfactants
Biochemistry
Proteomics | Cleavable detergent | [
"Chemistry",
"Biology"
] | 588 | [
"Biochemistry",
"Functional groups",
"nan",
"Sulfonic acids"
] |
17,469,534 | https://en.wikipedia.org/wiki/Hersh%20Shefrin | Hersh Shefrin (born in Winnipeg, Manitoba) is a Canadian economist best known for his pioneering work in behavioral finance.
Shefrin received his B.S. from University of Manitoba in 1970. At the University of Waterloo in 1971 he received his M.S. in mathematics. He then obtained a Ph.D. in economics from the London School of Economics in 1974, after which he became assistant professor at the University of Rochester. In 1979 he was appointed professor at the Santa Clara University, first in the department of economics and then in 1990 to the department of finance.
Shefrin holds an honorary doctorate from the University of Oulu, Finland.
Shefrin's research articles have been published in many economics and finance journals, in particular: the Journal of Finance, the Journal of Financial Economics, the Review of Financial Studies, the Journal of Financial and Quantitative Analysis, Financial Management, the Financial Analysts Journal, and the Journal of Portfolio Management. Shefrin has written a number of influential books on behavioral finance and its applications to corporate finance and corporate culture.
Shefrin and his wife, Arna, were married in 1970.
Books
Behavioral Corporate Finance, 2006. McGraw-Hill/Irwin
A Behavioral Approach to Asset Pricing Theory, 2005. Elsevier; 2nd edition in 2008.
Ending the Management Illusion, 2008. McGraw-Hill
Beyond Greed and Fear: Understanding Behavioral Finance and the Psychology of Investing, Boston, MA: Harvard Business School Press, 1999. Revised version published 2002, New York: Oxford University Press
Selected articles
"An Economic Theory of Self-Control", with R. Thaler, Journal of Political Economy, 1981.
"The Disposition to Sell Winners Too Early and Ride Losers Too Long: Theory and Evidence", with M. Statman, Journal of Finance, Vol. XL, No. 3, 1985.
"Behavioral Capital Asset Pricing Theory", with M. Statman, Journal of Financial and Quantitative Analysis, September, 1994.
"The Contributions of Daniel Kahneman and Amos Tversky", with Meir Statman. Journal of Behavioral Finance, 2003.
See also
List of University of Waterloo people
External links
Curriculum Vitae
Home page at the Santa Clara University
Living people
Alumni of the London School of Economics
Behavioral economists
Behavioral finance
Canadian economists
People from Winnipeg
Santa Clara University faculty
Santa Clara University people
University of Rochester faculty
University of Waterloo alumni
Year of birth missing (living people) | Hersh Shefrin | [
"Biology"
] | 488 | [
"Behavioral finance",
"Behavior",
"Human behavior"
] |
17,469,697 | https://en.wikipedia.org/wiki/List%20of%20welding%20codes | This page lists published welding codes, procedures, and specifications.
American Society of Mechanical Engineers (ASME) Codes
The American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code (BPVC) covers all aspects of design and manufacture of boilers and pressure vessels. All sections contain welding specifications, however most relevant information is contained in the following:
American Welding Society (AWS) Standards
The American Welding Society (AWS) publishes over 240 AWS-developed codes, recommended practices and guides which are written in accordance with American National Standards Institute (ANSI) practices. The following is a partial list of the more common publications:
American Petroleum Institute (API) Standards
One of the American Petroleum Institute's (API) oldest and most successful programs is the development of API standards, which started with its first standard in 1924. API maintains over 500 standards covering the oil and gas field. The following is a partial list specific to welding:
Australian / New Zealand (AS/NZS) Standards
Standards Australia is the body responsible for the development, maintenance and publication of Australian Standards. The following is a partial list specific to welding:
Canadian Standards Association (CSA) Standards
The Canadian Standards Association (CSA) is responsible for the development, maintenance and publication of CSA standards. The following is a partial list specific to welding:
British Standards (BS)
British Standards are developed, maintained and published by BSI Standards which is UK's National Standards Body. The following is a partial list of standards specific to welding:
International Organization for Standardization (ISO) Standards
International Organization for Standardization (ISO) has developed over 18500 standards and over 1100 new standards are published every year. The following is a partial list of the standards specific to welding:
European Union (CEN) standards
The European Committee for Standardization (CEN) had issued numerous standards covering welding processes, which unified and replaced former national standards. Of the former national standards, those issued by BSI and DIN were widely used outside their countries of origin. After the Vienna Agreement with ISO, CEN has replaced most of them with equivalent ISO standards (EN ISO series).
Additional requirements for welding exist in CEN codes and standards for specific products, like EN 12952, EN 12953, EN 13445, EN 13480, etc.
German Standards (DIN and others)
NA 092 is the Standards Committee for welding and allied processes (NAS) at DIN Deutsches Institut für Normung e. V. The following is a partial list of DIN welding standards:
Japanese Standards (JIS and others)
Japanese Industrial Standards (JIS)
Japanese Industrial Standards are the standards used for industrial activities in Japan, coordinated by the Japanese Industrial Standards Committee (JISC) and published by the Japanese Standards Association (JSA).
JIS Z 3001-1 Welding and allied processes-Vocabulary-Part 1: General
JIS Z 3001-2 Welding and allied processes-Vocabulary-Part 2: Welding processes
JIS Z 3001-3 Welding and allied processes-Vocabulary-Part 3: Soldering and brazing
JIS Z 3001-4 Welding and allied processes-Vocabulary-Part 4: Imperfections in welding
JIS Z 3001-5 Welding and allied processes-Vocabulary-Part 5: Laser welding
JIS Z 3001-6 Welding and allied processes-Vocabulary-Part 6: Resistance welding
JIS Z 3001-7 Welding and allied processes-Vocabulary-Part 7: Arc welding
JIS Z 3011 Welding positions defined by means of angles of slope and rotation
JIS Z 3021 Welding and allied processes -- Symbolic representation
Japan Welding Society Standard (WES)
As a WES standard, it is defined in the following classification.
Fundamentals
Tests, inspections and their equipment
Base material
welding material
Welding and cutting equipment and accessories
Welding design and construction
Welding-related certifications and certifications
Safety, health and environment
See also
Welding
List of welding processes
Welder certification
Welding Procedure Specification
Welder
Notes
References
Structural steel
Further reading and external links
Overview poster of CEN & ISO welding standards
Welding codes
Codes | List of welding codes | [
"Engineering"
] | 808 | [
"Structural engineering",
"Structural steel",
"Welding",
"Mechanical engineering"
] |
17,469,797 | https://en.wikipedia.org/wiki/Phallus%20duplicatus | Phallus duplicatus (common name, netted stinkhorn or wood witch) is a species of fungus in the stinkhorn family. The bell-shaped to oval cap is green-brown and the cylindrical stalk is white. When mature, the cap becomes sticky with a slimy green coating, which attracts flies that disperse its spores, and a distinct, "netted" universal veil. It often grows in public lawns, and can also be found in meadows. The fungus is edible when still in the "egg" stage, before the fruit body has expanded.
Taxonomy
The species was first described in 1811 by French botanist Louis Bosc. Synonyms include Dictyophora duplicata and Hymenophallus duplicatus.
It is commonly known as the netted stinkhorn or the wood witch.
Description
Immature fruit bodies are roughly spherical, whitish to pink in color, and have thick rhizomorphs at the base. Fully grown and matured, the fruit body is cylindrical and up to tall, with the stalk accounting for of its height. The cap is bell-shaped to oval, tall and about wide. The cap is initially covered with a foetid greenish slime, the gleba. Its surface is covered with chambers and pits, and there is a perforation at the tip with a white rim. Below the cap hangs a white, lacy, skirt-like veil, or indusium. The spores are cylindrical, hyaline (translucent), smooth, and measure 3.5–4.2 by 1–1.5 μm.
The species resembles Phallus indusiatus, but that species has a longer indusium and smaller spores.
Habitat and distribution
A saprobic species, the fruit bodies of Phallus duplicatus grow singly or in small groups on the ground in woods, gardens, and landscaped areas. The smelly gleba coating the cap attracts flies and other insects, which consume it and help to disperse the spores.
Phallus duplicatus is known from Asia (China and Japan), eastern North America (between August and November), and South America (Brazil). Although it has been widely recorded from Europe, some of these may be misidentifications with the similar Phallus impudicus var. togatus. P. duplicatus is Red Listed in Ukraine.
Uses
The fruit bodies are edible when still in the "egg" stage.
In culture
The species was featured on a Paraguayan postage stamp in 1986.
References
External links
Phallus duplicatus at Mushroom Expert
Phallales
Fungi of Europe
Fungi of North America
Fungi of South America
Fungi described in 1811
Taxa named by Louis Augustin Guillaume Bosc
Fungus species | Phallus duplicatus | [
"Biology"
] | 571 | [
"Fungi",
"Fungus species"
] |
17,469,833 | https://en.wikipedia.org/wiki/Phallus%20hadriani | Phallus hadriani, commonly known as the dune stinkhorn or the sand stinkhorn, is a species of fungus in the Phallaceae (stinkhorn) family. The stalk of the fruit body reaches up to tall by thick, and is spongy, fragile, and hollow. At the top of the stem is a ridged and pitted, thimble-like cap over which is spread olive-colored spore slime (gleba). Shortly after emerging, the gleba liquefies and releases a fetid odor that attracts insects, which help disperse the spores. P. hadriani may be distinguished from the similar P. impudicus (the common stinkhorn) by the presence of a pink or violet-colored volva at the base of the stem, and by differences in odor.
It is a widely distributed species, and is native to Eurasia and North America. In Australia, it is probably an introduced species. It typically grows in public lawns, yards and gardens, usually in sandy soils. It is said to be edible in its immature egg-like stage.
Taxonomy
The species was first described scientifically by the French botanist Étienne Pierre Ventenat in 1798, and sanctioned by Christiaan Hendrik Persoon under that name in his 1801 Synopsis Methodica Fungorum. Christian Gottfried Daniel Nees von Esenbeck called the species Hymenophallus hadriani in 1817; this name is a synonym. According to the taxonomical database Index Fungorum, additional synonyms include: Phallus iosmus, named by Berkeley in 1836; Phallus imperialis, Schulzer, 1873; Ithyphallus impudicus var. imperialis and Ithyphallus impudicus var. iosmos, De Toni, date unknown.
The specific epithet hadriani is named after the Dutch botanist Hadrianus Junius (1512–1575), who wrote a pamphlet on stinkhorn mushrooms in 1564 (Phalli, ex fungorum genere, in Hollandiae sabuletis passim crescentis descriptio).
Description
The immature fruiting bodies of P. hadriani in the egg stage have dimensions of by , and are colored rosy-pink to violet. They are typically submerged in the ground with rhizomorphs (aggregations of mycelium resembling plant roots) at the base. The eggs are enclosed in a tough covering and a gelatinous layer that breaks down as the stinkhorn emerges. Mature fruiting bodies may be tall by thick. The white- or cream-colored stipe is up to 3 cm wide, hollow, spongy and honeycombed. The head is reticulate, ridged and pitted, and covered with olive green glebal mass. The volva is cuplike, and typically retains its pink color although it may turn brownish with age. Fruit bodies are short-lived, typically lasting only one or two days.
Although the odor of P. hadriani has been described by some authors as faint and pleasant or like violets, others describe the smell as fetid or putrid. The gleba is known to attract insects, including flies, bees, and beetles, some of which consume the spore-containing slime. It is thought that long-distance spore dispersal is facilitated by these insects, who may deposit in their feces intact spores that survive the passage through the digestive tract.
The spores are cylindrical, smooth, and hyaline (translucent), with dimensions of 3–4 by 1–2 μm. The basidia (spore-bearing cells) are cylindrical, with dimensions of 20–25 by 3–4 μm. They have eight sterigmata (slender extensions that attach to the spores), as well as a clamp at their base.
Similar species
Phallus impudicus has the same overall appearance as P. hadriani, but is distinguished by its white volva. Another similar stinkhorn, P. ravenelii has a smooth, not reticulate head.
Habitat and distribution
Phallus hadriani is known from Australia (where it is thought to be an introduced species imported on woodchip mulch used in gardening and landscaping), North America, Europe (including Denmark, Ireland, Latvia, The Netherlands, Norway, Poland, Slovakia, Sweden, Ukraine, and Wales), Turkey (Iğdır Province), Japan, and China (Jilin Province).
Phallus hadriani is a saprobic species, and thus obtains nutrients by decomposing organic matter. In North America, it is commonly associated with tree stumps, or roots of stumps that are decomposing in the ground. In Great Britain, its distribution is more or less restricted to coastal dunes, while in Poland, it has been noted to avoid humid and humic forest soils, and live in symbiosis with xerophilous grasses and the black locust tree, Robinia pseudoacacia. The mushroom is one of three species protected by the Red Data Book of Latvia.
Edibility
Like many other stinkhorns, this species is thought to be edible when in its egg form. Central Europeans and the Chinese consider the eggs a delicacy. Regarding the edibility of mature specimens, one author commented on the genus in general, "No one with his sense of smell developed would think of eating the members of this group."
References
Phallales
Fungi of Asia
Fungi of Australia
Fungi of Europe
Fungi of North America
Fungi described in 1798
Fungus species | Phallus hadriani | [
"Biology"
] | 1,153 | [
"Fungi",
"Fungus species"
] |
17,469,893 | https://en.wikipedia.org/wiki/Phallus%20rubicundus | Phallus rubicundus is a species of fungus in the stinkhorn family. First described in 1811, it has a wide distribution in tropical regions. It has the typical stinkhorn structure consisting of a spongy stalk up to tall arising from a gelatinous "egg" up to in diameter. Atop the stalk is a pitted, conical cap that has a foul-smelling, gelatinous, green spore mass spread over it.
Taxonomy
The species was first described under the name Satyrus rubicundus by French botanist Louis Augustin Guillaume Bosc in 1811, from collections made in South Carolina. It was later transferred to the genus Phallus in 1823 by Elias Fries. Synonyms include binomials resulting from the transfer to Ithyphallus by Eduard Fischer in 1888, and to Leiophallus by Émile-Victor Mussat in 1900.
Description
Immature (unopened) specimens of Phallus rubicundus are spherical to egg-shaped, whitish, and measure long by wide. They occur singly or in groups of two to six eggs that are formed from a common mycelium. They are attached to the substrate by a cordlike rhizomorph. After expanding, the fruit bodies are up to tall, and consist of a hollow cylindrical stalk supporting a conical to bell-shaped cap. The orange to scarlet stalk tapers towards the top, and has a pitted surface. The wrinkled cap is scarlet red, and measures high by wide. It is initially covered with a foetid, slimy grayish-olive gleba. The egg case remains at the base of the stalk as a volva. The spores are smooth, elliptical, and measure 3.6–4.2 by 1.6–2.0 μm.
Phallus rubicundus is often confused with the similar Mutinus elegans, but the latter species does not have a clearly separated cap, and instead bears its gleba on the apex of its pointed stalk.
Uses
In the Indian state of Madhya Pradesh, where it is known locally as , it is used by two primitive forest tribes, the Bharia and the Baiga, as a treatment against typhoid, and also by the Baiga to treat labour pain. The fungus is prepared by grinding and mixing with sugar-cake, and one teaspoon is administered three times daily. The fungus has been reported to have been used by Aboriginal Australians as an aphrodisiac.
One study noted that mosquitoes, attracted to the smell of the gleba, perish after consuming it, and so the fungus may be suitable for further investigating as a biocontrol agent.
Ecology and distribution
The fungus is saprobic, and grows in sandy soils, lawns, gardens, and yards, especially those that are well-manured or use wood mulch. It is widely distributed in southern and eastern United States (including Hawaii), having possibly been spread through the use of imported wood mulch in landscaping. In Australia it grows mainly in the tropics and subtropics, in areas where rotten wood and/or mulch are present. In Asia, it has been recorded from China, Japan, Korea, India, and Thailand. African locales include Ghana, Nigeria, Congo, Kenya, and South Africa. It is also known from South America (Argentina and Brazil) and the Caribbean. The fungus was featured on a Sierra Leonean postage stamp in 1993.
References
Phallales
Fungi of Africa
Fungi of Asia
Fungi of Australia
Fungi of North America
Fungi of South America
Medicinal fungi
Fungi described in 1811
Taxa named by Louis Augustin Guillaume Bosc
Fungi of Hawaii
Fungi without expected TNC conservation status
Fungus species | Phallus rubicundus | [
"Biology"
] | 765 | [
"Fungi",
"Fungus species"
] |
17,469,925 | https://en.wikipedia.org/wiki/VS-30 | The VS-30 is a Brazilian sounding rocket, developed by the Instituto de Aeronáutica e Espaço and derived from the Sonda 3 sounding rocket first stage. It consists of a single, solid-fuelled stage, and has been launched from Alcântara, Maranhão and Barreira do Inferno, Rio Grande do Norte, in Brazil, and from Andøya and Svalbard Rocket Range in Norway. It has been launched both on its own and in the VS-30 Orion configuration, with an American Orion upper stage. On its own, it can reach an apogee of 140 kilometres, and with an Orion upper stage, it can reach an apogee of 434 kilometres. The VS-30 is also used as the upper stage of the VSB-30 rocket.
Flights
List of VS-30 and VS-30 Orion flights:
On December 2, 2023, AEB's president announced that a VS-30 flight from Barreira do Inferno Launch Center is planned for 2024.
Characteristics
Length: 7,428 mm
Payload mass: 260 kg
Diameter: 557 mm
Total takeoff mass: 1,460 kg
Apogee: 160 km
See also
VSB-30
VS-40
References
Sounding rockets of Brazil
Space program of Brazil
Space programme of Argentina | VS-30 | [
"Astronomy"
] | 266 | [
"Rocketry stubs",
"Astronomy stubs"
] |
17,471,070 | https://en.wikipedia.org/wiki/N-Prize | The N-Prize (the "N" stands for "Nanosatellite" or "Negligible Resources") is an inducement prize contest intended to "encourage creativity, originality and inventiveness in the face of severe odds and impossible financial restrictions" and thus stimulate innovation directed towards obtaining cheap access to space. The competition was launched in 2008 by Cambridge biologist Paul H. Dear, and is intended specifically to spur amateur involvement in spaceflight as it is "aimed at amateurs, enthusiasts, would-be boffins and foolhardy optimists."
Dr. Dear died on 11 March 2020, and the prize was subsequently closed.
The challenge posed by the N-Prize is to launch a satellite weighing between 9.99 and 19.99 grammes into Earth orbit, and to track it for a minimum of nine orbits. Most importantly the launch budget must be under £999.99 including the launch vehicle, all of the required non-reusable launch equipment hardware, and propellant.
In order to be eligible for the awards the challenge initially had to be completed before 19:19:09 (GMT) on 19 September 2013, however later it was decided that the prize will remain open until won. Doing so will earn the winning team a prize of £9,999.99.
List of competing teams
The official site of the N-Prize includes an animated page listing over fifty teams together with contact information and links to any team websites. Examples of teams that have entered the competition at one time or another and who also have or had web pages include:
Nebula
Epsilon Vee
Vulcan
Microlaunchers
Cambridge University Spaceflight
Potent Voyager
Team Prometheus
Team 9.99
Kiwi 2 Space
Qi Spacecraft
Aerosplice
WikiSat
See also
Ansari X Prize
Rockoon
List of space technology awards
References
External links
N-Prize main website
N-Prize official forum
Space Fellowship official N-Prize forum
Challenge awards
Space-related awards | N-Prize | [
"Technology"
] | 400 | [
"Science and technology awards",
"Space-related awards"
] |
17,472,089 | https://en.wikipedia.org/wiki/Hamilton%20House%2C%20East%20Lothian | Hamilton House, also known as Magdalen's House, is a 17th-century "Laird's House" in the town of Prestonpans in East Lothian, Scotland. It is an exemplar of this type of architecture and has retained its crow-stepped gables and corner towers. It is owned by the National Trust for Scotland and is a Category A Listed Building.
History
The house was built in 1626 as a replacement for Preston Tower for Sir John Hamilton, Lord Magdalens, who was a Senator of the College of Justice and the brother of the 1st Earl of Haddington. The property was vacated by the Hamiltons in the 1740s, and in the 19th century it was converted into a barracks and later used as a tavern. Around 1880 the Hislop family are recorded as being the owners of the building. Eventually the building fell into dereliction and was about to be demolished when it was sold to the National Trust for Scotland in 1937, which restored it as a private residence. It is now a Category A Listed Building.
Location
Hamilton House is located in the small town of Prestonpans in East Lothian, from Edinburgh. It sits at the corner of West Loan and Preston Road within the part of the town known as Preston near the Preston Tower, Prestonpans Mercat cross and Northfield House.
Features
Hamilton House comprises a two-storey main block with projecting wings at either end. The date 1628 (or 1626) appears in a panel above the former main entrance, with the initials IH and KS representing Sir John Hamilton, Lord Magdalen, and Katherine Sympson, while the three pediments of the dormers bear the Hamiltons' coat of arms, their impaled initials and the date 1628, as well as the arms of Katherine Sympson. The exterior is harled and whitewashed, has chamfered stone edges and crow-stepped gables, known in Scots as corbie-stepped. The chimney stacks have stripped quoins. The house has a hexagonal stair tower which stands in the angle between the main block and the south wing and which contains the main entrance, above which is the inscription "Praised be the Lord My Strenth and My Redeimer", while at the other end of the main block there is a round, conical-roofed, corbelled-out turret. The house has a steep roof tiled with stones. When the road outside was widened an outbuilding and a passageway were demolished, reducing the size of the courtyard. The adjacent road is also around 500 mm higher than the original dirt road, from when the surface was macadamed in the 19th century. The interior of the building has been completely re-ordered in relation to its original form.
Controversy
The National Trust for Scotland wanted to sell the building in 2007 but instead decided to find a "restoring tenant"; in 2017 concerns were raised by the local community about the condition of the building.
Photo gallery
See also
Duke of Hamilton
James Hamilton, 1st Duke of Hamilton
List of places in East Lothian
References
Country houses in East Lothian
Prestonpans
Stepped gables | Hamilton House, East Lothian | [
"Engineering"
] | 633 | [
"Stepped gables",
"Architecture"
] |
17,473,433 | https://en.wikipedia.org/wiki/Comparison%20of%20disk%20cloning%20software | Disk cloning software facilitates a disk cloning operation by using software techniques to copy data from a source to a destination drive or to a disk image.
List
See also
Concepts
Disk image
Disk cloning
Backup
Lists
List of backup software
List of data recovery software
List of disk partitioning software
Comparison
Comparison of disc image software
References
Disk cloning software | Comparison of disk cloning software | [
"Technology"
] | 69 | [
"Software comparisons",
"Computing comparisons"
] |
2,232,502 | https://en.wikipedia.org/wiki/CESU-8 | The Compatibility Encoding Scheme for UTF-16: 8-Bit (CESU-8) is a variant of UTF-8 that is described in Unicode Technical Report #26. A Unicode code point from the Basic Multilingual Plane (BMP), i.e. a code point in the range U+0000 to U+FFFF, is encoded in the same way as in UTF-8. A Unicode supplementary character, i.e. a code point in the range U+10000 to U+10FFFF, is first represented as a surrogate pair, like in UTF-16, and then each surrogate code point is encoded in UTF-8. Therefore, CESU-8 needs six bytes (3 bytes per surrogate) for each Unicode supplementary character while UTF-8 needs only four. Though not specified in the technical report, unpaired surrogates are also encoded as 3 bytes each, and CESU-8 is exactly the same as applying an older UCS-2 to UTF-8 converter to UTF-16 data.
The encoding of Unicode non-BMP characters works out to 11101101 1010yyyy 10xxxxxx 11101101 1011xxxx 10xxxxxx (yyyy represents the top five bits of the character minus one). The byte values 0xF0—0xF4 will not appear in CESU-8, as they start the 4-byte encodings used by UTF-8.
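As an illustration of the mechanics described above (a minimal sketch, not a production codec), the following Python function encodes a string to CESU-8 by first generating UTF-16 code units, including surrogate pairs for supplementary characters, and then applying the UTF-8 bit patterns to each code unit individually.

```python
def encode_cesu8(text: str) -> bytes:
    out = bytearray()
    for ch in text:
        cp = ord(ch)
        if cp >= 0x10000:
            # Split a supplementary character into a UTF-16 surrogate pair,
            # then encode each surrogate as a 3-byte UTF-8-style sequence.
            cp -= 0x10000
            units = (0xD800 | (cp >> 10), 0xDC00 | (cp & 0x3FF))
        else:
            units = (cp,)
        for u in units:
            if u < 0x80:
                out.append(u)
            elif u < 0x800:
                out += bytes((0xC0 | (u >> 6), 0x80 | (u & 0x3F)))
            else:
                out += bytes((0xE0 | (u >> 12),
                              0x80 | ((u >> 6) & 0x3F),
                              0x80 | (u & 0x3F)))
    return bytes(out)

# U+10400 (a supplementary character): 6 bytes in CESU-8 versus 4 in UTF-8.
s = "\U00010400"
print(encode_cesu8(s).hex())    # 'eda081edb080'
print(s.encode("utf-8").hex())  # 'f0909080'
```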
CESU-8 is not an official part of the Unicode Standard, because Unicode Technical Reports are informative documents only. It should be used exclusively for internal processing and never for external data exchange.
Supporting CESU-8 in HTML documents is prohibited by the W3C and WHATWG HTML standards, as it would present a cross-site scripting vulnerability.
Java's Modified UTF-8 is CESU-8 with a special overlong encoding of the NUL character (U+0000) as the two-byte sequence C0 80.
The Oracle database uses CESU-8 for its "UTF8" character set. Standard UTF-8 can be obtained using the character set "AL32UTF8" (since Oracle version 9.0).
Examples
References
External links
Unicode Technical Report #26
Modified UTF-8 definition
Graphical View of CESU-8 in ICU's Converter Explorer
Unicode Transformation Formats
Character encoding | CESU-8 | [
"Technology"
] | 513 | [
"Natural language and computing",
"Character encoding"
] |
2,232,929 | https://en.wikipedia.org/wiki/Chronotope | In literary theory and philosophy of language, the chronotope is how configurations of time and space are represented in language and discourse. The term was taken up by Russian literary scholar Mikhail Bakhtin who used it as a central element in his theory of meaning in language and literature. The term itself comes from the Russian хронотоп, which in turn is derived from the Greek χρόνος ('time') and τόπος ('space'); it thus can be literally translated as "time-space." Bakhtin developed the term in his 1937 essay "Forms of Time and of the Chronotope in the Novel". Here Bakhtin showed how different literary genres operated with different configurations of time and space, which gave each genre its particular narrative character.
Overview
For Bakhtin, chronotope is the conduit through which meaning enters the logosphere. Genre is rooted in how one perceives the flow of events and its representation of particular worldviews or ideologies.
Bakhtin scholars Caryl Emerson and Michael Holquist state that the chronotope is "a unit of analysis for studying language according to the ratio and characteristics of the temporal and spatial categories represented in that language". They argue that Bakhtin's concept differs from other uses of time and space in literary analysis because neither category is given a privileged status: they are inseparable and entirely interdependent. Bakhtin's concept is a way of analyzing literary texts that reveals the forces operating in the cultural system from which they emanate. Specific chronotopes are said to correspond to particular genres, or relatively stable ways of speaking, which themselves represent particular worldviews or ideologies.
In the essay Forms of Time and of the Chronotope in the Novel, Bakhtin describes his use of the term thus:
Unlike Kant, who saw time and space as transcendental pre-conditions of experience, Bakhtin regards them as "forms of the most immediate reality". They are not mere "mathematical" abstractions, but have a concrete and, depending on context, qualitatively variable form. This is particularly noticeable in Bakhtin's own object of study—that of artistic cognition in literary genres—but he implies that it is applicable in other contexts as well. Different structures or orders of the universe cannot be assumed to operate within the same chronotope. For example, the chronotope of a biological organism like an ant will be qualitatively different from that of an organism like an elephant, or from that of a structure of a different order entirely, such as a star or a galaxy. Within the human world itself there is a huge variety of social activities that are defined by qualitatively different time/space fusions.
Examples and use in other sciences
The concept of the chronotope has been widely used in literary studies. The scholar Timo Müller for example argued that analysis of chronotopes highlights the environmental dimension of literary texts because it draws attention to the concrete physical spaces in which stories take place. Müller discusses the chronotope of the road, which for Bakhtin was a meeting place but in recent literature no longer brings people together in this way because automobiles have changed the way we perceive the time and space of the road. Car drivers want to minimize the time they spend on the road. They are rarely interested in the road as a physical space, the natural environment around the road, or the environmental implications of their driving. This contrasts with earlier literary examples such as Robert Frost's poem "The Road Not Taken" or John Steinbeck's novel The Grapes of Wrath, where the road is described as part of the natural environment and the travelers are interested in that environment.
Linguistic anthropologist Keith Basso invoked "chronotopes" in discussing Western Apache stories linked with places. In the 1980s when Basso was writing, geographic features reminded the Western Apache of "the moral teachings of their history" by recalling to mind events that occurred there in important moral narratives. By merely mentioning "it happened at [the place called] 'men stand above here and there,'" storyteller Nick Thompson could remind locals of the dangers of joining "with outsiders against members of their own community." Geographic features in the Western Apache landscape are chronotopes, Basso says, in precisely the way Bakhtin defines the term when he says they are "points in the geography of a community where time and space intersect and fuse. Time takes on flesh and becomes visible for human contemplation; likewise, space becomes charged and responsive to the movements of time and history and the enduring character of a people. ...Chronotopes thus stand as monuments to the community itself, as symbols of it, as forces operating to shape its members' images of themselves" (qtd. in Basso 1984: 44–45).
Anthropologist of syncretism Safet HadžiMuhamedović built upon Bakhtin’s term in his ethnography of the Field of Gacko in the southeastern Bosnian highlands. In Waiting for Elijah: Time and Encounter in a Bosnian Landscape, he argued that people and landscapes may sometimes be trapped between timespaces and thus "schizochronotopic" (from the Greek σχίζειν, 'to split'). He described two overarching chronotopes as "collective timespace themes", both of which relied on certain kinds of past and laid claims to the Field’s future. One was told through proximities, the other through distances between religious communities. For HadžiMuhamedović, schizochronotopia is a rift occurring within the same body/landscape, through which the past and the present of place have rendered each other unbidden.
The concept of chronotope is also used in tourism research. Sociologist Hasso Spode explains the emergence of tourism in the 18th century as "time travel backwards". The tourist space thus functions as a romantic chronotopia. Anthropologist Antonio Nogués-Pedregal regards the touristic consuming and shaping of places as a chronotope.
The chronotope has also been adopted for the analysis of classroom events and conversations, for example by Raymond Brown and Peter Renshaw in order to view "student participation in the classroom as a dynamic process constituted through the interaction of past experience, ongoing involvement, and yet-to-be-accomplished goals" (2006: 247–259). Kumpulainen, Mikkola, and Jaatinen (2013) examined the space–time configurations of students’ technology-mediated creative learning practices over a year-long school musical project in a Finnish elementary school. The findings of their study suggest that "blended practices appeared to break away from traditional learning practices, allowing students to navigate in different time zones, spaces, and places with diverse tools situated in their formal and informal lives" (2013: 53).
See also
Mikhail Bakhtin
The Dialogic Imagination: Four Essays by M.M. Bakhtin
Logosphere
Literary theory
Philosophy of language
Spacetime
Notes
References
Bakhtin, M. M. (1981). The Dialogic Imagination: Four Essays by M. M. Bakhtin. Translated by Caryl Emerson & Michael Holquist, University of Texas Press.
Basso, K. (1984). "Stalking with Stories: Names, Places, and Moral Narratives among the Western Apache," in Text, Play and Story: The Construction and Reconstruction of Self and Society. Ed. Edward Bruner, Washington: American Ethnological Society.
Brown, R. & Renshaw, P. (2006). "Positioning Students as Actors and Authors: A Chronotopic Analysis of Collaborative Learning Activities," in Mind, Culture and Activity 13(3), 247–259.
Dentith, Simon. (2001). "Chronotope," in The Literary Encyclopedia.
HadžiMuhamedović, S. (2018) Waiting for Elijah: Time and Encounter in a Bosnian Landscape. New York and Oxford: Berghahn Books.
Kumpulainen, K., Mikkola, A., & Jaatinen, A-M. (2014). "The chronotopes of technology-mediated creative learning practices in an elementary school community," in Learning, Media and Technology 39(1), 53–74.
Morson, Gary S. (1984). Narrative and Freedom: The Shadows of Time. New Haven: Yale University Press.
Müller, Timo (2010). "Notes Toward an Ecological Conception of Bakhtin’s 'Chronotope'." Ecozon@: European Journal of Literature, Culture and Environment 1(1).
Müller, Timo (2016). "The Ecology of Literary Chronotopes." Handbook of Ecocriticism and Cultural Ecology. Berlin: de Gruyter, 590-604.
Hasso Spode (2013): "Zur Genese des Tourismus" (On the Birth of Tourism), in: Entwicklung der Psyche in der Geschichte der Menschheit. Ed. Gerd Jüttemann, Lengerich: Pabst Publ., ISBN 978-3-89967-859-8.
Jan Pezda (2021): "Tourism. Retropian Time-Travel", in UR Journal of Humanities and Social Sciences vol. 2, ISSN 2543-8379 (on Spode's theory)
Time in linguistics
Literary theory
Philosophy of language | Chronotope | [
"Physics"
] | 1,978 | [
"Spacetime",
"Time in linguistics",
"Physical quantities",
"Time"
] |
2,233,137 | https://en.wikipedia.org/wiki/Tyrosine%20sulfation | In biochemistry, tyrosine sulfation is a posttranslational modification where a sulfate group () is added to a tyrosine residue of a protein molecule. Secreted proteins and extracellular parts of membrane proteins that pass through the Golgi apparatus may be sulfated. Sulfation was first discovered by Bettelheim in bovine fibrinopeptide B in 1954 and later found to be present in animals and plants but not in prokaryotes or in yeast.
Function
Sulfation plays a role in strengthening protein-protein interactions. Types of human proteins known to undergo tyrosine sulfation include adhesion molecules, G-protein-coupled receptors, coagulation factors, serine protease inhibitors, extracellular matrix proteins, and hormones.
Tyrosine O-sulfate is a stable molecule and is excreted in urine in animals. No enzymatic mechanism of tyrosine sulfate desulfation is known to exist.
By knock-out of TPST genes in mice, it may be observed that tyrosine sulfation has effects on the growth of the mice, such as body weight, fecundity, and postnatal viability.
Mechanism
Sulfation is catalyzed by tyrosylprotein sulfotransferase (TPST) in the Golgi apparatus. The reaction catalyzed by TPST is a transfer of sulfate from the universal sulfate donor 3'-phosphoadenosine-5'-phosphosulfate (PAPS) to the side-chain hydroxyl group of a tyrosine residue. Sulfation sites are tyrosine residues exposed on the surface of the protein, typically surrounded by acidic residues; a detailed description of the characteristics of the sulfation site is available from PROSITE, and sites can be predicted by an on-line tool named the Sulfinator. Two types of tyrosylprotein sulfotransferases (TPST-1 and TPST-2) have been identified.
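As a toy illustration of the "tyrosine surrounded by acidic residues" rule of thumb described above (this is not the Sulfinator algorithm; the window size, threshold, and example peptide are arbitrary assumptions), a naive sequence scan might look like this:

```python
def candidate_sulfation_sites(seq: str, window: int = 5, min_acidic: int = 3):
    """Naively flag tyrosines (Y) whose +/- `window` neighbourhood contains at
    least `min_acidic` acidic residues (D or E). Illustrative heuristic only."""
    seq = seq.upper()
    hits = []
    for i, aa in enumerate(seq):
        if aa != "Y":
            continue
        neighbourhood = seq[max(0, i - window):i] + seq[i + 1:i + 1 + window]
        acidic = sum(neighbourhood.count(r) for r in "DE")
        if acidic >= min_acidic:
            hits.append((i + 1, acidic))  # 1-based position and acidic-residue count
    return hits

# Hypothetical peptide; the flagged positions are illustrative, not verified sites.
print(candidate_sulfation_sites("MDSEEYDDLYAEAKSQNTR"))  # [(6, 4), (10, 4)]
```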
Regulation
There is very limited evidence that the TPST genes are subject to transcriptional regulation and tyrosine O-sulfate is very stable and cannot be easily degraded by mammalian sulfatases. Tyrosine O-sulfation is an irreversible process in vivo.
Clinical Significance
It has been shown that the sulfation of Tyr1680 in Factor VIII is essential for effective binding to vWF. Thus, when this residue is mutated, patients may suffer mild haemophilia symptoms due to increased turnover. Sulfation of the three tyrosines found in human PSGL-1 not only enhances its binding to P-selectin but also enables its pH-selective binding to the immune checkpoint protein VISTA.
Antibody for detection of tyrosine-sulfated epitopes
In 2006, an article was published in the Journal of Biological Chemistry describing the production and characterization of an antibody called PSG2. This antibody shows exquisite sensitivity and specificity for epitopes containing sulfotyrosine independent of the sequence context.
References
Post-translational modification | Tyrosine sulfation | [
"Chemistry"
] | 646 | [
"Post-translational modification",
"Gene expression",
"Biochemical reactions"
] |
2,233,179 | https://en.wikipedia.org/wiki/Edward%20Routh | Edward John Routh (20 January 1831 – 7 June 1907) was an English mathematician, noted as the outstanding coach of students preparing for the Mathematical Tripos examination of the University of Cambridge in its heyday in the middle of the nineteenth century. He also did much to systematise the mathematical theory of mechanics and created several ideas critical to the development of modern control systems theory.
Biography
Early life
Routh was born of an English father and a French-Canadian mother in Quebec, at that time the British colony of Lower Canada. His father's family could trace its history back to the Norman conquest, when it acquired land at Routh near Beverley, Yorkshire. His mother's family, the Taschereau family, was well established in Quebec, tracing their ancestry back to the early days of the French colony. His parents were Sir Randolph Isham Routh (1782–1858) and his second wife, Marie Louise Taschereau (1810–1891). Sir Randolph was Commissary General of the British Army from 1826, Chairman of the Irish Famine Relief Commission (1845–48), and, as Deputy Commissary General, the senior Commissariat officer at the Battle of Waterloo; Marie Louise was the daughter of Judge Jean-Thomas Taschereau and the sister of Judge Jean-Thomas and Cardinal Elzéar-Alexandre Taschereau.
Routh came to England aged eleven and attended University College School, then entered University College, London in 1847, having won a scholarship. There he studied under Augustus De Morgan, whose influence led Routh to decide on a career in mathematics.
Routh obtained his BA (1849) and MA (1853) in London. He attended Peterhouse, Cambridge, where he was taught by Isaac Todhunter and coached by "senior wrangler maker" William Hopkins. While at Peterhouse, Routh rowed for Peterhouse Boat Club. In 1854, Routh graduated just above James Clerk Maxwell, as Senior Wrangler, sharing the Smith's prize with him. Routh was elected fellow of Peterhouse in 1856.
Mathematics tutor
On graduation, Routh took up work as a private mathematics tutor in Cambridge, taking on the pupils of William John Steele during the latter's fatal illness, though insisting that Steele take the fees. Having inherited Steele's pupils, Routh went on to establish an unbeaten record as a coach. He coached over 600 pupils between 1855 and 1888, 28 of them becoming Senior Wrangler, compared with Hopkins's 17, and 43 of them winning the Smith's Prize.
Routh worked conscientiously and systematically, taking rigidly timetabled classes of ten pupils during the day and spending the evenings preparing extra material for the ablest men. "His lectures were enlivened by mathematical jokes of a rather heavy kind."
Routh was a staunch defender of the Cambridge competitive system and despaired when the university started to publish examination results in alphabetical order, observing "They will want to run the Derby alphabetically next".
Private life
Astronomer Royal George Biddell Airy sought to entice Routh to work at the Royal Observatory, Greenwich. Though Airy did not succeed, at Greenwich Routh met Airy's eldest daughter Hilda (1840–1916) whom he married in 1864. At the time, the university had a celibacy requirement, forcing Routh to vacate his fellowship and move out of Peterhouse. On the reformation of the college statutes, removing the celibacy requirement, Routh was the first person elected to an honorary fellowship by Peterhouse. The couple had five sons and a daughter. Routh was a "kindly man and a good conversationalist with friends, but with strangers he was shy and reserved."
Honours
Fellow of the Royal Society, (1872);
Adams Prize, (1877).
Work
Mechanics
Routh collaborated with Henry Brougham on the Analytical View of Sir Isaac Newton's Principia (1855). He published a textbook, Dynamics of a System of Rigid Bodies (1860, 6th ed. 1897) in which he did much to define and systematise the modern mathematical approach to mechanics. This influenced Felix Klein and Arnold Sommerfeld. In fact, Klein arranged the German translation. It also did much to influence William Thomson and Peter Guthrie Tait's Treatise on Natural Philosophy (1867).
Routh noted the importance of what he called "absent coordinates," also known as cyclic coordinates or ignorable coordinates (following the terminology of E. T. Whittaker in his Analytical Dynamics of Particles and Rigid Bodies). Such coordinates are associated with conserved momenta and as such are useful in problem solving. Routh also devised a new method for solving problems in mechanics. Although Routh's procedure does not add any new insights, it allows for more systematic and convenient analysis, especially in problems with many degrees of freedom and at least some cyclic coordinates.
Stability and control
In addition to his intensive work in teaching and writing, which had a persistent effect on the presentation of mathematical physics, he also contributed original research such as the Routh–Hurwitz theorem.
Central tenets of modern control systems theory relied upon the Routh stability criterion (though nowadays due to modern computers it is not as important), an application of Sturm's theorem to evaluate Cauchy indices through the use of the Euclidean algorithm.
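In practice the criterion is applied by building the Routh array from the coefficients of the characteristic polynomial and counting sign changes in its first column. The sketch below is a minimal illustration of that tabular test; the function names are invented for the example, and the degenerate case of a zero pivot is only flagged, not handled.

```python
# Minimal sketch of the Routh array test for polynomial stability.
from fractions import Fraction

def routh_array(coeffs):
    """Build the Routh array for a polynomial given by its coefficients,
    highest degree first, e.g. s^3 + 2s^2 + 3s + 1 -> [1, 2, 3, 1]."""
    n = len(coeffs)
    cols = (n + 1) // 2
    rows = [[Fraction(0)] * cols for _ in range(n)]
    # The first two rows interleave the polynomial coefficients.
    for i, c in enumerate(coeffs):
        rows[i % 2][i // 2] = Fraction(c)
    # Every later entry is a cross-product combination of the two rows above it.
    for r in range(2, n):
        for c in range(cols - 1):
            a, b = rows[r - 2][0], rows[r - 1][0]
            if b == 0:
                raise ZeroDivisionError("zero pivot: needs the epsilon workaround")
            rows[r][c] = (b * rows[r - 2][c + 1] - a * rows[r - 1][c + 1]) / b
    return rows

def is_hurwitz_stable(coeffs):
    """All roots lie in the left half-plane iff the first column of the
    Routh array shows no sign changes (zero entries are a marginal case)."""
    first_col = [row[0] for row in routh_array(coeffs) if row[0] != 0]
    return all(x > 0 for x in first_col) or all(x < 0 for x in first_col)

print(is_hurwitz_stable([1, 2, 3, 1]))  # True:  s^3 + 2s^2 + 3s + 1 is stable
print(is_hurwitz_stable([1, 1, 2, 8]))  # False: s^3 + s^2 + 2s + 8 is not
```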
Bibliography
Reprinted in 'Stability of Motion' (ed. A.T.Fuller) London 1975 (Taylor & Francis).
References
Further reading
Obituaries
The Times, 8 June 1907 (available at O'Connor & Robertson (2003))
Proceedings of the London Mathematical Society, 2nd ser., 5 (1907), xiv–xx;
Nature, 76 (1907), 200–02;
Cambridge Review, 13 June 1907, 480–81;
HHT, Monthly Notices of the Royal Astronomical Society, 68 (1907–08), 239–41
About Routh
Sneddon, I. N. (1970–1990) "Routh, Edward John", in Gillispie, C. C. (ed.) Dictionary of Scientific Biography, New York: Charles Scribner's Sons
External links
1831 births
1907 deaths
19th-century English mathematicians
20th-century English mathematicians
Edward
Alumni of Peterhouse, Cambridge
Alumni of University College London
Control theorists
Fellows of the Royal Society
Fellows of Peterhouse, Cambridge
People educated at University College School
Senior Wranglers
Taschereau family | Edward Routh | [
"Engineering"
] | 1,319 | [
"Control engineering",
"Control theorists"
] |
2,233,335 | https://en.wikipedia.org/wiki/Tone%20control%20circuit | Tone control is a type of equalization used to make specific pitches or frequencies in an audio signal softer or louder. It allows a listener to adjust the tone of the sound produced by an audio system to their liking, for example to compensate for inadequate bass response of loudspeakers or earphones, tonal qualities of the room, or hearing impairment. A tone control circuit is an electronic circuit that consists of a network of filters which modify the signal before it is fed to speakers, headphones or recording devices by way of an amplifier. Tone controls are found on many sound systems: radios, portable music players, boomboxes, public address systems, and musical instrument amplifiers.
Uses
Tone control allows listeners to adjust sound to their liking. It also enables them to compensate for recording deficiencies, hearing impairments, room acoustics or shortcomings with playback equipment. For example, older people with hearing problems may want to increase the loudness of high pitch sounds they have difficulty hearing.
Tone control is also used to adjust an audio signal during recording. For instance, if the acoustics of the recording site cause it to absorb some frequencies more than others, tone control can be used to amplify or "boost" the frequencies the room dampens.
Types
In their most basic form, tone control circuits attenuate the high or low frequencies of the signal. This is called treble or bass "cut". The simplest tone control circuits are passive circuits which utilize only resistors and capacitors or inductors. They rely on the property of capacitive reactance or inductive reactance to inhibit or enhance an AC signal, in a frequency-dependent manner. Active tone controls may also amplify or "boost" certain frequencies. More elaborate tone control circuits can boost or attenuate the middle range of frequencies.
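As a rough numerical illustration of such a passive treble cut, consider a series resistor followed by a shunt capacitor: the capacitor's falling reactance bleeds high frequencies to ground. The component values and function names below are illustrative assumptions, not taken from any particular product.

```python
# Magnitude response of a first-order series-R, shunt-C treble-cut network.
import math

def treble_cut_gain_db(f_hz, r_ohm=10_000, c_farad=22e-9):
    fc = 1.0 / (2 * math.pi * r_ohm * c_farad)      # corner frequency (~723 Hz here)
    gain = 1.0 / math.sqrt(1.0 + (f_hz / fc) ** 2)  # |H(jw)| of an RC low-pass
    return 20 * math.log10(gain)

for f in (100, 1_000, 10_000):
    print(f"{f:>6} Hz: {treble_cut_gain_db(f):6.1f} dB")  # gentle cut below fc, steep above
```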
The simplest tone control is a single knob that when turned in one direction enhances treble frequencies and the other direction enhances bass frequencies. This was the first type of tone control, typically found on radios and record players from the 1930s to the 1970s.
Graphic equalizers used for tone control provide independent elevation or attenuation of individual bands of frequencies. Wide frequency range graphic equalizers of high resolution can provide elevation or attenuation in 1/3 octave bands spanning from approximately 30 Hz to 18 kHz.
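Successive 1/3-octave center frequencies differ by a factor of 2^(1/3), so a band layout like the one mentioned above can be generated directly; the starting frequency and band count below are illustrative choices.

```python
# Center frequencies of 1/3-octave bands: each step multiplies by 2**(1/3).
bands = [31.25 * 2 ** (n / 3) for n in range(28)]
print(f"{len(bands)} bands, from {bands[0]:.0f} Hz to {bands[-1]:.0f} Hz")
# 28 bands, from 31 Hz to 16000 Hz
```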
Parametric equalizers can control not only the amount of boost and cut but also the specific frequency at which the boost and cut takes place and the range of frequencies (bandwidth) affected.
Elaborate circuits may also use amplifiers. The most modern analog units use operational amplifiers, resistors and capacitors, abandoning inductors because of their size and sensitivity to ubiquitous electromagnetic interference.
Historically, tone control was achieved via analog electronics, and most tone control circuits produced today still use the analog process. Nonetheless, digital approaches are increasingly being implemented through the use of digital signal processing.
See also
Audio power amplifier
Electronic filter
Audio crossover
External links
Simple Tone Control Circuit: Bass and Treble, Cut and Lift, by E.J.James
Negative-Feedback Tone Control - Independent Variation of Bass and Treble Without Switches by P. J. Baxandall -pdf
TONE CIRCUITS, PART 3. Fender controls vs hifi controls.
Amplifiers and tone controls
Tone control for DJs
Analog circuits
Tone, EQ and filter | Tone control circuit | [
"Engineering"
] | 678 | [
"Analog circuits",
"Electronic engineering"
] |
2,233,425 | https://en.wikipedia.org/wiki/Aldrin | Aldrin is an organochlorine insecticide that was widely used until the 1990s, when it was banned in most countries. Aldrin is a member of the so-called "classic organochlorines" (COC) group of pesticides. COCs enjoyed a very sharp rise in popularity during and after World War II. Other noteworthy examples of COCs include dieldrin and DDT. After research showed that organochlorines can be highly toxic to the ecosystem through bioaccumulation, most were banned from use. Before the ban, it was heavily used as a pesticide to treat seed and soil. Aldrin and related "cyclodiene" pesticides (a term for pesticides derived from Hexachlorocyclopentadiene) became notorious as persistent organic pollutants.
Structure and Reactivity
Pure aldrin takes the form of a white crystalline powder. Though it is barely soluble in water (0.003% solubility), aldrin dissolves very well in organic solvents such as ketones and paraffins. Aldrin decays very slowly once released into the environment. Though it is rapidly converted to dieldrin by plants and bacteria, dieldrin shows the same toxic effects and slow decay as aldrin. Aldrin is easily transported through the air on dust particles. Aldrin does not react with mild acids or bases and is stable in an environment with a pH between 4 and 8. It is highly flammable when exposed to temperatures above 200 °C. In the presence of oxidizing agents, aldrin reacts with concentrated acids and phenols.
Synthesis
Aldrin is not formed in nature. It is named after the German chemist Kurt Alder, one of the coinventors of this kind of reaction. Aldrin is synthesized by combining hexachlorocyclopentadiene with norbornadiene in a Diels-Alder reaction to give the adduct. In 1967, the composition of technical-grade aldrin was reported to consist of 90.5% of hexachlorohexahydrodimethanonaphthalene (HHDN).
Similarly, an isomer of aldrin, known as isodrin, is produced by reaction of hexachloronorbornadiene with cyclopentadiene. Isodrin is also produced as a byproduct of aldrin synthesis, with technical-grade aldrin containing about 3.5% isodrin.
An estimated 270 million kilograms of aldrin and related cyclodiene pesticides were produced between 1946 and 1976. The estimated production volume of aldrin in the US peaked in the mid-1960s at about 18 million pounds a year and then declined.
Available forms
There are multiple available forms of aldrin. One of these is the isomer isodrin, which cannot be found in nature, but needs to be synthesized like aldrin. When aldrin enters the human body or the environment it is rapidly converted to dieldrin. Degradation by ultraviolet radiation or microbes can convert dieldrin to photodieldrin and aldrin to photoaldrin.
Mechanism of action
Even though many toxic effects of aldrin have been discovered, the exact mechanisms underlying the toxicity are yet to be determined. The only toxic aldrin induced process that is largely understood is that of neurotoxicity.
Neurotoxicity
One of the effects that intoxication with aldrin gives rise to is neurotoxicity. Studies have shown that aldrin stimulates the central nervous system (CNS), which may cause hyperexcitation and seizures. This phenomenon exerts its effect through two different mechanisms.
One of the mechanisms uses the ability of aldrin to inhibit brain calcium ATPases. These ion pumps relieve the nerve terminal of calcium by actively pumping it out. However, when aldrin inhibits these pumps, intracellular calcium levels rise. This results in enhanced neurotransmitter release.
The second mechanism makes use of aldrin's ability to block gamma-aminobutyric acid (GABA) activity. GABA is a major inhibitory neurotransmitter in the central nervous system. Aldrin induces neurotoxic effects by blocking the GABAA receptor-chloride channel complex. By blocking this receptor, chloride is unable to move into the synapse, which prevents hyperpolarization of neuronal synapses. Therefore, the synapses are more likely to generate action potentials.
Metabolism
The metabolism of oral aldrin exposure has not been studied in humans. However, animal studies provide an extensive overview of the metabolism of aldrin, and these data can be extrapolated to humans.
Biotransformation of aldrin starts with epoxidation of aldrin by mixed-function oxidases (CYP-450), which forms dieldrin. This conversion happens mainly in the liver. Tissues with low CYP-450 expression use the prostaglandin endoperoxide synthase (PES) instead. This oxidative pathway bisdioxygenises the arachidonic acid to prostaglandin G2 (PGG2). Subsequently, PGG2 is reduced to prostaglandin H2 (PGH2) by hydroperoxidase.
Dieldrin can then be directly oxidized by cytochrome oxidases, which forms 9-hydroxydieldrin. An alternative to oxidation involves the opening of the epoxide ring by epoxide hydrases, which forms the product 6,7-trans-dihydroxydihydroaldrin. The two products can be conjugated to form 9-hydroxydieldrin glucuronide and 6,7-trans-dihydroxydihydroaldrin glucuronide, respectively. 6,7-trans-dihydroxydihydroaldrin can also be oxidized to form aldrin dicarboxylic acid.
Efficacy and side effects
Considering the toxicokinetics of aldrin in the environment, the efficacy of the compound has been determined. In addition, the adverse effects after exposure to aldrin are demonstrated, indicating the risk regarding the compound.
Efficacy
The efficacy of aldrin, as used for the control of termites, was examined in order to determine the maximum response when applied. In 1953 US researchers tested aldrin and dieldrin on terrains with rats known to carry chiggers, at a rate of . The treatments left 75 times fewer chiggers on rats from dieldrin-treated terrains and 25 times fewer chiggers on rats from aldrin-treated terrains. The aldrin treatment thus indicates high efficacy, especially in comparison to other insecticides that were used at the time, such as DDT, sulfur or lindane.
Adverse effects
Release of aldrin into the environment leads to its distribution in air, soil, and water. Aldrin is quickly converted to dieldrin, which degrades slowly; this accounts for the residues found in the environment around the primary point of exposure and in plants. These residues can also be found in animals that eat contaminated plants or other animals, or that live in contaminated water. This biomagnification can lead to high concentrations in their fat.
There are some reported cases of workers who developed anemia after repeated dieldrin exposure. However, the main adverse effects of aldrin and dieldrin involve the central nervous system. Accumulated levels of dieldrin in the body were believed to lead to convulsions. Other reported symptoms include headaches, nausea and vomiting, anorexia, muscle twitching, myoclonic jerking and EEG distortions. In all these cases, removal of the source of exposure to aldrin/dieldrin led to a rapid recovery.
Toxicity
The toxicity of aldrin and dieldrin has been determined mainly from the results of several animal studies. No significant increase in worker deaths has been linked to aldrin, although death by anemia has been reported in some cases after repeated exposure to aldrin. In those cases, immunological tests linked an antigenic response to erythrocytes coated with dieldrin. A direct dose–response relationship for fatal outcomes has yet to be established.
Minimal risk levels, derived from the NOAELs of rat studies, are as follows (a worked dose conversion is sketched after this list):
The minimal risk level at acute oral exposure to aldrin is 0.002 mg/kg/day.
The minimal risk level at intermediate exposure to dieldrin is 0.0001 mg/kg/day.
The minimal risk level at chronic exposure to aldrin is 0.00003 mg/kg/day.
The minimal risk level at chronic exposure to dieldrin is 0.00005 mg/kg/day.
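As a rough illustration of how such a minimal risk level, expressed in mg per kg of body weight per day, translates into a daily intake, the sketch below multiplies the chronic oral value for aldrin quoted above by an assumed body weight; the 70 kg figure and the function name are illustrative assumptions only.

```python
# Convert a minimal risk level (mg/kg/day) into a daily intake limit (mg/day).
def daily_limit_mg(mrl_mg_per_kg_day, body_weight_kg=70.0):
    return mrl_mg_per_kg_day * body_weight_kg

# Chronic oral MRL for aldrin quoted above: 0.00003 mg/kg/day.
print(f"{daily_limit_mg(0.00003):.4f} mg/day for a 70 kg adult")  # 0.0021 mg/day
```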
In addition to these studies, breast cancer risk studies were performed and demonstrated a significantly increased breast cancer risk. Comparing blood concentrations with the number of involved lymph nodes and with tumor size, a 5-fold higher risk of death was determined for the highest quartile range in the research relative to the lowest quartile range.
Effects on animals
Most of the animal studies with aldrin and dieldrin used rats. High doses of aldrin and dieldrin demonstrated neurotoxicity, and multiple studies also showed a unique sensitivity of the mouse liver to dieldrin-induced hepatocarcinogenicity. Furthermore, aldrin-treated rats demonstrated increased post-natal mortality, with adult rats showing an increased susceptibility to the compounds compared to young rats.
Environmental impact and regulation
Like related polychlorinated pesticides, aldrin is highly lipophilic. Its solubility in water is only 0.027 mg/L, which exacerbates its persistence in the environment. It was banned by the Stockholm Convention on Persistent Organic Pollutants. In the U.S., aldrin was cancelled in 1974. The substance is banned from use for plant protection by the EU.
Safety and environmental aspects
Aldrin has a rat LD50 of 39 to 60 mg/kg (oral). For fish, however, it is extremely toxic, with an LC50 of 0.006–0.01 for trout and bluegill.
In the US, aldrin is considered a potential occupational carcinogen by the Occupational Safety and Health Administration and the National Institute for Occupational Safety and Health; these agencies have set an occupational exposure limit for dermal exposures at 0.25 mg/m3 over an eight-hour time-weighted average.
Further, an IDLH limit has been set at 25 mg/m3, based on acute toxicity data in humans to which subjects reacted with convulsions within 20 minutes of exposure.
It is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities.
References
Obsolete pesticides
IARC Group 2A carcinogens
Organochloride insecticides
Endocrine disruptors
Neurotoxins
Persistent organic pollutants under the Stockholm Convention
Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution | Aldrin | [
"Chemistry"
] | 2,407 | [
"Persistent organic pollutants under the Stockholm Convention",
"Persistent organic pollutants under the Convention on Long-Range Transboundary Air Pollution",
"Endocrine disruptors",
"Neurochemistry",
"Neurotoxins"
] |
2,233,526 | https://en.wikipedia.org/wiki/Basic%20hypergeometric%20series | In mathematics, basic hypergeometric series, or q-hypergeometric series, are q-analogue generalizations of generalized hypergeometric series, and are in turn generalized by elliptic hypergeometric series.
A series x_n is called hypergeometric if the ratio of successive terms x_(n+1)/x_n is a rational function of n. If the ratio of successive terms is a rational function of q^n, then the series is called a basic hypergeometric series. The number q is called the base.
The basic hypergeometric series was first considered by Eduard Heine. It becomes the ordinary hypergeometric series in the limit when the base q tends to 1.
Definition
There are two forms of basic hypergeometric series, the unilateral basic hypergeometric series φ, and the more general bilateral basic hypergeometric series ψ.
The unilateral basic hypergeometric series is defined as

\[ {}_{j}\phi_k \left[\begin{matrix} a_1 & a_2 & \ldots & a_j \\ b_1 & b_2 & \ldots & b_k \end{matrix} ; q, z \right] = \sum_{n=0}^{\infty} \frac{(a_1, a_2, \ldots, a_j; q)_n}{(b_1, b_2, \ldots, b_k, q; q)_n} \left( (-1)^n q^{\binom{n}{2}} \right)^{1+k-j} z^n \]

where

\[ (a_1, a_2, \ldots, a_m; q)_n = (a_1; q)_n (a_2; q)_n \cdots (a_m; q)_n \]

and

\[ (a; q)_n = \prod_{r=0}^{n-1} (1 - a q^r) \]

is the q-shifted factorial.
The most important special case is when j = k + 1, when it becomes

\[ {}_{k+1}\phi_k \left[\begin{matrix} a_1 & a_2 & \ldots & a_{k+1} \\ b_1 & b_2 & \ldots & b_k \end{matrix} ; q, z \right] = \sum_{n=0}^{\infty} \frac{(a_1, a_2, \ldots, a_{k+1}; q)_n}{(b_1, b_2, \ldots, b_k, q; q)_n} z^n . \]

This series is called balanced if \(a_1 a_2 \cdots a_{k+1} = b_1 b_2 \cdots b_k q\).
This series is called well poised if \(a_1 q = a_2 b_1 = \cdots = a_{k+1} b_k\), and very well poised if in addition \(a_2 = -a_3 = q a_1^{1/2}\).
The unilateral basic hypergeometric series is a q-analog of the hypergeometric series since
holds ().
The bilateral basic hypergeometric series, corresponding to the bilateral hypergeometric series, is defined as

\[ {}_{j}\psi_k \left[\begin{matrix} a_1 & a_2 & \ldots & a_j \\ b_1 & b_2 & \ldots & b_k \end{matrix} ; q, z \right] = \sum_{n=-\infty}^{\infty} \frac{(a_1, a_2, \ldots, a_j; q)_n}{(b_1, b_2, \ldots, b_k; q)_n} \left( (-1)^n q^{\binom{n}{2}} \right)^{k-j} z^n . \]

The most important special case is when j = k, when it becomes

\[ {}_{k}\psi_k \left[\begin{matrix} a_1 & a_2 & \ldots & a_k \\ b_1 & b_2 & \ldots & b_k \end{matrix} ; q, z \right] = \sum_{n=-\infty}^{\infty} \frac{(a_1, a_2, \ldots, a_k; q)_n}{(b_1, b_2, \ldots, b_k; q)_n} z^n . \]

The unilateral series can be obtained as a special case of the bilateral one by setting one of the b variables equal to q, at least when none of the a variables is a power of q, as all the terms with n < 0 then vanish.
Simple series
Some simple series expressions include
and
and
The q-binomial theorem
The q-binomial theorem (first published in 1811 by Heinrich August Rothe) states that

\[ \sum_{n=0}^{\infty} \frac{(a; q)_n}{(q; q)_n} z^n = \frac{(az; q)_\infty}{(z; q)_\infty} , \]

which follows by repeatedly applying the identity

\[ \frac{(az; q)_\infty}{(z; q)_\infty} = \frac{1 - az}{1 - z} \, \frac{(aqz; q)_\infty}{(qz; q)_\infty} . \]
The special case of a = 0 is closely related to the q-exponential.
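The statement of the q-binomial theorem given above can be spot-checked numerically by truncating both the sum and the infinite products; the truncation length and test values below are arbitrary choices.

```python
# Numerical spot-check of the q-binomial theorem for |q| < 1 and |z| < 1.
def qpochhammer(a, q, n):
    """(a; q)_n = prod_{k=0}^{n-1} (1 - a q^k)."""
    result = 1.0
    for k in range(n):
        result *= 1 - a * q ** k
    return result

def lhs(a, q, z, terms=200):
    return sum(qpochhammer(a, q, n) / qpochhammer(q, q, n) * z ** n
               for n in range(terms))

def rhs(a, q, z, terms=200):
    return qpochhammer(a * z, q, terms) / qpochhammer(z, q, terms)

a, q, z = 0.3, 0.5, 0.2
print(lhs(a, q, z), rhs(a, q, z))  # both are approximately 1.360
```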
Cauchy binomial theorem
Cauchy binomial theorem is a special case of the q-binomial theorem.
Ramanujan's identity
Srinivasa Ramanujan gave the identity

\[ {}_{1}\psi_1 (a; b; q, z) = \sum_{n=-\infty}^{\infty} \frac{(a; q)_n}{(b; q)_n} z^n = \frac{(b/a,\; q,\; q/(az),\; az;\; q)_\infty}{(b,\; b/(az),\; q/a,\; z;\; q)_\infty} \]

valid for |q| < 1 and |b/a| < |z| < 1. Similar identities for \({}_{6}\psi_6\) have been given by Bailey. Such identities can be understood to be generalizations of the Jacobi triple product theorem, which can be written using q-series as

\[ \sum_{n=-\infty}^{\infty} q^{n^2} z^n = (q^2; q^2)_\infty \, (-zq; q^2)_\infty \, (-q/z; q^2)_\infty . \]
Ken Ono gives a related formal power series
Watson's contour integral
As an analogue of the Barnes integral for the hypergeometric series, Watson showed that
where the poles of lie to the left of the contour and the remaining poles lie to the right. There is a similar contour integral for r+1φr. This contour integral gives an analytic continuation of the basic hypergeometric function in z.
Matrix version
The basic hypergeometric matrix function can be defined as follows:
The ratio test shows that this matrix function is absolutely convergent.
See also
Dixon's identity
Rogers–Ramanujan identities
Notes
References
W.N. Bailey, Generalized Hypergeometric Series, (1935) Cambridge Tracts in Mathematics and Mathematical Physics, No.32, Cambridge University Press, Cambridge.
William Y. C. Chen and Amy Fu, Semi-Finite Forms of Bilateral Basic Hypergeometric Series (2004)
Exton, H. (1983), q-Hypergeometric Functions and Applications, New York: Halstead Press, Chichester: Ellis Horwood, , ,
Sylvie Corteel and Jeremy Lovejoy, Frobenius Partitions and the Combinatorics of Ramanujan's Summation
Victor Kac, Pokman Cheung, Quantum calculus, Universitext, Springer-Verlag, 2002.
. Section 0.2
Andrews, G. E., Askey, R. and Roy, R. (1999). Special Functions, Encyclopedia of Mathematics and its Applications, volume 71, Cambridge University Press.
Eduard Heine, Theorie der Kugelfunctionen, (1878) 1, pp 97–125.
Eduard Heine, Handbuch die Kugelfunctionen. Theorie und Anwendung'' (1898) Springer, Berlin.
External links
Q-analogs
Hypergeometric functions | Basic hypergeometric series | [
"Mathematics"
] | 908 | [
"Q-analogs",
"Combinatorics"
] |
2,233,706 | https://en.wikipedia.org/wiki/MOPAC | MOPAC is a computational chemistry software package that implements a variety of semi-empirical quantum chemistry methods based on the neglect of diatomic differential overlap (NDDO) approximation and fit primarily for gas-phase thermochemistry. Modern versions of MOPAC support 83 elements of the periodic table (H-La, Lu-Bi as atoms, Ce-Yb as ionic sparkles) and have expanded functionality for solvated molecules, crystalline solids, and proteins.
MOPAC was originally developed in Michael Dewar's research group in the early 1980s and released as public domain software on the Quantum Chemistry Program Exchange in 1983. It became commercial software in 1993, developed and distributed by Fujitsu, and Stewart Computational Chemistry took over commercial development and distribution in 2007. In 2022, it was released as open-source software on GitHub.
Functionality
MOPAC is primarily a serial command-line program. Its default behavior is to take a molecular geometry specified by an input file and perform a local optimization of the geometry to minimize the heat of formation of the molecule. The details of this process are then summarized by an output file. The behavior of MOPAC can be modified by specifying keywords on the first line of the input file, and translation vectors can be added to the geometry to specify a polymer, surface, or crystal.
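As an illustration of that input layout (a keyword line, comment lines, then one record per atom with coordinates and optimization flags), the snippet below writes a minimal input file from Python. The PM7 keyword, the water geometry, the file name and the closing command line are assumptions for illustration; details of the format and invocation can differ between MOPAC versions, so the manual should be consulted for real work.

```python
# Write a minimal, hypothetical MOPAC input file for a water molecule.
water = [
    ("O",  0.0000,  0.0000, 0.0000),
    ("H",  0.9572,  0.0000, 0.0000),
    ("H", -0.2400,  0.9266, 0.0000),
]

lines = ["PM7", "Water, default geometry optimization", ""]
for element, x, y, z in water:
    # a trailing "1" flags each coordinate as free to optimize; "0" would fix it
    lines.append(f"{element:2s} {x:10.4f} 1 {y:10.4f} 1 {z:10.4f} 1")

with open("water.mop", "w") as f:
    f.write("\n".join(lines) + "\n")

# The job would then typically be run as:  mopac water.mop
```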
MOPAC is compatible with other software to provide graphical user interfaces (GUIs), visualization of outputs, and processing of inputs. The most well-known GUIs that support MOPAC are Chem3D, WebMO, the Amsterdam Modeling Suite, and the Molecular Operating Environment. Jmol can visualize some MOPAC outputs such as molecular orbitals and partial charges. Open Babel supports conversion to and from MOPAC's input file format.
Major features
Semiempirical models: AM1, PM3, PM6, PM7
Geometry optimization
Transition-state optimization
Vibrational analysis
COSMO solvation model
Periodic boundary conditions (Gamma point only, no Brillouin zone sampling)
MOZYME for closed-shell systems (linear-scaling electronic structure algorithm)
Gas-phase thermodynamics
Molecular polarizability
Automatic hydrogenation for pre-processing of Protein Data Bank structures
INDO spectroscopy
Configuration interaction
PARAM, a companion program for parameter optimization
History
MOPAC was originally developed in Michael Dewar's research group at the University of Texas at Austin to consolidate their previous developments of MINDO/3 and MNDO models and software and to serve as the software implementation of the AM1 model. The name MOPAC was both an acronym for Molecular Orbital PACkage and a reference to the Mopac Expressway that runs alongside parts of the UT Austin campus. The first version of MOPAC was deposited in the Quantum Chemistry Program Exchange (QCPE) in 1983 as QCPE Program #455 with James Stewart as its primary author. James Stewart joined the Dewar group in 1980 as a visiting professor on leave from the University of Strathclyde, and he continued the development of MOPAC after moving to the United States Air Force Academy in 1984. In 1993, MOPAC was acquired by Fujitsu and sold as commercial software, while James Stewart continued its development as a consultant. After 2007, new versions of MOPAC were developed and sold by Stewart Computational Chemistry with support from the Small Business Innovation Research program. Concurrent with its commercial development, there was an effort to continue development of the last pre-commercial version of MOPAC as an open-source software project. In 2022, the commercial development and distribution of MOPAC ended, and it was officially re-released as an open-source software project on GitHub developed by the Molecular Sciences Software Institute.
Early versions of MOPAC distributed by the QCPE were considered to be in the public domain and were forked into several other notable software projects. After James Stewart left, other members of the Dewar group continued to develop a fork of MOPAC called AMPAC that was originally released on the QCPE before also becoming commercial software. VAMP (Vectorized AMPAC) was a parallel version of AMPAC developed by Timothy Clark's group at the University of Erlangen–Nuremberg. Donald Truhlar's group at the University of Minnesota developed both a fork of AMPAC with implicit solvent models, AMSOL, and a fork of MOPAC itself. Also, commercial versions of MOPAC distributed by Fujitsu have some proprietary features (e.g. PM5, Tomasi solvation) not available in other versions.
MOPAC used different versioning systems throughout its development, sometimes with a version number or year stylized into the name. These alternate names include MOPAC3, MOPAC4, MOPAC5, MOPAC6, MOPAC7, MOPAC93, MOPAC97, MOPAC 2000, MOPAC 2007, MOPAC 2009, MOPAC 2012, and MOPAC 2016. Open-source versions of MOPAC now use semantic versioning.
See also
List of quantum chemistry and solid-state physics software
Semi-empirical quantum chemistry methods
AMPAC
References
External links
MOPAC download page on openmopac.net
Historical archive of MOPAC source code and manuals
MOPAC 2002 Manual
MOPAC 2009 Manual
Source code and compiled binaries at the Computational Chemistry List repository:
Source code (in FORTRAN):
MOPAC 6
MOPAC 7
Compiled binaries:
MOPAC 6 for MS-DOS/Windows;
MOPAC 6 for Windows 95/NT;
MOPAC 6 with GUI (Winmostar)
MOPAC 7 for MS-DOS/Windows
MOPAC 7 for Linux
MOPAC-5.022mn (MOPAC at the University of Minnesota)
Computational chemistry software | MOPAC | [
"Chemistry"
] | 1,151 | [
"Computational chemistry",
"Computational chemistry software",
"Chemistry software"
] |
2,234,192 | https://en.wikipedia.org/wiki/Reduction%20potential | Redox potential (also known as oxidation / reduction potential, ORP, pe, , or ) is a measure of the tendency of a chemical species to acquire electrons from or lose electrons to an electrode and thereby be reduced or oxidised respectively. Redox potential is expressed in volts (V). Each species has its own intrinsic redox potential; for example, the more positive the reduction potential (reduction potential is more often used due to general formalism in electrochemistry), the greater the species' affinity for electrons and tendency to be reduced.
Measurement and interpretation
In aqueous solutions, redox potential is a measure of the tendency of the solution to either gain or lose electrons in a reaction. A solution with a higher (more positive) reduction potential than some other molecule will have a tendency to gain electrons from this molecule (i.e. to be reduced by oxidizing this other molecule) and a solution with a lower (more negative) reduction potential will have a tendency to lose electrons to other substances (i.e. to be oxidized by reducing the other substance). Because the absolute potentials are next to impossible to accurately measure, reduction potentials are defined relative to a reference electrode. Reduction potentials of aqueous solutions are determined by measuring the potential difference between an inert sensing electrode in contact with the solution and a stable reference electrode connected to the solution by a salt bridge.
The sensing electrode acts as a platform for electron transfer to or from the reference half cell; it is typically made of platinum, although gold and graphite can be used as well. The reference half cell consists of a redox standard of known potential. The standard hydrogen electrode (SHE) is the reference from which all standard redox potentials are determined, and has been assigned an arbitrary half cell potential of 0.0 V. However, it is fragile and impractical for routine laboratory use. Therefore, other more stable reference electrodes such as silver chloride and saturated calomel (SCE) are commonly used because of their more reliable performance.
Although measurement of the redox potential in aqueous solutions is relatively straightforward, many factors limit its interpretation, such as effects of solution temperature and pH, irreversible reactions, slow electrode kinetics, non-equilibrium, presence of multiple redox couples, electrode poisoning, small exchange currents, and inert redox couples. Consequently, practical measurements seldom correlate with calculated values. Nevertheless, reduction potential measurement has proven useful as an analytical tool in monitoring changes in a system rather than determining their absolute value (e.g. process control and titrations).
Explanation
Similar to how the concentration of hydrogen ions determines the acidity or pH of an aqueous solution, the tendency of electron transfer between a chemical species and an electrode determines the redox potential of an electrode couple. Like pH, redox potential represents how easily electrons are transferred to or from species in solution. Redox potential characterises the ability under the specific condition of a chemical species to lose or gain electrons instead of the amount of electrons available for oxidation or reduction.
The notion of pe is used with Pourbaix diagrams. pe is a dimensionless number and can easily be related to Eh by the following relationship:

pe = Eh / (λ VT)

where VT = RT/F is the thermal voltage, with R the gas constant (8.314 J K−1 mol−1), T the absolute temperature in kelvin (298.15 K = 25 °C = 77 °F), F the Faraday constant (96 485 coulomb per mole of electrons), and λ = ln(10) ≈ 2.3026, so that λ VT ≈ 0.05916 V at 25 °C.
In fact, pe is defined as the negative logarithm of the free electron concentration in solution, and is directly proportional to the redox potential. Sometimes pe is used as a unit of reduction potential instead of Eh, for example, in environmental chemistry. If one normalizes the pe of hydrogen to zero, one obtains the relation pe ≈ 16.9 Eh (Eh in volts) at room temperature. This notion is useful for understanding redox potential, although the transfer of electrons, rather than the absolute concentration of free electrons in thermal equilibrium, is how one usually thinks of redox potential. Theoretically, however, the two approaches are equivalent.
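A small numerical sketch of that conversion, using the thermal voltage RT/F at 25 °C (the function name is an illustrative choice):

```python
# Convert a redox potential Eh (volts) into the dimensionless pe at 25 degC.
import math

R, F, T = 8.314, 96485.0, 298.15   # J/(mol K), C/mol, K
V_T = R * T / F                    # thermal voltage, ~0.02569 V
LAMBDA = math.log(10)              # ~2.3026

def pe_from_eh(eh_volts):
    return eh_volts / (LAMBDA * V_T)

print(pe_from_eh(0.05916))  # ~1.0, since lambda * V_T is ~0.05916 V
```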
Conversely, one could define a potential corresponding to pH as a potential difference between a solute and pH neutral water, separated by porous membrane (that is permeable to hydrogen ions). Such potential differences actually do occur from differences in acidity on biological membranes. This potential (where pH neutral water is set to 0 V) is analogous with redox potential (where standardized hydrogen solution is set to 0 V), but instead of hydrogen ions, electrons are transferred across in the redox case. Both pH and redox potentials are properties of solutions, not of elements or chemical compounds themselves, and depend on concentrations, temperature etc.
The table below shows a few reduction potentials, which can be changed to oxidation potentials by reversing the sign. Reducers donate electrons to (or "reduce") oxidizing agents, which are said to "be reduced by" the reducer. The reducer is stronger when it has a more negative reduction potential and weaker when it has a more positive reduction potential. The more positive the reduction potential the greater the species' affinity for electrons and tendency to be reduced. The following table provides the reduction potentials of the indicated reducing agent at 25 °C. For example, among sodium (Na) metal, chromium (Cr) metal, cuprous (Cu+) ion and chloride (Cl−) ion, it is Na metal that is the strongest reducing agent while Cl− ion is the weakest; said differently, Na+ ion is the weakest oxidizing agent in this list while the Cl2 molecule is the strongest.
Some elements and compounds can be both reducing or oxidizing agents. Hydrogen gas is a reducing agent when it reacts with non-metals and an oxidizing agent when it reacts with metals.
Hydrogen (whose reduction potential is 0.0) acts as an oxidizing agent because it accepts an electron donation from the reducing agent lithium (whose reduction potential is -3.04), which causes Li to be oxidized and Hydrogen to be reduced.
Hydrogen acts as a reducing agent because it donates its electrons to fluorine, which allows fluorine to be reduced.
Standard reduction potential
The standard reduction potential is measured under standard conditions: T = 298.15 K (25 °C, or 77 °F), a unity activity (a = 1) for each ion participating in the reaction, a partial pressure of 1 atm (1.013 bar) for each gas taking part in the reaction, and metals in their pure state. The standard reduction potential is defined relative to the standard hydrogen electrode (SHE) used as reference electrode, which is arbitrarily given a potential of 0.00 V. However, because these can also be referred to as "redox potentials", the terms "reduction potentials" and "oxidation potentials" are preferred by the IUPAC. The two may be explicitly distinguished by the symbols E°red and E°ox, with E°ox = −E°red.
Half cells
The relative reactivities of different half cells can be compared to predict the direction of electron flow. A higher reduction potential means there is a greater tendency for reduction to occur, while a lower one means there is a greater tendency for oxidation to occur.
Any system or environment that accepts electrons from a normal hydrogen electrode is a half cell that is defined as having a positive redox potential; any system donating electrons to the hydrogen electrode is defined as having a negative redox potential. Eh is usually expressed in volts (V) or millivolts (mV). A high positive Eh indicates an environment that favors oxidation reactions, such as one containing free oxygen. A low negative Eh indicates a strongly reducing environment, such as one containing free metals.
Sometimes when electrolysis is carried out in an aqueous solution, water, rather than the solute, is oxidized or reduced. For example, if an aqueous solution of NaCl is electrolyzed, water may be reduced at the cathode to produce H2(g) and OH− ions, instead of Na+ being reduced to Na(s), as occurs in the absence of water. It is the reduction potential of each species present that will determine which species will be oxidized or reduced.
Absolute reduction potentials can be determined if one knows the actual potential between electrode and electrolyte for any one reaction. Surface polarization interferes with measurements, but various sources give an estimated potential for the standard hydrogen electrode of 4.4 V to 4.6 V (the electrolyte being positive).
Half-cell equations can be combined if the one corresponding to oxidation is reversed so that each electron given by the reductant is accepted by the oxidant. In this way, the global combined equation no longer contains electrons.
Nernst equation
The Eh and pH of a solution are related by the Nernst equation, as commonly represented by a Pourbaix diagram. For a half-cell equation, conventionally written as a reduction reaction (i.e., electrons accepted by an oxidant on the left side):

a A + b B + h H+ + z e− ⇌ c C + d D
The half-cell standard reduction potential is given by

E°red = −ΔG° / (z F)

where ΔG° is the standard Gibbs free energy change, z is the number of electrons involved, and F is Faraday's constant. The Nernst equation relates pH and Eh:

Eh = E°red − (0.05916 / z) log( ({C}^c {D}^d) / ({A}^a {B}^b) ) − 0.05916 (h/z) pH

where curly brackets indicate activities, and exponents are shown in the conventional manner. This equation is the equation of a straight line for Eh as a function of pH with a slope of −0.05916 (h/z) volt (pH has no units).
This equation predicts lower Eh at higher pH values. This is observed for the reduction of O2 into H2O, or OH−, and for the reduction of H+ into H2:

O2 + 4 H+ + 4 e− ⇌ 2 H2O

2 H+ + 2 e− ⇌ H2
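As a worked example of this pH dependence, the sketch below evaluates Eh for the hydrogen couple at 1 atm H2 and 25 °C, where the standard potential is 0 V and the slope is −0.05916 V per pH unit; the function name and sample pH values are illustrative.

```python
# Eh of 2 H+ + 2 e- <=> H2 (1 atm) as a function of pH at 25 degC.
def eh_hydrogen_volts(ph, e_standard=0.0, slope_v_per_ph=0.05916):
    return e_standard - slope_v_per_ph * ph

for ph in (0, 7, 14):
    print(f"pH {ph:2d}: Eh = {eh_hydrogen_volts(ph):+.3f} V")
# pH  0: +0.000 V,  pH  7: -0.414 V,  pH 14: -0.828 V
```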
In most (if not all) of the reduction reactions involving oxyanions with a central redox-active atom, oxide anions (O2−), being in excess, are freed up when the central atom is reduced. The acid–base neutralization of each oxide ion consumes 2 H+ or one H2O molecule as follows:

O2− + 2 H+ ⇌ H2O

O2− + H2O ⇌ 2 OH−
This is why protons are always engaged as reagent on the left side of the reduction reactions as can be generally observed in the table of standard reduction potential (data page).
If, in very rare instances of reduction reactions, H+ ions were products formed by the reduction reaction and thus appeared on the right side of the equation, the slope of the line would be inverted and thus positive (higher Eh at higher pH).
An example of that would be the reductive dissolution of magnetite (Fe3O4 ≈ Fe2O3·FeO, with 2 Fe3+ and 1 Fe2+) to form 3 HFeO2− (in which dissolved iron, Fe(II), is divalent and much more soluble than Fe(III)), while releasing one H+:

Fe3O4 + 2 H2O + 2 e− ⇌ 3 HFeO2− + H+

where:

Eh = E°h + 0.0296 pH − 0.0885 log [HFeO2−]

Note that the slope 0.0296 of the line is −1/2 of the −0.05916 value above, since h/z = −1/2 here (one H+ is produced rather than consumed). Note also that the value −0.0885 corresponds to −0.05916 × 3/2.
Biochemistry
Many enzymatic reactions are oxidation–reduction reactions, in which one compound is oxidized and another compound is reduced. The ability of an organism to carry out oxidation–reduction reactions depends on the oxidation–reduction state of the environment, or its reduction potential (Eh).
Strictly aerobic microorganisms are generally active at positive Eh values, whereas strict anaerobes are generally active at negative Eh values. Redox affects the solubility of nutrients, especially metal ions.
There are organisms that can adjust their metabolism to their environment, such as facultative anaerobes. Facultative anaerobes can be active at positive Eh values, and at negative Eh values in the presence of oxygen-bearing inorganic compounds, such as nitrates and sulfates.
In biochemistry, apparent standard reduction potentials, or formal potentials (E°′, noted with a prime mark in superscript), calculated at pH 7, closer to the pH of biological and intracellular fluids, are used to more easily assess whether a given biochemical redox reaction is possible. They must not be confused with the common standard reduction potentials determined under standard conditions (25 °C; 1 atm) with the concentration of each dissolved species being taken as 1 M, and thus pH = 0.
Environmental chemistry
In the field of environmental chemistry, the reduction potential is used to determine if oxidizing or reducing conditions are prevalent in water or soil, and to predict the states of different chemical species in the water, such as dissolved metals. pe values in water range from -12 to 25; the levels where the water itself becomes reduced or oxidized, respectively.
The reduction potentials in natural systems often lie comparatively near one of the boundaries of the stability region of water. Aerated surface water, rivers, lakes, oceans, rainwater and acid mine water, usually have oxidizing conditions (positive potentials). In places with limitations in air supply, such as submerged soils, swamps and marine sediments, reducing conditions (negative potentials) are the norm. Intermediate values are rare and usually a temporary condition found in systems moving to higher or lower pe values.
In environmental situations, it is common to have complex non-equilibrium conditions between a large number of species, meaning that it is often not possible to make accurate and precise measurements of the reduction potential. However, it is usually possible to obtain an approximate value and define the conditions as being in the oxidizing or reducing regime.
In the soil there are two main redox constituents: 1) anorganic redox systems (mainly ox/red compounds of Fe and Mn) and measurement in water extracts; 2) natural soil samples with all microbial and root components and measurement by direct method.
Water quality
The oxido-reduction potential (ORP) can be used for the systems monitoring water quality with the advantage of a single-value measure for the disinfection potential, showing the effective activity of the disinfectant rather than the applied dose. For example, E. coli, Salmonella, Listeria and other pathogens have survival times of less than 30 seconds when the ORP is above 665 mV, compared to more than 300 seconds when ORP is below 485 mV.
A study was conducted comparing traditional parts per million (ppm) chlorination readings and ORP in Hennepin County, Minnesota. The results of this study present arguments in favor of the inclusion of an ORP above 650 mV in the local health regulation codes.
Geochemistry and mineralogy
Eh–pH (Pourbaix) diagrams are commonly used in mining and geology for assessment of the stability fields of minerals and dissolved species. Under the conditions where a mineral (solid) phase is predicted to be the most stable form of an element, these diagrams show that mineral. As the predicted results are all from thermodynamic (at equilibrium state) evaluations, these diagrams should be used with caution. Although the formation of a mineral or its dissolution may be predicted to occur under a set of conditions, the process may practically be negligible because its rate is too slow. Consequently, kinetic evaluations at the same time are necessary. Nevertheless, the equilibrium conditions can be used to evaluate the direction of spontaneous changes and the magnitude of the driving force behind them.
See also
Electrochemical potential
Electrolytic cell
Electromotive force
Fermi level
Galvanic cell
Oxygen radical absorbance capacity
Pourbaix diagram
Redox
Redox gradient
Solvated electron
Standard electrode potential
Table of standard electrode potentials
Standard apparent reduction potentials in biochemistry at pH 7
References
External links
Online Calculator Redoxpotential ("Redox Compensation")
Notes
Additional notes
External links
Redox potential exercices in biological systems
Oxidizing and Reducing Agents in Redox Reactions
Electrochemical concepts | Reduction potential | [
"Chemistry"
] | 3,214 | [
"Electrochemistry",
"Electrochemical concepts"
] |
2,234,210 | https://en.wikipedia.org/wiki/Electrospray | The name electrospray is used for an apparatus that employs electricity to disperse a liquid or for the fine aerosol resulting from this process. High voltage is applied to a liquid supplied through an emitter (usually a glass or metallic capillary). Ideally the liquid reaching the emitter tip forms a Taylor cone, which emits a liquid jet through its apex. Varicose waves on the surface of the jet lead to the formation of small and highly charged liquid droplets, which are radially dispersed due to Coulomb repulsion.
History
In the late 16th century William Gilbert set out to describe the behaviour of magnetic and electrostatic phenomena. He observed that, in the presence of a charged piece of amber, a drop of water deformed into a cone. This effect is clearly related to electrosprays, even though Gilbert did not record any observation related to liquid dispersion under the effect of the electric field.
In 1750 the French clergyman and physicist Jean-Antoine (Abbé) Nollet noted water flowing from a vessel would aerosolize if the vessel was electrified and placed near electrical ground.
In 1882, Lord Rayleigh theoretically estimated the maximum amount of charge a liquid droplet could carry; this is now known as the "Rayleigh limit". His prediction that a droplet reaching this limit would throw out fine jets of liquid was confirmed experimentally more than 100 years later.
In 1914, John Zeleny published work on the behaviour of fluid droplets at the end of glass capillaries. This report presents experimental evidence for several electrospray operating regimes (dripping, burst, pulsating, and cone-jet). A few years later, Zeleny captured the first time-lapse images of the dynamic liquid meniscus.
Between 1964 and 1969 Sir Geoffrey Ingram Taylor produced the theoretical underpinning of electrospraying. Taylor modeled the shape of the cone formed by the fluid droplet under the effect of an electric field; this characteristic droplet shape is now known as the Taylor cone. He further worked with J. R. Melcher to develop the "leaky dielectric model" for conducting fluids.
The number of publications about electrospray started rising significantly around 1990 (as shown in the figure on the right) when John Fenn (2002 Nobel Prize in Chemistry) and others discovered electrospray ionization for mass spectrometry.
Mechanism
To simplify the discussion, the following paragraphs will address the case of a positive electrospray with the high voltage applied to a metallic emitter. A classical electrospray setup is considered, with the emitter situated at a distance from a grounded counter-electrode. The liquid being sprayed is characterized by its viscosity, surface tension, conductivity, and relative permittivity.
Effect of small electric fields on liquid menisci
Under the effect of surface tension, the liquid meniscus assumes a semi-spherical shape at the tip of the emitter. Application of the positive voltage V induces the electric field:

E = 2V / ( r ln(4d/r) )

where r is the liquid radius of curvature and d is the distance to the counter-electrode. This field leads to liquid polarization: the negative/positive charge carriers migrate toward/away from the electrode where the voltage is applied. At voltages below a certain threshold, the liquid quickly reaches a new equilibrium geometry with a smaller radius of curvature.
The Taylor cone
Voltages above the threshold draw the liquid into a cone. Sir Geoffrey Ingram Taylor described the theoretical shape of this cone based on the assumptions that (1) the surface of the cone is equipotential and (2) the cone exists in a steady-state equilibrium. To meet both of these criteria the electric field must have azimuthal symmetry and must vary as R^(−1/2) to balance the surface tension and produce the cone. The solution to this problem is:

V = V0 + A R^(1/2) P_(1/2)(cos θ)

where V = V0 (the equipotential surface) exists at a value of θ0 (regardless of R), producing an equipotential cone. The angle necessary for V = V0 for all R values is a zero of the Legendre polynomial of order 1/2, P_(1/2)(cos θ0). There is only one zero between 0 and π, at 130.7099°, which is the complement of Taylor's now-famous 49.3° angle.
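The 130.7099° figure can be reproduced numerically by locating the zero of the Legendre function of order 1/2. The sketch below assumes SciPy is available; its lpmv routine evaluates the function, and the bracketing interval is chosen by inspection.

```python
# Locate the zero of P_{1/2}(cos(theta)) whose complement is Taylor's cone angle.
import math
from scipy.special import lpmv      # associated Legendre: lpmv(order m, degree v, x)
from scipy.optimize import brentq

def p_half(theta_deg):
    return lpmv(0, 0.5, math.cos(math.radians(theta_deg)))

theta0 = brentq(p_half, 90.0, 170.0)   # P_{1/2} changes sign once in this interval
print(f"zero at {theta0:.4f} deg; complement {180.0 - theta0:.4f} deg")
# ~130.71 deg and ~49.29 deg
```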
Singularity development
The apex of the conical meniscus cannot become infinitely small. A singularity develops when the hydrodynamic relaxation time becomes larger than the charge relaxation time; the relevant symbols are the characteristic length and the vacuum permittivity. Due to intrinsic varicose instability, the charged liquid jet ejected through the cone apex breaks into small charged droplets, which are radially dispersed by the space charge.
Closing the electrical circuit
The charged liquid is ejected through the cone apex and captured on the counter electrode as charged droplets or positive ions. To balance the charge loss, the excess negative charge is neutralized electrochemically at the emitter. Imbalances between the amount of charge generated electrochemically and the amount of charge lost at the cone apex can lead to several electrospray operating regimes. For cone-jet electrosprays, the potential at the metal/liquid interface self-regulates to generate the same amount of charge as that lost through the cone apex.
Applications
Electrospray ionization
Electrospray became widely used as ionization source for mass spectrometry after the Fenn group successfully demonstrated its use as ion source for the analysis of large biomolecules.
Liquid metal ion source
A liquid metal ion source (LMIS) uses electrospray in conjunction with liquid metal to form ions. Ions are produced by field evaporation at tip of the Taylor cone. Ions from a LMIS are used in ion implantation and in focused ion beam instruments.
Electrospinning
Similarly to the standard electrospray, the application of high voltage to a polymer solution can result in the formation of a cone-jet geometry. If the jet turns into very fine fibers instead of breaking into small droplets, the process is known as electrospinning.
Colloid thrusters
Electrospray techniques are used as low thrust electric propulsion rocket engines to control satellites, since the fine-controllable particle ejection allows precise and effective thrust.
Deposition of particles for nanostructures
Electrospray may be used in nanotechnology, for example to deposit single particles on surfaces. This is done by spraying colloids on average containing only one particle per droplet. The solvent evaporates, leaving an aerosol stream of single particles of the desired type. The ionizing property of the process is not crucial for the application but may be used in electrostatic precipitation of the particles.
Deposition of ions as precursors for nanoparticles and nanostructures
Instead of depositing pre-formed nanoparticles, nanoparticles and nanostructures can also be fabricated in situ by depositing metal ions at desired locations. Electrochemical reduction of the ions to atoms, followed by in situ assembly, is believed to be the mechanism of nanostructure formation.
Fabrication of drug carriers
Electrospray has garnered attention in the field of drug delivery, and it has been used to fabricate drug carriers including polymer microparticles used in immunotherapy as well as lipoplexes used for nucleic acid delivery. The sub-micrometer-sized drug particles created by electrospray possess increased dissolution rates, thus increased bioavailability due to the increased surface area. The side-effects of drugs can thus be reduced, as smaller dosage is enough for the same effect.
Air purifiers
Electrospray is used in some air purifiers. Particulate suspended in air can be charged by aerosol electrospray, manipulated by an electric field, and collected on a grounded electrode. This approach minimizes the production of ozone which is common to other types of air purifiers.
See also
Flow focusing
References
Electric and magnetic fields in matter
Industrial equipment
Aerosols | Electrospray | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,596 | [
"Electric and magnetic fields in matter",
"Colloids",
"Materials science",
"Aerosols",
"Condensed matter physics",
"nan"
] |
2,234,333 | https://en.wikipedia.org/wiki/Data%20%28computer%20science%29 | In computer science, data (treated as singular, plural, or as a mass noun) is any sequence of one or more symbols; datum is a single symbol of data. Data requires interpretation to become information. Digital data is data that is represented using the binary number system of ones (1) and zeros (0), instead of analog representation. In modern (post-1960) computer systems, all data is digital.
Data exists in three states: data at rest, data in transit and data in use. Data within a computer, in most cases, moves as parallel data. Data moving to or from a computer, in most cases, moves as serial data. Data sourced from an analog device, such as a temperature sensor, may be converted to digital using an analog-to-digital converter. Data representing quantities, characters, or symbols on which operations are performed by a computer are stored and recorded on magnetic, optical, electronic, or mechanical recording media, and transmitted in the form of digital electrical or optical signals. Data pass in and out of computers via peripheral devices.
Physical computer memory elements consist of an address and a byte/word of data storage. Digital data are often stored in relational databases, like tables or SQL databases, and can generally be represented as abstract key/value pairs. Data can be organized in many different types of data structures, including arrays, graphs, and objects. Data structures can store data of many different types, including numbers, strings and even other data structures.
Characteristics
Metadata helps translate data to information. Metadata is data about the data. Metadata may be implied, specified or given.
Data relating to physical events or processes will have a temporal component. This temporal component may be implied. This is the case when a device such as a temperature logger receives data from a temperature sensor. When the temperature is received it is assumed that the data has a temporal reference of now. So the device records the date, time and temperature together. When the data logger communicates temperatures, it must also report the date and time as metadata for each temperature reading.
Fundamentally, computers follow a sequence of instructions they are given in the form of data. A set of instructions to perform a given task (or tasks) is called a program. A program is data in the form of coded instructions to control the operation of a computer or other machine. In the nominal case, the program, as executed by the computer, will consist of machine code. The elements of storage manipulated by the program, but not actually executed by the central processing unit (CPU), are also data. At its most essential, a single datum is a value stored at a specific location. Therefore, it is possible for computer programs to operate on other computer programs, by manipulating their programmatic data.
To store data bytes in a file, they have to be serialized in a file format. Typically, programs are stored in special file types, different from those used for other data. Executable files contain programs; all other files are data files. However, executable files may also contain data that is built into the program. In particular, some executable files have a data segment, which nominally contains constants and initial values for variables, both of which can be considered data.
The line between program and data can become blurry. An interpreter, for example, is a program. The input data to an interpreter is itself a program, just not one expressed in native machine language. In many cases, the interpreted program will be a human-readable text file, which is manipulated with a text editor program. Metaprogramming similarly involves programs manipulating other programs as data. Programs like compilers, linkers, debuggers, program updaters, virus scanners and such use other programs as their data.
For example, a user might first instruct the operating system to load a word processor program from one file, and then use the running program to open and edit a document stored in another file. In this example, the document would be considered data. If the word processor also features a spell checker, then the dictionary (word list) for the spell checker would also be considered data. The algorithms used by the spell checker to suggest corrections would be either machine code data or text in some interpretable programming language.
In an alternate usage, binary files (which are not human-readable) are sometimes called data as distinguished from human-readable text.
The total amount of digital data in 2007 was estimated to be 281 billion gigabytes (281 exabytes).
Data keys and values, structures and persistence
Keys in data provide the context for values. Regardless of the structure of data, there is always a key component present. Keys in data and data-structures are essential for giving meaning to data values. Without a key that is directly or indirectly associated with a value, or collection of values in a structure, the values become meaningless and cease to be data. That is to say, there has to be a key component linked to a value component in order for it to be considered data.
Data can be represented in computers in multiple ways, as per the following examples:
RAM
Random access memory (RAM) holds data that the CPU has direct access to. A CPU may only manipulate data within its processor registers or memory. This is as opposed to data storage, where the CPU must direct the transfer of data between the storage device (disk, tape...) and memory. RAM is an array of linear contiguous locations that a processor may read or write by providing an address for the read or write operation. The processor may operate on any location in memory at any time and in any order. In RAM the smallest element of data is the binary bit. The capabilities and limitations of accessing RAM are processor specific. In general, main memory is arranged as an array of locations beginning at address 0 (hexadecimal 0). Each location can usually store 8 or 32 bits, depending on the computer architecture.
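A toy sketch of this model (Python; the class and sizes are illustrative only, not a real memory implementation): memory as a linear array of byte-sized locations read and written by address:

```python
class RAM:
    """A toy model of main memory: a linear array of byte-sized locations."""

    def __init__(self, size):
        self.cells = bytearray(size)  # locations 0 .. size-1, each holding 8 bits

    def write(self, address, value):
        self.cells[address] = value & 0xFF  # one byte per location

    def read(self, address):
        return self.cells[address]

memory = RAM(1024)
memory.write(0x10, 200)   # the processor may write any location, in any order
print(memory.read(0x10))  # -> 200
```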
Keys
Data keys need not be a direct hardware address in memory. Indirect, abstract and logical key codes can be stored in association with values to form a data structure. Data structures have predetermined offsets (or links or paths) from the start of the structure, in which data values are stored. Therefore, the data key consists of the key to the structure plus the offset (or links or paths) into the structure. When such a structure is repeated, storing variations of the data values and the data keys within the same repeating structure, the result can be considered to resemble a table, in which each element of the repeating structure is considered to be a column and each repetition of the structure is considered as a row of the table. In such an organization of data, the data key is usually a value in one (or a composite of the values in several) of the columns.
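A minimal illustration (Python; the field names are invented): each repetition of the structure is a row, each field a column, and the value in one column acts as the logical data key, resolved without any hardware address:

```python
# Each repetition of the structure is one "row"; each field is a "column".
# The value in the "part_no" column serves as the data key for the row.
inventory = [
    {"part_no": "A-100", "description": "bolt", "qty": 250},
    {"part_no": "A-101", "description": "nut",  "qty": 400},
]

def lookup(table, key_column, key_value):
    """Resolve a logical key to a row without knowing any hardware address."""
    for row in table:
        if row[key_column] == key_value:
            return row
    return None

print(lookup(inventory, "part_no", "A-101"))  # -> the matching row
```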
Organised recurring data structures
The tabular view of repeating data structures is only one of many possibilities. Repeating data structures can be organised hierarchically, such that nodes are linked to each other in a cascade of parent-child relationships. Values and potentially more complex data-structures are linked to the nodes. Thus the nodal hierarchy provides the key for addressing the data structures associated with the nodes. This representation can be thought of as an inverted tree. Modern computer operating system file systems are a common example; and XML is another.
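A small sketch (Python; the node names are hypothetical) of such a nodal hierarchy, where the chain of node names forms the key that addresses a value, much as a file-system path does:

```python
# A nested dictionary as an "inverted tree": each nesting level is a node,
# and the chain of node names is the key that addresses a value.
tree = {
    "home": {
        "alice": {"notes.txt": "buy milk"},
        "bob": {"todo.txt": "fix bike"},
    }
}

def resolve(root, path):
    """Follow a parent/child path such as 'home/alice/notes.txt' to its value."""
    node = root
    for name in path.split("/"):
        node = node[name]
    return node

print(resolve(tree, "home/alice/notes.txt"))  # -> 'buy milk'
```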
Sorted or ordered data
Data has some inherent features when it is sorted on a key. All the values for subsets of the key appear together. When passing sequentially through groups of the data with the same key, or a subset of the key changes, this is referred to in data processing circles as a break, or a control break. It particularly facilitates the aggregation of data values on subsets of a key.
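An illustrative sketch (Python; the records are invented): with the data sorted on the key, each change of key is a control break at which an aggregate for the key group can be emitted:

```python
from itertools import groupby

# Records sorted on the key (region); a change of key is a "control break".
sales = [
    ("east", 100), ("east", 250),
    ("west", 75),  ("west", 300), ("west", 50),
]

for region, group in groupby(sales, key=lambda rec: rec[0]):
    subtotal = sum(amount for _, amount in group)
    print(region, subtotal)   # aggregate emitted at each control break
```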
Peripheral storage
Until the advent of bulk non-volatile memory like flash, persistent data storage was traditionally achieved by writing the data to external block devices like magnetic tape and disk drives. These devices typically seek to a location on the magnetic media and then read or write blocks of data of a predetermined size. In this case, the seek location on the media is the data key and the blocks are the data values. Early file systems that used raw disk data, and early disc operating systems, reserved contiguous blocks on the disc drive for data files. In those systems, the files could fill up, running out of data space before all the data had been written to them. Thus much unused data space was reserved unproductively to ensure adequate free space for each file. Later file systems introduced partitions. They reserved blocks of disc data space for partitions and used the allocated blocks more economically, by dynamically assigning blocks of a partition to a file as needed. To achieve this, the file system had to keep track of which blocks were used or unused by data files in a catalog or file allocation table. Though this made better use of the disc data space, it resulted in fragmentation of files across the disc, and a concomitant performance overhead due to additional seek time to read the data. Modern file systems reorganize fragmented files dynamically to optimize file access times. Further developments in file systems resulted in the virtualization of disc drives, i.e. where a logical drive can be defined as partitions from a number of physical drives.
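A highly simplified sketch (Python; not any real file system, and the block count and file names are arbitrary) of a file allocation table that assigns blocks to files dynamically, showing how a file's blocks end up non-contiguous (fragmented):

```python
# A miniature "file allocation table": which blocks of the device belong
# to which file. Blocks are assigned dynamically, so a file's blocks need
# not be contiguous (fragmentation).
TOTAL_BLOCKS = 16
free_blocks = list(range(TOTAL_BLOCKS))
allocation_table = {}  # file name -> list of block numbers

def append_block(filename):
    block = free_blocks.pop(0)                 # grab the next unused block
    allocation_table.setdefault(filename, []).append(block)
    return block

append_block("a.dat")
append_block("b.dat")
append_block("a.dat")
print(allocation_table)   # a.dat's blocks are no longer contiguous
```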
Indexed data
Retrieving a small subset of data from a much larger set may imply inefficiently searching through the data sequentially. Indexes are a way to copy out keys and location addresses from data structures in files, tables and data sets, then organize them using inverted tree structures to reduce the time taken to retrieve a subset of the original data. In order to do this, the key of the subset of data to be retrieved must be known before retrieval begins. The most popular indexes are the B-tree and the dynamic hash key indexing methods. Indexing is overhead for filing and retrieving data. There are other ways of organizing indexes, e.g. sorting the keys and using a binary search algorithm.
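A minimal sketch (Python; the data set is illustrative) of the sorted-keys approach: the index copies out (key, location) pairs so a record can be found by binary search instead of a sequential scan:

```python
import bisect

# An index is a separate, ordered list of (key, location) pairs copied out of
# the data, so a key can be found without scanning the data sequentially.
records = ["rec-17", "rec-03", "rec-42", "rec-28"]             # the data set
index = sorted((rec, pos) for pos, rec in enumerate(records))  # (key, address)
keys = [k for k, _ in index]

def find(key):
    i = bisect.bisect_left(keys, key)            # binary search on the keys
    if i < len(keys) and keys[i] == key:
        return records[index[i][1]]
    return None

print(find("rec-28"))  # retrieved without a sequential scan
```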
Abstraction and indirection
Object-oriented programming uses two basic concepts for understanding data and software:
The taxonomic rank-structure of classes, which is an example of a hierarchical data structure; and
at run time, the creation of references to in-memory data-structures of objects that have been instantiated from a class library.
It is only after instantiation that an object of a specified class exists. After an object's reference is cleared, the object also ceases to exist. The memory locations where the object's data was stored are garbage and are reclassified as unused memory available for reuse.
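A brief sketch (Python; the class is an invented example) of this lifecycle: the object exists only after instantiation, and once its reference is cleared its storage becomes garbage available for reuse:

```python
class Reading:
    """A class (the taxonomic description) from which objects are instantiated."""
    def __init__(self, value):
        self.value = value

r = Reading(42)     # instantiation: only now does an object of the class exist
print(r.value)

r = None            # the reference is cleared; the object's memory becomes
                    # garbage and is reclaimed for reuse by the runtime
```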
Database data
The advent of databases introduced a further layer of abstraction for persistent data storage. Databases use metadata and a structured query language (SQL) protocol between client and server systems communicating over a computer network, together with a two-phase commit logging system to ensure transactional completeness when saving data.
Parallel distributed data processing
Modern scalable and high-performance data persistence technologies, such as Apache Hadoop, rely on massively parallel distributed data processing across many commodity computers on a high bandwidth network. In such systems, the data is distributed across multiple computers and therefore any particular computer in the system must be represented in the key of the data, either directly, or indirectly. This enables the differentiation between two identical sets of data, each being processed on a different computer at the same time.
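As a minimal illustration (Python; the node and record identifiers are invented), the node identity can be made an explicit part of each record's key so that otherwise identical data held on different computers remain distinguishable:

```python
# In a distributed system the computer (node) holding a record becomes part
# of the record's key, so identical data on two nodes remain distinguishable.
cluster_data = {
    ("node-1", "order-1001"): {"total": 99.0},
    ("node-2", "order-1001"): {"total": 99.0},  # same logical data, different node
}

for (node, order_id), value in cluster_data.items():
    print(node, order_id, value)
```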
See also
Big data
Data
Data dictionary
Data modeling
Data stream
Data set
Database index
State (computer science)
Tuple
References | Data (computer science) | [
"Technology"
] | 2,352 | [
"Computer data",
"Data"
] |
2,234,414 | https://en.wikipedia.org/wiki/Monkey%20and%20hunter | In physics, the monkey and hunter is a hypothetical scenario often used to illustrate the effect of gravity on projectile motion. It can be presented as exercise problem or as a demonstration.
The essentials of the problem are stated in many introductory guides to physics. In essence, the problem is as follows: A hunter with a blowgun goes out in the woods to hunt for monkeys and sees one hanging in a tree. The monkey releases its grip the instant the hunter fires his blowgun. Where should the hunter aim in order to hit the monkey?
Discussion
To answer this question, recall that according to Galileo's law, all objects fall with the same constant acceleration of gravity (about 9.8 metres per second per second near the Earth's surface), regardless of the object's weight. Furthermore, horizontal motions and vertical motions are independent: gravity acts only upon an object's vertical velocity, not upon its velocity in the horizontal direction. The hunter's dart, therefore, falls with the same acceleration as the monkey.
Assume for the moment that gravity was not at work. In that case, the dart would proceed in a straight-line trajectory at a constant speed (Newton's first law). Gravity causes the dart to fall away from this straight-line path, making a trajectory that is in fact a parabola. Now, consider what happens if the hunter aims directly at the monkey, and the monkey releases his grip the instant the hunter fires. Because the force of gravity accelerates the dart and the monkey equally, they fall the same distance in the same time: the monkey falls from the tree branch, and the dart falls the same distance from the straight-line path it would have taken in the absence of gravity. Therefore, the dart will always hit the monkey, no matter the initial speed of the dart and no matter the acceleration of gravity, provided the dart reaches the monkey before either of them strikes the ground.
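A short numerical check (Python; the dart speed, distances and launch geometry are arbitrary illustrative values) that the dart and the monkey fall through equal distances and therefore meet:

```python
import math

g = 9.8              # gravitational acceleration (m/s^2)
v0 = 30.0            # dart speed (m/s) -- illustrative value
dx, dy = 40.0, 10.0  # monkey's position relative to the blowgun (m) -- assumed

theta = math.atan2(dy, dx)        # aim straight at the monkey
t = dx / (v0 * math.cos(theta))   # time for the dart to cover the horizontal distance

dart_y   = v0 * math.sin(theta) * t - 0.5 * g * t * t
monkey_y = dy - 0.5 * g * t * t   # monkey drops from rest at the same instant

print(abs(dart_y - monkey_y) < 1e-9)  # True: both have fallen the same amount
```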
Another way of looking at the problem is by a transformation of the reference frame. Earlier the problem was stated in a reference frame in which the Earth is motionless. However, for very small distances on the surface of Earth the acceleration due to gravity can be considered constant to good approximation. Therefore, the same acceleration g acts upon both the dart and the monkey throughout the fall. Transform the reference frame to one that accelerates downward at g with respect to the Earth's reference frame (which is to say, taking upward as positive, the acceleration of the new frame with respect to the Earth is −g). Because of Galilean equivalence, the (approximately) constant gravitational field (approximately) disappears, leaving only the constant straight-line velocity of the dart and the stationary monkey.
In this reference frame the hunter should aim straight at the monkey, since the monkey is stationary. Since angles are invariant under transformations of reference frames, transforming back to the Earth's reference frame the result is still that the hunter should aim straight at the monkey. While this approach has the advantage of making the results intuitively obvious, it suffers from the slight logical blemish that the laws of classical mechanics are not postulated within the theory to be invariant under transformations to non-inertial (accelerated) reference frames (see also principle of relativity).
References
Physics experiments
Thought experiments in physics
Kinematics | Monkey and hunter | [
"Physics",
"Technology"
] | 647 | [
"Machines",
"Kinematics",
"Physical phenomena",
"Physics experiments",
"Classical mechanics",
"Physical systems",
"Motion (physics)",
"Mechanics",
"Experimental physics"
] |
2,234,471 | https://en.wikipedia.org/wiki/Comparison%20of%20office%20suites | The following tables compare general and technical information for a number of office suites:
General information
Platforms listed are for when a local application is available that does not require network connectivity to function.
Office Suite names that are on a light purple background are discontinued.
OS support
The operating systems the office suites were designed to run on without emulation; for the given office suite/OS combination, there are five possibilities:
No indicates that it does not exist or was never released.
Partial indicates that the office suite lacks important functionality and it is still being developed, or it is online only so requires network connectivity to function.
Beta indicates that while a version of the office suite is fully functional and has been released, it is still in development (e.g. for stability).
Yes indicates that the office suite has been officially released in a fully functional, stable version.
Dropped indicates that while the office suite works, new versions are no longer being released for the indicated OS; the number in parentheses is the last known stable version which was officially released for that OS.
Office Suite names that are on a light purple background are discontinued.
Supported file formats
Office Suite names that are on a light purple background are discontinued.
Main components
Office Suite names that are on a light purple background are discontinued.
Online capabilities
Office Suite names that are on a light purple background are discontinued.
See also
List of office suites
Comparison of word processors
Comparison of spreadsheet software
Notes
References
Office suites | Comparison of office suites | [
"Technology"
] | 290 | [
"Software comparisons",
"Computing comparisons"
] |
2,234,553 | https://en.wikipedia.org/wiki/Adverse%20event | In pharmaceuticals, an adverse event (AE) is any untoward medical occurrence in a patient or clinical investigation subject administered a pharmaceutical product and which does not necessarily have a causal relationship with this treatment. An adverse event can therefore be any unfavourable and unintended symptom or sign (including an abnormal laboratory finding) or disease temporally associated with the use of a medicinal (investigational) product, whether or not related to the medicinal (investigational) product.
AEs in patients participating in clinical trials must be reported to the study sponsor and if required could be reported to the local ethics committee. Adverse events categorized as "serious" (results in death, illness requiring hospitalization, events deemed life-threatening, results in persistent or significant incapacity, a congenital anomaly or medically important condition) must be reported to the regulatory authorities immediately, whereas non-serious adverse events are merely documented in the annual summary sent to the regulatory authority.
The sponsor collects AE reports from the local researchers, and notifies all participating sites of the AEs at the other sites, as well as both the local investigators' and the sponsors' judgment of the seriousness of the AEs. This process allows the sponsor and all the local investigators access to a set of data that might suggest potential problems with the study treatment while the study is still ongoing.
Types of adverse events
All clinical trials have the potential to produce AEs. AEs are classified as serious or non-serious; expected or unexpected; and study-related, possibly study-related, or not study-related.
For example, while a study that tests the effectiveness of a new blood pressure cuff for a period of 10 minutes might seem innocuous, the potential exists for the patient's skin to be irritated by the device. Patients in that study might also die during that 10-minute period. Both skin irritation and sudden death would be considered AEs. In this case, the skin irritation would be classified as not serious, unexpected, and possibly study-related. The death would be classified as serious and unexpected (unless the patient was already at death's door). The local researcher would use his/her medical judgment to determine whether the death could have been related to the study device.
Both the skin irritation and the death are unexpected events, and should alert the researcher to the potential existence of a problem with the device (for instance, it could have malfunctioned and shocked the patient). The researcher would report these AEs to the local Institutional Review Board and to the sponsor, and await direction on whether to stop the study. If the researcher feels there is an imminent danger posed by the device, he or she can use medical discretion to stop patients from participating in the study.
An adverse event can also be declared in the normal treatment of a patient which is suspected of being caused by the medication being taken or a medical device used in the treatment of the patient.
In Australia, 'adverse event' refers generically to medical errors of all kinds, whether surgical, medical or nursing related. The most recent available official study (1995) indicated that 18,000 deaths per year are a result of hospital care. The Medical Error Action Group is lobbying for legislation to improve the reporting of AEs and, through quality control, to minimize needless deaths.
Reporting of adverse events
Researchers participating in a clinical trial must report all adverse events to the drug regulatory authority of the respective country where the drug or device is to be registered [e.g. Food and Drug Administration (FDA) if it is US]. Serious AEs must be reported immediately; minor AEs are 'bundled' by the sponsor and submitted later.
The type of method used to elicit AEs reported by individuals for evidence on likely adverse drug reactions (ADRs) influences the extent and nature of the data. A 2018 review found that some participants in clinical drug trials were asked simple open questions (e.g. 'how are you feeling?'), while in other trials, participants were given lengthy questionnaires about physical symptoms (e.g. 'do you experience muscle soreness or headaches?'). A 2022 review on adverse events in human challenge trials found that reporting improved over time, but remains non-standardized in ways that make comparisons difficult.
As there is a lack of consensus on how AEs should be assessed, there is a concern that the kinds of questions and the phrasing of questions may lead to measurement error and impede comparisons between studies and pooled analysis. However, Allen et al. concluded that the impact of the AE detected by different methods is unclear.
Grades of AE
Clinical trial results often report the number of grade 3 and grade 4 adverse events.
Grades are defined:
Grade 1 Mild AE
Grade 2 Moderate AE
Grade 3 Severe AE
Grade 4 Life-threatening or disabling AE
Grade 5 Death related to AE
Databases of adverse events
Medical devices
The FDA provides a database for reporting of adverse medical device events called the Manufacturer and User Facility Device Experience Database (MAUDE). The data consist of voluntary reports since June 1993, user facility reports since 1991, distributor reports since 1993, and manufacturer reports since August 1996, and are open for public view. Two private companies have also recently started providing access to analyzed adverse event information: Clarimed provides adverse event information for medical devices and AdverseEvents provides adverse event data for drugs.
MAUDE is incomplete. KFF Health News of the Kaiser Family Foundation reported discovering a secret set of at least 1.1 million adverse events hidden in a database not known to professionals familiar with MAUDE. Some device AEs reported to the FDA can only be found in the MDR Data Files of the Device Experience Network (DEN) or in Alternative Summary Report (ASR) data received by the FDA.
See also
Clinical trial
Complication (medicine)
Good clinical practice (GCP)
Data monitoring committees
Serious adverse event
Adverse effect
Adverse drug reaction
Adverse vaccine event
Adverse outcome pathway
Antibody-dependent enhancement
Pharmacovigilance
EudraVigilance (European Union AE reporting and evaluating network)
Clinical Trials Directive (Directive 2001/20/EC by European Union)
Vaccine-associated enhanced respiratory disease
Adverse Event Reporting System
Yellow Card Scheme
References
External links
ClinicalTrials.gov from US National Library of Medicine
ICH Website
FDA Website
Clinical research
Pharmaceutical industry | Adverse event | [
"Chemistry",
"Biology"
] | 1,274 | [
"Pharmaceutical industry",
"Pharmacology",
"Life sciences industry"
] |
2,234,566 | https://en.wikipedia.org/wiki/Serious%20adverse%20event | In drug development, serious adverse event (SAE) is defined as any untoward medical occurrence during a human drug trial that at any dose
Results in death
Is life-threatening
Requires inpatient hospitalization or causes prolongation of existing hospitalization
Results in persistent or significant disability/incapacity
May have caused a congenital anomaly/birth defect
Requires intervention to prevent permanent impairment or damage
The term "life-threatening" in the definition of "serious" refers to an event in which the patient was at risk of death at the time of the event; it does not refer to an event which hypothetically might have caused death if it were more severe. Adverse events are more broadly defined by international regulation as “Any untoward medical occurrence in a patient or clinical investigation subject administered a pharmaceutical product and which does not necessarily have to have a causal relationship with this treatment.”
Research
Investigators in human clinical trials are obligated to report these events in clinical study reports. Research suggests that these events are often inadequately reported in publicly available reports. Because of the lack of these data and uncertainty about methods for synthesising them, individuals conducting systematic reviews and meta-analyses of therapeutic interventions often unknowingly overemphasise health benefit. To balance the overemphasis on benefit, scholars have called for more complete reporting of harm from clinical trials.
Related terms
Serious adverse reactions are serious adverse events judged to be related to drug therapy. A SUSAR (suspected unexpected serious adverse reaction) should be reported to a drug regulatory authority under an investigational license by using the CIOMS form (or in some countries an equivalent form). "Unexpected" means that for an authorised (approved) medicinal product that the event is not described in the product's labeling, or in the case of an investigational (yet to be approved or disapproved) product that the event is not listed in the Investigator’s Brochure. That is, the AE is unexpected for the drug or device.
An adverse effect is an adverse event which is believed to be caused by a health intervention.
Footnotes
See also
Adverse event, including mild/minor
Clinical trial
Good clinical practice (GCP)
Data Monitoring Committees
Pharmacovigilance
EudraVigilance (European Union)
Directive 2001/20/EC (European Union)
TGN1412, a drug that had SAEs in a clinical trial
BIA 10-2474, a drug that had SAEs, including a fatality, in a clinical trial
External links
What Is A Serious Adverse Event? (MedWatch)
ClinicalTrials.gov from US National Library of Medicine
ICH Website
Clinical research
Clinical trials
Drug safety | Serious adverse event | [
"Chemistry"
] | 538 | [
"Drug safety"
] |
2,234,574 | https://en.wikipedia.org/wiki/Epilogue | An epilogue or epilog (from Greek ἐπίλογος epílogos, "conclusion" from ἐπί epi, "in addition" and λόγος logos, "word") is a piece of writing at the end of a work of literature, usually used to bring closure to the work. It is presented from the perspective of within the story. When the author steps in and speaks directly to the reader, that is more properly considered an afterword. The opposite is a prologue—a piece of writing at the beginning of a work of literature or drama, usually used to open the story and capture interest. Some genres, for example television programs and video games, call the epilogue an "outro" patterned on the use of "intro" for "introduction".
Epilogues are usually set in the future, after the main story is completed. Within some genres it can be used to hint at the next installment in a series of work. It is also used to satisfy the reader's curiosity and to cover any loose ends of the story.
History of the term
The first known use of the word epilogue was in the 15th century and it was used as a concluding section for literary work.
In Middle English and Middle French the term "epilogue" was used. In Latin they used epilogus, from Greek epilogos, and then epilegein.
The first citation of the word epilogue in the Oxford English Dictionary is from 1564: "Now at length you are come to the Epilogue (as it were) or full conclusion of your worke." Prior to this the OED only refers to Caxton's term ‘Epylogacion’ in 1474, ‘The Epylogacion and recapitulation of this book’. However, this term was not widely followed and instead ‘conclusion’ was the term used to introduce final words of a text. The first example of a dramatic epilogue in print is John Phillip's The Play of Patient and Meek Grissell (1569). Although in non-dramatic publications, the word does appear prior to this such as in Turbervile's Epitaphs (1566).
The word epilogue could be adopted to describe the end of speeches within medieval plays, but at the time this was primarily used to hint at the connection to later works. Most Greek plays would end with lines from the Chorus, which was different to the epilogues of early modern playwrights as well as Ancient Roman plays.
American author Henry James has said the epilogue is a place that distributes last "prizes, pensions, husbands, wives, babies, millions, appended paragraphs and cheerful remarks." The word epilogue has also been described as a "Parthian Dart" by Pat Rogers. He said it has the potential to be excessive for some readers as it has a "shift in tense, and a jump in tempo-an accelerando" which quickly changes to a "loosening of the temporal screw, enabling us to move rapidly ahead over a period of years". Epilogues also consisted of traditional "topoi", a concept introduced by Aristotle, used to symbolise writers creating arguments in their story.
In literature
An epilogue is the final chapter at the end of a story that often serves to reveal the fates of the characters. Some epilogues may feature scenes only tangentially related to the subject of the story. They can be used to hint at a sequel or wrap up all the loose ends. They can occur at a significant period of time after the main plot has ended. In some cases, the epilogue is used to allow the main character a chance to "speak freely".
An epilogue can continue in the same narrative style and perspective as the preceding story, although the form of an epilogue can occasionally be drastically different from the overall story. It can also be used as a sequel.
Books
For example, in Margaret Atwood's The Handmaid's Tale the epilogue is a transcript of a symposium at a university in the Arctic, held in 2195. The majority of the epilogue is a speech given by a professor named Pieixoto who is an expert on the area of Gilead where The Handmaid's Tale takes place. In the epilogue the land of Gilead has long gone and the main character Offred has her story published. The story is her perspective of the past events within the novel and Offred titles her publication the eponym, ‘The Handmaid's Tale.’
In George Orwell's Animal Farm, the epilogue is used to satisfy the curiosity of the readers by revealing a utopic ending to the characters in the Manor Farm many years after the revolution. "YEARS passed. The seasons came and went, the short animal lives fled by. A time came when there was no one who remembered the old days before the Rebellion, except Clover, Benjamin, Moses the raven, and a number of the pigs."
Horror and suspense novels
The epilogue can be used to reveal an approaching threat for the character. Readers may believe that the villain has been dealt with, but the epilogue will suggest that this is not entirely true, adding to the horror and mystery of the story.
Children's fantasy
In children's fantasy it has a particular purpose. It can serve as a reassuring ending to calm fears about a possible bad outcome. This is seen in the Harry Potter Saga where the characters have a happy ending as they are much older and with families. This provides comfort for readers who may have anticipated a bad outcome for them. Epilogues also serve as a transitory stage for the novel to change genres into myths and legends.
Plays
In Greek and Elizabethan plays, an actor would stand at the front of the stage and speak directly to the audience. They would usually show the contentment the characters have after experiencing the tragedies within the play. If the hero has had a tragic ending, the speaker of the epilogue would provide the moral lesson that the audience can learn from after the hero's poor moral choices were made.
Elizabethan plays
For example, in Shakespeare's epilogue in Romeo and Juliet, the speaker is providing the moral lesson and the consequences for the audience to take away.
"A glooming peace this morning with it brings;
The sun for sorrow will not show his head.
Go hence to have more talk of these sad things,
Some shall be pardoned, and some punished,
For never was a story of more woe
Than this of Juliet and her Romeo."
In As You Like It, the epilogue is said by Rosalind which shows her content.
"… and I charge you, O men, for the love you bear to women—as I perceive by your simpering, none of you hates them—that between you and the women the play may please. If I were a woman, I would kiss as many of you as had beards that pleased me, complexions that liked me, and breaths that I defied not. And I am sure as many as have good beards, or good faces, or sweet breaths will, for my kind offer, when I make curtsy, bid me farewell."
Epilogues were more frequently delivered by actors. As the epilogue would frame the end of the play it would allow the speaker to both simultaneously perform and reflect on the character. In combining both the speaker's persona and character, Felicity Nussbaum called this the "double consciousness". This invites the audience to reflect on each moment and its meaning behind it. Within tragedies the female epilogues were the most popular, and it would often challenge the integrity of the play. For example, Tyrannick Love took the main female character, who had often undergone tragedy, and reconceptualised her to be a comedian in the epilogue. The female character was adorned in the same costumes that she wore in Act 5 and the speaker would combine her two entities, the tragic role within the main play and her humourized public persona, when speaking in the epilogue.
Many writers would contribute their epilogues to other writers' plays. This would often be out of friendship. Other epilogues were designated as "written by a person of quality" or "sent from an unknown hand". During the period between 1660 and 1714, such outside writers supplied both prologues and epilogues 229 times in England.
Epilogues would often focus to ensure the audience will return by pointing out the play's worth in the closing lines. There have also been linkages between epilogues and prayers and how they are often synonymous with each other when concluding pieces of literature.
Greek plays
Most Greek plays would end with lines said by the Chorus. They usually consist of two lines which encapsulate the moral observation of the play. Nine of Euripides's plays have a deus ex machina and another three "end with a mortal who takes on the superhuman powers of a deus".
Roman plays
Roman plays have notably shorter epilogues, which mainly consist of pleas for a plaudite. This is seen in Terence's extant plays and Plautus's comedies. Plautus's plays Trinummus, Poenulus, Persa, Miles Gloriosus and Curculio all end with pleas for applause. This is to involve the audience by asking them to participate in the applause. However, in Epidicus the epilogue further states 'Give us your applause… and stretch your limbs and rise' and in Stichus 'Give us your applause, and then have a party of your own at home.' This was a strategy to disengage the audience, by smoothly transitioning them to the outside world to regain their sense of reality.
The role of gender
Both prologues and epilogues would typically give women agency by allowing them to perform comedies and receive audience applause. Women playwrights, actors and feminist work are a particular focus within Restoration culture and especially Restoration theatre. The work of the English playwrights Aphra Behn, Delarivier Manley, Mary Pix and Catherine Trotter is examined to understand Restoration theatre's relationship with women.
Gayle Rubin's "The Traffic in Women" and Laura Mulvey's "Visual Pleasure and Narrative Cinema" have sparked critical interest in cross-cultural feminism and have increased female attendance at the theatre. David Roberts's pioneering The Ladies: Female Patronage of Restoration Drama (1989) argues that the theatres were hostile to female spectators and that prologues and epilogues contributed to that environment.
Michael Gavin argues the opposite and has stated that players' direct addresses to women indicated the value of female audiences, which occurred more frequently when the players were female. It also happened to be one of the first times that women interacted with each other in a public forum.
Jean Marsden has found that few plays also focus on female sexuality. Epilogues spoken by women to women provide stimulating material. They would often encourage female fantasy and critique male sexual performance. This woman-to-woman paratext allows females speakers and audiences to be perpetuating sexuality which would not often be spoken of within the main structure of the play.
Between 1660 and 1714 a total of 115 prologues and epilogues would feature actors either addressing female audiences or stating facts about the sex. One epilogue written by R. Boyle to Mr Anthony claims that poets try but fail to craft male characters that women find attractive. Epilogues often raise the topic of virtue, but when addressing female audiences, they would typically praise the play's lack of virtue to make it a selling point. For example, the epilogue to Ravenscroft's The Citizen Turn'd Gentleman (1672) tries to appeal to women by demoting virtue, as the speaker says that to win the ladies' favour Ravenscroft will become "the greatest debauchee". In the epilogue to Thomas Wright's 1693 comedy, The Female Vertuoso, Susannah Mountfort sneers that older ladies "boast of Virtue 'cause unfit for Vice".
Some late Restoration epilogues claim that English women in comparison to women from other countries, possess more liberties, are better behaved, and enjoy happier lives. Examples include Francis Manning's All for the Better (1702) epilogue which states that English women have superior breeding over women from Madrid. Anne Bracegirdle's epilogue to Shadwell's The Amorous Bigotte (1690) claims that even though Spanish women may be wiser, English women are happier because they are not afraid that their husbands will find out about their lovers.
In opera
Because commenting on past action is inherently undramatic, few operas have epilogues, even those with prologues. Among those explicitly called epilogues are the concluding scenes of Stravinsky's The Rake's Progress and Offenbach's The Tales of Hoffmann. Other operas whose final scenes could be described as epilogues are Mozart's Don Giovanni, Mussorgsky's Boris Godunov, and Delius's Fennimore and Gerda.
In film
In films, the final scenes may feature a montage of images or clips with a short explanation of what happens to the characters. A few examples of such films are 9 to 5, American Graffiti, Changeling, Four Weddings and a Funeral, Harry Potter and the Deathly Hallows – Part 2, National Lampoon's Animal House, Babe: Pig in the City, Happy Feet Two, Remember the Titans and Zack Snyder's Justice League.
The epilogue of La La Land shows a happy ending, an alternative to the actual ending.
In many documentaries and biopics, the epilogue is text-based, explaining what happened to the subjects after the events covered in the film.
In video games
In video games, epilogues can occur at the end of the story, usually after the credits have rolled. An epilogue in a game functions similarly to an epilogue in film and literature, providing closure to the end of a story. However, the way in which a video game epilogue is interacted with can then determine how the story ends in works of fiction that contain multiple endings. For example, there are four possible endings to the 2012 video game Spec Ops: The Line, and three of the endings are chosen by what the player does in the epilogue. Epilogues may also be presented as peripheral downloadable content or expansion packs that supplements the main content or campaign mode of a game.
In video games that allow the permanent death of playable characters, an epilogue can chronicle what happened to the playable characters who survived and depict how their situation has changed after the story has ended. For example, the 2015 video game Until Dawn features characters who survived (if any) recounting their experiences to the police after being rescued. This system can also be expanded; relationships can be built between characters in most games of the Fire Emblem series, allowing for unique outcomes for characters depending on the actions of the player throughout the campaign.
A visual novel can also feature a type of epilogue, which will wrap up all of the scenarios encountered by a player, most often after the game has been fully completed by reaching all of the multiple endings; as is the case with Tsukihime, featuring an epilogue that expands on the endings of all completable routes, as well as providing context for the rest of the game by explaining events in the prologue.
See also
Conclusion
Prologue
References
External links
Book design
Literature
Fiction
Style (fiction)
Early Modern women writers
Drama | Epilogue | [
"Engineering"
] | 3,133 | [
"Book design",
"Design"
] |
2,234,718 | https://en.wikipedia.org/wiki/Lockring | A lock ring is a threaded washer used to prevent components from becoming loose during rotation. They are found on an adjustable bottom bracket and a track hub of a bicycle.
Lokring is another form of fastener used in the automotive and air conditioning industries: these fittings are often confused with lockrings.
See also
References
Fasteners
Bicycle parts | Lockring | [
"Engineering"
] | 72 | [
"Construction",
"Fasteners"
] |
2,234,766 | https://en.wikipedia.org/wiki/Chlorodifluoromethane | Chlorodifluoromethane or difluoromonochloromethane is a hydrochlorofluorocarbon (HCFC). This colorless gas is better known as HCFC-22, or R-22, or . It was commonly used as a propellant and refrigerant. These applications were phased out under the Montreal Protocol in developed countries in 2020 due to the compound's ozone depletion potential (ODP) and high global warming potential (GWP), and in developing countries this process will be completed by 2030. R-22 is a versatile intermediate in industrial organofluorine chemistry, e.g. as a precursor to tetrafluoroethylene.
Production and current applications
Worldwide production of R-22 in 2008 was about 800 Gg per year, up from about 450 Gg per year in 1998, with most production in developing countries. R-22 use is being phased out in developing countries, where it is largely used for air conditioning applications.
R-22 is prepared from chloroform:
CHCl3 + 2 HF → CHClF2 + 2 HCl
An important application of R-22 is as a precursor to tetrafluoroethylene. This conversion involves pyrolysis to give difluorocarbene, which dimerizes:
2 CHClF2 → C2F4 + 2 HCl
The compound also yields difluorocarbene upon treatment with strong base and is used in the laboratory as a source of this reactive intermediate.
The pyrolysis of R-22 in the presence of chlorofluoromethane gives hexafluorobenzene.
Environmental effects
R-22 is often used as an alternative to the highly ozone-depleting CFC-11 and CFC-12, because of its relatively low ozone depletion potential of 0.055, among the lowest for chlorine-containing haloalkanes. However, even this lower ozone depletion potential is no longer considered acceptable.
As an additional environmental concern, R-22 is a powerful greenhouse gas with a GWP equal to 1810 (i.e., 1810 times as powerful as carbon dioxide). Hydrofluorocarbons (HFCs) are often substituted for R-22 because of their lower ozone depletion potential, but these refrigerants often have a higher GWP. R-410A, for example, is often substituted, but has a GWP of 2088. Another substitute is R-404A with a GWP of 3900. Other substitute refrigerants are available with low GWP. Ammonia (R-717), with a GWP of <1, remains a popular substitute on fishing vessels and large industrial applications. Ammonia's toxicity in high concentrations limits its application in small-scale refrigeration.
Propane (R-290) is another example, and has a GWP of 3. Propane was the de facto refrigerant in systems smaller than industrial scale before the introduction of CFCs. The reputation of propane refrigerators as a fire hazard kept delivered ice and the icebox the overwhelming consumer choice, despite their inconvenience and higher cost, until safe CFC systems overcame the negative perceptions of refrigerators. Illegal to use as a refrigerant in the US for decades, propane is now permitted for use in limited mass suitable for small refrigerators. It is not lawful to use in air conditioners or larger refrigerators because of its flammability and potential for explosion.
Phaseout in the European Union
Since 1 January 2010, it has been illegal to use newly manufactured HCFCs to service refrigeration and air-conditioning equipment; only reclaimed and recycled HCFCs may be used. In practice this means that the gas has to be removed from the equipment before servicing and replaced afterwards, rather than refilling with new gas.
Since 1 January 2015, it has been illegal to use any HCFCs to service refrigeration and air-conditioning equipment; broken equipment that used HCFC refrigerants must be replaced with equipment that does not use them.
Phaseout in the United States
R-22 was mostly phased out in new equipment in the United States by regulatory action by the EPA under the Significant New Alternatives Program (SNAP) by rules 20 and 21 of the program, due to its high global warming potential. The EPA program was consistent with the Montreal Accords, but international agreements must be ratified by the US Senate to have legal effect. A 2017 decision of the US Court of Appeals for the District of Columbia Circuit held that the US EPA lacked authority to regulate the use of R-22 under SNAP. In essence the court ruled the EPA's statutory authority was for ozone reduction, not global warming. The EPA subsequently issued guidance to the effect that the EPA would no longer regulate R-22. A 2018 ruling by the same court held that the EPA failed to conform with required procedure when it issued its guidance pursuant to the 2017 ruling, voiding the guidance, but not the prior ruling that required it. The refrigeration and air conditioning industry had already discontinued production of new R-22 equipment. The practical effect of these rulings is to reduce the cost of imported R-22 to maintain aging equipment, extending its service life, while preventing the use of R-22 in new equipment.
R-22, retrofit using substitute refrigerants
The energy efficiency and system capacity of systems designed for R-22 are slightly greater when using R-22 than with the available substitutes.
R-407A is for use in low- and medium-temp refrigeration. Uses a polyolester (POE) oil.
R-407C is for use in air conditioning. Uses a minimum of 20 percent POE oil.
R-407F and R-407H are for use in medium- and low-temperature refrigeration applications (supermarkets, cold storage, and process refrigeration); direct expansion system design only. They use a POE oil.
R-421A is for use in "air conditioning split systems, heat pumps, supermarket pak systems, dairy chillers, reach-in storage, bakery applications, refrigerated transport, self-contained display cabinets, and walk-in coolers". Uses mineral oil (MO), Alkylbenzene (AB), and POE.
R-422B is for use in low-, medium-, and high-temperature applications. It is not recommended for use in flooded applications.
R-422C is for use in medium- and low-temperature applications. The TXV power element will need to be changed to a 404A/507A element and critical seals (elastomers) may need to be replaced.
R-422D is for use in low-temp applications, and is mineral oil compatible.
R-424A is for use in air conditioning as well as medium-temperature refrigeration, in the range of 20 to 50 °F. It works with MO, alkylbenzene (AB), and POE oils.
R-427A is for use in air conditioning and refrigeration applications. It does not require all the mineral oil to be removed. It works with MO, AB, and POE oils.
R-434A is for use in water cooled and process chillers for air conditioning and medium- and low-temperature applications. It works with MO, AB, and POE oils.
R-438A (MO-99) is for use in low-, medium-, and high-temperature applications. It is compatible with all lubricants.
R-458A is for use in air conditioning and refrigeration applications, without capacity or efficiency loss. Works with MO, AB, and POE oils.
R-32 or HFC-32 (difluoromethane) is for use in air conditioning and refrigeration applications. It has zero ozone depletion potential (ODP) and a global warming potential (GWP) 675 times that of carbon dioxide.
Physical properties
It has two solid polymorphs: crystalline phase II below 59 K, and crystalline phase I above 59 K and below 115.73 K.
Price history and availability
The EPA's analysis indicated that the amount of existing inventory was between 22,700 t and 45,400 t.
In 2012 the EPA reduced the amount of R-22 by 45%, causing the price to rise by more than 300%. For 2013, the EPA has reduced the amount of R-22 by 29%.
References
https://www.iiar.org/
External links
MSDS from DuPont
Data at Integrated Risk Information System: IRIS 0657
CDC – NIOSH Pocket Guide to Chemical Hazards – Chlorodifluoromethane
Phase change data at webbook.nist.gov
IR absorption spectra
IARC summaries and evaluations: Vol. 41 (1986), Suppl. 7 (1987), Vol. 71 (1999)
Halomethanes
Hydrochlorofluorocarbons
Greenhouse gases
Refrigerants
Propellants
Airsoft
IARC Group 3 carcinogens
Ozone-depleting chemical substances | Chlorodifluoromethane | [
"Chemistry",
"Environmental_science"
] | 1,922 | [
"Greenhouse gases",
"Harmful chemical substances",
"Environmental chemistry",
"Ozone-depleting chemical substances"
] |
2,234,982 | https://en.wikipedia.org/wiki/Message-waiting%20indicator | In telephony, a message-waiting indicator (MWI) is a Telcordia Technologies (formerly Bellcore) term for an FSK-based telephone calling feature that illuminates an LED on selected telephones to notify a telephone user of waiting voicemail messages on most North American public telephone networks and PBXs.
As described in Telcordia Generic Requirements document GR-283-CORE, a Message Waiting Indicator (MWI) is a mechanism that informs the subscriber about the status of recorded messages. The subscriber may subscribe to a notification feature that makes use of the status of this MWI.
Difference to audible message waiting indicators
This feature is also frequently called (and abbreviated) as visual message waiting indicator (VMWI). A VMWI, as defined in Telcordia GR-1401-CORE, is a stored program controlled switching (SPCS) system feature that activates and deactivates a visual indicator on customer-premises equipment (CPE) to notify the customer that new messages are waiting. VMWI differs from existing features that use other message indicators, such as audible stuttered dial tone, in that it activates a visual indicator on the CPE. The visual indicator may be as simple as lighting or flashing a light-emitting diode (LED), or as advanced as displaying a special message on a liquid-crystal display (LCD).
This technology was invented by Jerome (Jerry) Schull and Wayne Howe at BellSouth's Advanced Technology R&D group in 1992 and was issued as US Patents #5,363,431 and #5,521,964. It was introduced in 1995, with the introduction of CLASS-based calling features and ADSI. It was at one time only compatible with ADSI-compliant telephones but is now compatible with any customer premises equipment (CPE) that simply responds visually to visual FSK.
Use of similar indicator on standalone Caller ID set-top boxes
This service is often erroneously associated with the abilities of most Caller ID standalone set-top boxes. Caller ID boxes manufactured after 1998 feature an LED that blinks green to notify that new calls have been recorded and red to indicate that a subscriber has new voicemail messages waiting. Some units also display the text "MESSAGE WAITING" (similar to ADSI-compliant telephones). These units do not use visual FSK to activate their red LEDs; instead, they briefly "pick up" the line at certain intervals (normally within two minutes of a new call) to check for a "stuttered" dial tone. The presence of a stutter dial tone activates the red LED, while its absence deactivates it.
Mobile telephony
For mobile phones, the message-waiting indicator is sent via a Short Message Service (SMS) message — the same system used for texting. (SMS was actually invented for utilitarian uses like this, not for user conversation.) It not only indicates that a message is waiting, but also how many unheard messages there are on the voicemail server for that telephone number. In the event that a phone is deactivated, out of range, or otherwise removed from the network, there is often no way to clear this indicator until another message is recorded to the system, causing the MWI message to be sent again once the phone reconnects. The customer service call center for the mobile network operator may also be able to provide simple technical support to reset this by resending the MWI message manually.
See also
Telecommunications equipment
Telephone service enhanced features
Mobile technology
Voicemail | Message-waiting indicator | [
"Technology"
] | 747 | [
"nan"
] |
2,234,985 | https://en.wikipedia.org/wiki/Zome%20%28architecture%29 | A zome is a building designed using geometries different from of a series of rectangular boxes, used in a typical house or building. The word zome was coined in 1968 by Nooruddeen Durkee (then Steve Durkee), combining the words dome and zonohedron. One of the earliest models became a large climbing structure at the Lama Foundation.
Origin
Following his education at Amherst College and UCLA, Steve Baer studied mathematics at Eidgenössische Technische Hochschule (Zurich, Switzerland). Here he became interested in the possibilities of building innovative structures using polyhedra. Baer and his wife, Holly, moved back to the U.S., settling in Albuquerque, New Mexico in the early 1960s. In New Mexico, he experimented with constructing buildings of unusual geometries (calling them by his friend Steve Durkee's term: "zomes" — see "Drop City") — buildings intended to be appropriate to their environment, notably to utilize solar energy well. He was fascinated with the dome geometry popularized by architect R. Buckminster Fuller. Baer was an occasional guest at Drop City, an arts and experimental community near Trinidad, CO. He wanted to design and construct buildings that didn't suffer from some of the limitations of the smaller, owner-built versions of geodesic domes (of the 'pure Fuller' design).
Gallery
Later developments
In recent years, the unconventional "zome" building-design approach with its multi-faceted geometric lines has been taken up by French builders in the Pyrenees. Home Work, a book published in 2004 and edited by Lloyd Kahn, has a section featuring these buildings. While many zomes built in the last couple decades have been wood-framed and made use of wood sheathing, much of what Baer himself originally designed and constructed involved metal framing with a sheet-metal outer skin.[citation needed]
Zomes have also been used in art, sculpture, and furniture. Zomadic, based in San Francisco, CA and founded by Rob Bell, incorporates zome geometry into artistic structures constructed primarily from CNC machined plywood components. Bell is a frequent attendee at Burning Man, a yearly artistic showcase event located in the Black Rock Desert of Nevada.[citation needed]
Richie Duncan of Kodama Zomes, based in southern Oregon has invented a structural system based on a hanging zome geometry, suspended from an overhead anchor point. Constructed of metal compressive elements and webbing tensile elements, the structures are able to be assembled and disassembled. This suspended zome system has been used in furniture, performing arts, and treehouse applications.[citation needed]
Yann Lipnick of Zomadic Concepts in France has an extensive study of, and multiple project construction of zomes in many different materials. He highlights the universal appeal and healing atmosphere that zomes provide and has training classes and reference books on zome construction.[citation needed]
References
Further reading
Baer, Steve (1970). Zome Primer. Zomeworks Corporation.
Hart, George (2021) "The Joy of Polar Zonohedra". Proc. Bridges 2021. pp. 7–14.
Hildebrandt, Paul; Richert, Clark (2012). "Domes, Zomes, and Drop City". Proc. Bridges 2012.
Sadler, Simon (2006). "Drop City Revisited". Journal of Architectural Education 59 (3), pp. 5–14.
Sadler, Simon (2012). "Diagrams of Countercultural Architecture". Design and Culture, 4 (3), pp. 345–367.
Wester, Cass (1973). "Steve Baer and Holly Baer: Dome Home Enthusiasts". Mother Earth News 22.
External links
Examples of zome usage in North American prefabricated housing construction
Examples of European zome buildings (in French)
The zome building concept explained
ZOME Energy Networks
Kodama Zomes
Zomadic
Heliss (in French)
Software
Z5omes - A web application to create 3D models of zomes in GoodKarma.
Zome modelisation - Open-source sketchup plugin.
Zome Creator - Open-source code for Zome modelisation software.
Building
Building engineering | Zome (architecture) | [
"Engineering"
] | 888 | [
"Building",
"Building engineering",
"Construction",
"Civil engineering",
"Architecture"
] |
2,235,037 | https://en.wikipedia.org/wiki/Existential%20graph | An existential graph is a type of diagrammatic or visual notation for logical expressions, created by Charles Sanders Peirce, who wrote on graphical logic as early as 1882, and continued to develop the method until his death in 1914. They include both a separate graphical notation for logical statements and a logical calculus, a formal system of rules of inference that can be used to derive theorems.
Background
Peirce found the algebraic notation (i.e. symbolic notation) of logic, especially that of predicate logic, which was still very new during his lifetime and which he himself played a major role in developing, to be philosophically unsatisfactory, because the symbols had their meaning by mere convention. In contrast, he strove for a style of writing in which the signs literally carry their meaning within them – in the terminology of his theory of signs: a system of iconic signs that resemble the objects and relations they represent.
Thus, the development of an iconic, graphic and – as he intended – intuitive and easy-to-learn logical system was a project that Peirce worked on throughout his life. After at least one aborted approach – the "Entitative Graphs" – the closed system of "Existential Graphs" finally emerged from 1896 onwards. Although considered by their creator to be a clearly superior and more intuitive system, as a mode of writing and as a calculus, they had no major influence on the history of logic. This has been attributed to two factors: Peirce published little on the topic, and what he did publish was not written in a very accessible way; moreover, in the hands of experts, linear formula notation is actually the less cumbersome tool. Hence, the existential graphs received little attention or were seen as unwieldy. From 1963 onwards, works by Don D. Roberts and J. Jay Zeman, in which Peirce's graphic systems were systematically examined and presented, led to a better understanding; even so, they have today found practical use in only one modern application, the conceptual graphs introduced by John F. Sowa in 1976, which are used in computer science to represent knowledge. However, existential graphs are increasingly reappearing as a subject of research in connection with a growing interest in graphical logic, which is also expressed in attempts to replace the rules of inference given by Peirce with more intuitive ones.
The overall system of existential graphs is composed of three subsystems that build on each other, the alpha graphs, the beta graphs and the gamma graphs. The alpha graphs are a purely propositional logical system. Building on this, the beta graphs are a first order logical calculus. The gamma graphs, which have not yet been fully researched and were not completed by Peirce, are understood as a further development of the alpha and beta graphs. When interpreted appropriately, the gamma graphs cover higher-level predicate logic as well as modal logic. As late as 1903, Peirce began a new approach, the "Tinctured Existential Graphs," with which he wanted to replace the previous systems of alpha, beta and gamma graphs and combine their expressiveness and performance in a single new system. Like the gamma graphs, the "Tinctured Existential Graphs" remained unfinished.
As calculi, the alpha, beta and gamma graphs are sound (i.e., all expressions derived as graphs are semantically valid). The alpha and beta graphs are also complete (i.e., all propositional or predicate-logically semantically valid expressions can be derived as alpha or beta graphs).
The graphs
Peirce proposed three systems of existential graphs:
alpha, isomorphic to propositional logic and the two-element Boolean algebra;
beta, isomorphic to first-order logic with identity, with all formulas closed;
gamma, (nearly) isomorphic to normal modal logic.
Alpha nests in beta and gamma. Beta does not nest in gamma, since quantified modal logic is more general than anything Peirce put forth.
Alpha
The syntax is:
The blank page;
Single letters or phrases written anywhere on the page;
Any graph may be enclosed by a simple closed curve called a cut or sep. A cut can be empty. Cuts can nest and concatenate at will, but must never intersect.
Any well-formed part of a graph is a subgraph.
The semantics are:
The blank page denotes Truth;
Letters, phrases, subgraphs, and entire graphs may be True or False;
To enclose a subgraph with a cut is equivalent to logical negation or Boolean complementation. Hence an empty cut denotes False;
All subgraphs within a given cut are tacitly conjoined.
Hence the alpha graphs are a minimalist notation for sentential logic, grounded in the expressive adequacy of And and Not. The alpha graphs constitute a radical simplification of the two-element Boolean algebra and the truth functors.
The depth of an object is the number of cuts that enclose it.
Rules of inference:
Insertion - Any subgraph may be inserted at an odd-numbered depth, that is, in an area enclosed by an odd number of cuts.
Erasure - Any subgraph at an even-numbered depth (including depth 0, the unenclosed page) may be erased.
Rules of equivalence:
Double cut - A pair of cuts with nothing between them may be drawn around any subgraph. Likewise two nested cuts with nothing between them may be erased. This rule is equivalent to Boolean involution and double negation elimination.
Iteration/Deiteration – To understand this rule, it is best to view a graph as a tree structure having nodes and ancestors. Any subgraph P in node n may be copied into any node depending on n. Likewise, any subgraph P in node n may be erased if there exists a copy of P in some node ancestral to n (i.e., some node on which n depends). For an equivalent rule in an algebraic context, see C2 in Laws of Form.
A proof manipulates a graph by a series of steps, with each step justified by one of the above rules. If a graph can be reduced by steps to the blank page or an empty cut, it is what is now called a tautology (or the complement thereof, a contradiction). Graphs that cannot be simplified beyond a certain point are analogues of the satisfiable formulas of first-order logic.
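To make the alpha conventions concrete, here is a minimal sketch (one possible encoding in Python, not Peirce's own notation; the nested-list representation and the function names are choices made purely for illustration) that evaluates a graph under the semantics above and applies the double-cut equivalence rule:

```python
# An alpha graph is modelled as a list of items drawn on one area;
# an item is either a propositional letter (str) or a cut: ("cut", [subgraph]).

def evaluate(graph, assignment):
    """Juxtaposition on an area is conjunction; a cut is negation; the empty page is True."""
    result = True
    for item in graph:
        if isinstance(item, str):                  # a propositional letter
            result = result and assignment[item]
        else:                                      # ("cut", subgraph)
            _, sub = item
            result = result and (not evaluate(sub, assignment))
    return result

def remove_double_cuts(graph):
    """Equivalence rule: a cut whose sole content is another cut may be erased."""
    simplified = []
    for item in graph:
        if isinstance(item, str):
            simplified.append(item)
        else:
            _, sub = item
            sub = remove_double_cuts(sub)
            if len(sub) == 1 and not isinstance(sub[0], str):
                simplified.extend(sub[0][1])       # splice the inner area into this one
            else:
                simplified.append(("cut", sub))
    return simplified

# "P implies Q", scribed as a cut containing P together with a cut containing Q:
implication = [("cut", ["P", ("cut", ["Q"])])]
print(evaluate(implication, {"P": True, "Q": False}))   # False, as expected
print(remove_double_cuts([("cut", [("cut", ["Q"])])]))  # ['Q'] - the double cut is erased
```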
Beta
In the case of the beta graphs, the atomic expressions are no longer propositional letters (P, Q, R,...) or statements ("It rains," "Peirce died in poverty"), but predicates in the sense of predicate logic, possibly abbreviated to predicate letters (F, G, H,...). A predicate in this sense is a sequence of words with clearly defined blank spaces that becomes a declarative sentence if a proper name is inserted into each space. For example, the word sequence "_ is a human" is a predicate because it gives rise to the declarative sentence "Peirce is a human" if the proper name "Peirce" is entered in the blank space. Likewise, the word sequence "_1 is richer than _2" is a predicate, because it results in the statement "Socrates is richer than Plato" if the proper names "Socrates" and "Plato" are inserted into the respective spaces.
Notation of betagraphs
The basic language device is the line of identity, a thickly drawn line of any form. The identity line docks onto the blank space of a predicate to show that the predicate applies to at least one individual. In order to express that the predicate "_ is a human being" applies to at least one individual – i.e. to say that there is (at least) one human being – one writes an identity line in the blank space of the predicate "_ is a human being".
The beta graphs can be read as a system in which all formulas are to be taken as closed, because all variables are implicitly quantified. If the "shallowest" part of a line of identity has even (odd) depth, the associated variable is tacitly existentially (universally) quantified.
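As an illustration of these readings (the graphs themselves are not reproduced here, only their intended translations): a line of identity attached to "_ is a human" on the unenclosed sheet has its shallowest point at even depth 0 and is read existentially, whereas enclosing that whole graph in a single cut puts the line's shallowest point at odd depth 1 and yields the universal (negated existential) reading:

\exists x \, \mathrm{Human}(x)
\qquad
\neg \exists x \, \mathrm{Human}(x) \;\equiv\; \forall x \, \neg \mathrm{Human}(x)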
Zeman (1964) was the first to note that the beta graphs are isomorphic to first-order logic with equality (also see Zeman 1967). However, the secondary literature, especially Roberts (1973) and Shin (2002), does not agree on just how this is so. Peirce's writings do not address this question, because first-order logic was first clearly articulated only after his death, in the 1928 first edition of David Hilbert and Wilhelm Ackermann's Principles of Mathematical Logic.
Gamma
Add to the syntax of alpha a second kind of simple closed curve, written using a dashed rather than a solid line. Peirce proposed rules for this second style of cut, which can be read as the primitive unary operator of modal logic.
Zeman (1964) was the first to note that the gamma graphs are equivalent to the well-known modal logics S4 and S5. Hence the gamma graphs can be read as a peculiar form of normal modal logic. This finding of Zeman's has received little attention to this day, but is nonetheless included here as a point of interest.
Peirce's role
The existential graphs are a curious offspring of Peirce the logician/mathematician with Peirce the founder of a major strand of semiotics. Peirce's graphical logic is but one of his many accomplishments in logic and mathematics. In a series of papers beginning in 1867, and culminating with his classic paper in the 1885 American Journal of Mathematics, Peirce developed much of the two-element Boolean algebra, propositional calculus, quantification and the predicate calculus, and some rudimentary set theory. Model theorists consider Peirce the first of their kind. He also extended De Morgan's relation algebra. He stopped short of metalogic (which eluded even Principia Mathematica).
But Peirce's evolving semiotic theory led him to doubt the value of logic formulated using conventional linear notation, and to prefer that logic and mathematics be notated in two (or even three) dimensions. His work went beyond Euler's diagrams and Venn's 1880 revision thereof. Frege's 1879 work Begriffsschrift also employed a two-dimensional notation for logic, but one very different from Peirce's.
Peirce's first published paper on graphical logic (reprinted in Vol. 3 of his Collected Papers) proposed a system dual (in effect) to the alpha existential graphs, called the entitative graphs. He very soon abandoned this formalism in favor of the existential graphs. In 1911 Victoria, Lady Welby showed the existential graphs to C. K. Ogden who felt they could usefully be combined with Welby's thoughts in a "less abstruse form." Otherwise they attracted little attention during his life and were invariably denigrated or ignored after his death, until the PhD theses by Roberts (1964) and Zeman (1964).
See also
Nor operator
Conceptual graph
Charles Sanders Peirce
Propositional calculus
References
Further reading
Primary literature
1931–1935 & 1958. The Collected Papers of Charles Sanders Peirce. Volume 4, Book II: "Existential Graphs", consists of paragraphs 347–584. A discussion also begins in paragraph 617.
Paragraphs 347–349 (II.1.1. "Logical Diagram")—Peirce's definition "Logical Diagram (or Graph)" in Baldwin's Dictionary of Philosophy and Psychology (1902), v. 2, p. 28. Classics in the History of Psychology Eprint.
Paragraphs 350–371 (II.1.2. "Of Euler's Diagrams")—from "Graphs" (manuscript 479) c. 1903.
Paragraphs 372–584 Eprint.
Paragraphs 372–393 (II.2. "Symbolic Logic")—Peirce's part of "Symbolic Logic" in Baldwin's Dictionary of Philosophy and Psychology (1902) v. 2, pp. 645–650, beginning (near second column's top) with "If symbolic logic be defined...". Paragraph 393 (Baldwin's DPP2 p. 650) is by Peirce and Christine Ladd-Franklin ("C.S.P., C.L.F.").
Paragraphs 394–417 (II.3. "Existential Graphs")—from Peirce's pamphlet A Syllabus of Certain Topics of Logic, pp. 15–23, Alfred Mudge & Son, Boston (1903).
Paragraphs 418–509 (II.4. "On Existential Graphs, Euler's Diagrams, and Logical Algebra")—from "Logical Tracts, No. 2" (manuscript 492), c. 1903.
Paragraphs 510–529 (II.5. "The Gamma Part of Existential Graphs")—from "Lowell Lectures of 1903," Lecture IV (manuscript 467).
Paragraphs 530–572 (II.6.)—"Prolegomena To an Apology For Pragmaticism" (1906), The Monist, v. XVI, n. 4, pp. 492-546. Corrections (1907) in The Monist v. XVII, p. 160.
Paragraphs 573–584 (II.7. "An Improvement on the Gamma Graphs")—from "For the National Academy of Science, 1906 April Meeting in Washington" (manuscript 490).
Paragraphs 617–623 (at least) (in Book III, Ch. 2, §2, paragraphs 594–642)—from "Some Amazing Mazes: Explanation of Curiosity the First", The Monist, v. XVIII, 1908, n. 3, pp. 416-464, see starting p. 440.
1992. "Lecture Three: The Logic of Relatives", Reasoning and the Logic of Things, pp. 146–164. Ketner, Kenneth Laine (editing and introduction), and Hilary Putnam (commentary). Harvard University Press. Peirce's 1898 lectures in Cambridge, Massachusetts.
1977, 2001. Semiotic and Significs: The Correspondence between C.S. Peirce and Victoria Lady Welby. Hardwick, C.S., ed. Lubbock TX: Texas Tech University Press. 2nd edition 2001.
A transcription of Peirce's MS 514 (1909), edited with commentary by John Sowa.
Currently, the chronological critical edition of Peirce's works, the Writings, extends only to 1892. Much of Peirce's work on logical graphs consists of manuscripts written after that date and still unpublished. Hence our understanding of Peirce's graphical logic is likely to change as the remaining 23 volumes of the chronological edition appear.
Secondary literature
Hammer, Eric M. (1998), "Semantics for Existential Graphs," Journal of Philosophical Logic 27: 489–503.
Ketner, Kenneth Laine
(1981), "The Best Example of Semiosis and Its Use in Teaching Semiotics", American Journal of Semiotics v. I, n. 1–2, pp. 47–83. Article is an introduction to existential graphs.
(1990), Elements of Logic: An Introduction to Peirce's Existential Graphs, Texas Tech University Press, Lubbock, TX, 99 pages, spiral-bound.
Queiroz, João & Stjernfelt, Frederik
(2011), "Diagrammatical Reasoning and Peircean Logic Representation", Semiotica vol. 186 (1/4). (Special issue on Peirce's diagrammatic logic.)
Roberts, Don D.
(1964), "Existential Graphs and Natural Deduction" in Moore, E. C., and Robin, R. S., eds., Studies in the Philosophy of C. S. Peirce, 2nd series. Amherst MA: University of Massachusetts Press. The first publication to show any sympathy and understanding for Peirce's graphical logic.
(1973). The Existential Graphs of C.S. Peirce. John Benjamins. An outgrowth of his 1963 thesis.
Shin, Sun-Joo (2002), The Iconic Logic of Peirce's Graphs. MIT Press.
Zalamea, Fernando. Peirce's Logic of Continuity. Docent Press, Boston MA. 2012. ISBN 9780983700494.
Part II: Peirce's Existential Graphs, pp. 76-162.
Zeman, J. J.
(1964), The Graphical Logic of C.S. Peirce. Unpublished Ph.D. thesis submitted to the University of Chicago.
(1967), "A System of Implicit Quantification," Journal of Symbolic Logic 32: 480–504.
External links
Stanford Encyclopedia of Philosophy: Peirce's Logic by Sun-Joo Shin and Eric Hammer.
Dau, Frithjof, Peirce's Existential Graphs --- Readings and Links. An annotated bibliography on the existential graphs.
Gottschall, Christian, Proof Builder — Java applet for deriving Alpha graphs.
Liu, Xin-Wen, "The literature of C.S. Peirce’s Existential Graphs" (via Wayback Machine), Institute of Philosophy, Chinese Academy of Social Sciences, Beijing, PRC.
(NB. Existential graphs and conceptual graphs.)
Van Heuveln, Bram, "Existential Graphs. " Dept. of Cognitive Science, Rensselaer Polytechnic Institute. Alpha only.
Zeman, Jay J., "Existential Graphs". With four online papers by Peirce.
Logic
Logical calculi
Philosophical logic
History of logic
History of mathematics
Charles Sanders Peirce
Logical diagrams | Existential graph | [
"Mathematics"
] | 3,752 | [
"Mathematical logic",
"Logical calculi"
] |
2,235,142 | https://en.wikipedia.org/wiki/Manning%20formula | The Manning formula or Manning's equation is an empirical formula estimating the average velocity of a liquid in an open channel flow (flowing in a conduit that does not completely enclose the liquid). However, this equation is also used for calculation of flow variables in case of flow in partially full conduits, as they also possess a free surface like that of open channel flow. All flow in so-called open channels is driven by gravity.
It was first presented by the French engineer Philippe Gauckler in 1867, and later re-developed by the Irish engineer Robert Manning in 1890.
Thus, the formula is also known in Europe as the Gauckler–Manning formula or Gauckler–Manning–Strickler formula (after Albert Strickler).
The Gauckler–Manning formula is used to estimate the average velocity of water flowing in an open channel in locations where it is not practical to construct a weir or flume to measure flow with greater accuracy. Manning's equation is also commonly used as part of a numerical step method, such as the standard step method, for delineating the free surface profile of water flowing in an open channel.
Formulation
The Gauckler–Manning formula states:

V = \frac{k}{n} R_h^{2/3} S^{1/2}

where:
V is the cross-sectional average velocity (dimension of L/T; units of ft/s or m/s);
n is the Gauckler–Manning coefficient. Units of n are often omitted; however, n is not dimensionless, having dimension of T/L^{1/3} and units of s/m^{1/3};
R_h is the hydraulic radius (L; ft, m);
S is the stream slope or hydraulic gradient, the linear hydraulic head loss (dimension of L/L, units of m/m or ft/ft); it is the same as the channel bed slope when the water depth is constant (S = h_f/L).
k is a conversion factor between SI and English units. It can be left off, as long as the units of n are noted and handled consistently. If n is kept in its traditional SI units, k is simply the dimensional factor needed to convert to English units: k = 1 for SI units, and k = 1.49 for English units. (Note: (1 m)^{1/3}/s = (3.2808399 ft)^{1/3}/s = 1.4859 ft^{1/3}/s.)
Note: the Strickler coefficient k_St is the reciprocal of the Manning coefficient: k_St = 1/n, having dimension of L^{1/3}/T and units of m^{1/3}/s; it varies from 20 m^{1/3}/s (rough stone and rough surface) to 80 m^{1/3}/s (smooth concrete and cast iron).
The discharge formula, Q = A V, can be used to rewrite Gauckler–Manning's equation by substituting for V. Solving for Q then allows an estimate of the volumetric flow rate (discharge) without knowing the limiting or actual flow velocity.
The formula can be obtained by use of dimensional analysis. In the 2000s this formula was derived theoretically using the phenomenological theory of turbulence.
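To make the formulation concrete, the following minimal sketch (illustrative only; the channel values and the function names are made up, not taken from the article) evaluates the Gauckler–Manning velocity and the corresponding discharge in SI units:

```python
import math

def manning_velocity(n, hydraulic_radius, slope, k=1.0):
    """Cross-sectional average velocity V = (k/n) * R^(2/3) * S^(1/2).
    k = 1.0 for SI units (m, s); k = 1.49 for English units (ft, s)."""
    return (k / n) * hydraulic_radius ** (2.0 / 3.0) * math.sqrt(slope)

def manning_discharge(n, area, hydraulic_radius, slope, k=1.0):
    """Volumetric flow rate from the discharge formula Q = A * V."""
    return area * manning_velocity(n, hydraulic_radius, slope, k)

# Example: a concrete-lined channel (n about 0.013), flow area 6 m^2,
# hydraulic radius 0.75 m, bed slope 0.002 m/m.
V = manning_velocity(0.013, 0.75, 0.002)
Q = manning_discharge(0.013, 6.0, 0.75, 0.002)
print(f"V = {V:.2f} m/s, Q = {Q:.1f} m^3/s")   # roughly 2.8 m/s and 17 m^3/s
```

Switching to English units only requires passing k = 1.49 and supplying the radius and area in feet-based units.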
Hydraulic radius
The hydraulic radius is one of the properties of a channel that controls water discharge. It also determines how much work the channel can do, for example, in moving sediment. All else equal, a river with a larger hydraulic radius will have a higher flow velocity, and also a larger cross sectional area through which that faster water can travel. This means the greater the hydraulic radius, the larger volume of water the channel can carry.
Based on the 'constant shear stress at the boundary' assumption, hydraulic radius is defined as the ratio of the channel's cross-sectional area of the flow to its wetted perimeter (the portion of the cross-section's perimeter that is "wet"):

R_h = \frac{A}{P}

where:
R_h is the hydraulic radius (L);
A is the cross-sectional area of flow (L^2);
P is the wetted perimeter (L).
For channels of a given width, the hydraulic radius is greater for deeper channels. In wide rectangular channels, the hydraulic radius is approximated by the flow depth.
The hydraulic radius is not half the hydraulic diameter as the name may suggest, but one quarter in the case of a full pipe. It is a function of the shape of the pipe, channel, or river in which the water is flowing.
Hydraulic radius is also important in determining a channel's efficiency (its ability to move water and sediment), and is one of the properties used by water engineers to assess the channel's capacity.
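As a quick check on the statements above, a short sketch (illustrative values; the helper names are arbitrary) computing the hydraulic radius of a rectangular channel and of a circular pipe flowing full:

```python
def hydraulic_radius_rectangular(width, depth):
    """R = A / P: only the bed and the two side walls are wetted."""
    area = width * depth
    wetted_perimeter = width + 2.0 * depth
    return area / wetted_perimeter

def hydraulic_radius_full_pipe(diameter):
    """For a pipe flowing full, R = (pi D^2 / 4) / (pi D) = D / 4."""
    return diameter / 4.0

print(hydraulic_radius_rectangular(10.0, 1.0))  # 0.833 m: a wide channel, so R is close to the depth
print(hydraulic_radius_full_pipe(1.2))          # 0.3 m: one quarter of the diameter
```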
Gauckler–Manning coefficient
The Gauckler–Manning coefficient, often denoted as n, is an empirically derived coefficient, which is dependent on many factors, including surface roughness and sinuosity. When field inspection is not possible, the best method to determine n is to use photographs of river channels where n has been determined using Gauckler–Manning's formula.
The friction coefficients across weirs and orifices are less subjective than along a natural (earthen, stone or vegetated) channel reach. Cross-sectional area, as well as n, will likely vary along a natural channel. Accordingly, more error is expected in estimating the average velocity by assuming a Manning's n, than by direct sampling (i.e., with a current flowmeter), or measuring it across weirs, flumes or orifices.
In natural streams, n values vary greatly along the reach, and will even vary in a given reach of channel with different stages of flow. Most research shows that n will decrease with stage, at least up to bank-full. Overbank n values for a given reach will vary greatly depending on the time of year and the velocity of flow. Summer vegetation will typically have a significantly higher n value due to leaves and seasonal vegetation. Research has shown, however, that n values are lower for individual shrubs with leaves than for the shrubs without leaves.
This is due to the ability of the plant's leaves to streamline and flex as the flow passes them, thus lowering the resistance to flow. High velocity flows will cause some vegetation (such as grasses and forbs) to lie flat, where a lower velocity of flow through the same vegetation will not.
In open channels, the Darcy–Weisbach equation is valid using the hydraulic diameter as equivalent pipe diameter.
It is the only sound method to estimate the energy loss in human-made open channels. For various reasons (mainly historical reasons), empirical resistance coefficients (e.g. Chézy, Gauckler–Manning–Strickler) were and are still used. The Chézy coefficient was introduced in 1768 while the Gauckler–Manning coefficient was first developed in 1865, well before the classical pipe flow resistance experiments in the 1920–1930s. Historically both the Chézy and the Gauckler–Manning coefficients were expected to be constant and functions of the roughness only. But it is now well recognised that these coefficients are only constant for a range of flow rates. Most friction coefficients (except perhaps the Darcy–Weisbach friction factor) are estimated 100% empirically and they apply only to fully rough turbulent water flows under steady flow conditions.
One of the most important applications of the Manning equation is its use in sewer design. Sewers are often constructed as circular pipes. It has long been accepted that the value of n varies with the flow depth in partially filled circular pipes. A complete set of explicit equations that can be used to calculate the depth of flow and other unknown variables when applying the Manning equation to circular pipes is available. These equations account for the variation of n with the depth of flow in accordance with the curves presented by Camp.
Authors of flow formulas
Albert Brahms (1692–1758)
Antoine de Chézy (1718–1798)
Henry Darcy (1803–1858)
Julius Ludwig Weisbach (1806–1871)
Philippe Gauckler (1826–1905)
Robert Manning (1816–1897)
Wilhelm Rudolf Kutter (1818–1888)
Henri Bazin (1843–1917)
Ludwig Prandtl (1875–1953)
Paul Richard Heinrich Blasius (1883–1970)
Albert Strickler (1887–1963)
Cyril Frank Colebrook (1910–1997)
See also
Chézy formula
Darcy–Weisbach equation
Hydraulics
Notes and references
Further reading
External links
Hydraulic Radius Design Equations Formulas Calculator
History of the Manning Formula
Manning formula calculator for several channel shapes
Manning values associated with photos
Table of values of Manning's n
Interactive demo of Manning's equation
Fluid dynamics
Hydrology
Piping
Hydraulic engineering
Sedimentology
Geomorphology | Manning formula | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,715 | [
"Hydrology",
"Building engineering",
"Chemical engineering",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Mechanical engineering",
"Environmental engineering",
"Piping",
"Hydraulic engineering",
"Fluid dynamics"
] |
2,235,160 | https://en.wikipedia.org/wiki/MNDO | MNDO, or Modified Neglect of Diatomic Overlap is a semi-empirical method for the quantum calculation of molecular electronic structure in computational chemistry. It is based on the Neglect of Diatomic Differential Overlap integral approximation. Similarly, this method replaced the earlier MINDO method. It is part of the MOPAC program and was developed in the group of Michael Dewar. It is also part of the AMPAC, GAMESS (US), PC GAMESS, GAMESS (UK), Gaussian, ORCA and CP2K programs.
Later, it was essentially replaced by two new methods, PM3 and AM1, which are similar but have different parameterisation methods.
The extension by W. Thiel's group, called MNDO/d, which adds d functions, is widely used for organometallic compounds. It is included in GAMESS (UK).
MNDOC, also from W. Thiel's group, explicitly adds correlation effects through second order perturbation theory with the parameters fitted to experiment from the correlated calculation. In this way, the method should give better results for systems where correlation is particularly important and different from that in the ground state molecules from the MNDO training set. This includes excited states and transition states. However, Cramer argues that "the model has not been compared to other NDDO models to the extent required to assess whether the formalism fulfills its potential."
References
MNDO
MNDO/d
MNDOC
Semiempirical quantum chemistry methods | MNDO | [
"Chemistry"
] | 311 | [
"Computational chemistry",
"Quantum chemistry",
"Semiempirical quantum chemistry methods"
] |
2,235,169 | https://en.wikipedia.org/wiki/Austin%20Model%201 | Austin Model 1, or AM1, is a semi-empirical method for the quantum calculation of molecular electronic structure in computational chemistry. It is based on the Neglect of Differential Diatomic Overlap integral approximation. Specifically, it is a generalization of the modified neglect of differential diatomic overlap approximation. Related methods are PM3 and the older MINDO.
AM1 was developed by Michael Dewar and co-workers and published in 1985. AM1 is an attempt to improve the MNDO model by reducing the repulsion of atoms at close separation distances. The atomic core-atomic core terms in the MNDO equations were modified through the addition of off-center attractive and repulsive Gaussian functions.
The complexity of the parameterization problem increased in AM1 as the number of parameters per atom increased from 7 in MNDO to 13-16 per atom in AM1.
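The following sketch shows only the general shape of such a correction: a sum of Gaussian terms centred at various internuclear distances, scaled by the core charges and the separation. The functional form is schematic and the numerical parameters are placeholders, not published AM1 parameter sets:

```python
import math

def gaussian_core_correction(z_a, z_b, r_ab, gaussians_a, gaussians_b):
    """Schematic AM1-style core-core correction term.
    Each atom contributes terms a * exp(-b * (r_ab - c)**2); gaussians_X is a
    list of (a, b, c) tuples. All numbers below are illustrative placeholders."""
    gaussian_sum = sum(a * math.exp(-b * (r_ab - c) ** 2)
                       for a, b, c in gaussians_a + gaussians_b)
    return (z_a * z_b / r_ab) * gaussian_sum

# Two hypothetical atoms, 1.5 angstrom apart, with made-up Gaussian parameters:
print(gaussian_core_correction(1.0, 6.0, 1.5,
                               [(0.12, 5.0, 1.2)],
                               [(0.05, 5.0, 1.8), (-0.02, 5.0, 2.1)]))
```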
The results of AM1 calculations are sometimes used as the starting points for parameterizations of forcefields in molecular modelling.
AM1 is implemented in the MOPAC, AMPAC, Gaussian, CP2K, GAMESS (US), PC GAMESS, GAMESS (UK), and SPARTAN programs.
An extension of AM1 is SemiChem Austin Model 1 (SAM1), which is implemented in the AMPAC program and which explicitly treats d-orbitals.
An extension of AM1 is AM1* that is available in VAMP software.
See also
Semi-empirical quantum chemistry methods
Notes
References
Semiempirical quantum chemistry methods | Austin Model 1 | [
"Chemistry"
] | 308 | [
"Quantum chemistry stubs",
"Quantum chemistry",
"Theoretical chemistry stubs",
"Computational chemistry",
"Physical chemistry stubs",
"Semiempirical quantum chemistry methods"
] |
2,235,200 | https://en.wikipedia.org/wiki/PM3%20%28chemistry%29 | PM3, or Parametric Method 3, is a semi-empirical method for the quantum calculation of molecular electronic structure in computational chemistry. It is based on the Neglect of Differential Diatomic Overlap integral approximation.
The PM3 method uses the same formalism and equations as the AM1 method. The only differences are:
1) PM3 uses two Gaussian functions for the core repulsion function, instead of the variable number used by AM1 (which uses between one and four Gaussians per element);
2) the numerical values of the parameters are different. The other differences lie in the philosophy and methodology used during the parameterization: whereas AM1 takes some of the parameter values from spectroscopical measurements, PM3 treats them as optimizable values.
The method was developed by J. J. P. Stewart and first published in 1989. It is implemented in the MOPAC program (of which the older versions are public domain), along with the related RM1, AM1, MNDO and MINDO methods, and in several other programs such as Gaussian, CP2K, GAMESS (US), GAMESS (UK), PC GAMESS, Chem3D, AMPAC, ArgusLab, BOSS, and SPARTAN.
The original PM3 publication included parameters for the following elements: H, C, N, O, F, Al, Si, P, S, Cl, Br, and I.
The PM3 implementation in the SPARTAN program includes PM3tm with additional extensions for transition metals supporting calculations on Ca, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Zr, Mo, Tc, Ru, Rh, Pd, Hf, Ta, W, Re, Os, Ir, Pt, and Gd. Many other elements, mostly metals, have been parameterized in subsequent work.
A model for the PM3 calculation of lanthanide complexes, called Sparkle/PM3, was also introduced.
References
For a recent review,
Semiempirical quantum chemistry methods | PM3 (chemistry) | [
"Chemistry"
] | 423 | [
"Computational chemistry",
"Quantum chemistry",
"Semiempirical quantum chemistry methods"
] |
2,235,244 | https://en.wikipedia.org/wiki/Open-channel%20flow | In fluid mechanics and hydraulics, open-channel flow is a type of liquid flow within a conduit with a free surface, known as a channel. The other type of flow within a conduit is pipe flow. These two types of flow are similar in many ways but differ in one important respect: open-channel flow has a free surface, whereas pipe flow does not, resulting in flow dominated by gravity but not hydraulic pressure.
Classifications of flow
Open-channel flow can be classified and described in various ways based on the change in flow depth with respect to time and space. The fundamental types of flow dealt with in open-channel hydraulics are:
Time as the criterion
Steady flow
The depth of flow does not change over time, or can be assumed to be constant during the time interval under consideration.
Unsteady flow
The depth of flow does change with time.
Space as the criterion
Uniform flow
The depth of flow is the same at every section of the channel. Uniform flow can be steady or unsteady, depending on whether or not the depth changes with time (although unsteady uniform flow is rare).
Varied flow
The depth of flow changes along the length of the channel. Varied flow technically may be either steady or unsteady. Varied flow can be further classified as either rapidly or gradually-varied:
Rapidly-varied flow
The depth changes abruptly over a comparatively short distance. Rapidly varied flow is known as a local phenomenon. Examples are the hydraulic jump and the hydraulic drop.
Gradually-varied flow
The depth changes over a long distance.
Continuous flow
The discharge is constant throughout the reach of the channel under consideration. This is often the case with a steady flow. This flow is considered continuous and therefore can be described using the continuity equation for continuous steady flow.
Spatially-varied flow
The discharge of a steady flow is non-uniform along a channel. This happens when water enters and/or leaves the channel along the course of flow. An example of flow entering a channel would be a road side gutter. An example of flow leaving a channel would be an irrigation channel. This flow can be described using the continuity equation for continuous unsteady flow, which requires the consideration of the time effect and includes a time element as a variable.
States of flow
The behavior of open-channel flow is governed by the effects of viscosity and gravity relative to the inertial forces of the flow. Surface tension has a minor contribution, but does not play a significant enough role in most circumstances to be a governing factor. Due to the presence of a free surface, gravity is generally the most significant driver of open-channel flow; therefore, the ratio of inertial to gravity forces is the most important dimensionless parameter. The parameter is known as the Froude number, and is defined as:

\mathrm{Fr} = \frac{U}{\sqrt{g D}}

where U is the mean velocity, D is the characteristic length scale for a channel's depth, and g is the gravitational acceleration. Depending on the effect of viscosity relative to inertia, as represented by the Reynolds number, the flow can be either laminar, turbulent, or transitional. However, it is generally acceptable to assume that the Reynolds number is sufficiently large so that viscous forces may be neglected.
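As an illustration of how the Froude number is used in practice (the subcritical/supercritical classification below is standard open-channel terminology rather than something derived in this article; the numbers are made up):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def froude_number(mean_velocity, depth):
    """Fr = U / sqrt(g * D): the ratio of inertial to gravity forces."""
    return mean_velocity / math.sqrt(G * depth)

def flow_state(fr):
    if fr < 1.0:
        return "subcritical (gravity dominates)"
    if fr > 1.0:
        return "supercritical (inertia dominates)"
    return "critical"

fr = froude_number(2.0, 1.5)          # 2 m/s mean velocity over 1.5 m of water
print(round(fr, 2), flow_state(fr))   # 0.52 subcritical (gravity dominates)
```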
Formulation
It is possible to formulate equations describing three conservation laws for quantities that are useful in open-channel flow: mass, momentum, and energy. The governing equations result from considering the dynamics of the flow velocity vector field \mathbf{u} with components (u, v, w). In Cartesian coordinates, these components correspond to the flow velocity in the x, y, and z axes respectively.
To simplify the final form of the equations, it is acceptable to make several assumptions:
The flow is incompressible (this is not a good assumption for rapidly-varied flow)
The Reynolds number is sufficiently large such that viscous diffusion can be neglected
The flow is one-dimensional along the x-axis
Continuity equation
The general continuity equation, describing the conservation of mass, takes the form:

\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0

where \rho is the fluid density and \nabla \cdot is the divergence operator. Under the assumption of incompressible flow, with a constant control volume V, this equation has the simple expression \nabla \cdot \mathbf{u} = 0. However, it is possible that the cross-sectional area A can change with both time and space in the channel. If we start from the integral form of the continuity equation:

\frac{d}{dt} \int_{V} \rho \, dV = - \oint_{S} \rho \, \mathbf{u} \cdot \hat{\mathbf{n}} \, dS

it is possible to decompose the volume integral into a cross-section and length, which leads to the form:

\frac{d}{dt} \int_{x} \left( \int_{A} \rho \, dA \right) dx = - \oint_{S} \rho \, \mathbf{u} \cdot \hat{\mathbf{n}} \, dS

Under the assumption of incompressible, 1D flow, this equation becomes:

\int_{x} \frac{\partial A}{\partial t} \, dx + \int_{x} \frac{\partial}{\partial x} \left( \int_{A} u \, dA \right) dx = 0

By noting that \int_{A} u \, dA = \bar{u} A and defining the volumetric flow rate Q = \bar{u} A, the equation is reduced to:

\int_{x} \left( \frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} \right) dx = 0

Finally, this leads to the continuity equation for incompressible, 1D open-channel flow:

\frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = 0
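As a rough illustration of what the final 1D continuity equation expresses (a toy explicit finite-volume step, purely illustrative and not a production scheme; the grid values are made up):

```python
def advance_area(A, Q, dx, dt):
    """One explicit step of dA/dt + dQ/dx = 0 on a uniform grid.
    A[i] is the wetted area of cell i; Q[i] is the discharge at the i-th cell face,
    so Q must have one more entry than A."""
    return [a - dt / dx * (Q[i + 1] - Q[i]) for i, a in enumerate(A)]

A = [5.0, 5.0, 5.0]            # m^2, three cells
Q = [10.0, 10.0, 12.0, 12.0]   # m^3/s at the four cell faces
print(advance_area(A, Q, dx=100.0, dt=10.0))  # [5.0, 4.8, 5.0]: the middle cell drains
```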
Momentum equation
The momentum equation for open-channel flow may be found by starting from the incompressible Navier–Stokes equations:

\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} = -\frac{1}{\rho} \nabla p + \nu \nabla^{2} \mathbf{u} - \nabla \Phi

where p is the pressure, \nu is the kinematic viscosity, \nabla^{2} is the Laplace operator, and \Phi = gz is the gravitational potential. By invoking the high Reynolds number and 1D flow assumptions, we have the equations:

\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = -\frac{1}{\rho} \frac{\partial p}{\partial x}
0 = -\frac{1}{\rho} \frac{\partial p}{\partial z} - g

The second equation implies a hydrostatic pressure p = \rho g (\eta - z), where the channel depth h(x,t) = \eta(x,t) - z_{b}(x) is the difference between the free surface elevation \eta and the channel bottom z_{b}. Substitution into the first equation gives:

\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} + g \frac{\partial h}{\partial x} = g S

where the channel bed slope S = -\partial z_{b}/\partial x. To account for shear stress along the channel banks, we may define the force term to be:

F = -\frac{\tau}{\rho R}

where \tau is the shear stress and R is the hydraulic radius. Defining the friction slope S_{f} = \tau / (\rho g R), a way of quantifying friction losses, leads to the final form of the momentum equation:

\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} + g \frac{\partial h}{\partial x} = g (S - S_{f})
Energy equation
To derive an energy equation, note that the advective acceleration term may be decomposed as:

(\mathbf{u} \cdot \nabla)\mathbf{u} = \boldsymbol{\omega} \times \mathbf{u} + \tfrac{1}{2} \nabla \| \mathbf{u} \|^{2}

where \boldsymbol{\omega} = \nabla \times \mathbf{u} is the vorticity of the flow and \| \cdot \| is the Euclidean norm. This leads to a form of the momentum equation, ignoring the external forces term, given by:

\frac{\partial \mathbf{u}}{\partial t} + \boldsymbol{\omega} \times \mathbf{u} = -\nabla \left( \tfrac{1}{2} \| \mathbf{u} \|^{2} + \frac{p}{\rho} + \Phi \right)

Taking the dot product of \mathbf{u} with this equation leads to:

\frac{\partial}{\partial t} \left( \tfrac{1}{2} \| \mathbf{u} \|^{2} \right) + \mathbf{u} \cdot \nabla \left( \tfrac{1}{2} \| \mathbf{u} \|^{2} + \frac{p}{\rho} + \Phi \right) = 0

This equation was arrived at using the scalar triple product \mathbf{u} \cdot (\boldsymbol{\omega} \times \mathbf{u}) = 0. Define E to be the energy density:

E = \tfrac{1}{2} \rho \| \mathbf{u} \|^{2} + \rho g z

Noting that \Phi = gz is time-independent, we arrive at the equation:

\frac{\partial E}{\partial t} + \mathbf{u} \cdot \nabla (E + p) = 0

Assuming that the energy density is time-independent and the flow is one-dimensional leads to the simplification:

E + p = C

with C being a constant; this is equivalent to Bernoulli's principle. Of particular interest in open-channel flow is the specific energy, which is used to compute the hydraulic head h that is defined as:

h = \frac{E + p}{\gamma} = z + \frac{p}{\gamma} + \frac{u^{2}}{2g}

with \gamma = \rho g being the specific weight. However, realistic systems require the addition of a head loss term to account for energy dissipation due to friction and turbulence that was ignored by discounting the external forces term in the momentum equation.
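A small numerical sketch of the head bookkeeping described above (illustrative values; here the specific energy is measured relative to the channel bottom, E = y + V^2/(2g), the usual open-channel convention):

```python
G = 9.81  # m/s^2

def specific_energy(depth, mean_velocity):
    """Depth plus velocity head: E = y + V^2 / (2 g), energy per unit weight of water."""
    return depth + mean_velocity ** 2 / (2.0 * G)

def head_loss(e_upstream, e_downstream, bed_drop):
    """Energy per unit weight not recovered between two sections, allowing for the bed drop."""
    return e_upstream + bed_drop - e_downstream

E1 = specific_energy(2.0, 1.0)   # about 2.05 m
E2 = specific_energy(1.8, 1.4)   # about 1.90 m
print(round(head_loss(E1, E2, bed_drop=0.05), 3))  # about 0.2 m lost to friction and turbulence
```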
See also
HEC-RAS
Streamflow
Fields of study
Computational fluid dynamics
Fluid dynamics
Hydraulics
Hydrology
Types of fluid flow
Laminar flow
Pipe flow
Transitional flow
Turbulent flow
Fluid properties
Froude number
Reynolds number
Viscosity
Other related articles
Chézy formula
Darcy-Weisbach equation
Hydraulic jump
Manning formula
Saint-Venant equations
Standard step method
References
Further reading
Nezu, Iehisa; Nakagawa, Hiroji (1993). Turbulence in Open-Channel Flows. IAHR Monograph. Rotterdam, NL: A.A. Balkema.
Szymkiewicz, Romuald (2010). Numerical Modeling in Open Channel Hydraulics. Water Science and Technology Library. New York, NY: Springer.
External links
Caltech lecture notes:
Derivation of the Equations of Open Channel Flow
Surface Profiles for Steady Channel Flow
Open-Channel Flow
Open Channel Flow Concepts
What is a Hydraulic Jump?
Open Channel Flow Example
Simulation of Turbulent Flows (p. 26-38)
Civil engineering
Fluid dynamics
Hydraulics
Hydraulic engineering | Open-channel flow | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,531 | [
"Hydrology",
"Chemical engineering",
"Physical systems",
"Construction",
"Hydraulics",
"Civil engineering",
"Piping",
"Hydraulic engineering",
"Fluid dynamics"
] |
2,235,270 | https://en.wikipedia.org/wiki/International%20Ultraviolet%20Explorer | International Ultraviolet Explorer (IUE or Explorer 57, formerly SAS-D) was the first space observatory primarily designed to take spectra in the ultraviolet (UV) region of the electromagnetic spectrum. The satellite was a collaborative project between NASA, the United Kingdom's Science and Engineering Research Council (SERC, formerly UKSRC) and the European Space Agency (ESA), formerly European Space Research Organisation (ESRO). The mission was first proposed in early 1964, by a group of scientists in the United Kingdom, and was launched on 26 January 1978, 17:36:00 UTC aboard a NASA Thor-Delta 2914 launch vehicle. The mission lifetime was initially set for 3 years, but in the end, it lasted 18 years, with the satellite being shut down in 1996. The switch-off occurred for financial reasons, while the telescope was still functioning at near original efficiency.
It was the first space observatory to be operated in real-time by astronomers who visited the ground stations in the United States and Spain. Astronomers made over 104,000 observations using the IUE, of objects ranging from Solar System bodies to distant quasars. Among the significant scientific results from IUE data were the first large-scale studies of stellar winds, accurate measurements of the way interstellar dust absorbs light, and measurements of the supernova SN 1987A which showed that it defied stellar evolution theories as they then stood. When the mission ended, it was considered the most successful astronomical satellite ever.
History
Motivation
The human eye can perceive light with wavelengths between roughly 350 (violet) and 700 (red) nanometres. Ultraviolet light has wavelengths between roughly 10 nm and 350 nm. UV light can be harmful to human beings and is strongly absorbed by the ozone layer. This makes it impossible to observe UV emission from astronomical objects from the ground. Many types of objects emit copious quantities of UV radiation, though: the hottest and most massive stars in the universe can have surface temperatures high enough that the vast majority of their light is emitted in the UV. Active Galactic Nuclei, accretion disks, and supernovae all emit UV radiation strongly, and many chemical elements have strong absorption lines in the UV so that UV absorption by the interstellar medium provides a powerful tool for studying its composition.
Ultraviolet astronomy was impossible before the Space Age, and some of the first space telescopes were UV telescopes designed to observe this previously inaccessible region of the electromagnetic spectrum. One particular success was the second Orbiting Astronomical Observatory (OAO-2), which had a number of UV telescopes on board. It was launched in 1968 and took the first UV observations of 1200 objects, mostly stars. The success of OAO-2 motivated astronomers to consider larger missions.
Conception
The orbiting ultraviolet satellite which ultimately became the IUE mission was first proposed in 1964 by British astronomer Robert Wilson. The European Space Research Organisation (ESRO) was planning a Large Astronomical Satellite (LAS), and had sought proposals from the astronomical community for its aims and design. Wilson headed a British team which proposed an ultraviolet spectrograph, and their design was recommended for acceptance in 1966.
However, management problems and cost overruns led to the cancellation of the LAS program in 1968. Wilson's team scaled down their plans and submitted a more modest proposal to ESRO, but this was not selected as the Cosmic Ray satellite was given precedence. Rather than give up on the idea of an orbiting UV telescope, they instead sent their plans to NASA astronomer Leo Goldberg, and in 1973 the plans were approved. The proposed telescope was renamed the International Ultraviolet Explorer.
Design and aims
The telescope was designed from the start to be operated in real-time, rather than by remote control. This required that it be launched into a geosynchronous orbit – that is, one with a period equal to one sidereal day of 23 h 56 m. A satellite in such an orbit remains visible from a given point on the Earth's surface for many hours at a time, and can thus transmit to a single ground station for a long period of time. Most space observatories in Earth orbit, such as the Hubble Space Telescope, are in a low Earth orbit in which they spend most of their time operating autonomously because only a small fraction of the Earth's surface can see them at a given time. Hubble, for example, orbits the Earth at an altitude of approximately 540 km, while a geosynchronous orbit has an average altitude of about 36,000 km.
As well as allowing continuous communications with ground stations, a geosynchronous orbit also allows a larger portion of the sky to be viewed continuously. Because the distance from Earth is greater, the Earth occupies a much smaller portion of the sky as seen from the satellite than it does from low Earth orbit.
A launch into a geosynchronous orbit requires much more energy for a given weight of payload than a launch into a low Earth orbit. This meant that the telescope had to be relatively small, with a 45 cm primary mirror and a total weight of a few hundred kilograms. Hubble, in comparison, weighs 11.1 tonnes and has a 2.4 m mirror. The largest ground-based telescope, the Gran Telescopio Canarias, has a primary mirror 10.4 m across. A smaller mirror means less light-gathering power, and less spatial resolution, compared to a larger mirror.
The stated aims of the telescope at the start of the mission were:
To obtain high-resolution spectra of stars of all spectral types to determine their physical characteristics;
To study gas streams in and around binary star systems;
To observe faint stars, galaxies and quasars at low resolution, interpreting these spectra by reference to high-resolution spectra;
To observe the spectra of planets and comets;
To make repeated observations of objects with variable spectra;
To study the modification of starlight caused by interstellar dust and gas.
Construction and engineering
The telescope was constructed as a joint project between NASA, ESRO (which became ESA in 1975) and the United Kingdom's SERC. SERC provided the Vidicon cameras for the spectrographs as well as software for the scientific instruments. ESA provided the solar arrays to power the spacecraft as well as a ground observing facility in Villafranca del Castillo, Spain. NASA contributed the telescope, spectrograph, and spacecraft as well as launching facilities and a second ground observatory in Greenbelt, Maryland at the Goddard Space Flight Center (GSFC).
According to the agreement setting up the project the observing time would be divided between the contributing agencies with 2/3 to NASA, 1/6 to ESA and 1/6 to the UK's SERC.
Mirror
The telescope mirror was a reflector of the Ritchey–Chrétien telescope type, which has hyperbolic primary and secondary mirrors. The primary was 45 cm across. The telescope was designed to give high-quality images over a 16 arcminute field of view (about half the apparent diameter of the Sun or Moon). The primary mirror was made of beryllium, and the secondary of fused silica – materials chosen for their light weight, moderate cost, and optical quality.
Instruments
The instrumentation on board consisted of the Fine Error Sensors (FES), which were used for pointing and guiding the telescope, a high-resolution and a low-resolution spectrograph, and four detectors.
There were two Fine Error Sensors (FES), and their first purpose was to image the field of view of the telescope in visible light. They could detect stars down to 14th magnitude, about 1500 times fainter than can be seen with the naked eye from Earth. The image was transmitted to the ground station, where the observer would verify that the telescope was pointing at the correct field, and then acquire the exact object to be observed. If the object to be observed was fainter than 14th magnitude, the observer would point the telescope at a star that could be seen, and then apply "blind" offsets, determined from the coordinates of the objects. The accuracy of the pointing was generally better than 2 arcseconds for blind offsets.
The FES acquisition images were the telescope's only imaging capability; for UV observations, it only recorded spectra. For this, it was equipped with two spectrographs. They were called the Short Wavelength Spectrograph (SWS) and the Long Wavelength Spectrograph (LWS) and covered wavelength ranges of 115 to 200 nanometres and 185 to 330 nm respectively. Each spectrograph had both high and low-resolution modes, with spectral resolutions of 0.02 nm and 0.6 nm respectively.
The spectrographs could be used with either of two apertures. The larger aperture was a slot with a field of view roughly 10 × 20 arcseconds; the smaller aperture was a circle about 3 arcseconds in diameter. The quality of the telescope optics was such that point sources appeared about 3 arcseconds across, so the use of the smaller aperture required very accurate pointing, and it did not necessarily capture all of the light from the object. The larger aperture was therefore most commonly used, and the smaller aperture was only used when the larger field of view would have contained unwanted emission from other objects.
There were two cameras for each spectrograph, one designated the primary and the second being redundant in case of failure of the first. The cameras were named LWP, LWR, SWP and SWR where P stands for prime, R for redundant and LW/SW for long/short wavelength. The cameras were television cameras, sensitive only to visible light, and light gathered by the telescope and spectrographs first fell on a UV-to-visible converter. This was a caesium-tellurium cathode, which was inert when exposed to visible light, but which gave off electrons when struck by UV photons due to the photoelectric effect. The electrons were then detected by the TV cameras. The signal could be integrated for up to many hours, before being transmitted to Earth at the end of the exposure.
Mission
Launch
The IUE was launched from Kennedy Space Center, Florida on a Thor-Delta launch vehicle, on 26 January 1978. It was launched into a transfer orbit, from which its onboard rocket motor fired it into its planned geosynchronous orbit. The orbit was inclined by 28.6° to the Earth's equator and had an orbital eccentricity of 0.24, meaning that the satellite's distance from Earth varied between roughly 26,000 and 46,000 km. The ground track was initially centered at a longitude of approximately 70° West.
Commissioning
The first 60 days of the mission were designated as the commissioning period. This was divided into three main stages. Firstly, as soon as its instruments were switched on, the IUE observed a small number of high-priority objects, to ensure that some data had been taken in the event of an early failure. The first spectrum, of the star Eta Ursae Majoris, was taken for calibration purposes three days after launch. The first science observations targeted objects including the Moon, the planets from Mars to Uranus, hot stars including Eta Carinae, cool giant stars including Epsilon Eridani, the black hole candidate Cygnus X-1, and galaxies including Messier 81 (M81) and Messier 87 (M87).
Then, the spacecraft systems were tested and optimized. The telescope was focused, and the prime and redundant cameras in both channels were tested. It was found that the SWR camera did not work properly, and so the SWP camera was used throughout the mission. Initially, this camera suffered from significant electronic noise, but this was traced to a sensor used to align the telescope after launch. Once this sensor was switched off, the camera performed as expected. The cameras were then adjusted for best performance, and the slewing and guiding performance of the telescope was evaluated and optimized
Finally, image quality and spectral resolution were studied and characterized, and the performance of the telescope, spectrographs and cameras were calibrated using observations of well-known stars. After these three phases were completed, the "routine phase" of operations began on 3 April 1978. Optimization, evaluation and calibration operations were far from complete, but the telescope was understood well enough for routine science observations to begin.
Usage
Use of the telescope was divided between NASA, ESA and United Kingdom in approximate proportion to their relative contributions to the satellite construction: two-thirds of the time was available to NASA, and one-sixth each to ESA and United Kingdom. Telescope time was obtained by submitting proposals, which were reviewed annually. Each of the three agencies considered applications separately for its allocated observing time. Astronomers of any nationality could apply for telescope time, choosing whichever agency they preferred to apply to. If an astronomer was awarded time, then when their observations were scheduled, they would travel to the ground stations which operated the satellite, so that they could see and evaluate their data as it was taken. This mode of operation was very different from most space facilities, for which data is taken with no real-time input from the astronomer concerned, and instead resembled the use of ground-based telescopes.
Ground support
For most of its lifetime, the telescope was operated in three eight-hour shifts each day, two from the U.S. ground station at the Goddard Space Flight Center in Maryland, and one from the ESA ground station at Villanueva de la Cañada near Madrid. Because of its elliptical orbit, the spacecraft spent part of each day in the Van Allen radiation belts, during which time science observations suffered from higher background noise. This time occurred during the second U.S. shift each day and was generally used for calibration observations and spacecraft "housekeeping", as well as for science observations that could be done with short exposure times. The twice-daily transatlantic handovers required telephone contact between Spain and the United States to coordinate the switch. Observations were not coordinated between the stations so that the astronomers taking over after the handover would not know where the telescope would be pointing when their shift started. This sometimes meant that observing shifts started with a lengthy pointing maneuver, but allowed maximum flexibility in scheduling of observing blocks.
Data transmission
Data was transmitted to Earth in real-time at the end of each science observation. The camera read-out formed an image of 768 × 768 pixels, and the analogue-to-digital converter resulted in a dynamic range of 8 bits. The data was then transmitted to Earth via one of six transmitters on the spacecraft; four were S-band transmitters, placed at points around the spacecraft such that no matter what its attitude, one could transmit to the ground, and two were Very high frequency (VHF) transmitters, which could sustain a lower bandwidth, but consumed less power, and also transmitted in all directions. The VHF transmitters were used when the spacecraft was in the Earth's shadow and thus reliant on battery power instead of solar power.
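For a sense of scale, the raw size of a single camera read-out follows from the numbers above (simple arithmetic, not a figure taken from mission documentation):

```python
pixels = 768 * 768         # one camera read-out
bits_per_pixel = 8         # dynamic range of the analogue-to-digital converter
kib = pixels * bits_per_pixel / 8 / 1024
print(round(kib))          # 576 KiB of raw image data per exposure
```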
In normal operations, observers could hold the telescope in position and wait approximately 20 minutes for the data to be transmitted, if they wanted the option of repeating the observation, or they could slew to the next target and then start the data transmission to Earth while observing the next target. The data transmitted were used for "quick look" purposes only, and full calibration was carried out by IUE staff later. Astronomers were then sent their data on magnetic tape by post, about a week after processing. From the date of the observation, the observers had a six-month proprietary period during which only they had access to the data. After six months, it became public.
Scientific results
The IUE allowed astronomers their first view of the ultraviolet light from many celestial objects and was used to study objects ranging from Solar System planets to distant quasars. During its lifetime, hundreds of astronomers observed with IUE, and during its first decade of operations, over 1500 peer reviewed scientific articles based on IUE data were published. Nine symposia of the International Astronomical Union (IAU) were devoted to discussions of IUE results.
Solar System
All the planets in the Solar System except Mercury were observed; the telescope could not point at any part of the sky within 45° of the Sun, and Mercury's greatest angular distance from the Sun is only about 28°. IUE observations of Venus showed that the amount of sulfur monoxide and sulfur dioxide in its atmosphere declined by a large amount during the 1980s. The reason for this decline is not yet fully understood, but one hypothesis is that a large volcanic eruption had injected sulfur compounds into the atmosphere, and that they were declining following the end of the eruption.
Halley's Comet reached perihelion in 1986, and was observed intensively with the IUE, as well as with a large number of other ground-based and satellite missions. UV spectra were used to estimate the rate at which the comet lost dust and gas, and the IUE observations allowed astronomers to estimate that a total of 3×108 tons of water evaporated from the comet during its passage through the inner Solar System.
Stars
Some of the most significant results from IUE came in the studies of hot stars. A star that is hotter than about 10,000 K emits most of its radiation in the UV, and thus if it can only be studied in visible light, a large amount of information is being lost. The vast majority of all stars are cooler than the Sun, but the fraction that is hotter includes massive, highly luminous stars which shed enormous quantities of matter into interstellar space, and also white dwarf stars, which are the end stage of stellar evolution for the vast majority of all stars and which have temperatures as high as 100,000 K when they first form.
The IUE discovered many instances of white dwarf companions to main sequence stars. An example of this kind of system is Sirius, and at visible wavelengths, the main sequence star is far brighter than the white dwarf. However, in the UV, the white dwarf can be as bright or brighter, as its higher temperature means it emits most of its radiation at these shorter wavelengths. In these systems, the white dwarf was originally the heavier star but has shed most of its mass during the later stages of its evolution. Binary stars provide the only direct way to measure the mass of stars, from observations of their orbital motions. Thus, observations of binary stars where the two components are at such different stages of stellar evolution can be used to determine the relationship between the mass of stars and how they evolve.
Stars with masses of around ten times that of the Sun or higher have powerful stellar winds. The Sun loses about 10^−14 solar masses per year in its solar wind, which travels at up to around 750 km/s, but the massive stars can lose as much as a billion times more material each year in winds traveling at several thousand kilometers per second. These stars exist for a few million years, and during this time the stellar wind carries away a significant fraction of their mass and plays a crucial role in determining whether they explode as supernovae or not. This stellar mass loss was first discovered using rocket-borne telescopes in the 1960s, but the IUE allowed astronomers to observe a very large number of stars, allowing the first proper studies of how stellar mass loss is related to mass and luminosity.
SN 1987A
In 1987, a star in the Large Magellanic Cloud exploded as a supernova. Designated SN 1987A, this event was of enormous importance to astronomy, as it was the closest known supernova to Earth, and the first visible to the naked eye, since Kepler's star in 1604 – before the invention of the telescope. The opportunity to study a supernova so much more closely than had ever been possible before triggered intense observing campaigns at all major astronomical facilities, and the first IUE observations were made about 14 hours after the discovery of the supernova.
IUE data were used to determine that the progenitor star had been a blue supergiant, where theory had strongly expected a red supergiant. Hubble Space Telescope images revealed a nebula surrounding the progenitor star which consisted of mass lost by the star long before it exploded; IUE studies of this material showed that it was rich in nitrogen, which is formed in the CNO cycle – a chain of nuclear reactions which produces most of the energy emitted by stars much more massive than the Sun. Astronomers inferred that the star had been a red supergiant, and had shed a large amount of matter into space, before evolving into a blue supergiant and exploding.
The interstellar medium
The IUE was used extensively to investigate the interstellar medium (ISM). The ISM is normally observed by looking at background sources such as hot stars or quasars; interstellar material absorbs some of the light from the background source and so its composition and velocity can be studied. One of IUE's early discoveries was that the Milky Way is surrounded by a vast halo of hot gas, known as a galactic corona. The hot gas, heated by cosmic rays and supernova, extends several thousand light years above and below the plane of the Milky Way.
IUE data was also crucial in determining how the light from distant sources is affected by dust along the line of sight. Almost all astronomical observations are affected by this interstellar extinction, and correcting for it is the first step in most analyses of astronomical spectra and images. IUE data was used to show that within the galaxy, interstellar extinction can be well described by a few simple equations. The relative variation of extinction with wavelength shows little variation with direction; only the absolute amount of absorption changes. Interstellar absorption in other galaxies can similarly be described by fairly simple "laws".
Active Galactic Nuclei
The IUE vastly increased astronomers' understanding of active galactic nuclei (AGN). Before its launch, 3C 273, the first known quasar, was the only AGN that had ever been observed at UV wavelengths. With IUE, UV spectra of AGN became widely available.
One particular target was NGC 4151, the brightest Seyfert galaxy. Starting soon after IUE's launch, a group of European astronomers pooled their observing time to repeatedly observe the galaxy, to measure variations over time of its UV emission. They found that the UV variation was much greater than that seen at optical and infrared wavelengths. IUE observations were used to study the black hole at the centre of the galaxy, with its mass being estimated at between 50 and 100 million times that of the Sun. The UV emission varied on timescales of a few days, implying that the region of emission was only a few light days across.
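The size argument is simply light-travel time: a source cannot vary coherently faster than light can cross it. Taking Δt ≈ 3 days as a representative "few days" (an illustrative value, not one given in the text),

\[
R \lesssim c\,\Delta t \approx (3\times10^{5}\ \mathrm{km\,s^{-1}})(2.6\times10^{5}\ \mathrm{s}) \approx 8\times10^{10}\ \mathrm{km} \approx 500\ \mathrm{AU},
\]

i.e. a region only a few light-days across, tiny compared with the host galaxy.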
Quasar observations were used to probe intergalactic space. Clouds of hydrogen gas in between the Earth and a given quasar will absorb some of its emission at the wavelength of Lyman alpha. Because the clouds and the quasar are all at different distances from Earth, and moving at different velocities due to the expansion of the universe, the quasar spectrum has a "forest" of absorption features at wavelengths shorter than its own Lyman alpha emission. Before IUE, observations of this so-called Lyman-alpha forest were limited to very distant quasars, for which the redshift caused by the expansion of the universe brought it into optical wavelengths. IUE allowed nearer quasars to be studied, and astronomers used this data to determine that there are fewer hydrogen clouds in the nearby universe than there are in the distant universe. The implication is that over time, these clouds have formed into galaxies.
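The wavelength bookkeeping behind this is simple. Lyman alpha has a rest wavelength of 121.6 nm, and an absorber at redshift z is observed at λ_obs = (1 + z) × 121.6 nm. For a ground-based telescope limited by the atmospheric ultraviolet cutoff at roughly 320 nm (an approximate figure),

\[
z \gtrsim \frac{320\ \mathrm{nm}}{121.6\ \mathrm{nm}} - 1 \approx 1.6,
\]

so only quasars, and absorbing clouds, beyond roughly this redshift could be studied from the ground; IUE's ultraviolet coverage opened up the nearer, lower-redshift forest.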
Ultraviolet Spectrograph Package
This experiment included the ultraviolet spectrograph package carried by the IUE, consisting of two physically distinct echelle-spectrograph/camera units capable of astronomical observations. Each spectrograph was a three-element echelle system composed of an off-axis paraboloidal collimator, an echelle grating, and a spherical first-order grating that was used to separate the echelle orders and focus the spectral display on an image converter plus SEC Vidicon camera. There was a spare camera for each unit. The camera units were able to integrate the signal. The readout/preparation cycle for the cameras took approximately 20 minutes. Wavelength calibration was provided by the use of a hollow cathode comparison lamp. The photometric calibration was accomplished by observing standard stars whose spectral fluxes had previously been calibrated by other means.
Both echelle-spectrograph/camera units were capable of high-resolution (0.1 Angstrom (A)) or low-resolution (6 A) performance. The dual high/low-resolution capability was implemented by the insertion of a flat mirror in front of the echelle grating so that the only dispersion was provided by the spherical grating. As the SEC Vidicons could integrate the signal for up to many hours, data with a signal-to-noise ratio of 50 could be obtained for B0 stars of 9th and 14th magnitudes in the high- and low-resolution modes, respectively.
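The two limiting magnitudes quoted above differ by 5, which by the standard magnitude relation corresponds to a flux ratio of

\[
10^{0.4 \times 5} = 100,
\]

roughly consistent with the low-resolution mode collecting far more photons per spectral element (6 Å versus 0.1 Å bins, and no splitting of the light into echelle orders) – a plausibility check rather than a statement from the source.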
The distinguishing characteristic of the units was their wavelength coverage. One unit covered the wavelength range from 1192 to 1924 A in the high-resolution mode and 1135 to 2085 A in the low-resolution mode. For the other unit, the ranges were from 1893 to 3031 A and 1800 to 3255 A for the high-and low-resolution modes, respectively. Each unit also had its own choice of entrance apertures: either a 3-arcsecond hole or a 10- by 20-arcsecond slot. The 10- by 20-arcsecond slots could be blocked by a common shutter, but the 3-arcsecond aperture was always open. As a result, two aperture configurations were possible: (1) both 3-arcsecond apertures open and both 10- by 20-arcsecond slots closed, or (2) all four apertures open. With this instrumentation, the observational options open to an observer were long-wavelength and/or short-wavelength spectrograph, high or low resolution, and large or small apertures. Exposures could be made with the two spectrographs simultaneously, but the entrance apertures for each were distinct and separated in the sky by about 1 arcminute. An additional restriction was that data could be read out from only one camera at a time. However, one camera could be exposed while the other camera was being read out. The choice of high or low resolution could be made independently for the two spectrographs.
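In terms of resolving power R = λ/Δλ, evaluated at a representative wavelength of 1500 Å in the short-wavelength unit, the two modes correspond to

\[
R_{\mathrm{high}} \approx \frac{1500}{0.1} = 15{,}000, \qquad R_{\mathrm{low}} \approx \frac{1500}{6} = 250.
\]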
Particle Flux Monitor (Spacecraft)
The particle flux monitor experiment was placed in IUE to monitor the trapped electron fluxes that affected the sensitivity of the ultraviolet sensor in the IUE spectrograph package experiment, NSSDC ID 1978-012A-01. The particle flux monitor was a lithium-drifted silicon detector with a half-angle conical field of view of 16°. It had an aluminum absorber of 0.357 g/cm2 in front of the collimator and a brass shield with a minimum thickness of 2.31 g/cm2. The effective energy threshold for electron measurements was 1.3 MeV. The experiment was also sensitive to protons with energies greater than 15 MeV. The instrument was used as an operational tool to aid in determining background radiation and acceptable camera exposure time. The data were also useful as a monitor of the trapped radiation fluxes. The instrument was provided by Dr. C. Bostrom of the Applied Physics Laboratory of Johns Hopkins University. The instrument was turned off on 4 October 1991 because it was giving erroneous information.
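For a conical field of view of half-angle θ = 16°, the corresponding solid angle (a standard geometric conversion, not stated in the source) is

\[
\Omega = 2\pi\,(1 - \cos\theta) = 2\pi\,(1 - \cos 16^\circ) \approx 0.24\ \mathrm{sr}.
\]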
Mission termination
The IUE was designed to have a minimum lifetime of three years and carried consumables sufficient for a five-year mission. However, it lasted far longer than its design called for. Occasional hardware failures caused difficulties, but innovative techniques were devised to overcome them. For example, the spacecraft was equipped with six gyroscopes to stabilize the spacecraft. Successive failures of these in 1979, 1982, 1983, 1985 and 1996 ultimately left the spacecraft with a single functional gyroscope. Telescope control was maintained with two gyros by using the telescope's Sun sensor to determine the spacecraft's attitude, and stabilization in three axes proved possible even after the fifth failure, by using the Sun sensor, the Fine Error Sensors and the single remaining gyroscope. Most other parts of the telescope systems remained fully functional throughout the mission.
In 1995, budget concerns at NASA almost led to the termination of the mission, but instead the operations responsibilities were redivided, with ESA taking control for 16 hours a day and GSFC for the remaining eight. The ESA hours were used for science operations, while the GSFC hours were used only for spacecraft maintenance. In February 1996, further budget cuts led ESA to decide that it would no longer maintain the satellite. Operations ceased on 30 September 1996: the remaining hydrazine was discharged, the batteries were drained and switched off, and at 18:44 UTC the radio transmitter was shut down and all contact with the spacecraft was lost.
It continues to orbit the Earth in its geosynchronous orbit and will continue to do so more or less indefinitely as it is far above the upper reaches of the atmosphere of Earth. Anomalies in the Earth's gravity due to its non-spherical shape meant that the telescope tended to drift West from its original location at approximately 70° West longitude towards approximately 110° West. During the mission, this drift was corrected by occasional rocket firings, but since the end of the mission the satellite has drifted uncontrolled to the West of its former location.
Archives
The IUE archive is one of the most heavily used astronomical archives. Data were archived from the start of the mission, and access to the archive was free to anyone who wished to use it. However, in the early years of the mission, long before the advent of the World Wide Web and fast global data transmission links, access to the archive required a visit in person to one of two Regional Data Analysis Facilities (RDAFs), one at the University of Colorado and the other at GSFC.
In 1987, it became possible to access the archive electronically, by dialing into a computer at Goddard Space Flight Center. The archive, then totaling 23 Gb of data, was connected to the computer on a mass storage device. A single user at a time could dial in and would be able to retrieve an observation in 10–30 seconds.
As the mission entered its second decade, plans were made for its final archive. Throughout the mission, calibration techniques were improved, and the final software for data reduction yielded significant improvements over earlier calibrations. Eventually, the entire set of available raw data was recalibrated using the final version of the data reduction software, creating a uniform high-quality archive. Today, the archive is hosted at the Mikulski Archive for Space Telescopes at Space Telescope Science Institute and is available via the World Wide Web and APIs.
Impact on astronomy
The IUE mission, by virtue of its very long duration and the fact that for most of its lifetime it provided astronomers' only access to UV light, had a major impact on astronomy. By the end of its mission, it was considered by far the most successful and productive space observatory mission. For many years after the end of the mission, its archive was the most heavily used dataset in astronomy, and IUE data has been used in over 250 PhD projects worldwide. Almost 4,000 peer-reviewed papers have now been published based on IUE data, including some of the most cited astronomy papers of all time. The most cited paper based on IUE data is one analyzing the nature of interstellar reddening, which has subsequently been cited over 5,500 times.
The Hubble Space Telescope has now been in orbit for 31 years (as of 2021), and Hubble data has been used in almost 10,000 peer-reviewed publications in that time. In 2009, the Cosmic Origins Spectrograph was installed on the HST by Space Shuttle astronauts; this instrument records ultraviolet spectra, thus providing some ultraviolet observing capability in this period. Another ultraviolet space telescope, quite different in focus, was the wide-angle imaging GALEX space telescope, operated between 2003 and 2013.
Some proposed telescopes, such as HabEx or the Advanced Technology Large-Aperture Space Telescope (ATLAST), have included an ultraviolet capability, although it is not clear whether they have any real prospects. In the 2010s, many telescope projects were struggling, and even some ground-based observatories faced possible shutdown, ostensibly to save money.
See also
Explorer program
Timeline of artificial satellites and space probes
References
Space telescopes
Ultraviolet telescopes
European Space Agency satellites
NASA space probes
Satellites orbiting Earth
1978 in spaceflight
Spacecraft launched in 1978 | International Ultraviolet Explorer | [
"Astronomy"
] | 6,693 | [
"Space telescopes"
] |
2,235,341 | https://en.wikipedia.org/wiki/Cuscohygrine | Cuscohygrine is a bis N-methyl pyrrolidine alkaloid found in coca plants. It can also be extracted from plants of the family Solanaceae, including Atropa belladonna (deadly nightshade) and various Datura species. Cuscohygrine usually occurs along with other, more potent alkaloids such as atropine or cocaine.
Cuscohygrine, along with the related metabolite hygrine, was first isolated by Carl Liebermann in 1889 as an alkaloid accompanying cocaine in coca leaves (also known as Cusco-leaves).
Cuscohygrine is an oil that can be distilled without decomposition only in vacuum. It is soluble in water. It also forms a crystalline trihydrate which melts at 40–41 °C.
Biosynthesis
Ornithine is methylated to N-methylornithine, which on decarboxylation becomes N-methylputrescine. Oxidation of the primary amino group yields 4-methylaminobutanal, which then cyclizes to an N-methyl-Δ1-pyrrolinium salt. Condensation of the pyrrolinium salt with acetoacetyl coenzyme A yields hygrine. Finally, condensation of the hygrine molecule with another molecule of the pyrrolinium salt yields cuscohygrine.
See also
Coca alkaloids
Dihydrocuscohygrine
References
Pyrrolidine alkaloids
Alkaloids found in Erythroxylum coca
Alkaloids found in Solanaceae
Ketones | Cuscohygrine | [
"Chemistry"
] | 355 | [
"Ketones",
"Alkaloids by chemical classification",
"Pyrrolidine alkaloids",
"Functional groups"
] |
2,235,522 | https://en.wikipedia.org/wiki/Octahedral%20molecular%20geometry | In chemistry, octahedral molecular geometry, also called square bipyramidal, describes the shape of compounds with six atoms or groups of atoms or ligands symmetrically arranged around a central atom, defining the vertices of an octahedron. The octahedron has eight faces, hence the prefix octa. The octahedron is one of the Platonic solids, although octahedral molecules typically have an atom in their centre and no bonds between the ligand atoms. A perfect octahedron belongs to the point group Oh. Examples of octahedral compounds are sulfur hexafluoride SF6 and molybdenum hexacarbonyl Mo(CO)6. The term "octahedral" is used somewhat loosely by chemists, focusing on the geometry of the bonds to the central atom and not considering differences among the ligands themselves. For example, , which is not octahedral in the mathematical sense due to the orientation of the bonds, is referred to as octahedral.
The concept of octahedral coordination geometry was developed by Alfred Werner to explain the stoichiometries and isomerism in coordination compounds. His insight allowed chemists to rationalize the number of isomers of coordination compounds. Octahedral transition-metal complexes containing amines and simple anions are often referred to as Werner-type complexes.
Isomerism in octahedral complexes
When two or more types of ligands (La, Lb, ...) are coordinated to an octahedral metal centre (M), the complex can exist as isomers. The naming system for these isomers depends upon the number and arrangement of different ligands.
cis and trans
For MLa4Lb2, two isomers exist. These isomers of MLa4Lb2 are cis, if the two Lb ligands are mutually adjacent, and trans, if the Lb groups are situated 180° to each other. It was the analysis of such complexes that led Alfred Werner to the 1913 Nobel Prize–winning postulation of octahedral complexes.
Facial and meridional isomers
For MLa3Lb3, two isomers are possible – a facial isomer (fac) in which each set of three identical ligands occupies one face of the octahedron surrounding the metal atom, so that any two of these three ligands are mutually cis, and a meridional isomer (mer) in which each set of three identical ligands occupies a plane passing through the metal atom.
Δ vs Λ isomers
Complexes with three bidentate ligands or two cis bidentate ligands can exist as enantiomeric pairs. Examples are shown below.
Other
For MLa2Lb2Lc2, a total of five geometric isomers and six stereoisomers are possible.
One isomer in which all three pairs of identical ligands are trans
Three isomers in which one pair of identical ligands (La or Lb or Lc) is trans while the other two pairs of ligands are mutually cis.
Two enantiomers (one enantiomeric pair) in which all three pairs of identical ligands are mutually cis. These are equivalent to the Δ vs Λ isomers mentioned above.
The number of possible isomers can reach 30 for an octahedral complex with six different ligands (in contrast, only two stereoisomers are possible for a tetrahedral complex with four different ligands). The following table lists all possible combinations for monodentate ligands:
Thus, all 15 diastereomers of MLaLbLcLdLeLf are chiral, whereas for MLa2LbLcLdLe, six diastereomers are chiral and three are not (the ones where the two La ligands are trans). One can see that octahedral coordination allows much greater complexity than the tetrahedron that dominates organic chemistry. The tetrahedron MLaLbLcLd exists as a single enantiomeric pair. To generate two diastereomers in an organic compound, at least two carbon centers are required.
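These counts (two isomers of MLa4Lb2, two of MLa3Lb3, six stereoisomers of MLa2Lb2Lc2, and 30 for six different ligands) can be verified by brute force: assign the ligands to the six vertices of an octahedron and count arrangements that are distinct up to proper rotations, with enantiomers counted separately. A minimal, self-contained Python sketch (the vertex layout and function names are illustrative, not from the source):

```python
from itertools import permutations

# Octahedron vertices along the +/- x, y and z axes.
VERTS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def rot_z(v):  # 90-degree rotation about the z axis
    x, y, z = v
    return (-y, x, z)

def rot_x(v):  # 90-degree rotation about the x axis
    x, y, z = v
    return (x, -z, y)

def as_permutation(rotation):
    """Express a vertex rotation as a permutation of vertex indices."""
    return tuple(VERTS.index(rotation(v)) for v in VERTS)

def rotation_group():
    """All 24 proper rotations of the octahedron, generated by closure."""
    gens = [as_permutation(rot_z), as_permutation(rot_x)]
    group, frontier = {tuple(range(6))}, [tuple(range(6))]
    while frontier:
        nxt = []
        for g in frontier:
            for h in gens:
                composed = tuple(g[h[i]] for i in range(6))
                if composed not in group:
                    group.add(composed)
                    nxt.append(composed)
        frontier = nxt
    return group

ROTATIONS = rotation_group()
assert len(ROTATIONS) == 24

def count_stereoisomers(ligands):
    """Count rotationally distinct arrangements of six ligands (enantiomers counted separately)."""
    seen = set()
    for arrangement in set(permutations(ligands)):
        canonical = min(tuple(arrangement[g[i]] for i in range(6)) for g in ROTATIONS)
        seen.add(canonical)
    return len(seen)

print(count_stereoisomers("AAAABB"))  # 2  -> cis and trans MLa4Lb2
print(count_stereoisomers("AAABBB"))  # 2  -> fac and mer MLa3Lb3
print(count_stereoisomers("AABBCC"))  # 6  -> MLa2Lb2Lc2
print(count_stereoisomers("ABCDEF"))  # 30 -> six different ligands
```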
Deviations from ideal symmetry
Jahn–Teller effect
The term can also refer to octahedral influenced by the Jahn–Teller effect, which is a common phenomenon encountered in coordination chemistry. This reduces the symmetry of the molecule from Oh to D4h and is known as a tetragonal distortion.
Distorted octahedral geometry
Some molecules, such as XeF6 or , have a lone pair that distorts the symmetry of the molecule from Oh to C3v. The specific geometry is known as a monocapped octahedron, since it is derived from the octahedron by placing the lone pair over the centre of one triangular face of the octahedron as a "cap" (and shifting the positions of the other six atoms to accommodate it). These both represent a divergence from the geometry predicted by VSEPR, which for AX6E1 predicts a pentagonal pyramidal shape.
Bioctahedral structures
Pairs of octahedra can be fused in a way that preserves the octahedral coordination geometry by replacing terminal ligands with bridging ligands. Two motifs for fusing octahedra are common: edge-sharing and face-sharing. Edge- and face-shared bioctahedra have the formulas M2L8(μ-L)2 and M2L6(μ-L)3, respectively. Polymeric versions of the same linking pattern give the stoichiometries [ML2(μ-L)2]∞ and [M(μ-L)3]∞, respectively.
The sharing of an edge or a face of an octahedron gives a structure called bioctahedral. Many metal pentahalide and pentaalkoxide compounds exist in solution and the solid with bioctahedral structures. One example is niobium pentachloride. Metal tetrahalides often exist as polymers with edge-sharing octahedra. Zirconium tetrachloride is an example. Compounds with face-sharing octahedral chains include MoBr3, RuBr3, and TlBr3.
Trigonal prismatic geometry
For compounds with the formula MX6, the chief alternative to octahedral geometry is a trigonal prismatic geometry, which has symmetry D3h. In this geometry, the six ligands are also equivalent. There are also distorted trigonal prisms, with C3v symmetry; a prominent example is . The interconversion of Δ- and Λ-complexes, which is usually slow, is proposed to proceed via a trigonal prismatic intermediate, a process called the "Bailar twist". An alternative pathway for the racemization of these same complexes is the Ray–Dutt twist.
Splitting of d-orbital energies
For a free ion, e.g. gaseous Ni2+ or Mo0, the d-orbitals are all equal in energy; that is, they are "degenerate". In an octahedral complex, this degeneracy is lifted. The dz2 and dx2−y2 orbitals, the so-called eg set, which are aimed directly at the ligands, are destabilized. On the other hand, the dxz, dxy, and dyz orbitals, the so-called t2g set, are stabilized. The labels t2g and eg refer to irreducible representations, which describe the symmetry properties of these orbitals. The energy gap separating these two sets is the basis of crystal field theory and the more comprehensive ligand field theory. The loss of degeneracy upon the formation of an octahedral complex from a free ion is called crystal field splitting or ligand field splitting. The energy gap is labeled Δo, which varies according to the number and nature of the ligands. If the symmetry of the complex is lower than octahedral, the eg and t2g levels can split further. For example, the t2g and eg sets split further in trans-MLa4Lb2.
Ligand strength has the following order for these electron donors:
weak: iodine < bromine < fluorine < acetate < oxalate < water < pyridine < cyanide :strong
So called "weak field ligands" give rise to small Δo and absorb light at longer wavelengths.
Reactions
Given that a virtually uncountable variety of octahedral complexes exist, it is not surprising that a wide variety of reactions have been described. These reactions can be classified as follows:
Ligand substitution reactions (via a variety of mechanisms)
Ligand addition reactions, including among many, protonation
Redox reactions (where electrons are gained or lost)
Rearrangements where the relative stereochemistry of the ligand changes within the coordination sphere.
Many reactions of octahedral transition metal complexes occur in water. When an anionic ligand replaces a coordinated water molecule the reaction is called an anation. The reverse reaction, water replacing an anionic ligand, is called aquation. For example, [Co(NH3)5Cl]2+ slowly yields [Co(NH3)5(H2O)]3+ in water, especially in the presence of acid or base. Addition of concentrated HCl converts the aquo complex back to the chloride, via an anation process.
See also
Octahedral clusters
AXE method
Molecular geometry
References
External links
Example of octahedral geometry at 3dCHEM.com
Indiana University Molecular Structure Center
Point Group Symmetry Examples
Molecular geometry
Coordination chemistry | Octahedral molecular geometry | [
"Physics",
"Chemistry"
] | 1,946 | [
"Molecular geometry",
"Molecules",
"Stereochemistry",
"Coordination chemistry",
"Matter"
] |
2,235,737 | https://en.wikipedia.org/wiki/P73 | p73 is a protein related to the p53 tumor protein. Because of its structural resemblance to p53, it has also been considered a tumor suppressor. It is involved in cell cycle regulation, and induction of apoptosis. Like p53, p73 is characterized by the presence of different isoforms of the protein. This is explained by splice variants, and an alternative promoter in the DNA sequence.
The p73 protein, also known as tumor protein 73 (TP73), was the first identified homologue of the tumor suppressor gene p53. Like p53, p73 has several variants. It is expressed as distinct forms differing either at the C- or the N-terminus. Currently, six different C-terminus splicing variants have been found in normal cells. The p73 gene encodes a protein with significant sequence homology and functional similarity to the tumor suppressor p53. Over-expression of p73 in cultured cells promotes growth arrest and/or apoptosis, similarly to p53.
The p73 gene has been mapped to chromosome region 1p36.2-3, a locus commonly deleted in various tumor entities and human cancers. Similar to p53, the protein product of p73 induces cell cycle arrest or apoptosis, hence its classification as a tumor suppressor. However, unlike its counterpart, p73 is infrequently mutated in cancers. Perhaps even more surprising is the fact that p73-deficient mice do not show a tumorigenic phenotype. A deficiency of p53 almost certainly leads to unchecked cell proliferation and is noted in 60% of cancers.
Analyses of many tumors typically found in humans, including breast and ovarian cancer, show high expression of p73 compared to normal tissue in the corresponding areas. Adenoviruses that cause cellular transformation have also been found to increase p73 expression. Furthermore, recent findings suggest that over-expression of transcription factors involved in cell cycle regulation and DNA synthesis in mammalian cells (e.g. E2F-1) induces the expression of p73. Many researchers believe that these results imply that p73 may not be a tumor suppressor but rather an oncoprotein. Some suggest that the TP73 locus encodes both a tumor suppressor (TAp73) and a putative oncogene (ΔNp73). This is a strong theory and causes much confusion, as it is unknown which of the two p73 variants is over-expressed and ultimately plays a role in tumorigenesis.
Genes of the p53 family are known to be complex. The viral oncoproteins (e.g. Adenovirus E1B) that efficiently inhibit p53 function are unable to inactivate p73, and those that seem to inhibit p73 have no effect on p53.
Debate persists about the exact function of p73. Recently it has been reported that p73 is enriched in the nervous system and that p73-deficient mice, which do not exhibit an increased susceptibility to spontaneous tumorigenesis, have neurological and immunological defects. These results have been expanded, and it has also been shown that p73 is present in the early stages of neurological development and protects against neuronal apoptosis by blocking the proapoptotic function of p53. This strongly implicates p73 as playing a large role in cellular differentiation.
References
External links
A naturally occurring p73 mutation in a p73-p53 double-mutant lung cancer cell line encodes p73α protein with a dominant-negative function
Further reading
Proteins
Tumor suppressor genes | P73 | [
"Chemistry"
] | 782 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
2,236,003 | https://en.wikipedia.org/wiki/Lunar%20plaque | Lunar plaques are stainless steel commemorative plaques measuring attached to the ladders on the descent stages of the United States Apollo Lunar Modules flown on lunar landing missions Apollo 11 through Apollo 17, to be left permanently on the lunar surface. The plaques were originally suggested and designed by NASA's head of technical services Jack Kinzler, who oversaw their production.
All of the plaques bear facsimiles of the participating astronauts' signatures. For this reason, an extra plaque had to be made for Apollo 13 due to the late replacement of one crew member. The first (Apollo 11) and last (Apollo 17) plaques bear a facsimile of the signature of Richard Nixon, President of the United States during the landings, along with references to the start and completion of "man's first explorations of the Moon" and expressions of peace "for all mankind".
All, except the Apollo 12 plaque (which is also textured differently), bear pictures of the two hemispheres of Earth. Apollo 17's plaque bears a depiction of the lunar globe in addition to the Earth. The plaques used on missions 13 through 16 bear the call-sign of each mission's Lunar Module. All of the plaques were left on the Moon, except for those of the aborted Apollo 13 mission.
Plaques
Apollo 11 plaque inscription: "Here men from the planet Earth first set foot upon the Moon July 1969, A.D. We came in peace for all mankind" in capital letters. The statement "We came in peace for all mankind" is derived from the 1958 National Aeronautics and Space Act's declaration of policy and purpose:
"The Congress hereby declares that it is the policy of the United States that activities in space should be devoted to peaceful purposes for the benefit of all mankind."
(Signatures: Neil A. Armstrong; Michael Collins; Edwin E. Aldrin, Jr.; Richard Nixon, President, United States of America)
Apollo 12 plaque inscription: Apollo 12 November 1969 (Signatures: Charles Conrad, Jr.; Richard F. Gordon, Jr.; Alan L. Bean)
Apollo 13 plaque inscription: Apollo 13 Aquarius April 1970 (Signatures on original plaque, bolted to the LM ladder: James A. Lovell; Thomas K. Mattingly II; Fred W. Haise. A replacement plaque with John L. Swigert, Jr.'s name replacing Mattingly's was carried in the spacecraft cabin; Lovell was to have placed this over the original as he descended the ladder. After the landing was aborted, Lovell saved the replacement to keep as a souvenir; the first plaque bearing Mattingly's name was destroyed when Aquarius reentered the Earth's atmosphere at the end of the mission.)
Apollo 14 plaque inscription: Apollo 14 Antares February 1971 (Signatures: Alan B. Shepard, Jr.; Stuart A. Roosa; Edgar D. Mitchell)
Apollo 15 plaque inscription: Apollo 15 Falcon July 1971 (Signatures: David R. Scott; Alfred M. Worden; James B. Irwin)
Apollo 16 plaque inscription: Apollo 16 Orion April 1972 (Signatures: John W. Young; Thomas K. Mattingly II; Charles M. Duke, Jr.)
Apollo 17 plaque inscription: "Here man completed his first explorations of the Moon December 1972, A.D. May the spirit of peace in which we came be reflected in the lives of all mankind" in capital letters. (Signatures: Eugene A. Cernan; Ronald E. Evans; Harrison H. Schmitt; Richard Nixon, President, United States of America).
Plaques gallery
Notes
The plaques are curved so as to fit around the landing leg and not hinder the astronauts from using the ladder. The plaques were attached directly to the ladder rungs, between the third and fourth rungs from the bottom.
Two plaques were made for Apollo 13 because Command Module Pilot Thomas K. "Ken" Mattingly was replaced two days before launch by John L. "Jack" Swigert, following Mattingly's exposure to German measles. Mission Commander James "Jim" Lovell was to have placed the plaque bearing Swigert's name over the original already fastened to the LM Aquarius when he descended the ladder to walk on the Moon. When the landing was aborted, Lovell saved this plaque to keep as a memento, and it remains in his possession. The original plaque bearing Mattingly's name was destroyed when Aquarius reentered the Earth's atmosphere.
The lunar near-side map, with the six Apollo lunar landing sites marked on it, was added to the Apollo 17 plaque at the suggestion of astronaut Gene Cernan.
Reproductions of the plaques were given as mementos to foreign governments through various United States embassies after each flight.
James C. Humes, a speech writer for President Nixon and four other presidents, is partly credited for authoring the text on the Apollo 11 lunar plaque. William Safire and Pat Buchanan also worked on drafting the plaque.
In 1989, William Safire in his capacity as a "word maven" wryly wrote that AD should have been placed before the year, not after: "I had a hand in the first sign to be placed by earthlings on another celestial body, and it contains a glaring grammatical error."
The plaques were produced by Nelson-Miller, Inc. in Los Angeles, CA.
See also
List of artificial objects on the Moon
Pioneer plaque
References
Missions to the Moon
Apollo program hardware
Maps in art
Message artifacts | Lunar plaque | [
"Astronomy"
] | 1,133 | [
"Message artifacts",
"Outer space"
] |
2,236,083 | https://en.wikipedia.org/wiki/Osteopenia | Osteopenia, known as "low bone mass" or "low bone density", is a condition in which bone mineral density is low. Because their bones are weaker, people with osteopenia may have a higher risk of fractures, and some people may go on to develop osteoporosis. In 2010, 43 million older adults in the US had osteopenia. Unlike osteoporosis, osteopenia does not usually cause symptoms, and losing bone density in itself does not cause pain.
There is no single cause for osteopenia, although there are several risk factors, including modifiable factors (behavioral ones, such as diet and the use of certain drugs) and non-modifiable factors (for instance, loss of bone mass with age). For people with risk factors, screening via a DXA scanner may help to detect the development and progression of low bone density. Prevention of low bone density may begin early in life and includes a healthy diet and weight-bearing exercise, as well as avoidance of tobacco and alcohol. The treatment of osteopenia is controversial: non-pharmaceutical treatment involves preserving existing bone mass via healthy behaviors (dietary modification, weight-bearing exercise, avoidance or cessation of smoking or heavy alcohol use). Pharmaceutical treatment for osteopenia, including bisphosphonates and other medications, may be considered in certain cases but is not without risks. Overall, treatment decisions should be guided by considering each patient's constellation of risk factors for fractures.
Risk factors
Many divide risk factors for osteopenia into fixed (non-changeable) and modifiable factors. Osteopenia can also be secondary to other diseases. An incomplete list of risk factors:
Fixed
Age: bone density peaks at age 35, and then decreases. Bone density loss occurs in both men and women
Ethnicity: European and Asian people have increased risk
Sex: women are at higher risk, particularly those with early menopause
Family history: low bone mass in the family increases risk
Modifiable / behavioral
Tobacco use
Alcohol use
Inactivity – particularly lack of weight-bearing or resistance activities
Insufficient caloric intake – osteopenia can be connected to female athlete triad syndrome, which occurs in female athletes as a combination of energy deficiency, menstrual irregularities, and low bone mineral density.
Low nutrient diet (particularly calcium, Vitamin D)
Other diseases
Celiac disease, via poor absorption of calcium and vitamin D
Hyperthyroidism
Anorexia nervosa
Medications
Steroids
Anticonvulsants
Screening and diagnosis
The ISCD (International Society for Clinical Densitometry) and the National Osteoporosis Foundation recommend that older adults (women over 65 and men over 70) and adults with risk factors for low bone mass, or previous fragility fractures, undergo DXA testing. The DXA (dual X-ray absorptiometry) scan uses a form of X-ray technology, and offers accurate bone mineral density results with low radiation exposure.
The United States Preventive Services Task Force recommends osteoporosis screening for women with increased risk over 65 and states there is insufficient evidence to support screening men. The main purpose of screening is to prevent fractures. Of note, USPSTF screening guidelines are for osteoporosis, not specifically osteopenia. The National Osteoporosis Foundation recommends use of central (hip and spine) DXA testing for accurate measure of bone density, emphasizing that peripheral or "screening" scanners should not be used to make clinically meaningful diagnoses and that peripheral and central DXA scans cannot be compared to each other.
DXA scanners can be used to diagnose osteopenia or osteoporosis as well as to measure bone density over time as people age or undergo medical treatment or lifestyle changes.
Information from the DXA scanner creates a bone mineral density T-score by comparing a patient's density to the bone density of a healthy young person. Bone density between 1 and 2.5 standard deviations below the reference, or a T-score between −1.0 and −2.5, indicates osteopenia (a T-score smaller than or equal to −2.5 indicates osteoporosis). Calculation of the T-score itself may not be standardized. The ISCD recommends using Caucasian women between 20 and 29 years old as the baseline for bone density for ALL patients, but not all facilities follow this recommendation.
The ISCD recommends that Z-scores, not T-scores, be used to classify bone density in premenopausal women and men under 50.
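A minimal sketch of the T-score calculation and classification described above (the reference values here are placeholders, not real normative data; the thresholds follow the text):

```python
def t_score(patient_bmd, young_ref_mean, young_ref_sd):
    """T-score: patient bone mineral density expressed in standard deviations
    relative to the mean of a healthy young-adult reference population."""
    return (patient_bmd - young_ref_mean) / young_ref_sd

def classify(t):
    """Classification by T-score, using the thresholds quoted in the text."""
    if t <= -2.5:
        return "osteoporosis"
    if -2.5 < t < -1.0:
        return "osteopenia"
    return "normal"

# Hypothetical example: reference mean 1.00 g/cm^2 with SD 0.12 g/cm^2.
print(classify(t_score(0.80, 1.00, 0.12)))  # about -1.67 -> "osteopenia"
```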
Prevention
Prevention of low bone density can start early in life by maximizing peak bone density. Once a person loses bone density, the loss is usually irreversible, so preventing (greater than normal) bone loss is important.
Actions to maximize bone density and stabilize loss include:
Exercise, particularly weight-bearing exercise, resistance exercises and balance exercises, through mechanical loading that promotes increased bone mass, and reduced fall risk
Adequate caloric intake
Sufficient calcium in diet: older adults may have increased calcium needs—of note, medical conditions such as Celiac and hyperthyroidism can affect absorption of calcium
Sufficient Vitamin D in diet
Estrogen replacement
Avoidance of steroid medications
Limit alcohol use and smoking
Treatment
The pharmaceutical treatment of osteopenia is controversial and more nuanced than well-supported recommendations for improved nutrition and weight-bearing exercise. The diagnosis of osteopenia in and of itself does not always warrant pharmaceutical treatment. Many people with osteopenia may be advised to follow risk prevention measures (as above).
Risk of fracture guides clinical treatment decisions: the World Health Organization (WHO) Fracture Risk Assessment Tool (FRAX) estimates the probability of hip fracture and the probability of a major osteoporotic fracture (MOF), which could occur in a bone other than the hip. In addition to bone density (T-score), calculation of the FRAX score involves age, body characteristics, health behaviors, and other medical history.
As of 2014, The National Osteoporosis Foundation (NOF) recommends pharmaceutical treatment for osteopenic postmenopausal women and men over 50 with FRAX hip fracture probability of >3% or FRAX MOF probability >20%. As of 2016, the American Association of Clinical Endocrinologists and the American College of Endocrinology agree. In 2017, the American College of Physicians recommended that clinicians use individual judgment and knowledge of patients' particular risk factors for fractures, as well as patient preferences, to decide whether to pursue pharmaceutical treatment for women with osteopenia over 65.
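As a rough illustration of the 2014 NOF rule described above (a sketch only; real decisions combine clinical judgment, and the FRAX probabilities themselves come from the WHO calculator, not from this code):

```python
def nof_treatment_indicated(t_score, frax_hip_pct, frax_mof_pct):
    """Pharmaceutical treatment threshold for osteopenia per the rule quoted above:
    T-score in the osteopenic range plus FRAX hip fracture risk > 3% or
    major osteoporotic fracture (MOF) risk > 20%."""
    osteopenic = -2.5 < t_score < -1.0
    return osteopenic and (frax_hip_pct > 3.0 or frax_mof_pct > 20.0)

print(nof_treatment_indicated(-1.8, 3.5, 12.0))  # True: hip fracture risk exceeds 3%
print(nof_treatment_indicated(-1.8, 1.0, 12.0))  # False: neither threshold is met
```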
Pharmaceutical treatment for low bone density includes a range of medications. Commonly used drugs include bisphosphonates (alendronate, risedronate, and ibandronate)—some studies show that decreased fracture risk and increased bone density after bisphosphonate treatment for osteopenia. Other medications include selective estrogen receptor modulators (SERMs) (e.g., raloxifene), estrogens (e.g., estradiol), calcitonin, and parathyroid hormone-related protein analogues (e.g., abaloparatide, teriparatide).
These drugs are not without risks. In this complex landscape, many argue that clinicians must consider a patient's individual risk of fracture, not simply treat those with osteopenia as equally at risk. A 2005 editorial in the Annals of Internal Medicine states "The objective of using osteoporosis drugs is to prevent fractures. This can be accomplished only by treating patients who are likely to have a fracture, not by simply treating T-scores."
History
Osteopenia, from Greek ὀστέον (ostéon), "bone" and πενία (penía), "poverty", is a condition of sub-normally mineralized bone, usually the result of a rate of bone lysis that exceeds the rate of bone matrix synthesis. See also osteoporosis.
In June 1992, the World Health Organization defined osteopenia. An osteoporosis epidemiologist at the Mayo Clinic who participated in setting the criterion in 1992 said "It was just meant to indicate the emergence of a problem", and noted that "It didn't have any particular diagnostic or therapeutic significance. It was just meant to show a huge group who looked like they might be at risk."
See also
Bone mineral density
Osteoporosis
References
External links
Aging-associated diseases
Endocrine diseases
Osteopathies
Rheumatology
Histopathology
Medical signs
Medical controversies | Osteopenia | [
"Chemistry",
"Biology"
] | 1,798 | [
"Senescence",
"Aging-associated diseases",
"Histopathology",
"Microscopy"
] |
2,236,213 | https://en.wikipedia.org/wiki/Microcellular%20plastic | Microcellular plastics, otherwise known as microcellular foam, is a form of manufactured plastic fabricated to contain billions of tiny bubbles less than 50 microns wide (typically 0.1–100 micrometers). It is formed by dissolving gas under high pressure into various polymers, relying on the phenomenon of thermodynamic instability to cause the uniform arrangement of the gas bubbles, otherwise known as nucleation. Its main purpose was to reduce material usage while maintaining valuable mechanical properties. the density of the finished product is determined by the gas used. Depending on the gas, the foam's density can be between 5% and 99% of the pre-processed plastic. Design parameters, focused on the foam's final form and the molding process afterward, include the type of die or mold to be used, as well as the dimensions of the bubbles, or cells, that classify the material as a foam. Since the cells' size is close to the wavelength of light, to the casual observer the foam retains the appearance of a solid, light-colored plastic.
Recent developments at the University of Washington have produced nanocellular foams with cells in the 20-100 nanometer range. At Indian Institute of Technology Delhi, technologies are being developed to fabricate high quality microcellular foams.
History
Traditional foams were created using a method outlined in a 1974 U.S. patent titled "Mixing of Molten Plastic and Gas". By releasing a gas, otherwise known as a chemical or physical blowing agent, over molten plastic, hard plastic was converted into traditional foam. The results of this method were highly undesirable. Due to the uncontrolled nature of the process, the product was often non-uniform, housing many large voids. In turn, the outcome was a low-strength, low-density foam with large cells in the cellular structure. The pitfalls of this method drove the need for a process that could make a similar material with more advantageous mechanical properties.
The creation of microcellular foams as we know them today was inspired by the production of traditional foams. In 1979, MIT master's students J.E. Martini and F.A. Waldman, working under the direction of Professor Nam P. Suh, are both credited with the invention of microcellular plastics, or microcellular foams. Using pressurized extrusion and injection molding, their experimentation led to a method that used significantly less material and yielded a product with 5–30% fewer voids, each less than 8 microns in size. In terms of mechanical properties, the fracture toughness of the material improved by 400% and the resistance to crack propagation increased by 200%. First, plastic is uniformly saturated with gas at high pressure. Then the temperature is increased, causing thermal instability in the plastic. In order to reach a stable state, cell nucleation takes place. During this step, the cells created are much smaller than those of traditional foams. After this, cell growth, or matrix relaxation, initiates. The novelty of this method was the ability to control the mechanical properties of the product by varying the temperature and pressure inputs. For example, by modifying the pressure, a very thin outside layer could be formed, making the product even stronger. Experimental results found carbon dioxide to be the gas that produced the densest foams. Other gases, such as argon and nitrogen, produced foams with mechanical properties that were slightly less desirable.
Production
When selecting a gas to produce the desired foam, functional requirements and design parameters are considered. The functional requirements are identical to the criteria used when inventing this material type; using less plastic without sacrificing mechanical properties (especially toughness) that are capable of making the same three dimensional products the original plastic was able to do.
The production of microcellular plastics is dependent on temperature and pressure. Dissolving gas into the polymer at high temperature and pressure creates a driving force that activates nucleation sites when the pressure drops; the number of sites activated increases exponentially with the amount of dissolved gas.
Homogeneous nucleation is the primary mechanism for producing the bubbles in the cellular matrix. The dissolved gas molecules have a preference to diffuse to activation sites that have nucleated first. This is prevented since these sites are activated nearly simultaneously, forcing the dissolved gas molecules to be shared equally and uniform throughout the plastic.
Removing the plastic from the high pressure environment creates a thermodynamic instability. Heating the polymer above the effective glass transition temperature (of the polymer/gas mixture) then causes the plastic to foam, creating a very uniform structure of small bubbles.
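The exponential sensitivity to dissolved gas is usually modeled with the classical homogeneous-nucleation expression (a textbook form, not given in the source; the symbols are generic):

\[
N = f_0\,C_0\,\exp\!\left(-\frac{\Delta G^{*}}{kT}\right), \qquad \Delta G^{*} = \frac{16\pi\gamma^{3}}{3\,\Delta P^{2}},
\]

where C_0 is the concentration of dissolved gas molecules, f_0 a frequency factor, γ the polymer–gas surface energy, ΔP the pressure drop on depressurization (set by how much gas was dissolved), k Boltzmann's constant and T the temperature; a larger ΔP sharply lowers the nucleation barrier ΔG* and so raises the nucleation rate N exponentially.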
Mechanical properties
The density of microcellular plastics has the greatest influence on their behavior and performance. Tensile strength decreases roughly linearly with density as more gas is dissolved into the part, and the melting temperature and viscosity decrease as well.
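Expressed as a first-order rule of thumb consistent with the sentence above (an approximation, not an exact law from the source):

\[
\frac{\sigma_{\text{foam}}}{\sigma_{\text{solid}}} \approx \frac{\rho_{\text{foam}}}{\rho_{\text{solid}}},
\]

i.e. a foam at 80% of the solid density retains roughly 80% of the solid material's tensile strength.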
The foam injection process itself introduces surface defects such as swirl marks, streaking, and blistering, which also influence how the part reacts to external forces.
Advantages and disadvantages
Due to the non-hazardous nature of this foam-generating process, these plastics are able to be recycled and put back into the production cycle, reducing their carbon footprint as well as reducing the cost of raw materials.
With the porous nature of this material, the overall density is much lower than that of any solid plastic, considerably dropping the weight per unit volume of the part. This also entails less consumption of raw plastic with the addition of the tiny gas-filled pockets, allowing for further cost reduction, up to 35%.
When observing the mechanical properties of these foams, a loss of tensile strength is correlated with the decrease in density, in a nearly linear fashion.
Industrial applications
Since the steps taken by MIT research in the late 70s, microcellular plastics, and their methods of manufacturing, has become more standardized and improved upon. Trexel Inc. is often referred to as the industry standard for microcellular plastics with their use of MuCell® Molding Technology. Trexel, and other manufacturers of microcellular plastics, use both injection molding and blow mold methods to create products for applications such as automotive, medical, packaging, consumer, and industrial.
Injection molding and blow molding differ in regards to the type of product in need of being manufactured. Injection molding, much like casting, is centered around creating a mold for a solid object, which is to later be filled in with the molten plastic. Blow molding on the other hand, is more specialized for hollow objects, although it is less accurate regarding wall thickness with this dimension being an undefined feature (unlike in an injection mold where all dimensions are predetermined). In respect to MuCell® and microcellular plastics, these processes vary from that of traditional plastics due to the additional steps of gas dissolving and cell nucleation before the molding process can begin. This process removed the "pack and hold phase" that allowed for imperfections within a mold, creating a finished product with greater dimensional accuracy and sound structure. By removing an entire step of the molding process, time is saved, making MuCell® a more economical option since more parts can be manufactured in the same time compared to standard resins. A few examples of applications include automobile instrument panels, heart pumps, storage bins, and the housing on multiple household power tools.
See also
acrylonitrile
butadiene
styrene
References
External links
Plastics | Microcellular plastic | [
"Physics"
] | 1,505 | [
"Amorphous solids",
"Unsolved problems in physics",
"Plastics"
] |
2,236,341 | https://en.wikipedia.org/wiki/Systematic%20desensitization | Systematic desensitization, or graduated exposure therapy, is a behavior therapy developed by the psychiatrist Joseph Wolpe. It is used when a phobia or anxiety disorder is maintained by classical conditioning. It shares the same elements of both cognitive-behavioral therapy and applied behavior analysis. When used in applied behavior analysis, it is based on radical behaviorism as it incorporates counterconditioning principles. These include meditation (a private behavior or covert conditioning) and breathing (a public behavior or overt conditioning). From the cognitive psychology perspective, cognitions and feelings precede behavior, so it initially uses cognitive restructuring.
The goal of the therapy is for the individual to learn how to cope with and overcome their fear in each level of an exposure hierarchy. The process of systematic desensitization occurs in three steps. The first step is to identify the hierarchy of fears. The second step is to learn relaxation or coping techniques. Finally, the individual uses these techniques to manage their fear during a situation from the hierarchy. The third step is repeated for each level of the hierarchy, starting from the least fear-inducing situation.
Three steps of desensitization
There are three main steps that Wolpe identified to successfully desensitize an individual.
Establish anxiety stimulus hierarchy. The individual should first identify the items that are causing the anxiety problems. Each item that causes anxiety is given a subjective ranking on the severity of induced anxiety. If the individual is experiencing great anxiety to many different triggers, each item is dealt with separately. For each trigger or stimulus, a list is created to rank the events from least anxiety-provoking to most anxiety-provoking.
Learn the coping mechanism or incompatible response. Relaxation training, such as meditation, is one of the best coping strategies. Wolpe taught his patients relaxation responses because it is not possible to be both relaxed and anxious at the same time. In this method, patients practice tensing and relaxing different parts of the body until the patient reaches a state of serenity. This is necessary because it provides the patient with a means of controlling their fear, rather than letting it increase to intolerable levels. Only a few sessions are needed for a patient to learn appropriate coping mechanisms. Additional coping strategies include anti-anxiety medication and breathing exercises. Another example of relaxation is cognitive reappraisal of imagined outcomes. The therapist might encourage patients to examine what they imagine happening when exposed to the anxiety-inducing stimulus and then allow the client to replace the imagined catastrophic situation with an imagined positive outcome.
Connect the stimulus to the incompatible response or coping method by counterconditioning. In this step the client completely relaxes and is then presented with the lowest item on their anxiety hierarchy. When the patient has reached a state of serenity again after being presented with the first stimulus, the second stimulus, which should present a higher level of anxiety, is presented. This will help the patient overcome their phobia. This activity is repeated until all the items of the anxiety hierarchy are completed without inducing any anxiety in the client at all. If at any time during the exercise the coping mechanisms fail, or the patient fails to complete the coping mechanism due to severe anxiety, the exercise is stopped. When the individual is calm, the last stimulus that was presented without inducing anxiety is presented again and the exercise is then continued depending on the patient's outcomes.
Example
A client may approach a therapist due to their great phobia of snakes. This is how the therapist would help the client using the three steps of systematic desensitization:
Establish anxiety stimulus hierarchy. A therapist may begin by asking the patient to identify a fear hierarchy. This fear hierarchy would list the relative unpleasantness of various levels of exposure to a snake. For example, seeing a picture of a snake might elicit a low fear rating, compared to live snakes crawling on the individual—the latter scenario becoming highest on the fear hierarchy.
Learn coping mechanisms or incompatible responses. The therapist would work with the client to learn appropriate coping and relaxation techniques such as meditation and deep muscle relaxation responses.
Connect the stimulus to the incompatible response or coping method. The client would be presented with increasingly unpleasant levels of the feared stimuli, from lowest to highest—while utilizing the deep relaxation techniques (i.e. progressive muscle relaxation) previously learned. The imagined stimuli to help with a phobia of snakes may include: a picture of a snake; a small snake in a nearby room; a snake in full view; touching of the snake, etc. At each step in the imagined progression, the patient is desensitized to the phobia through exposure to the stimulus while in a state of relaxation. As the fear hierarchy is unlearned, anxiety gradually becomes extinguished.
Uses
Specific phobias
Specific phobias are one class of mental disorder often treated via systematic desensitization. When persons experience such phobias (for example fears of heights, dogs, snakes, closed spaces, etc.), they tend to avoid the feared stimuli; this avoidance, in turn, can temporarily reduce anxiety but is not necessarily an adaptive way of coping with it. In this regard, patients' avoidance behaviors can become reinforced – a concept defined by the tenets of operant conditioning. Thus, the goal of systematic desensitization is to overcome avoidance by gradually exposing patients to the phobic stimulus, until that stimulus can be tolerated. Wolpe found that systematic desensitization was successful 90% of the time when treating phobias.
Test anxiety
Between 25 and 40 percent of students experience test anxiety. Children can suffer from low self-esteem and stress-induced symptoms as a result of test anxiety. The principles of systematic desensitization can be used by children to help reduce their test anxiety. Children can practice the muscle relaxation techniques by tensing and relaxing different muscle groups. With older children and college students, an explanation of desensitization can help to increase the effectiveness of the process. After these students learn the relaxation techniques, they can create an anxiety inducing hierarchy. For test anxiety these items could include not understanding directions, finishing on time, marking the answers properly, spending too little time on tasks, or underperforming. Teachers, school counselors or school psychologists could instruct children on the methods of systematic desensitization.
Recent use
Desensitization is widely known as one of the most effective therapy techniques. In recent decades, however, systematic desensitization has become less commonly used as a treatment of choice for anxiety disorders. Since 1970, academic research on systematic desensitization has declined, and the current focus has been on other therapies. In addition, the number of clinicians using systematic desensitization has also declined since 1980. Those clinicians who continue to use systematic desensitization regularly were trained before 1986. It is believed that the decline in the use of systematic desensitization by practicing psychologists is due to the rise of other techniques such as flooding, implosive therapy, and participant modeling.
History
In 1947, Wolpe discovered that the cats of Wits University could overcome their fears through gradual and systematic exposure. Wolpe studied Ivan Pavlov's work on artificial neuroses and the research done on elimination of children's fears by Watson and Jones. In 1958, Wolpe did a series of experiments on the artificial induction of neurotic disturbance in cats. He found that gradually deconditioning the neurotic animals was the best way to treat them of their neurotic disturbances. Wolpe deconditioned the neurotic cats through different feeding environments. Wolpe knew that this treatment of feeding would not generalize to humans and he instead substituted relaxation as a treatment to relieve the anxiety symptoms.
Wolpe found that if he presented a client with the actual anxiety inducing stimulus, the relaxation techniques did not work. It was difficult to bring all of the objects into his office because not all anxiety inducing stimuli are physical objects, but instead are concepts. Wolpe instead began to have his clients imagine the anxiety inducing stimulus or look at pictures of the anxiety inducing stimulus, much like the process that is done today.
See also
Flooding (psychology)
Immersion therapy
Sensitization
References
External links
Self-administered Systematic Desensitization
Anxiety disorder treatment
Behavior therapy
Behaviorism | Systematic desensitization | [
"Biology"
] | 1,694 | [
"Behavior",
"Behavior therapy",
"Behaviorism"
] |
2,236,439 | https://en.wikipedia.org/wiki/Scharnhorst%20effect |
The Scharnhorst effect is a hypothetical phenomenon in which light signals travel slightly faster than c between two closely spaced conducting plates. It was first predicted in a 1990 paper by Klaus Scharnhorst of the Humboldt University of Berlin, Germany. He showed using quantum electrodynamics that the effective refractive index n, at low frequencies, in the space between the plates was less than 1. Gabriel Barton and Scharnhorst in 1993 claimed that either signal velocity can exceed c or that the imaginary part of n is negative.
Explanation
Vacuum fluctuations exist even in a perfect vacuum. The vacuum fluctuations are influenced by conducting plates nearby. As a photon travels through a vacuum its propagation is influenced by these vacuum fluctuations.
A prediction made by this assertion is that the speed of a photon will be increased if it travels between two Casimir plates. The ultimate effect would be to increase the apparent speed of that photon. The closer the plates are, the stronger the change in the vacuum fluctuations, and the higher the speed of light.
The effect, however, is predicted to be minuscule. For a photon traveling between two plates that are 1 micrometer apart, the increase in speed would be only about one part in 10^36. This change in light's speed is too small to be detected with current technology, which prevents the Scharnhorst effect from being tested at this time.
Causality
The possibility of superluminal photons has caused concern because it might allow for the violation of causality by sending information faster than c. However, several authors (including Scharnhorst) argue that the Scharnhorst effect cannot be used to create causal paradoxes.
References
Quantum electrodynamics | Scharnhorst effect | [
"Physics"
] | 348 | [
"Theoretical physics",
"Hypotheses in physics"
] |
2,236,475 | https://en.wikipedia.org/wiki/Work%20Song%3A%20Three%20Views%20of%20Frank%20Lloyd%20Wright | Work Song: Three Views of Frank Lloyd Wright is a play in three acts by Jeffrey Hatcher and Eric Simonson. It premiered at the Milwaukee Repertory Theater in 2000. The play was commissioned by the Steppenwolf Theatre Company of Chicago.
The play explores the relationship between Wright's personal life and work. The ensemble portrays more than a dozen people who were important in Wright's eventful life. The work is populated by architect Louis Sullivan, friends Ayn Rand and Alexander Woollcott, son John Lloyd Wright, wives Catherine and Olgivanna, and paramour Mamah Cheney.
Productions
Work Song premiered at Milwaukee Repertory Theater's Quadracci Powerhouse Theater in September 2000. Co-author Eric Simonson directed Lee E. Ernst as Wright.
References
External links
Work Song at its publisher, Dramatists Play Service
Plays by Eric Simonson
Plays by Jeffrey Hatcher
2000 plays
Plays based on real people
Frank Lloyd Wright | Work Song: Three Views of Frank Lloyd Wright | [
"Engineering"
] | 191 | [
"Architecture stubs",
"Architecture"
] |
2,236,626 | https://en.wikipedia.org/wiki/HD%20188753 | HD 188753 is a hierarchical triple star system approximately 151 light-years away in the constellation of Cygnus, the Swan. In 2005, an extrasolar planet was announced to be orbiting the primary star (designated HD 188753 A) in the system. Follow-up measurements by an independent group in 2007 did not confirm the planet's existence.
Stellar components
The primary star, HD 188753 A, is similar to the Sun with a mass only 6% larger and a stellar classification of G8V. Orbiting this primary at a distance of 12.3 AU is a pair of smaller stars that orbit each other with a period of , a semi-major axis of 0.67 AU, and eccentricity of . The pair have estimated masses of 0.96 and 0.67 solar masses. They orbit the primary with a period of about 25.7 years and an orbital eccentricity of about 0.50. The periastron distance of this orbit is 6.2 AU.
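The quoted orbital elements can be sanity-checked with Kepler's third law. Below is a minimal Python sketch, assuming the 12.3 AU separation is the semi-major axis of the pair's orbit about the primary and treating the system as a two-body problem with the rounded published values:

```python
# Kepler's third law with a in AU, P in years and M in solar masses:
# M_total = a^3 / P^2.
a_au = 12.3   # semi-major axis of the outer orbit, from the article
p_yr = 25.7   # orbital period of the pair around the primary, from the article

m_total = a_au**3 / p_yr**2
print(f"Dynamical total mass: {m_total:.2f} solar masses")       # about 2.8

# Compare with the sum of the quoted component masses (1.06 + 0.96 + 0.67).
print(f"Sum of quoted masses: {1.06 + 0.96 + 0.67:.2f} solar masses")  # 2.69
```

The two figures agree to within a few percent, as expected for rounded catalogue values.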
Possible planet
In 2005 the discovery of a candidate planet orbiting the primary star of the triple star system was announced. This planet, which received the designation HD 188753 Ab, was announced by a Polish astronomer working in the United States, Dr. Maciej Konacki. This would not be the first known planet in a triple star system – for example, the planet 16 Cygni Bb had been discovered earlier, orbiting one of the components of a wide triple system also in the constellation of Cygnus.
Since HD 188753 Ab was believed to be orbiting in a multi-star system, Konacki referred to planets of this type as "Tatooine planets" after Luke Skywalker's home world. The detection of this planet has been challenged by Eggenberger et al.
The candidate planet, a hot Jupiter gas giant slightly more massive than Jupiter, was thought to orbit the star HD 188753 A once every 80 hours or so (3.3 days), at a distance of about 8 million kilometers, a twentieth of the distance between Earth and the Sun. The existence of HD 188753 Ab in a relatively close triple star system challenged the current models of planet formation. The current idea is that giant planets form in the outer reaches of their system (in orbits similar to those of Jupiter and Saturn). Once formed, some of these planets may migrate close to their stars, becoming hot Jupiters. The theoretical difficulty in understanding HD 188753 Ab is that any protoplanetary disk would have ended around 1 astronomical unit from the primary star (due to the presence of the secondary stars). A Jovian planet should not have been able to form so close to the primary, and with no disk material beyond 1 AU, a planet should not have been able to form beyond that distance to migrate inward. One of the possibilities suggested that the planet formed before the secondary stars had reached their current configuration. This suggests that the two secondary stars were once more distant than they are now.
An attempt to confirm the discovery failed. In 2007, a team at the Geneva Observatory stated that they had the precision and sampling rate sufficient to have detected the would-be planet, and that they did not detect it. Konacki responded to this, stating that the precision of the follow-up measurements was not sufficient to confirm or deny the planet's existence and that he planned to release an update in 2007. No such update appears to have been published.
See also
51 Pegasi
Alpha Centauri
Polaris
References
External links
G-type main-sequence stars
Triple star systems
Hypothetical planetary systems
Cygnus (constellation)
Durchmusterung objects
188753
098001 | HD 188753 | [
"Astronomy"
] | 739 | [
"Cygnus (constellation)",
"Constellations"
] |
2,236,984 | https://en.wikipedia.org/wiki/Late%20fee | A late fee, also known as an overdue fine, late fine, or past due fee, is a charge fined against a client by a company or organization for not paying a bill or returning a rented or borrowed item by its due date. Its use is most commonly associated with businesses like creditors, video rental outlets and libraries. Late fees are generally calculated on a per day, per item basis.
Organizations encourage the payment of late fees by suspending a client's borrowing or rental privileges until accumulated fees are paid, sometimes after these fees have exceeded a certain level. Late fees are issued to people who do not pay on time and don't honor a lease or obligation for which they are responsible.
Library fine
Library fines, also known as overdue fines, late fees, or overdue fees, are small daily or weekly fees that libraries in many countries charge borrowers after a book or other borrowed item is kept past its due date. Library fines are an enforcement mechanism designed to ensure that library books are returned within a certain period of time and to provide increasing penalties for late items. Library fines do not typically accumulate over years or decades. Fines are usually assessed for only a few days or months, until a pre-set limit is reached.
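As an illustration, the per-day accrual with a pre-set limit can be written as a one-line calculation. In the minimal Python sketch below, the 25-cents-per-day rate is the pre-2021 New York City figure cited later in this article, while the $10.00 cap is a hypothetical value chosen only for the example.

```python
def library_fine(days_overdue, daily_rate=0.25, cap=10.00):
    """Fine accrues per day, per item, until a pre-set limit is reached."""
    return min(max(days_overdue, 0) * daily_rate, cap)

for days in (3, 30, 120):
    print(f"{days} days overdue -> ${library_fine(days):.2f}")
# 3 days -> $0.75, 30 days -> $7.50, 120 days -> $10.00 (capped)
```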
Library fines are a small percentage of overall library budgets, but lost, stolen or un-returned library books can be costly for the various levels of government that fund libraries.
History
In the late 1800s, as modern circulating libraries began making checking out books possible for the general public, concerns rose about books being taken out and never returned. To encourage the return of books and to help fund the replacement acquisition of new books, libraries began assessing a fee on late books. For example, when the Aberdeen Free Library in Scotland opened in 1886, borrowers were fined a penny a week for every week a book was held longer than a fortnight.
Public libraries in New York began charging overdue fees in the late 1800s at a rate of 1 cent/day. That increased to 2 cents/day in 1954 and 5 cents/day in 1959. Before removing late fees in October 2021, the most common fee among New York City public libraries was 25 cents/day.
Elimination of fines
In recent years, many libraries have stopped charging fines. The Public Library Association and the Association of Library Services to Children have asked libraries to reconsider policies that may keep poor teens away for fear of fines. Many libraries also offer alternatives and amnesties in order to encourage patrons to return overdue books. "Food for Fines" programs, in which borrowers donate canned food in exchange for fine forgiveness, are common in libraries all over the world. Some libraries offer children and teens the option to "read down" their fines by reducing fines based on the amount of time spent reading or the number of books read. Other libraries may block access to library privileges until materials are returned. Librarians have had a longstanding debate over whether or not to charge late fines.
The American Library Association's (ALA) Policy 61 entitled, “Library Services to the Poor,” promotes the removal of all barriers to library and information services, particularly fees and overdue charges. Proponents for the elimination of fines argue for waiving fees if they are a barrier for continued use of the library. They argue that it is imperative that the library staff understand its patrons' financial situations, and that the barrier-to-use posed by fines is an underlying problem that needs to be addressed. Gehner (2010) proposed that libraries work with the community to determine the community's need and to build relationships. He also posited that overdue fines could be a limiting factor: since libraries face limited funding, fees and fines represent both a source of revenue and a barrier to use.
In 2019, the ALA published its "Resolution on Monetary Library Fines as a Form of Social Inequity", which described monetary fines as an economic barrier to access to library materials and services, as well as a barrier to public relations and more valuable use of library staff time. Considered contrary to the mission of the modern public library, the ALA called for libraries to eliminate such barriers. Later that year, due to the economic hardship of the COVID-19 pandemic, many libraries suspended fines for late materials. Realizing the ability of their systems to absorb the economic costs of eliminating fines, many library systems eventually made the decision to go completely fine-free, long after the pandemic restrictions were lifted.
Enforcement
Some libraries have stepped up enforcement and collection of late fees. People who do not return library property after an extended period of time may face arrest or a negative action on their credit reports in some jurisdictions. Punitive measures such as these are typically used to recover stolen library property, not to enforce late fees. In some institutions, patrons are responsible for paying the cost of replacing lost items.
Postage
A special use of the term "late fee" is the postal surcharge once required by post offices to expedite delivery of a letter posted later than the normal pick-up time. For example, in Britain in 1856, a letter could be included in the night's mail for an extra penny if posted by 6:45 p.m. at the local office, for tuppence by 7:15 p.m. at the Chief or District office, or for four pence by 7:30 p.m. at the Chief office. Such mail typically received a special postmark to note the late fee paid. Often a special Late Fee Box was provided.
Poverty
Late fees charged by banks, landlords, and utilities have been heavily criticized as a penalty against the poor, and attempts have been made in some places to outlaw them completely or place caps on them. The argument against them is that the poor will inevitably be forced to pay them as they cannot earn the money to pay their bills by the due date. These people will be forced to pay even higher fees for the same services, and will find making future timely payments to their creditors even more difficult.
On the other hand, late fees are sometimes levied by freelancers when payments to them are delayed. In this case, late payments can help protect non-staffers against income instability.
See also
Grace period
Turn-off notice
Regarding libraries:
Library card
Public library
References
Renting
Mathematical finance
Credit
Personal financial problems
Economics and time | Late fee | [
"Physics",
"Mathematics"
] | 1,285 | [
"Physical quantities",
"Time",
"Applied mathematics",
"Economics and time",
"Spacetime",
"Mathematical finance"
] |
2,237,139 | https://en.wikipedia.org/wiki/Basidiospore | A basidiospore is a reproductive spore produced by basidiomycete fungi, a grouping that includes mushrooms, shelf fungi, rusts, and smuts. Basidiospores typically each contain one haploid nucleus that is the product of meiosis, and they are produced by specialized fungal cells called basidia. Typically, four basidiospores develop on appendages from each basidium, of which two are of one strain and the other two of its opposite strain. In gills under a cap of one common species, there exist millions of basidia. Some gilled mushrooms in the order Agaricales have the ability to release billions of spores. The puffball fungus Calvatia gigantea has been calculated to produce about five trillion basidiospores. Most basidiospores are forcibly discharged, and are thus considered ballistospores. These spores serve as the main air dispersal units for the fungi. The spores are released during periods of high humidity and generally have a night-time or pre-dawn peak concentration in the atmosphere.
When basidiospores encounter a favorable substrate, they may germinate, typically by forming hyphae. These hyphae grow outward from the original spore, forming an expanding circle of mycelium. The circular shape of a fungal colony explains the formation of fairy rings, and also the circular lesions of skin-infecting fungi that cause ringworm. Some basidiospores germinate repetitively by forming small spores instead of hyphae.
General structure and shape
Basidiospores are generally characterized by an attachment peg (called a hilar appendage) on their surfaces. This is where the spore was attached to the basidium. The hilar appendage is quite prominent in some basidiospores, but less evident in others. An apical germ pore may also be present. Many basidiospores have an asymmetric shape due to their development on the basidium. Basidiospores are typically single-celled (without septa), and typically range from spherical to oval to oblong, to ellipsoid or cylindrical. The surface of the spore can be fairly smooth, or it can be ornamented. The color of the spore print is usually found in the spore wall, although in rare instances – like the yellow spores of Clavaria helicoides – the cytoplasm is responsible for the spore color.
References
Tree of Life: Basidiomycota
Basidiospores
Basidiomycota
Fungal morphology and anatomy
Mycology | Basidiospore | [
"Biology"
] | 544 | [
"Mycology"
] |
2,237,293 | https://en.wikipedia.org/wiki/Staudinger%20reaction | The Staudinger reaction is a chemical reaction of an organic azide with a phosphine or phosphite produces an iminophosphorane. The reaction was discovered by and named after Hermann Staudinger. The reaction follows this stoichiometry:
R3P + R'N3 → R3P=NR' + N2
Staudinger reduction
The Staudinger reduction is conducted in two steps. First, the iminophosphorane-forming reaction is carried out by treating the azide with the phosphine. The intermediate, e.g. triphenylphosphine phenylimide, is then subjected to hydrolysis to produce a phosphine oxide and an amine:
R3P=NR' + H2O → R3P=O + R'NH2
The overall conversion is a mild method of reducing an azide to an amine. Triphenylphosphine or tributylphosphine are most commonly used, yielding tributylphosphine oxide or triphenylphosphine oxide as a side product in addition to the desired amine. An example of a Staudinger reduction is the organic synthesis of the pinwheel compound 1,3,5-tris(aminomethyl)-2,4,6-triethylbenzene.
Reaction mechanism
The reaction mechanism centers around the formation of an iminophosphorane through nucleophilic addition of the aryl or alkyl phosphine at the terminal nitrogen atom of the organic azide and expulsion of diatomic nitrogen. The iminophosphorane is then hydrolyzed in the second step to the amine and a phosphine oxide byproduct.
Staudinger ligation
Of interest in chemical biology is the Staudinger ligation, which has been called one of the most important bioconjugation methods. Two versions of the Staudinger ligation have been developed. Both begin with the classic iminophosphorane reaction.
In classical Staudinger ligation, the organophosphorus compound becomes incorporated into the peptide. Typically, appended to the organophosphorus component are reporter groups such as fluorophores. In traceless Staudinger ligation, the organophosphorus group dissociates giving a phosphorus-free bioconjugate.
References
External links
Staudinger Reaction at organic-chemistry.org accessed 060906.
Julia-Staudinger Reaction
Organic redox reactions
Carbohydrate chemistry
Name reactions | Staudinger reaction | [
"Chemistry"
] | 536 | [
"Organic redox reactions",
"Organic reactions",
"Name reactions",
"Carbohydrate chemistry",
"nan",
"Chemical synthesis",
"Glycobiology"
] |
2,237,309 | https://en.wikipedia.org/wiki/Cohesion%20%28chemistry%29 | In chemistry and physics, cohesion (), also called cohesive attraction or cohesive force, is the action or property of like molecules sticking together, being mutually attractive. It is an intrinsic property of a substance that is caused by the shape and structure of its molecules, which makes the distribution of surrounding electrons irregular when molecules get close to one another, creating electrical attraction that can maintain a macroscopic structure such as a water drop. Cohesion allows for surface tension, creating a "solid-like" state upon which light-weight or low-density materials can be placed.
Water, for example, is strongly cohesive as each molecule may make four hydrogen bonds to other water molecules in a tetrahedral configuration. This results in a relatively strong Coulomb force between molecules. In simple terms, the polarity (a state in which a molecule is oppositely charged on its poles) of water molecules allows them to be attracted to each other. The polarity is due to the electronegativity of the atom of oxygen: oxygen is more electronegative than the atoms of hydrogen, so the electrons they share through the covalent bonds are more often close to oxygen rather than hydrogen. These are called polar covalent bonds, covalent bonds between atoms that thus become oppositely charged. In the case of a water molecule, the hydrogen atoms carry positive charges while the oxygen atom has a negative charge. This charge polarization within the molecule allows it to align with adjacent molecules through strong intermolecular hydrogen bonding, rendering the bulk liquid cohesive. Van der Waals gases such as methane, however, have weak cohesion due only to van der Waals forces that operate by induced polarity in non-polar molecules.
Cohesion, along with adhesion (attraction between unlike molecules), helps explain phenomena such as meniscus, surface tension and capillary action.
Mercury in a glass flask is a good example of the effects of the ratio between cohesive and adhesive forces. Because of its high cohesion and low adhesion to the glass, mercury does not spread out to cover the bottom of the flask, and if enough is placed in the flask to cover the bottom, it exhibits a strongly convex meniscus, whereas the meniscus of water is concave. Mercury will not wet the glass, unlike water and many other liquids, and if the glass is tipped, it will 'roll' around inside.
See also
Adhesion – the attraction of molecules or compounds for other molecules of a different kind
Specific heat capacity – the amount of heat needed to raise the temperature of one gram of a substance by one degree Celsius
Heat of vaporization – the amount of energy needed to change one gram of a liquid substance to a gas at constant temperature
Zwitterion – a molecule composed of individual functional groups which are ions, of which the most prominent examples are the amino acids
Chemical polarity – a neutral, or uncharged molecule or its chemical groups having an electric dipole moment, with a negatively charged end and a positively charged end
References
External links
The Bubble Wall (audio slideshow from the National High Magnetic Field Laboratory explaining cohesion, surface tension and hydrogen bonds)
"Adhesion and Cohesion of Water" – US Geological Survey
Molecular physics
Intermolecular forces
Physical quantities | Cohesion (chemistry) | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 670 | [
"Physical phenomena",
"Molecular physics",
"Physical quantities",
"Quantity",
"Materials science",
"Intermolecular forces",
" molecular",
"nan",
"Atomic",
"Physical properties",
" and optical physics"
] |
2,237,358 | https://en.wikipedia.org/wiki/Cypress%20Hills%20Interprovincial%20Park | Cypress Hills Interprovincial Park is a natural park in Canada straddling the Alberta / Saskatchewan boundary and jointly administered by the two provinces. Located south-east of Medicine Hat in the Cypress Hills, it became Canada's first interprovincial park in 1989.
The park consists of two protected areas, the West Block, that straddles the Alberta / Saskatchewan boundary between Alberta Highway 41, the townsite of Elkwater, Saskatchewan Highway 615, Saskatchewan Highway 271, and Fort Walsh, and the Centre Block, an additional area of in Saskatchewan, west of Saskatchewan Highway 21.
Geography
The Cypress Hills plateau rises up to above the surrounding prairie, to a maximum elevation of at "Head of the Mountain" at the west end in Alberta, making it Canada's highest point between the Canadian Rockies and the Labrador Peninsula. Eastward across the boundary is the highest point in Saskatchewan, at . The "West Block" of the Cypress Hills spans the provincial boundary. Battle Creek runs through the central part of the park. Although the hills seem relatively low, in a larger geographic context the plateau does rise gradually from many kilometres away so that the total elevation gain from Medicine Hat is approximately . The vegetation of the park is undergoing changes through woody plant encroachment, with a detected 1% increase of shrub cover annually.
History
1931 — Cypress Hills Provincial Park was established in Saskatchewan.
1951 — Cypress Hills Provincial Park was established in Alberta.
1989 — On August 25, the governments of Alberta and Saskatchewan signed an agreement committing themselves to cooperation on ecosystem management, education, and park promotion.
2000 — Fort Walsh National Historic Site (located on the Saskatchewan side of the West Block) joined the collective. Together, these three partner agencies make up the park. Both Alberta and Saskatchewan provincial governments signed the Cypress Hills Interprovincial Park agreement, establishing the first interprovincial park in Canada.
2001 — On August 18, Vance Petriew discovered a comet from Cypress Hills during the Saskatchewan Summer Star Party. The comet was later named 185P/Petriew.
2004 — On September 28, Saskatchewan Parks, Alberta Community Development, Parks Canada and the Royal Astronomical Society of Canada signed an agreement that declared the park a dark-sky preserve.
2011 — On August 25, the Park opened the Cypress Hills Observatory and Yurt classroom.
Activities
On both sides of the park, all year long, park interpreters present education programs to school and youth groups, adult and seniors groups, and a wide range of park visitors. During the summer months there is camping, hiking, and swimming; during the winter there is skiing, winter camping, snowmobiling, and other winter activities. In summertime, kayaks, canoes, bicycles, and stand-up paddleboards are available for rent; in the winter, kicksleds, snowshoes, skates, and cross-country skis can be rented.
Alberta
On the Alberta side of West Block, key park features include Head of the Mountain Viewpoint, the highest point between the Rocky Mountains and Labrador, the Elkwater townsite (a cottage community sitting at the same elevation as the Banff townsite), Horseshoe Canyon and Reesor Lake viewpoints (offering views on a clear day), over of hiking and mountain biking trails, and Hidden Valley Ski Resort. Three lakes sit on the Alberta side of the park — Elkwater Lake, Spruce Coulee Reservoir, and Reesor Lake.
Saskatchewan
Like in Alberta, there are campgrounds, hiking trails, and lakes. Some of the lakes include Harris, Adams, Coulee Lake, Loch Lomond, and Loch Leven. On the shores of Loch Leven in Centre Block is a marina, the community of Loch Leven, a restaurant, swimming pool, tourist info centre, and The Resort at Cypress Hills.
Winter amenities around the park include a winter picnic shelter, warm-up shacks, a tobogganing hill, a mini-luge slide, and Camp-Easy yurt rentals. There are also snowmobile and winter fat bike trails.
Cypress Hills Ski Area
Cypress Hills Ski Area is a cross-country ski area on the Saskatchewan side of Cypress Hills Interprovincial Park. During the winter, summer campgrounds are transformed into of Cross-country skiing trails and of snowshoeing trails. A further of summer hiking trails are designated for cross-country skiing, of which are groomed.
Nature
Approximately 700 species of plants and animals thrive in the park, including 14 species of orchids.
The park protects the majority of the Cypress Hills landscape, which consists of three separate elevated blocks of lush forest and fescue grassland surrounded by dry mixed-grass prairie. The West and Centre Blocks are protected as provincial parks, and are managed by Alberta Parks and Protected Areas and Saskatchewan Parks, respectively. The "East Block" of the Cypress Hills, situated near Eastend, Saskatchewan, is not a park or protected area. The Fort Walsh National Historic Site is also located adjacent to the West Block.
Mammals
There are five species of large hoofed mammals found in the park: wapiti, moose, mule deer, white-tailed deer, and pronghorn. Other mammals found in Cypress Hills Interprovincial Park include:
Eulipotyphla
Masked shrew (Sorex cinereus)
American pygmy shrew (Sorex hoyi)
Wandering shrew (Sorex vagrans)
Lagomorpha
Snowshoe hare (Lepus americanus)
White-tailed jackrabbit (Lepus townsendii)
Nuttall's cottontail (Sylvilagus nuttallii)
Chiroptera
Hoary bat (Aeorestes cinereus)
Big brown bat (Eptesicus fuscus)
Silver-haired bat (Lasionycteris noctivagans)
Eastern red bat (Lasiurus borealis)
Small-footed myotis (Myotis ciliolabrum)
Long-eared myotis (Myotis evotis)
Little brown myotis (Myotis lucifugus)
Carnivora
Coyote (Canis latrans)
North American river otter (Lontra canadensis)
Bobcat (Lynx rufus)
Striped skunk (Mephitis mephitis)
Stoat (Mustela erminea)
Black-footed ferret (Mustela nigripes)
Least weasel (Mustela nivalis)
Long-tailed weasel (Neogale frenata)
American mink (Neogale vison)
Raccoon (Procyon lotor)
Mountain lion (Puma concolor)
American badger (Taxidea taxus)
Northern pocket gopher (Thomomys talpoides)
Swift fox (Vulpes velox)
Rodentia
North American beaver (Castor canadensis)
Black-tailed prairie dog (Cynomys ludovicianus)
Ord's kangaroo rat (Dipodomys ordii)
North American porcupine (Erethizon dorsatum)
Thirteen-lined ground squirrel (Ictidomys tridecemlineatus)
Sagebrush vole (Lemmiscus curtatus)
Western meadow vole (Microtus drummondii)
Gapper's red-backed vole (Myodes gapperi)
Least chipmunk (Neotamias minimus)
Bushy-tailed woodrat (Neotoma cinerea)
Muskrat (Ondatra zibethicus)
Northern grasshopper mouse (Onychomys leucogaster)
White-footed mouse (Peromyscus leucopus)
Western deer mouse (Peromyscus sonoriensis)
American red squirrel (Tamiasciurus hudsonicus)
Richardson's ground squirrel (Urocitellus richardsonii)
Olive-backed pocket mouse (Perognathus fasciatus)
Western jumping mouse (Zapus princeps)
Fish species
Fish species include walleye, yellow perch, northern pike, brook, brown, westslope cutthroat and rainbow trout, burbot, common carp, white sucker, and shorthead redhorse.
See also
List of protected areas of Alberta
List of protected areas of Saskatchewan
List of highest points of Canadian provinces and territories
List of ski areas and resorts in Canada
References
External links
Official Park website
A Road Trip To Cypress Hills
Cypress Hills Interprovincial Park
Cypress County
Provincial parks of Alberta
Provincial parks of Saskatchewan
Dark-sky preserves in Canada
Maple Creek No. 111, Saskatchewan
1931 establishments in Saskatchewan
1951 establishments in Alberta | Cypress Hills Interprovincial Park | [
"Astronomy"
] | 1,748 | [
"Dark-sky preserves in Canada",
"Dark-sky preserves"
] |
2,237,679 | https://en.wikipedia.org/wiki/Methyl%20nitrate | Methyl nitrate is the methyl ester of nitric acid and has the chemical formula CH3NO3. It is a colourless explosive volatile liquid.
Synthesis
It can be produced by the condensation of nitric acid and methanol:
CH3OH + HNO3 → CH3NO3 + H2O
A newer method uses methyl iodide and silver nitrate:
CH3I + AgNO3 → CH3NO3 + AgI
Methyl nitrate can be produced on a laboratory or industrial scale either through the distillation of a mixture of methanol and nitric acid, or by the nitration of methanol by a mixture of sulfuric and nitric acids. The first procedure is not preferred due to the great explosion danger presented by the methyl nitrate vapour. The second procedure is essentially identical to that of making nitroglycerin. However, the process is usually run at a slightly higher temperature and the mixture is stirred mechanically on an industrial scale instead of with compressed air.
Electrolytic production methods have been reported involving electrolyzing sodium acetate and sodium nitrate in acetic acid.
Methyl nitrate is also the product of the oxidation of some organic compounds in the presence of nitrogen oxides and chlorine, namely chloroethane or di-tert-butyl ether, while also producing nitromethane. Oxidation of nitromethane using nitrogen dioxide in an inert atmosphere can also yield methyl nitrate.
Explosive properties
Methyl nitrate is a sensitive explosive. When ignited it burns extremely fiercely with a gray-blue flame. Methyl nitrate is a very strong explosive with a detonation velocity of 6,300 m/s, like nitroglycerin, ethylene glycol dinitrate, and other nitrate esters. The sensitivity of methyl nitrate to initiation by detonation is among the greatest known, with even a number one blasting cap, the lowest power available, producing a near full detonation of the explosive.
Despite the superior explosive properties of methyl nitrate, it has not received application as an explosive due mostly to its high volatility, which prevents it from being stored or handled safely.
Safety
As well as being an explosive, methyl nitrate is toxic and causes headaches when inhaled.
History
Methyl nitrate has not received much attention as an explosive, but as a mixture containing 25% methanol it was used as rocket fuel and volumetric explosive under the name Myrol in Nazi Germany during World War II. This mixture would evaporate at a constant rate and so its composition would not change over time. It presents a slight explosive danger (it is somewhat difficult to detonate) and does not detonate easily via shock.
According to A. Stettbacher, the substance was used as a combustible during the Reichstag fire in 1933. Gartz shows in a recent work that only methyl nitrate with its production and explosion potential can represent the famous and mysterious "shooting water" from the German Feuerwerkbuch ("fireworks book") of about 1420 (the oldest technical text in German language, handwritten in Dresden and later printed in Augsburg).
An extract of the text from the 1420 Feuerwerkbuch is as follows (written in Early New High German):
Translated:
Structure
The structure of methyl nitrate has been studied experimentally in the gas phase (combined gas-electron diffraction and microwave spectroscopy, GED/MW) and in the crystalline state (X-ray diffraction, XRD) (see Table 1).
In the solid state there are weak interactions between the O and N atoms of different molecules (see figure).
References
External links
Alkyl nitrates
Explosive chemicals
Liquid explosives
Methyl esters | Methyl nitrate | [
"Chemistry"
] | 761 | [
"Explosive chemicals"
] |
2,237,858 | https://en.wikipedia.org/wiki/Exponential%20object | In mathematics, specifically in category theory, an exponential object or map object is the categorical generalization of a function space in set theory. Categories with all finite products and exponential objects are called cartesian closed categories. Categories (such as subcategories of Top) without adjoined products may still have an exponential law.
Definition
Let C be a category, let Z and Y be objects of C, and let C have all binary products with Y. An object Z^Y together with a morphism eval : Z^Y × Y → Z is an exponential object if for any object X and morphism g : X × Y → Z there is a unique morphism λg : X → Z^Y (called the transpose of g) making the evident diagram commute, that is, eval ∘ (λg × id_Y) = g.
This assignment of a unique λg to each g establishes an isomorphism (bijection) of hom-sets, Hom(X × Y, Z) ≅ Hom(X, Z^Y).
If Z^Y exists for all objects Z in C, then the functor (−)^Y : C → C, defined on objects by Z ↦ Z^Y and on arrows by f ↦ f^Y, is a right adjoint to the product functor (−) × Y. For this reason, the morphisms λg and g are sometimes called exponential adjoints of one another.
Equational definition
Alternatively, the exponential object may be defined through equations:
Existence of λg is guaranteed by existence of the operation λ(−).
Commutativity of the diagrams above is guaranteed by the equality eval ∘ (λg × id_Y) = g.
Uniqueness of λg is guaranteed by the equality λ(eval ∘ (h × id_Y)) = h for every h : X → Z^Y.
Universal property
The exponential Z^Y is given by a universal morphism from the product functor (−) × Y to the object Z. This universal morphism consists of an object Z^Y and a morphism eval : Z^Y × Y → Z.
Examples
In the category of sets, an exponential object Z^Y is the set of all functions from Y to Z. The map eval : Z^Y × Y → Z is just the evaluation map, which sends the pair (f, y) to f(y). For any map g : X × Y → Z, the map λg : X → Z^Y is the curried form of g: λg(x)(y) = g(x, y).
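The set-theoretic description above can be phrased directly in code. A minimal Python sketch follows; the helper names curry and ev are ours, not standard library functions:

```python
def curry(g):
    """Transpose λg of a map g : X × Y → Z, i.e. a map X → Z^Y."""
    return lambda x: (lambda y: g(x, y))

def ev(f, y):
    """Evaluation map eval : Z^Y × Y → Z, sending (f, y) to f(y)."""
    return f(y)

g = lambda x, y: x + 2 * y            # an example map g : X × Y → Z
assert ev(curry(g)(3), 5) == g(3, 5)  # eval ∘ (λg × id) agrees with g here
print(curry(g)(3)(5), g(3, 5))        # 13 13
```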
A Heyting algebra H is just a bounded lattice that has all exponential objects. Heyting implication, Y ⇒ Z, is an alternative notation for Z^Y. The above adjunction results translate to implication (⇒) being right adjoint to meet (∧). This adjunction can be written as (− ∧ Y) ⊣ (Y ⇒ −), or more fully as: X ∧ Y ≤ Z if and only if X ≤ Y ⇒ Z.
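For the two-element Boolean algebra (the simplest Heyting algebra) the adjunction can be verified exhaustively. A minimal Python sketch, where the order is the usual one on truth values and y ⇒ z is taken to be (not y) or z:

```python
booleans = (False, True)
leq = lambda a, b: (not a) or b     # a ≤ b in the Boolean lattice
imp = lambda y, z: (not y) or z     # Heyting implication y ⇒ z

ok = all(
    leq(x and y, z) == leq(x, imp(y, z))   # x ∧ y ≤ z  iff  x ≤ (y ⇒ z)
    for x in booleans for y in booleans for z in booleans
)
print("x ∧ y ≤ z iff x ≤ (y ⇒ z):", ok)   # True
```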
In the category of topological spaces, the exponential object Z^Y exists provided that Y is a locally compact Hausdorff space. In that case, the space Z^Y is the set of all continuous functions from Y to Z together with the compact-open topology. The evaluation map is the same as in the category of sets; it is continuous with the above topology. If Y is not locally compact Hausdorff, the exponential object may not exist (the space Z^Y still exists, but it may fail to be an exponential object since the evaluation function need not be continuous). For this reason the category of topological spaces fails to be cartesian closed.
However, the category of locally compact topological spaces is not cartesian closed either, since Z^Y need not be locally compact for locally compact spaces Z and Y. A cartesian closed category of spaces is, for example, given by the full subcategory spanned by the compactly generated Hausdorff spaces.
In functional programming languages, the morphism eval is often called apply, and the transpose λg is often written curry(g). The morphism eval should not be confused with the eval function in some programming languages, which evaluates quoted expressions.
See also
Closed monoidal category
Notes
References
External links
Interactive Web page which generates examples of exponential objects and other categorical constructions. Written by Jocelyn Paine.
Objects (category theory) | Exponential object | [
"Mathematics"
] | 651 | [
"Objects (category theory)",
"Mathematical structures",
"Category theory"
] |
13,518,556 | https://en.wikipedia.org/wiki/Mabinlin | Mabinlins are sweet-tasting proteins extracted from the seed of mabinlang (Capparis masaikai Levl.), a plant growing in Yunnan province of China. There are four homologues. Mabinlin-2 was first isolated in 1983 and characterised in 1993, and is the most extensively studied of the four. The other variants of mabinlin-1, -3 and -4 were discovered and characterised in 1994.
Protein structures
The four mabinlins are very similar in their amino acid sequences (see below).
Chain A
M-1:
M-2:
M-3:
M-4:
Chain B
M-1:
M-2:
M-3:
M-4:
Amino acid sequences of the mabinlin homologues are adapted from the Swiss-Prot protein database.
The molecular weights of Mabinlin-1, Mabinlin-3 and Mabinlin-4 are 12.3 kDa, 12.3 kDa and 11.9 kDa, respectively.
With a molecular weight of 10.4 kDa, mabinlin-2 is lighter than mabinlin-1. It is a heterodimer consisting of two different chains A and B produced by post-translational cleavage. The A chain is composed of 33 amino acid residues and the B chain is composed of 72 amino acid residues. The B chain contains two intramolecular disulfide bonds and is connected to the A chain through two intermolecular disulfide bridges.
Mabinlin-2 is the sweet-tasting protein with the highest known thermostability, which is due to the presence of the four disulfide bridges. It has been suggested also that the difference in the heat stability of the different mabinlin homologues is due to the presence of an arginine residue (heat-stable homologue) or a glutamine (heat-unstable homologue) at position 47 in the B-chain.
The sequences of Mabilins cluster with Napins ().
Sweetness properties
The sweetness of the mabinlins has been estimated at about 100–400 times that of sucrose on a molar basis and about 10 times on a weight basis, which makes them less sweet than thaumatin (about 3,000 times) but with a similar sweetness profile.
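The molar-basis and weight-basis figures are consistent with each other. A minimal Python sketch, assuming sweetness potency scales with the number of molecules and using the 10.4 kDa mass of mabinlin-2 given below:

```python
mw_sucrose = 342.3        # g/mol
mw_mabinlin2 = 10_400.0   # g/mol (10.4 kDa)

for molar_factor in (100, 400):
    weight_factor = molar_factor * mw_sucrose / mw_mabinlin2
    print(f"{molar_factor}x sucrose (molar) -> about {weight_factor:.0f}x by weight")
# 100x molar -> ~3x by weight; 400x molar -> ~13x by weight,
# bracketing the "roughly 10 times by weight" figure.
```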
The sweetness of mabinlin-2 is unchanged after 48 hours incubation at 80 °C.
Mabinlin-3 and -4 sweetness stayed unchanged after 1 hour at 80 °C, while mabinlin-1 loses sweetness after 1 hour at the same condition.
As a sweetener
Mabinlins, as proteins, are readily soluble in water and found to be highly sweet; however, mabinlin-2 with its high heat stability has the best chance to be used as a sweetener.
During the past decade, attempts have been made to produce mabinlin-2 industrially. The sweet-tasting protein has been successfully synthesised by a stepwise solid-phase method in 1998, however the synthetic protein had an astringent-sweet taste.
Mabinlin-2 has been expressed in transgenic potato tubers, but no explicit results have been reported yet. However, patents to protect production of recombinant mabinlin by cloning and DNA sequencing have been issued.
See also
Brazzein
Monellin
Pentadin
Thaumatin
References
External links
Sugar substitutes
Proteins | Mabinlin | [
"Chemistry"
] | 703 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
13,519,224 | https://en.wikipedia.org/wiki/Amlexanox | Amlexanox (trade name Aphthasol) is an anti-inflammatory antiallergic immunomodulator used to treat recurrent aphthous ulcers (canker sores), and (in Japan) several inflammatory conditions. This drug has been discontinued in the U.S.
Medical uses
Amlexanox is the active ingredient in a common topical treatment for recurrent aphthous ulcers of the mouth (canker sores), reducing both healing time and pain. Amlexanox 5% paste is well tolerated, and is typically applied four times per day directly on the ulcers. A 2011 review found it to be the most effective treatment of the eight treatments investigated for recurrent canker sores. It is also used to treat ulcers associated with Behçet disease.
In Japan, it is used to treat bronchial asthma, allergic rhinitis and conjunctivitis.
Contraindications
The drug is contraindicated in those with known allergies to it.
Adverse effects
Amlexanox may cause a slightly painful stinging or burning sensation, nausea or diarrhea.
Mechanism of action
Its mechanism of action is not well-determined, but it might inhibit inflammation by inhibiting the release of histamine and leukotrienes. It has been shown to selectively inhibit TBK1 and IKK-ε, producing reversible weight loss and improved insulin sensitivity, reduced inflammation and attenuated hepatic steatosis without affecting food intake in obese mice. It produced a statistically significant reduction in glycated hemoglobin and fructosamine in obese patients with type 2 diabetes and nonalcoholic fatty liver disease.
Chemistry
The chemical itself is an odorless, white to yellowish-white powder.
The 5% preparation for patient use is an adherent beige paste, and it is also available in some countries as a tablet that adheres to the ulcer in the mouth.
Pharmacokinetics
Amlexanox applied to an aphthous ulcer is largely absorbed through the gastrointestinal tract; an insignificant amount enters the bloodstream through the ulcer itself. After a single 100 mg dose, mean maximum serum concentration occurs 2.4 +/- 0.9 hours after application, with a half-life of elimination (through urine) of 3.5 +/- 1.1 hours. With multiple daily applications (four doses per day), steady state serum levels occur after one week, with no accumulation occurring after four weeks.
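The quoted half-life can be used to sketch the post-peak decline in serum concentration, assuming simple first-order (single-exponential) elimination; in the Python sketch below the peak concentration is a placeholder and the absorption phase is not modelled.

```python
import math

T_HALF = 3.5   # hours, elimination half-life from the article
T_MAX = 2.4    # hours, time of peak serum concentration
C_MAX = 1.0    # arbitrary units (placeholder)

def serum_level(t_hours):
    """Approximate serum level t_hours after dosing, valid for t_hours > T_MAX."""
    k = math.log(2) / T_HALF                  # first-order elimination rate constant
    return C_MAX * math.exp(-k * (t_hours - T_MAX))

for t in (4, 8, 12, 24):
    print(f"t = {t:2d} h : {serum_level(t):.3f} x C_max")
```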
History
The patent for its use as a treatment for aphthous ulcers was issued in November 1994 to inventors Kakubhai R. Vora, Atul Khandwala and Charles G. Smith, and assigned to Chemex Pharmaceuticals, Inc.
Society and culture
Economics
A 2011 review found a one-week supply of amlexanox 5% paste to cost $30.
Research
A review found that robust studies investigating its effectiveness alongside other canker sore treatments were still needed.
Because it is an inhibitor of the protein kinases TBK1 and IKK-ε, which are implicated in the etiology of type II diabetes and obesity, amlexanox may be a candidate for human clinical trials testing in relation to these diseases.
Synthesis
References
Anti-inflammatory agents
Leukotriene antagonists
Carboxylic acids
Isopropyl compounds | Amlexanox | [
"Chemistry"
] | 718 | [
"Carboxylic acids",
"Functional groups"
] |
13,519,248 | https://en.wikipedia.org/wiki/Domiphen%20bromide | Domiphen bromide is a chemical antiseptic and a quaternary ammonium compound.
It is also used, together with ingredients such as maltitol liquid, water, glycerol, citric acid, sodium citrate, sodium chloride, sodium saccharin, flavouring, xanthan gum and polysorbate 80, in ibuprofen suspensions found in some children's painkiller medicines.
References
Quaternary ammonium compounds
Bromides
Phenol ethers
Cationic surfactants | Domiphen bromide | [
"Chemistry"
] | 117 | [
"Bromides",
"Salts"
] |
13,519,253 | https://en.wikipedia.org/wiki/Polynoxylin | Polynoxylin (trade name Anaflex in Egypt) is an antiseptic for local treatment of the skin and the mouth. It is a formaldehyde releasing antimicrobial polymer.
References
Antimicrobials
Dental materials
Ureas | Polynoxylin | [
"Physics",
"Chemistry",
"Biology"
] | 54 | [
"Dental materials",
"Antimicrobials",
"Organic compounds",
"Materials",
"Biocides",
"Matter",
"Ureas"
] |
13,519,266 | https://en.wikipedia.org/wiki/Olaflur | {{Drugbox
| Verifiedfields = changed
| Watchedfields = changed
| verifiedrevid = 447933354
| IUPAC_name = N,N,N'-tris(2-hydroxyethyl)-N-octadecylpropane-1,3-diamine dihydrofluoride
| image = Olaflur2.png
| width = 300
| tradename =
| Drugs.com =
| pregnancy_AU =
| pregnancy_US =
| pregnancy_category = No adequate studies
| legal_AU =
| legal_CA =
| legal_UK =
| legal_US =
| legal_status = OTC
| routes_of_administration = Topical (toothpaste, mouthwash, gel)
| bioavailability =
| protein_bound =
| metabolism =
| elimination_half-life =
| excretion =
| CAS_number_Ref =
| CAS_number = 6818-37-7
| ATC_prefix = A01
| ATC_suffix = AA03
| PubChem = 23257
| DrugBank_Ref =
| DrugBank =
| UNII_Ref =
| UNII = 8NY9L8837D
| ChemSpiderID_Ref =
| ChemSpiderID = 21751
| smiles = F.F.OCCN(CCCN(CCCCCCCCCCCCCCCCCC)CCO)CCO
| StdInChI_Ref =
| StdInChI = 1S/C27H58N2O3.2FH/c1-2-3-4-5-6-7-8-9-10-11-12-13-14-15-16-17-19-28(22-25-30)20-18-21-29(23-26-31)24-27-32;;/h30-32H,2-27H2,1H3;2*1H
| StdInChIKey_Ref =
| StdInChIKey = ZVVSSOQAYNYNPP-UHFFFAOYSA-N
| C=27 | H=60 | F=2 | N=2 | O=3
| synonyms = {3-[Octadecyl(2-hydroxyethyl)aminio]propyl}bis(2-hydroxyethyl)amine dihydrofluoride
}}Olaflur (INN, or amine fluoride 297''') is a fluoride-containing substance that is an ingredient of toothpastes and solutions for the prevention of dental caries. It has been in use since 1966. Especially in combination with dectaflur, it is also used in the form of gels for the treatment of early stages of caries, sensitive teeth, and by dentists for the refluoridation of damaged tooth enamel.
Overdosage
Overdosage leads to irritation of the oral mucosa. In especially sensitive persons, even standard doses of olaflur can cause irritation. Like other fluoride salts, olaflur is toxic when given in high doses over an extended period of time. Especially in children, before the development of the permanent teeth, overdosage can lead to dental fluorosis, a discolouring and weakening of the enamel. In acute cases of overdosage, for example when an olaflur containing preparation is swallowed, calcium in any oral form serves as an antidote. Often milk is used because it is usually at hand.
Interactions
Because calcium fluoride is practically insoluble in water, calcium-containing drugs and food inhibit the action of olaflur.
Chemistry and mechanism of action
Olaflur is a salt consisting of an alkyl ammonium cation and fluoride as the counterion. With a long lipophilic hydrocarbon chain, the cation has surfactant properties. It forms a film layer on the surface of teeth, which facilitates incorporation of fluoride into the enamel. The top layers of the enamel's primary mineral, hydroxylapatite, are converted into the more robust fluorapatite. The fluoridation reaches only a depth of a few nanometres, which has raised doubts whether the mechanism really relies on the formation of fluorapatite.
Synthesis
The synthesis of olaflur starts from cattle tallow. The fatty acids it contains, mainly stearic acid (C17H35COOH), are obtained by hydrolysis, and then converted to the corresponding amides, which in turn are reduced catalytically to the primary amines (largely octadecylamine). Addition of acrylonitrile, followed by another reduction, yields N-alkyl-1,3-propanediamines. The two nitrogen atoms react with ethylene oxide to form tertiary amines. Finally, hydrofluoric acid is added to give the end product. Because olaflur is produced from a mixture of fatty acids, some molecules have side chains that are longer or shorter than 18 carbon atoms. Other byproducts of the reaction include hydroxyethyl ethers resulting from addition of ethylene oxide to the free hydroxyl groups. The presence of these side products is clinically irrelevant.
See also
Fluoride therapy
Remineralisation of teeth
References
Dental materials
Fluorides
Amines
Cationic surfactants | Olaflur | [
"Physics",
"Chemistry"
] | 1,131 | [
"Dental materials",
"Functional groups",
"Salts",
"Materials",
"Amines",
"Bases (chemistry)",
"Fluorides",
"Matter"
] |
13,519,340 | https://en.wikipedia.org/wiki/Magnesium%20pidolate | Magnesium pidolate, the magnesium salt of pidolic acid (also known as pyroglutamic acid), is a mineral supplement, which contains 8.6% magnesium w/w.
External links
Magnesium compounds
Salts of carboxylic acids | Magnesium pidolate | [
"Chemistry"
] | 55 | [
"Salts of carboxylic acids",
"Salts"
] |
13,519,351 | https://en.wikipedia.org/wiki/Magnesium%20levulinate | Magnesium levulinate, the magnesium salt of levulinic acid, is a mineral supplement.
External links
Magnesium compounds
Salts of carboxylic acids | Magnesium levulinate | [
"Chemistry"
] | 32 | [
"Salts of carboxylic acids",
"Salts"
] |
13,519,369 | https://en.wikipedia.org/wiki/Magnesium%20aspartate | Magnesium aspartate is a magnesium salt of aspartic acid. It is used as a mineral supplement, and as an ingredient in manufacturing of cosmetics and household products.
As magnesium is an essential micronutrient, the use of magnesium aspartate as a supplement is intended to increase magnesium levels in the body.
Bioavailability
Absorption of magnesium from different preparations of magnesium supplements varies, with some studies indicating that magnesium in the aspartate (and several other) forms has more complete absorption than magnesium oxide and magnesium citrate forms.
In its evaluation in 2005, a scientific panel of the European Food Safety Authority concluded that the bioavailability of magnesium L-aspartate was similar to that from other organic magnesium salts and the more soluble inorganic magnesium salts. Overall, it was concluded that organic salts of magnesium have the greatest water solubility and demonstrate a greater oral absorption and bioavailability compared to less soluble magnesium preparations such as magnesium oxide, magnesium hydroxide, magnesium carbonate and magnesium sulfate.
Chemical structure and properties
Magnesium aspartate is a compound formed by the combination of the divalent magnesium cation (Mg2+) and the dicarboxylic amino acid aspartate (C4H6NO4-). The chemical formula for this compound is Mg(C4H6NO4)2.
The structure of magnesium aspartate consists of a central magnesium ion that is chelated, or bound, by two aspartate anions. The aspartate moiety contains a carboxyl group (-COOH), an amino group (-NH2), and a second carboxyl group, forming a dicarboxylic amino acid structure.
This chelated structure is responsible for the enhanced water solubility of magnesium aspartate compared to other magnesium salts, such as magnesium oxide or magnesium citrate.
Supplemental use
Magnesium deficiency is unlikely to occur from low dietary intake because magnesium is abundant in the food supply and the kidneys restrict its excretion via the urine. Long-term deficiency of magnesium may result from chronic alcoholism or some prescription drugs. Signs of deficiency that may require magnesium supplementation include loss of appetite, nausea, vomiting, fatigue, and weakness.
Dosage
Adequate Intake (AI)
Magnesium supplements and other magnesium containing products, such as antacids, can bind with prescription medicines, reducing their effectiveness.
Safety
Adverse effects from magnesium occurring naturally in food have not been described. However, excessive magnesium supplementation causes diarrhea — a side effect used by prescription as a laxative. Individuals with kidney disease have higher risk for adverse effects with magnesium supplementation. Excessive magnesium supplementation may cause a fall in blood pressure.
See also
Potassium aspartate
References
External links
Magnesium in the diet - MedlinePlus, US National Library of Medicine, January 2023
Magnesium compounds
Metal-amino acid complexes
Salts of carboxylic acids | Magnesium aspartate | [
"Chemistry"
] | 591 | [
"Salts of carboxylic acids",
"Coordination chemistry",
"Metal-amino acid complexes",
"Salts"
] |
13,520,509 | https://en.wikipedia.org/wiki/Baire%20measure | In mathematics, a Baire measure is a measure on the σ-algebra of Baire sets of a topological space whose value on every compact Baire set is finite. In compact metric spaces the Borel sets and the Baire sets are the same, so Baire measures are the same as Borel measures that are finite on compact sets. In general Baire sets and Borel sets need not be the same. In spaces with non-Baire Borel sets, Baire measures are used because they connect to the properties of continuous functions more directly.
Variations
There are several inequivalent definitions of Baire sets, so correspondingly there are several inequivalent concepts of Baire measure on a topological space. These all coincide on spaces that are locally compact σ-compact Hausdorff spaces.
Relation to Borel measure
In practice Baire measures can be replaced by regular Borel measures. The relation between Baire measures and regular Borel measures is as follows:
The restriction of a finite Borel measure to the Baire sets is a Baire measure.
A finite Baire measure on a compact space is always regular.
A finite Baire measure on a compact space is the restriction of a unique regular Borel measure.
On compact (or σ-compact) metric spaces, Borel sets are the same as Baire sets and Borel measures are the same as Baire measures.
Examples
Counting measure on the unit interval is a measure on the Baire sets that is not regular (or σ-finite).
The (left or right) Haar measure on a locally compact group is a Baire measure invariant under the left (right) action of the group on itself. In particular, if the group is an abelian group, the left and right Haar measures coincide and we say the Haar measure is translation invariant. See also Pontryagin duality.
References
Leonard Gillman and Meyer Jerison, Rings of Continuous Functions, Springer Verlag #43, 1960
Measures (measure theory) | Baire measure | [
"Physics",
"Mathematics"
] | 408 | [
"Measures (measure theory)",
"Quantity",
"Physical quantities",
"Size"
] |
13,520,719 | https://en.wikipedia.org/wiki/Nu%20Piscium | Nu Piscium (ν Piscium) is an orange-hued binary star system in the zodiac constellation of Pisces. Prior to the formation of the modern constellation boundaries in 1930, it was designated 51 Ceti in the Cetus constellation. Nu Piscium is visible to the naked eye, having a combined apparent visual magnitude of 4.44. Based upon an annual parallax shift of 8.98 mas as seen from Earth, it is located about 365 light years from the Sun.
The primary, component A, is an evolved, K-type giant star with a stellar classification of K3 IIIb. It is a weak barium star, indicating that the atmosphere was previously enriched by accretion of s-process elements from what is now a white dwarf companion. The giant has 1.66 times the mass of the Sun and has expanded to about 35 times the Sun's radius. It is about 3.4 billion years old and is radiating 380 times the Sun's luminosity from its photosphere at an effective temperature of 4,154 K.
Naming
In Chinese, (), meaning Outer Fence, refers to an asterism consisting of ν Piscium, δ Piscium, ε Piscium, ζ Piscium, μ Piscium, ξ Piscium and α Piscium. Consequently, the Chinese name for ν Piscium itself is (, .)
References
K-type giants
Barium stars
Binary stars
Pisces (constellation)
Piscium, Nu
BD+04 0293
Piscium, 106
007884
010380
0489 | Nu Piscium | [
"Astronomy"
] | 339 | [
"Pisces (constellation)",
"Constellations"
] |
13,520,743 | https://en.wikipedia.org/wiki/Functional%20cloning | Functional cloning is a molecular cloning technique that relies on prior knowledge of the encoded protein’s sequence or function for gene identification. In this assay, a genomic or cDNA library is screened to identify the genetic sequence of a protein of interest. Expression cDNA libraries may be screened with antibodies specific for the protein of interest or may rely on selection via the protein function. Historically, the amino acid sequence of a protein was used to prepare degenerate oligonucleotides which were then probed against the library to identify the gene encoding the protein of interest. Once candidate clones carrying the gene of interest are identified, they are sequenced and their identity is confirmed. This method of cloning allows researchers to screen entire genomes without prior knowledge of the location of the gene or the genetic sequence.
This technique can be used to identify genes that encode similar proteins from one organism to another. Similarly, this technique can be paired with metagenomic libraries to identify novel genes and proteins that perform similar functions, such as the identification of novel antibiotic resistance genes by screening for beta-lactamase activity or selecting for growth in the presence of penicillin.
Experimental workflow
The workflow of a functional cloning experiment varies depending on the source of genetic material, the extent of prior knowledge of the protein or gene of interest and the ability to screen for the protein function. In general, a functional cloning experiment consists of four steps: 1) sample collection, 2) library preparation, 3) screening or selection and 4) sequencing.
Sample collection
Genetic material is collected from a particular cell type, organism or environmental sample relevant to the biological question. In functional cloning, mRNA is commonly isolated and cDNA is prepared from the isolated mRNA (RNA extraction). In certain circumstances genomic DNA may be isolated, particularly when environmental samples are used as the source of genetic material.
Library preparation
If the starting material is genomic DNA, the DNA is sheared to produce fragments of appropriate length for the vector of choice. The DNA fragments or cDNA are then treated with restriction endonucleases and ligated to a plasmid or chromosomal vectors. In the case of assays that screen for the protein or for its function, an expression vector is used to ensure that the gene product is expressed. The vector choice will depend on the origin of the DNA or cDNA to ensure proper expression and to ensure that the encoded gene will fall within the limits of the vector's insert size.
The choice of host is important to ensure that the codon usage will be similar to the donor organism. The host will also need to guarantee that the proper post-translational modifications and protein folding will occur to enable proper functioning of the expressed proteins.
Screening or selection
The method of screening the prepared genomic or cDNA libraries for the gene of interest is highly variable depending on the experimental design and biological question. One method of screening is to probe colonies via Southern blotting with degenerate oligonucleotides prepared from the amino acid sequence of the query protein. In expression libraries, colonies carrying the gene of interest can be identified by screening with an antibody specific for the query protein via Western blotting. In other circumstances, a specific assay can be used to screen or select for the protein's activity. For example, genes conferring antibiotic resistance can be selected by growing the colonies of the library on media containing a specified antibiotic. Another example is screening for enzymatic activity by incubating the clones with a substrate that is converted into a colored product that can easily be visualized.
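To make the degenerate-oligonucleotide idea concrete, the sketch below (not from the article) back-translates a short, hypothetical peptide into every DNA sequence that could encode it, using a small subset of the standard genetic code; real probe design (length, GC content, labelling) is omitted.

```python
from itertools import product

# Reverse genetic-code table for a few amino acids (standard code; subset only).
CODONS = {
    "M": ["ATG"],          "W": ["TGG"],
    "K": ["AAA", "AAG"],   "D": ["GAT", "GAC"],
    "E": ["GAA", "GAG"],   "F": ["TTT", "TTC"],
    "H": ["CAT", "CAC"],   "N": ["AAT", "AAC"],
    "Y": ["TAT", "TAC"],   "C": ["TGT", "TGC"],
    "Q": ["CAA", "CAG"],
}

def degenerate_probes(peptide: str) -> list[str]:
    """Enumerate every DNA sequence that could encode the peptide (subset table only)."""
    choices = [CODONS[aa] for aa in peptide]
    return ["".join(codons) for codons in product(*choices)]

# Hypothetical peptide chosen from low-degeneracy residues to keep the probe pool small.
probes = degenerate_probes("MWKDEH")
print(f"{len(probes)} candidate oligos, e.g. {probes[0]}")
```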
Sequencing
The final step of functional cloning is to sequence the DNA or cDNA from the clones that were successfully identified in the screen or selection step. The sequence can then be annotated and used for downstream applications, such as protein expression and purification for industrial applications.
Advantages
The advantages of functional cloning include the ability to screen for novel genes with desired applications in organisms that cannot be cultured, particularly from bacterial or viral specimens. Additionally, genes encoding proteins with related functions can be identified when there is low sequence similarity due to the ability to screen for the protein function alone. Functional cloning allows for gene identification without prior knowledge of the organism's genome sequence or position of the gene within the genome.
Limitations
As with other cloning techniques, vector and host choice affect the success of gene identification via functional cloning due to cloning bias. The vector must have an insert size that will accommodate the entire DNA sequence of the expressed protein. Additionally, in expression vectors the promoters and terminators must function within the chosen host organism. The host choice may affect transcription and translation due to differing codon usage, transcriptional and translational machinery or post-translational modifications within the host.
Other limitations include the labour-intensive library preparation and potential screens which can be both expensive and time-consuming.
Alternative approaches
Positional cloning
Positional cloning is another molecular cloning technique for identification of a gene of interest. This method uses exact chromosomal location instead of function to guide gene identification. Because of this, this method focuses on all the genetic material at a chromosomal locus and makes no assumptions about function. In model organisms such as mice or yeast, this method is used more frequently as the information about the position of a gene of interest can be obtained from the sequenced genome. However, this method becomes much more cumbersome when sequence information is not available. In this case, linkage analysis can also be used. Functional cloning on the other hand is more readily used in organisms such as bacterial pathogens that are viable but nonculturable and where sequence data is not available but gene homology or protein function is still of interest.
A way to differentiate between functional and positional cloning is to visualize genes as words. Functional cloning is like using a thesaurus to look up words and selecting for new words that have the same meanings (or functions). Positional cloning is more like picking a specific page of a dictionary and then browsing only that page for any words of interest.
Computationally determine homology
As sequencing technology has become progressively cheaper, it is now often more feasible to sequence an unknown genome and determine homology computationally instead of screening a library. This approach makes it possible to search for multiple genes of interest at the same time, reduces experimental time and avoids labour-intensive cloning procedures. However, this route carries its own biases and hurdles. Sequence data can only be screened on the basis of homology, whereas a function-based approach allows the discovery of novel enzymes whose functions would not have been predicted from DNA sequence alone. Therefore, while sequencing is less labour-intensive experimentally, it can miss genes of interest because genes of related function may share little sequence homology.
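A minimal sketch of the computational route (not from the article): candidate proteins from a newly sequenced genome can be ranked against a query by a crude k-mer overlap score. Real pipelines use dedicated tools such as BLAST; the stdlib-only version below, with made-up sequences, is only meant to illustrate why purely sequence-based screening can miss divergent but functionally similar genes.

```python
def kmers(seq: str, k: int = 3) -> set[str]:
    """Set of overlapping k-mers in a protein sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def kmer_similarity(query: str, candidate: str, k: int = 3) -> float:
    """Jaccard similarity of k-mer sets: a crude stand-in for alignment-based homology."""
    a, b = kmers(query, k), kmers(candidate, k)
    return len(a & b) / len(a | b) if a | b else 0.0

query = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"          # hypothetical query protein
candidates = {
    "orf_1": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQA",    # near-identical homolog
    "orf_2": "MSTNPKPQRKTKRNTNRRPQDVKFPGG",            # unrelated sequence
}
for name, seq in sorted(candidates.items(),
                        key=lambda kv: kmer_similarity(query, kv[1]),
                        reverse=True):
    print(name, round(kmer_similarity(query, seq), 3))
```

A candidate scoring poorly here would be discarded even if it performed the same function, which is exactly the limitation noted above.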
Gibson assembly
Gibson assembly is a quick cloning method that uses three primary enzymes: a 5' exonuclease, a polymerase and a ligase. The exonuclease digests the 5' ends of DNA fragments, leaving 3' overhangs. If there is significant homology (20-40 bp) at each end of the DNA insert, the overhangs can anneal with a complementary backbone. The polymerase then fills in the gaps and the ligase seals the remaining nicks. This method greatly increases the speed and success rate of cloning into a vector backbone. However, it requires the DNA fragment to share significant homology with the plasmid, so the sequence being cloned must be known beforehand; this is not a requirement with functional cloning.
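The homology requirement can be checked in silico before ordering primers. The sketch below (not from the article, all sequences hypothetical) measures the overlap shared by the end of a linearized vector and the start of an insert and tests it against the 20-40 bp window mentioned above.

```python
def shared_overlap(vector_end: str, insert_start: str, max_check: int = 60) -> int:
    """Length of the longest suffix of vector_end that equals a prefix of insert_start."""
    limit = min(len(vector_end), len(insert_start), max_check)
    for n in range(limit, 0, -1):
        if vector_end[-n:] == insert_start[:n]:
            return n
    return 0

# Hypothetical sequences: a 25 bp junction shared by the vector end and the insert start.
junction = "ACGTTGCAGGTCCATAGGCTAACGT"                 # 25 bp
vector_end = "TTTTGGGGCCCC" + junction                  # 3' end of the linearized backbone
insert_start = junction + "ATGAAACCCGGGTTT"             # 5' end of the insert

n = shared_overlap(vector_end, insert_start)
print(f"overlap: {n} bp ->",
      "suitable for Gibson (20-40 bp)" if 20 <= n <= 40 else "re-design primers")
```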
TOPO cloning
TOPO cloning is a cloning method that takes advantage of Taq polymerase, which leaves a single adenosine overhang on the 3' end of its PCR products. Vector backbones carrying a complementary single thymine overhang can therefore be used to capture the PCR product directly. In this case the sequence of the fragment being cloned must be known in order to design PCR primers for it, and the number of TOPO-compatible vectors is relatively small. However, the method has the advantage that reactions take only about 5 minutes.
Gateway recombination cloning
Gateway recombination cloning is a cloning method in which a DNA fragment is moved from one plasmid backbone to another via a single site-specific recombination event. For this method to work, the DNA fragment of interest must be flanked by recombination sites. While this method is not strictly an alternative to functional cloning, it allows DNA fragments to be moved from one plasmid to another far more quickly than creating a whole new genomic library. It may be used in conjunction with functional cloning to place a library under a different promoter or on a backbone with a different selection marker, which is useful when functional cloning is to be attempted in a wide range of bacterial hosts to mitigate codon bias.
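Conceptually, the transfer can be pictured as swapping whatever lies between the flanking sites into the destination backbone. The sketch below (not from the article) models plasmids as ordered feature lists with abstract site labels; it deliberately ignores the real attB/attP/attL/attR site chemistry, and all feature names are made up.

```python
def gateway_transfer(donor: list[str], destination: list[str],
                     left: str = "att_L", right: str = "att_R") -> list[str]:
    """Return a new plasmid with the donor's flanked segment swapped into the destination."""
    d_l, d_r = donor.index(left), donor.index(right)
    t_l, t_r = destination.index(left), destination.index(right)
    insert = donor[d_l + 1:d_r]
    return destination[:t_l + 1] + insert + destination[t_r:]

donor = ["ori", "kanR", "att_L", "library_gene", "att_R"]
destination = ["ori", "ampR", "strong_promoter", "att_L", "stuffer", "att_R", "terminator"]
print(gateway_transfer(donor, destination))
# -> ['ori', 'ampR', 'strong_promoter', 'att_L', 'library_gene', 'att_R', 'terminator']
```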
Applications
Determining homology in the environment
Metagenomics is one of the largest fields that commonly uses functional cloning. Metagenomics studies all the genetic material from a specific environmental sample, such as the gut microbiome or lake water. Functional libraries are created that contain DNA fragments from the environment. Because the bacterium from which a given DNA sequence originated often cannot be identified or cultured, metagenomic functional libraries offer clear advantages. Less than 1% of all bacteria are easily cultured in the lab, leaving a large proportion that cannot be grown. By using functional libraries, the gene functions of unculturable bacteria can still be studied. Furthermore, these uncultured microbes provide a source for the discovery of novel enzymes with biotechnological applications. Some novel proteins that have been discovered from marine environments include enzymes such as proteases, amylases, lipases, chitinases, deoxyribonucleases and phosphatases.
Determining homology in a known species
There are situations in which it is important to determine whether a gene homolog from one source is present in another organism. One example is the identification of novel DNA polymerases, the enzymes that synthesize DNA molecules from deoxyribonucleotides, for use in the polymerase chain reaction (PCR). While human DNA polymerases work optimally at body temperature, double-stranded DNA does not denature until far higher temperatures are reached. This poses a problem: during the denaturation step of a PCR cycle, a human DNA polymerase would itself denature, resulting in a non-functional enzyme and a failed reaction. To overcome this, a DNA polymerase from a thermophile, a bacterium that grows at high temperatures, can be used. An example is Taq polymerase, which comes from the thermophilic bacterium Thermus aquaticus. A functional cloning screen can thus be set up to find homologous polymerases that have the added advantage of being thermostable.
With this in mind, 3173 polymerase, another polymerase enzyme now commonly used in RT-PCR, was discovered using this approach. RT-PCR normally requires two separate enzymes: a retroviral reverse transcriptase to convert RNA to cDNA, and a thermostable DNA polymerase to amplify the target sequence. 3173 polymerase can perform both enzymatic functions, making it an attractive option for RT-PCR. The enzyme was discovered by functional cloning from a virus originally found in Octopus hot springs (93 °C) in Yellowstone National Park.
Human health applications
One of the ongoing challenges of treating bacterial infections is antibiotic resistance, which commonly arises when patients do not complete their full course of medication and thereby allow bacteria to develop resistance over time. To understand how to combat antibiotic resistance, it is important to establish a baseline by examining how the bacterial genome evolves and changes in healthy individuals with no recent antibiotic use. In one functional cloning-based study, DNA isolated from human microflora was cloned into expression vectors in Escherichia coli, and antibiotics were then applied as a selection. If a plasmid contained an insert that conferred antibiotic resistance, the cell survived and was selected on the plate; if the insert conferred no resistance, the cell died and did not form a colony. From the colonies that survived, a clearer picture of the genetic factors contributing to antibiotic resistance was pieced together; most of the resistance genes identified were previously unknown. Functional cloning thus makes it possible to identify genes giving rise to antibiotic resistance and so better inform the treatment of bacterial infections.
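A toy model of this selection logic (this sketch is not from the study, and every clone and gene name in it is invented): only clones whose insert confers resistance survive "plating" on the antibiotic, and the survivors point to candidate resistance genes for sequencing.

```python
# Each library clone carries a hypothetical insert; the resistance set below is
# made up for illustration and stands in for the real antibiotic-selection plate.
library = {
    "clone_01": "hypothetical_efflux_pump",
    "clone_02": "ribosomal_protein_fragment",
    "clone_03": "beta_lactamase_like_orf",
    "clone_04": "unknown_orf",
}
resistance_conferring = {"hypothetical_efflux_pump", "beta_lactamase_like_orf"}

def plate_on_antibiotic(clones: dict[str, str]) -> dict[str, str]:
    """Selection step: only clones carrying a resistance-conferring insert survive."""
    return {name: insert for name, insert in clones.items()
            if insert in resistance_conferring}

survivors = plate_on_antibiotic(library)
print(survivors)   # surviving clones identify candidate resistance genes to sequence
```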
See also
cDNA library
Genetic screen
Genomic library
Hybridization probe
Molecular cloning
References
External links
Functional Cloning, Sorting, and Expression Profiling of Nucleic Acid-Binding Proteins (Genome Research)
A novel strategy for the functional cloning of enzymes using filamentous phage display: the case of nucleotidyl transferases (Oxford Journals)
Molecular genetics of Cohen syndrome (Department of Medical Genetics, University of Helsinki)
Genetics techniques
Molecular genetics
Medical genetics | Functional cloning | [
"Chemistry",
"Engineering",
"Biology"
] | 2,736 | [
"Genetics techniques",
"Molecular genetics",
"Genetic engineering",
"Molecular biology"
] |
13,521,210 | https://en.wikipedia.org/wiki/Edward%20Kofler | Edward Kofler (November 16, 1911 – April 22, 2007) was a mathematician who made important contributions to game theory and fuzzy logic by working out the theory of linear partial information.
Biography
Kofler was born in Brzeżany, Austro-Hungarian Empire (now Berezhany, in western Ukraine), and studied game theory as a disciple of, among others, Hugo Steinhaus and Stefan Banach at the University of Lwów in Poland (now Lviv, Ukraine) and the University of Cracow. After graduating in 1939, Kofler returned to his family in Kolomyia (now in Ukraine), where he taught mathematics in a Polish high school. After the German attack on the town on 1 July 1941 he managed to escape to Kazakhstan together with his wife. In Alma-Ata he ran a Polish school and orphanage in exile and worked there as a mathematics teacher. After World War II ended he returned with the orphanage, accompanied by his wife and their baby son, and the family settled in Poland. In 1959 he accepted a position as lecturer in the faculty of economics at the University of Warsaw. In 1962 he earned a Ph.D. with his thesis Economic Decisions, Applying Game Theory, and in the same year became an assistant professor in the faculty of social science at the same university, specializing in econometrics.
In 1969 he emigrated to Zürich, Switzerland, where he worked at the Institute for Empirical Research in Economics at the University of Zürich and served as a scientific advisor to the Swiss National Science Foundation (Schweizerischer Nationalfonds zur Förderung der wissenschaftlichen Forschung). In Zürich in 1970 Kofler developed his theory of linear partial information (LPI), which allows qualified decisions to be made on the basis of incomplete or fuzzy a priori information.
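A hedged sketch of the underlying idea, formulated here in the usual MaxEmin style rather than quoted from Kofler's own papers: the decision maker knows only linear constraints on the state probabilities and maximizes the worst-case expected utility over the resulting set.

```latex
% Linear partial information: the unknown distribution p over states s_1,...,s_n
% is restricted only by linear constraints.
P = \left\{ p \in \mathbb{R}^n : p_i \ge 0,\; \sum_{i=1}^{n} p_i = 1,\; A p \le b \right\}

% MaxEmin principle: choose the action whose worst-case expected utility over P is largest.
a^{*} = \arg\max_{a \in \mathcal{A}} \; \min_{p \in P} \; \sum_{i=1}^{n} p_i \, u(a, s_i)
```

Because the inner objective is linear in p, its minimum over the polytope P is attained at a vertex, so each candidate action can be evaluated with a small linear program.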
Kofler was a visiting professor at the University of St Petersburg (formerly Leningrad), Russia, the University of Heidelberg (Germany), McMaster University (Hamilton, Ontario, Canada) and the University of Leeds (England). He collaborated with many well-known specialists in information theory, such as Oskar R. Lange in Poland, Nicolai Vorobiev in the Soviet Union, Günter Menges in Germany, and Heidi Schelbert and Peter Zweifel in Zürich. He was the author of many books and articles. He died in Zürich.
See also
Linear partial information
Fuzzy logic
Game theory
Fuzzy set
Decision making
Stochastics
Probability
Information theory
Lwów School of Mathematics
List of Swiss people
References
Bibliography
"Set theory Considerations on the Chess Game and the Theory of Corresponding Elements"- Mathematics Seminar at the University of Lviv, 1936
On the history of mathematics (Fejezetek a matematika történetéből) – book, 339 pages, Warsaw 1962 and Budapest 1965
From the digit to infinity – book, 312 pages, Warsaw 1960
Economic decisions and the theory of games – Dissertation, University of Warsaw 1961
Introduction to game theory – book, 230 pages, Warsaw 1962
Optimization of multiple goals, Przeglad Statystyczny, Warsaw 1965
The value of information – book, 104 pages, Warsaw 1967
(With H. Greniewski and N. Vorobiev) Strategy of games, book, 80 pages, Warsaw 1968
"Das Modell des Spiels in der wissenschaftlichen Planung" Mathematik und Wirtschaft No.7, East Berlin 1969
Konfidenzintervalle in Entscheidungen bei Ungewissheit, Statistische Hefte, 1976/1
"Entscheidungen bei teilweise bekannter Verteilung der Zustande", Zeitschrift für OR, Bd. 18/3, 1974, S 141-157
"Konfidenzintervalle in Entscheidungen bei Ungewissheit", Statistische Hefte, 1976/1, S. 1-21
(With G. Menges) Entscheidungen bei unvollständiger Information, Springer Verlag, 1976
(With G. Menges) "Cognitive Decisions under Partial Information", in R.J. Bogdan (ed.), Local Induction, Reidel, Dordrecht-Holland, 1976
(With G. Menges) "Entscheidungen bei unvollständiger Information", volume 136 of Lecture Notes in Economics and Mathematical Systems. Springer, Berlin, 1976.
(With G. Menges) "Stochastic Linearisation of Indeterminateness" in Mathematical Economics and Game Theory, (Springer) Berlin-Heidelberg-New York City 1977, S. 20-63
(With G. Menges) "Die Strukturierung von Unbestimmtheiten und eine Verallgemeinerung des Axiomensystems von Kolmogoroff", Statistische Hefte 1977/4, S. 297-302
(With G. Menges) "Lineare partielle Information, fuzziness und Vielziele-Optimierung", Proceedings in Operations Research 8, Physica-Verlag 1979
(With Fahrion, R., Huschens, S., Kuß, U., and Menges, G.) "Stochastische partielle Information (SPI)", Statistische Hefte, Bd. 21, Jg. 1980, S. 160-167
"Fuzzy sets- oder LPI-Theorie?" in G. Menges, H. Schelbert, P. Zweifel (eds.), Stochastische Unschärfe in Wirtschaftswissenschaften, Haag & Herchen, Frankfurt-am-Main, 1981
(With P. Zweifel) "Decisions under Fuzzy State Distribution with Application to the Health Risks of Nuclear Power", in Haag, W. (ed.), Large Scale Energy Systems, (Pergamon), Oxford 1981, S. 437-444
"Extensive Spiele bei unvollständiger Information", in Information in der Wirtschaft, Gesellschaft für Wirtschafts- und Sozialwissenschaften, Band 126, Berlin 1982
"Equilibrium Points, Stability and Regulation in Fuzzy Optimisation Systems under Linear Partial Stochastic Information (LPI)", Proceedings of the International Congress of Cybernetics and Systems, AFCET, Paris 1984, pp. 233–240
"Fuzzy Weighing in Multiple Objective Decision Making, G. Menges Contribution and Some New Developments", Beitrag zum Gedenkband G. Menges, Hrgb. Schneeweiss, H., Strecker H., Springer Verlag 1984
(With Z. W. Kmietowicz, and A. D. Pearman) "Decision making with Linear Partial Information (L.P.I.)". The Journal of the Operational Research Society, 35(12):1079-1090, 1984
(With P. Zweifel, A. Zimmermann) "Application of the Linear Partial Information (LPI) to forecasting the Swiss timber market" Journal of Forecasting 1985, v4(4),387-398
(With Peter Zweifel) "Exploiting linear partial information for optimal use of forecasts with an application to U.S. economic policy, International Journal of Forecasting, 1988
"Prognosen und Stabilität bei unvollständiger Information", Campus 1989
(With P. Zweifel) "Convolution of Fuzzy Distributions in Decision Making", Statistical Papers 32, Springer 1991, p. 123-136
(With P. Zweifel) "One-Shot Decisions under Linear Partial Information" Theory and Decision 34, 1993, p. 1-20
"Decision Making under Linear Partial Information". Proceedings of the European Congress EUFIT, Aachen, 1994, p. 891-896
(With P. Zweifel) "Linear Partial Information in One-Shot Decisions", Selecta Statistica Vol. IX, 1996
Mehrfache Zielsetzung in wirtschaftlichen Entscheidungen bei unscharfen Daten, Institut für Empirische Wirtschaftsforschung, 9602, 1996
"Linear Partial Information with Applications". Proceedings of ISFL 1997 (International Symposium on Fuzzy Logic), Zürich, 1997, p. 235-239
(With Thomas Kofler) "Forecasting Analysis of the Economic Growth", Selecta Statistica Canadiana, 1998
"Linear Partial Information with Applications in Fuzzy Sets and Systems", 1998. North-Holland
(With Thomas Kofler) Fuzzy Logic and Economic Decisions, 1998
(With L. Götte) "Fuzzy Systems and their Game Theoretical Solution", International Conference on Operations Research, ETH, Zürich, August 1998
"Forecasting and Optimal Strategies in Fuzzy Chess Situations ("Prognosen und Optimale Strategien in unscharfen Schachsituationen"), Idee & Form No. 70, 2001 Zürich, pp. 2065 & 2067
(With P. Zweifel) "One-shot decisions under Linear Partial Information" - Springer Netherlands, 2005
External links
How to apply the Linear Partial Information (LPI)
Linear Partial Information (LPI) theory and its applications
Applying the Linear Partial Information (LPI) for USA's economy policy
Practical decision making with the Linear Partial Information (LPI)
One-shot decisions applying the Linear Partial Information (LPI)
1911 births
2007 deaths
People from Berezhany
20th-century Swiss mathematicians
20th-century Polish mathematicians
Game theorists
Information theorists
Set theorists
Probability theorists
Academic staff of the University of Zurich
Polish emigrants to Switzerland | Edward Kofler | [
"Mathematics"
] | 2,028 | [
"Game theorists",
"Game theory"
] |