id int64 (39–79M) | url string (lengths 31–227) | text string (lengths 6–334k) | source string (lengths 1–150, nullable ⌀) | categories list (lengths 1–6) | token_count int64 (3–71.8k) | subcategories list (lengths 0–30) |
|---|---|---|---|---|---|---|
13,629,713 | https://en.wikipedia.org/wiki/Applicability%20domain | The applicability domain (AD) of a QSAR model, in both chemistry and machine learning, is the physico-chemical, structural, or biological space, knowledge, or information on which the training set of the model has been developed, and for which the model is applicable to make predictions for new compounds.
The purpose of the AD is to state whether the model's assumptions are met, and to which chemicals the model can be reliably applied. In general, this is the case for interpolation rather than for extrapolation. To date there is no single generally accepted algorithm for determining the AD; a comprehensive survey can be found in a Report and Recommendations of ECVAM Workshop 52. There exists a rather systematic approach for defining interpolation regions: the process involves the removal of outliers and a probability density distribution method using kernel-weighted sampling.
Another widely used approach for the structural AD of regression QSAR models is based on the leverage, calculated from the diagonal values of the hat matrix of the modeling molecular descriptors.
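As a minimal sketch of the leverage approach: with descriptor matrix X, the hat matrix is H = X(XᵀX)⁻¹Xᵀ, and a query compound is flagged as outside the structural AD when its leverage exceeds a cutoff. The random matrices below stand in for real descriptor data, and the warning leverage h* = 3(p + 1)/n is one conventional choice, not the only one:

```python
import numpy as np

def leverages(X_train, X_query):
    """Leverage h_i = x_i (X'X)^-1 x_i' of each query compound with respect
    to the training descriptor matrix (rows = compounds, cols = descriptors)."""
    XtX_inv = np.linalg.pinv(X_train.T @ X_train)
    return np.einsum('ij,jk,ik->i', X_query, XtX_inv, X_query)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 5))   # hypothetical training descriptors
X_query = rng.normal(size=(10, 5))    # hypothetical new compounds
n, p = X_train.shape
h_star = 3 * (p + 1) / n              # conventional warning-leverage cutoff
inside_ad = leverages(X_train, X_query) <= h_star
print(inside_ad)                      # True -> compound lies inside the structural AD
```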
A recent rigorous benchmarking study of several AD algorithms identified the standard deviation of model predictions as the most reliable approach.
To investigate the AD of a training set of chemicals, one can directly analyse properties of the multivariate descriptor space of the training compounds, or proceed more indirectly via distance (or similarity) metrics. When using distance metrics, care should be taken to use an orthogonal and significant vector space. This can be achieved by different means of feature selection and successive principal component analysis.
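A minimal sketch of such a distance-based check, assuming scikit-learn is available; the descriptor matrices, the 95% variance cut and the 95th-percentile distance threshold are all illustrative choices, not prescriptions:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 20))  # hypothetical training descriptors
X_query = rng.normal(size=(5, 20))    # hypothetical query compounds

# Standardize, then project onto the significant principal components.
mean, std = X_train.mean(0), X_train.std(0)
pca = PCA(n_components=0.95)          # keep components explaining 95% of variance
Z_train = pca.fit_transform((X_train - mean) / std)
Z_query = pca.transform((X_query - mean) / std)

# Euclidean distance to the training centroid, thresholded at the 95th
# percentile of the training set's own distances.
centroid = Z_train.mean(0)
d_train = np.linalg.norm(Z_train - centroid, axis=1)
d_query = np.linalg.norm(Z_query - centroid, axis=1)
print(d_query <= np.percentile(d_train, 95))  # True -> inside the AD
```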
Notes
Cheminformatics
Medicinal chemistry
Drug discovery | Applicability domain | [
"Chemistry",
"Biology"
] | 322 | [
"Biochemistry",
"Life sciences industry",
"Drug discovery",
"Medicinal chemistry stubs",
"Biochemistry stubs",
"Computational chemistry",
"nan",
"Medicinal chemistry",
"Cheminformatics"
] |
11,071,105 | https://en.wikipedia.org/wiki/Cancer%20Minor | Cancer Minor (Latin for "lesser crab") was a constellation composed of a few stars in Gemini adjacent to Cancer. The constellation was introduced in 1612 (or 1613) by Petrus Plancius.
The 5th-magnitude stars constituting Cancer Minor were HIP 36616, and 68, 74, 81 and 85 Geminorum, forming a faint natural arrow-shaped asterism.
It is only found on a few 17th-century Dutch celestial globes and in the atlas of Andreas Cellarius. It was no longer used after the 18th century.
See also
Obsolete constellations
References
Former constellations
Constellations listed by Petrus Plancius | Cancer Minor | [
"Astronomy"
] | 131 | [
"Former constellations",
"Constellations listed by Petrus Plancius",
"Astronomy stubs",
"Constellations"
] |
11,071,463 | https://en.wikipedia.org/wiki/Entropy%20rate | In the mathematical theory of probability, the entropy rate or source information rate is a function assigning an entropy to a stochastic process.
For a strongly stationary process, the conditional entropy of the latest random variable given all the previous ones eventually tends towards this rate value.
Definition
A process $X$ with a countable index gives rise to the sequence of its joint entropies $H_n(X_1, X_2, \dots, X_n)$. If the limit exists, the entropy rate is defined as $H(X) := \lim_{n \to \infty} \tfrac{1}{n} H_n.$
Note that given any sequence $(a_n)_n$ with $a_0 = 0$ and letting $\Delta a_k := a_k - a_{k-1}$, by telescoping one has $a_n = \sum_{k=1}^{n} \Delta a_k$. The entropy rate thus computes the mean of the first $n$ such entropy changes, with $n$ going to infinity.
The behaviour of joint entropies from one index to the next is also the explicit subject of some characterizations of entropy.
Discussion
While $X$ may be understood as a sequence of random variables, the entropy rate $H(X)$ represents the average entropy change per one random variable, in the long term.
It can be thought of as a general property of stochastic sources - this is the subject of the asymptotic equipartition property.
For strongly stationary processes
A stochastic process also gives rise to a sequence of conditional entropies $H(X_n \mid X_{n-1}, \dots, X_1)$, comprising more and more random variables.
For strongly stationary stochastic processes, the entropy rate equals the limit of that sequence:
$H(X) = \lim_{n \to \infty} H(X_n \mid X_{n-1}, \dots, X_1).$
The quantity given by the limit on the right is also denoted $H'(X)$, which is motivated to the extent that here this is then again a rate associated with the process, in the above sense.
For Markov chains
Since a stochastic process defined by a Markov chain that is irreducible, aperiodic and positive recurrent has a stationary distribution, the entropy rate is independent of the initial distribution.
For example, consider a Markov chain defined on a countable number of states. Given its right stochastic transition matrix $P_{ij}$ and an entropy
$h_i := -\sum_{j} P_{ij} \log P_{ij}$
associated with each state, one finds
$H(X) = \sum_{i} \mu_i h_i,$
where $\mu_i$ is the asymptotic distribution of the chain.
In particular, it follows that the entropy rate of an i.i.d. stochastic process is the same as the entropy of any individual member in the process.
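A short numerical sketch of the formula above; the 3-state transition matrix is an arbitrary illustrative choice, and the stationary distribution is obtained as the left eigenvector of $P$ for eigenvalue 1:

```python
import numpy as np

# Illustrative right-stochastic transition matrix (values are assumptions).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.8, 0.1],
              [0.3, 0.3, 0.4]])

# Asymptotic (stationary) distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
mu = np.real(v[:, np.argmin(np.abs(w - 1.0))])
mu /= mu.sum()

# Per-state entropy h_i = -sum_j P_ij log2 P_ij, then rate = sum_i mu_i h_i.
h = -(P * np.log2(P)).sum(axis=1)
print(f"entropy rate: {mu @ h:.4f} bits per step")
```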
For hidden Markov models
The entropy rate of hidden Markov models (HMM) has no known closed-form solution. However, it has known upper and lower bounds. Let the underlying Markov chain $X_{1:n}$ be stationary, and let $Y_{1:n}$ be the observable states. Then we have
$H(Y_n \mid X_1, Y_{1:n-1}) \le H(\mathcal{Y}) \le H(Y_n \mid Y_{1:n-1}),$
and in the limit of $n \to \infty$, both sides converge to the middle.
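For a small model these bounds can be evaluated by brute force. The sketch below assumes a hypothetical two-state, two-symbol HMM (all probabilities are made up for illustration); joint entropies are computed with the forward algorithm, and the two bounds follow by the chain rule:

```python
import itertools
import numpy as np

P = np.array([[0.9, 0.1],   # hypothetical state transition matrix
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],   # hypothetical emission matrix B[state, symbol]
              [0.1, 0.9]])

# Stationary distribution of the underlying chain.
w, v = np.linalg.eig(P.T)
mu = np.real(v[:, np.argmin(np.abs(w - 1.0))])
mu /= mu.sum()

def seq_prob(ys, init):
    """P(y_1..y_n) under initial state law `init` (forward algorithm)."""
    alpha = init * B[:, ys[0]]
    for y in ys[1:]:
        alpha = (alpha @ P) * B[:, y]
    return alpha.sum()

def entropy(ps):
    ps = np.array([p for p in ps if p > 0.0])
    return float(-(ps * np.log2(ps)).sum())

def H_joint(n, init):
    """H(Y_1..Y_n) with X_1 distributed as `init`."""
    return entropy([seq_prob(ys, init) for ys in itertools.product((0, 1), repeat=n)])

def H_joint_given_x1(n):
    """H(Y_1..Y_n | X_1) = sum_x mu_x H(Y_1..Y_n | X_1 = x)."""
    return sum(mu[x] * H_joint(n, np.eye(2)[x]) for x in range(2))

for n in range(2, 9):
    upper = H_joint(n, mu) - H_joint(n - 1, mu)            # H(Y_n | Y_{1:n-1})
    lower = H_joint_given_x1(n) - H_joint_given_x1(n - 1)  # H(Y_n | X_1, Y_{1:n-1})
    print(f"n={n}: {lower:.6f} <= H(Y) <= {upper:.6f}")
```

Running this shows the two bounds tightening toward each other as $n$ grows, numerically sandwiching the entropy rate.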
Applications
The entropy rate may be used to estimate the complexity of stochastic processes. It is used in diverse applications, ranging from characterizing the complexity of languages and blind source separation to optimizing quantizers and data compression algorithms. For example, a maximum entropy rate criterion may be used for feature selection in machine learning.
See also
Information source (mathematics)
Markov information source
Asymptotic equipartition property
Maximal entropy random walk - chosen to maximize entropy rate
References
Cover, T. and Thomas, J. (1991). Elements of Information Theory. John Wiley and Sons, Inc.
Information theory
Entropy
Markov models
Temporal rates | Entropy rate | [
"Physics",
"Chemistry",
"Mathematics",
"Technology",
"Engineering"
] | 612 | [
"Temporal quantities",
"Thermodynamic properties",
"Telecommunications engineering",
"Physical quantities",
"Applied mathematics",
"Quantity",
"Temporal rates",
"Computer science",
"Entropy",
"Information theory",
"Asymmetry",
"Wikipedia categories named after physical quantities",
"Symmetry... |
11,071,487 | https://en.wikipedia.org/wiki/Babylon%20%28software%29 | Babylon is a computer dictionary and translation program developed by the Israeli company Babylon Software Ltd. based in the city of Or Yehuda. The company was established in 1997 by the Israeli entrepreneur Amnon Ovadia. Its IPO took place ten years later. It is considered a part of Israel's Download Valley, a cluster of software companies monetizing "free" software downloads through adware. Babylon includes in-house proprietary dictionaries, as well as community-created dictionaries and glossaries. It is a tool used for translation and conversion of currencies, measurements and time, and for obtaining other contextual information. The program also uses a text-to-speech agent, so users hear the proper pronunciation of words and text. Babylon has developed 36 English-based proprietary dictionaries in 21 languages. In 2008–2009, Babylon reported earnings of 50 million NIS through its collaboration with Google.
Between 2010 and 2013, Babylon became infamous for demonstrating questionable behavior typical of malware: the Babylon Toolbar, bundled with Babylon and other software, has been widely identified as a browser hijacker that is very easy to install inadvertently and unnecessarily difficult to remove. This eventually led to Google terminating its agreement with Babylon Ltd. in 2013.
History
In 1995, Israeli entrepreneur Amnon Ovadia began a project for an online English–Hebrew dictionary that would not interrupt the reading process. As a result, Babylon Ltd. was founded in 1997 and launched the first version of Babylon. On 25 September 1997, the company filed a patent for text recognition and translation. In 1998, a year following its launch date, Babylon had two million users, mostly in Germany and Brazil, growing from 420,000 to 2.5 million users in the course of that year. In the same year, Formula Systems, headed by Dan Goldstein, acquired Mashov Computers and became the largest shareholder in the company. By 2000, the product had over 4 million users. In the spring of 2000, Babylon Ltd. failed to raise $20 million in a private placement and lost NIS 15 million. Further stress came with the collapse of the Dot-com bubble. In 2001, Babylon Ltd. continued shedding money, with the company costing its parent company Formula Vision NIS 4.7 million.
Since 2007, Babylon Ltd. has been a publicly traded company. Its IPO took place in February 2007; Israeli businessman Noam Lanir purchased controlling interests in the company for $10.5 million, sharing management with second majority shareholder Reed Elsevier and the company's founder, Amnon Ovadia. According to Globes magazine in January 2011, Lanir received an offer for his stake from a foreign private equity fund that valued the company at NIS 248 million (approximately 70 million dollars).
In 2008–2009, Babylon reported earnings of NIS 50 million through its collaboration with Google. In 2010, Google Ireland signed an extended cooperation agreement with Babylon to provide it with online search and pay-per-click advertising services.
In 2011, Babylon was named the seventh most popular website in Libya, the eighth in Algeria and the eleventh in Tunisia.
According to Globes magazine, Noam Lanir, who acquired control of Babylon for NIS 20 million, made a paper profit of NIS 200 million on his investment in 2012. According to the same source, the Babylon website achieved an Alexa ranking of 45 in April 2012.
In October 2014, the translation business was purchased by Babylon Software Ltd.
Product features
A single click on any text with the right mouse button, or with a combination of the right mouse button and a keyboard modifier, brings up the Babylon window with a translation and definition of the clicked term. Babylon is a tool used for translation and conversion of currencies, measurements and time, and for obtaining other contextual information. Babylon has a patented OCR technology and a single-click activation that works in any Microsoft Windows application, such as Microsoft Word, Microsoft Outlook, Microsoft Excel, Internet Explorer and Adobe Reader. When activated, Babylon opens a small popup window that displays the translation or definition. To work around the incompatibility of the Babylon OCR browser extension, users can turn to the free Capture2Text application, version 3.9 specifically, which is compatible with Babylon 8 and other versions: dragging its capture box over text in any browser brings up a pop-up box whose contents Babylon can then grasp. Babylon provides full text translation, full Web page and full document translation in many languages and supports integration with Microsoft Office. Babylon enables the translation of Microsoft Word documents and plain text files. It offers results from a database of over 1,700 sources in over 75 languages.
Dictionaries and encyclopedias
Babylon includes its in-house proprietary dictionaries, community-created dictionaries and glossaries (UGC), which include general and technical dictionaries, language and monolingual dictionaries, thesauri, encyclopedias and lexicons in a multitude of languages. They are indexed in 400 categories covering the arts, business, computers, health, law, entertainment, sports and so on.
The program also uses a text-to-speech agent so users hear the proper pronunciation of words and text. Babylon Ltd. has developed 36 English-based proprietary dictionaries in 21 languages (English, Arabic, Simplified Chinese, Traditional Chinese, Czech, Danish, Dutch, French, German, Greek, Hebrew, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Swedish and Turkish) that are free of charge to users of the software. These dictionaries comprise between 60,000 and 200,000 terms, phrases, acronyms and abbreviations and are enabled with a morphological engine which facilitates recognition of all inflected forms of single words and phrases, provides all forms of terms that include prefixes and extensions and supplies a solution for all formats of writing. Babylon's Linguistic Department is responsible for the extensive content and information database which is a significant component of Babylon's product.
Malware issues
On 7 August 2010, Microsoft antivirus products identified the software application as adware (identified as "Adware: Win32/Babylon") due to potentially intrusive behavior. Sixteen days later, on 23 August 2010, Microsoft announced that Babylon Ltd. had modified the program and that it was no longer categorized as adware.
In 2011, Download.com started bundling the Babylon Toolbar with open-source packages such as Nmap. Gordon Lyon, the developer of Nmap, criticized the decision. The vice-president of Download.com, Sean Murphy, released an apology: "The bundling of this software was a mistake on our part and we apologize to the user and developer communities for the unrest it caused."
In 2012 the Babylon search toolbar was identified as a browser hijacker that, while very easy to install inadvertently, is unnecessarily difficult to remove afterwards. The toolbar is listed as an unwanted application by anti-spyware software such as Stopzilla or Spybot – Search & Destroy. Many users, trying to uninstall Babylon, have searched for help on different support forums. The toolbar tends to install itself onto computers as an add-on bundled with other software; it changes the user's home page to the Babylon search engine, adds that search engine to the browser and sets it as the default, changes that can be very difficult to reverse.
On 29 October 2013, Google notified Babylon that it did not intend to renew its cooperation agreement between the two companies, which terminated on 30 November 2013. Google said that complaints had been received from Google Chrome users, claiming that the Babylon toolbar damages the browser's user experience. According to Babylon, Google may have reconsidered the decision during 2014. Since then, Babylon Software has not distributed any toolbars or offered any third-party software.
See also
Anki
StarDict
Comparison of machine translation applications
Download Valley
References
External links
Dictionary software
Machine translation
Israeli brands
Software companies of Israel
Companies listed on the Tel Aviv Stock Exchange
Israeli inventions | Babylon (software) | [
"Technology"
] | 1,661 | [
"Machine translation",
"Natural language and computing"
] |
11,071,609 | https://en.wikipedia.org/wiki/Essentials%20of%20Programming%20Languages | Essentials of Programming Languages (EOPL) is a textbook on programming languages by Daniel P. Friedman, Mitchell Wand, and Christopher T. Haynes.
EOPL surveys the principles of programming languages from an operational perspective. It starts with an interpreter in Scheme for a simple functional core language similar to the lambda calculus and then systematically adds constructs. For each addition, for example, variable assignment or thread-like control, the book illustrates an increase in expressive power of the programming language and a demand for new constructs for the formulation of a direct interpreter. The book also demonstrates that systematic transformations, say, store-passing style or continuation-passing style, can eliminate certain constructs from the language in which the interpreter is formulated.
The second part of the book is dedicated to a systematic translation of the interpreter(s) into register machines. The transformations show how to eliminate higher-order closures; continuation objects; recursive function calls; and more. At the end, the reader is left with an "interpreter" that uses nothing but tail-recursive function calls and assignment statements plus conditionals. It becomes trivial to translate this code into a C program or even an assembly program. As a bonus, the book shows how to pre-compute certain pieces of "meaning" and how to generate a representation of these pre-computations. Since this is the essence of compilation, the book also prepares the reader for a course on the principles of compilation and language translation, a related but distinct topic. Apart from the text explaining the key concepts, the book also comprises a series of exercises, enabling the readers to explore alternative designs and other issues.
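To give a flavour of the book's starting point, here is a minimal environment-passing interpreter for a tiny functional core, written in Python rather than the book's Scheme; the tuple-based expression encoding is an illustrative assumption, not EOPL's actual code:

```python
# A minimal environment-passing interpreter in the spirit of EOPL's first
# interpreters: literals, variables, lambdas (closures), application, and
# an EOPL-style difference expression.

def interp(expr, env):
    kind = expr[0]
    if kind == 'lit':                     # ('lit', 3)
        return expr[1]
    if kind == 'var':                     # ('var', 'x')
        return env[expr[1]]
    if kind == 'lam':                     # ('lam', 'x', body) -> closure
        _, param, body = expr
        return ('closure', param, body, env)
    if kind == 'app':                     # ('app', rator, rand)
        _, rator, rand = expr
        _, param, body, cenv = interp(rator, env)
        return interp(body, {**cenv, param: interp(rand, env)})
    if kind == 'diff':                    # ('diff', a, b) computes a - b
        _, a, b = expr
        return interp(a, env) - interp(b, env)
    raise ValueError(f"unknown expression kind: {kind}")

# ((lambda (x) (- x 1)) 10)  =>  9
prog = ('app', ('lam', 'x', ('diff', ('var', 'x'), ('lit', 1))), ('lit', 10))
print(interp(prog, {}))
```

Each new language feature in the book corresponds to one more branch in such an interpreter, which is exactly the "increase in expressive power" the text describes.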
Like SICP, EOPL represents a significant departure from the prevailing textbook approach in the 1980s. At the time, a book on the principles of programming languages presented four to six (or even more) programming languages and discussed their programming idioms and their implementation at a high level. The most successful books typically covered ALGOL 60 (and the so-called Algol family of programming languages), SNOBOL, Lisp, and Prolog. Even today, a fair number of textbooks on programming languages are just such surveys, though their scope has narrowed.
EOPL was started in 1983, when Indiana was one of the leading departments in programming languages research. Eugene Kohlbecker, one of Friedman's PhD students, transcribed and collected his "311 lectures". Other faculty members, including Mitch Wand and Christopher Haynes, started contributing and turned "The Hitchhiker's Guide to the Meta-Universe"—as Kohlbecker had called it—into the systematic, interpreter and transformation-based survey that it is now. Over the 25 years of its existence, the book has become a near-classic; it is now in its third edition, including additional topics such as types and modules. Its first part now incorporates ideas on programming from HtDP, another unconventional textbook, which uses Scheme to teach the principles of program design. The authors, as well as Matthew Flatt, have recently provided DrRacket plug-ins and language levels for teaching with EOPL.
EOPL has spawned at least two other related texts: Queinnec's Lisp in Small Pieces and Krishnamurthi's Programming Languages: Application and Interpretation.
See also
Structure and Interpretation of Computer Programs
How to Design Programs
References
Authors' home page for Essentials of Programming Languages, Third Edition
Book homepage for first edition
EoPL page on Schemewiki
Computer programming books
Computer science books
Programming language topics | Essentials of Programming Languages | [
"Engineering"
] | 722 | [
"Software engineering",
"Programming language topics"
] |
11,071,759 | https://en.wikipedia.org/wiki/Receptor%20activity-modifying%20protein | Receptor activity-modifying proteins (RAMPs) are a class of protein that interact with and modulate the activities of several Class B G protein-coupled receptors including the receptors for secretin, calcitonin (CT), glucagon, and vasoactive intestinal peptide (VIP). There are three distinct types of RAMPs in mammals (though more in fish), designated RAMP1, RAMP2, and RAMP3, each encoded by a separate gene.
Function
Currently, the functions of RAMPs fall into two classes of activity. When associated with the calcitonin receptor (CTR) or the calcitonin receptor-like receptor (CALCRL) (below), RAMPs can change the selectivity of the receptor for a specific hormone. For the other receptors mentioned, however, there is no evidence that they can do this; instead they function to regulate trafficking of receptors from the ER/Golgi to the membrane. These functions appear to be ones where there is redundancy, as neither RAMP1 nor RAMP3 knockout (KO) mice have grossly abnormal phenotypes. The likelihood is that the phenotype of RAMP2 KO mice is more connected with the abolition of most adrenomedullin (AM) signalling than with effects on trafficking of other receptors, as those mice are almost identical to AM KO mice and to mice lacking the calcitonin receptor-like receptor, which are unable to form either AM1 or AM2 adrenomedullin receptors (CLR/RAMP2 and CLR/RAMP3, respectively).
Types
Association of RAMPs with either the CT or CALCRL proteins forms six different receptors of the calcitonin receptor family.
References
External links
Single-pass transmembrane proteins | Receptor activity-modifying protein | [
"Chemistry"
] | 358 | [
"Biochemistry stubs",
"Protein stubs"
] |
11,071,966 | https://en.wikipedia.org/wiki/The%20Studio%20%28magazine%29 | The Studio: An Illustrated Magazine of Fine and Applied Art was an illustrated fine arts and decorative arts magazine published in London from 1893 until 1964. The founder and first editor was Charles Holme. The magazine exerted a major influence on the development of the Art Nouveau and Arts and Crafts movements. It was absorbed into Studio International magazine in 1964.
History
The Studio was founded by Charles Holme in 1893. Holme was in the wool and silk trades, had travelled extensively in Europe and had visited Japan and the United States with Lasenby Liberty and his wife Emma. During his travels he had formed the idea for the magazine, and he retired from trade in order to start The Studio.
He had hoped to engage Lewis Hind as the editor of the new venture, but Hind went instead to William Waldorf Astor's Pall Mall Budget. He suggested Joseph Gleeson White as an alternative. Gleeson White edited The Studio from the first issue in April 1893. In 1895 Holme took over as editor himself, although Gleeson White continued to contribute. Holme retired as editor in 1919 for reasons of health, and was succeeded by his son Charles Geoffrey Holme, who was already the editor of special numbers and year-books of the magazine.
The magazine
The magazine was monthly; 853 issues were published between April 1893 and May 1964.
The Studio promoted the work of "New Art" artists, designers and architects. It played a major part in introducing the work of Charles Rennie Mackintosh and Charles Voysey to a wide audience, and was especially influential in Europe.
In keeping with Holme's original concept, the magazine was international in scope. A French edition was published in Paris, differing from the English one only in that the spine and parts of the cover were printed in French, and there was an insert consisting of a French translation of the article text and some French advertisements.
The American edition was titled The International Studio. It had its own editorial staff, and the content was different from that of the English edition, although many articles from that were reprinted. It was published in New York by John Lane & Company from May 1897 until 1921, and by International Studio, Inc., from 1922 until publication ceased in 1931.
In 1894 and then from 1896 on, special numbers of the magazine were also published, normally three times a year. These carried various titles; 117 of them were issued between 1894 and 1940.
From 1906 onwards The Studio published an annual, The Studio Year-Book of Decorative Art, which dealt with architecture, interior design and design of furniture, lighting, glassware, textiles, metalwork and ceramics. These annuals promoted Modernism in the 1920s, and later the Good Design movement.
The last edition was published in May 1964, after which it was absorbed into Studio International.
References
Further reading
Clive Ashwin (1983). "The Early Studio and Its Illustrations". Studio International 196 (1003): 22–29.
Clive Ashwin (1976). "The Studio and Modernism: A Periodical's Progress". Studio International 192 (983): 103–112.
D.J. Gordon (1968). "Dilemmas: The Studio in 1893-4". Studio International 175 (899): 175–183.
Full text of issues 1–90, covering 1893 to 1925.
External links
Art Nouveau magazines
Visual arts magazines published in the United Kingdom
Defunct magazines published in the United Kingdom
Design magazines
Magazines published in London
Magazines established in 1893
Magazines disestablished in 1964 | The Studio (magazine) | [
"Engineering"
] | 705 | [
"Design magazines",
"Design"
] |
11,072,527 | https://en.wikipedia.org/wiki/1-Naphthaleneacetamide | 1-Naphthaleneacetamide (NAAm) is a synthetic auxin that acts as a rooting hormone.
It can be found in commercial products such as Rootone.
See also
1-Naphthaleneacetic acid
Plant hormones
Auxins
Acetamides
1-Naphthyl compounds | 1-Naphthaleneacetamide | [
"Chemistry"
] | 64 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
11,072,810 | https://en.wikipedia.org/wiki/High-energy-density%20matter | High-energy-density matter (HEDM) is a class of energetic materials, particularly fuel, with a high ratio of potential chemical energy output to density, usually termed "thrust-to-weight ratio", hence "high energy density". The substances are extremely reactive, therefore potentially dangerous, and some consider them impractical. Researchers are looking into HEDM that can provide much more lift than the current liquid hydrogen-liquid oxygen reactions used in today's spacecraft.
See also
Energy density
Oxygen rings
References
https://fas.org/spp/military/docops/usaf/2020/app-i.htm
Energy storage
Fuels | High-energy-density matter | [
"Chemistry"
] | 134 | [
"Fuels",
"Chemical energy sources"
] |
11,073,030 | https://en.wikipedia.org/wiki/Caspase%2013 | Caspase 13 or ERICE ("evolutionarily related interleukin-1β converting enzyme") is a protein that was identified in cattle. It belongs to a family of enzymes called caspases that cleave their substrates at C-terminal aspartic acid residues. Although this enzyme was originally reported as a human caspase that could be activated by caspase 8, later studies confirmed the gene identified for caspase 13 came from bovine origin, and is the likely orthologue of human caspase 4.
References
External links
The MEROPS online database for peptidases and their inhibitors: C14.017
EC 3.4.22
Caspases | Caspase 13 | [
"Chemistry"
] | 147 | [
"Biochemistry stubs",
"Protein stubs"
] |
11,074,125 | https://en.wikipedia.org/wiki/Abradable%20coating | An abradable coating is a coating made of an abradable material, meaning that if it rubs against a more abrasive material in motion, the coating will be worn away while the harder counterpart faces no wear. Abradable coatings provide a 0.1 to 0.2% performance improvement compared with engines that lack them.
Abradable coatings are used in aircraft jet engines in the compressor and turbine sections where a minimal clearance is needed between the blade tips and the casing.
Abradable coatings have been in use by aero-engine manufacturers in some form or fashion for roughly 50 years.
Abradable powder coatings provide an economical and environmentally friendly way to improve the efficiency of engines, compressors and pumps by fine-tuning the operational fit of internal components such as pistons, rotors and cases. In typical turbo machinery, the clearance between blade tips and the casing must account for thermal and inertial expansion as well as changes in concentricity due to shock loading events. To prevent catastrophic tip to casing contact, conservatively large clearances must be employed.
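To see why cold-build clearances must be conservative, a rough order-of-magnitude estimate helps; the casing radius, alloy expansion coefficient and temperature rise below are illustrative assumptions, not data for any particular engine:

$$\Delta r = \alpha\, r\, \Delta T \approx \left(1.3\times10^{-5}\ \mathrm{K^{-1}}\right)(0.4\ \mathrm{m})(500\ \mathrm{K}) \approx 2.6\ \mathrm{mm},$$

a radial growth far larger than the sub-millimetre running clearances a designer would like to hold, which is the gap an abradable layer lets the blade tips cut for themselves.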
In small turboprop aircraft, the angle at which an abradable coating can be applied is constrained: the coating process must often be performed at spray angles of less than 60 degrees.
The role of abradable coatings is not only to allow for closer clearances, but to automatically adjust clearances, in situ, to accommodate physical events and/or thermal scenarios that may occur in a device's operational history.
Manufacturing
Thermal spray: Many techniques are available (Plasma, Flame, etc.).
Sintering: Honeycomb coatings are sintered on the casing
Casting: in the case of polymer coating.
References
Gas turbine technology | Abradable coating | [
"Physics"
] | 346 | [
"Materials stubs",
"Materials",
"Matter"
] |
11,074,998 | https://en.wikipedia.org/wiki/Mooring%20%28oceanography%29 | A mooring in oceanography is a collection of devices connected to a wire and anchored on the sea floor. It is the Eulerian way of measuring ocean currents, since a mooring is stationary at a fixed location. In contrast to that, the Lagrangian way measures the motion of an oceanographic drifter, the Lagrangian drifter.
Construction principle
The mooring is held up in the water column with various forms of buoyancy such as glass balls and syntactic foam floats. The attached instrumentation is wide-ranging but often includes CTDs (conductivity, temperature and depth sensors), current meters (e.g. acoustic Doppler current profilers or deprecated rotor current meters), and biological sensors to measure various parameters. Long-term moorings can be deployed for durations of two years or more, powered with alkaline or lithium battery packs.
Components
Top buoy
Surface buoys
Moorings often include surface buoys that transmit real time data back to shore. The traditional approach is to use the Argos System. Alternatively, one may use the commercial Iridium satellites which allow higher data rates.
Submerged buoys
In deeper waters, areas covered by sea ice, areas within or near shipping lanes, or areas that are prone to theft or vandalism, moorings are often submerged with no surface markers. Submerged moorings typically use an acoustic release or a timed release that connects the mooring to an anchor weight on the sea floor. The weight is released by sending a coded acoustic command signal and stays on the ground. Deep water anchors are typically made from steel and may weigh as much as 100 kg. A common deep water anchor consists of a stack of 2–4 railroad wheels. In shallow waters, anchors may consist of a concrete block or a small portable anchor.
The buoyancy of the floats, i.e. of the top buoy plus additional packs of glass bulbs or foam, is sufficient to carry the instruments back to the surface. In order to avoid entangled ropes, it has proven practical to place additional floats directly above each instrument.
Instrument housing
Prawlers
Prawlers (profiling crawlers) are sensor bodies which climb and descend the cable, to observe multiple depths. The energy to move is "free," harnessed by ratcheting upward via wave energy, then returning downward via gravity.
Depth correction
Similar to a kite in the wind, the mooring line will follow a so-called (half-)catenary.
The influence of currents (and wind, if the top buoy is above the sea surface) can be modeled, and the shape of the mooring line can be determined by software. If the currents are strong (above 0.1 m/s) and the mooring lines are long (more than 1 km), the instrument position may vary by up to 50 m.
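A crude quasi-static sketch of such a calculation (every number below is an illustrative assumption: wire length, drag coefficient, float buoyancy, current speed; buoy drag and wire stretch are ignored). Walking down the wire from the float, the tension at each cut balances the buoyancy, weight and accumulated current drag of everything above it, and the wire's local inclination follows the tension direction:

```python
import numpy as np

rho = 1025.0          # seawater density (kg/m^3)
Cd, dia = 1.2, 0.01   # drag coefficient and wire diameter (m)
u = 0.3               # uniform horizontal current speed (m/s)
L, n = 1000.0, 1000   # wire length (m) and number of segments
seg = L / n
buoyancy_top = 2000.0 # net upward force of the top float (N)
wire_weight = 1.0     # net weight of wire in water (N/m)

drag_per_m = 0.5 * rho * Cd * dia * u ** 2   # wire drag per metre (N/m)

x, z = 0.0, 0.0                  # horizontal offset and vertical drop from float
V, H = buoyancy_top, 0.0         # vertical / horizontal force at the top cut
for _ in range(n):
    theta = np.arctan2(H, V)     # wire inclination from the vertical
    x += seg * np.sin(theta)
    z -= seg * np.cos(theta)
    H += drag_per_m * seg        # drag accumulates toward the anchor
    V -= wire_weight * seg       # wire weight eats into net buoyancy

knockdown = L + z                # how far the float is pulled down
print(f"float sits ~{x:.0f} m downstream of the anchor, "
      f"knocked down ~{knockdown:.1f} m")
```

With these numbers the predicted knockdown is a few tens of metres, consistent in magnitude with the 50 m variation quoted above.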
See also
Benthic lander, a mooring which does not have any mooring line
References
Oceanography
Physical oceanography
Oceanographic instrumentation
Ocean currents
Biological oceanography
"Physics",
"Chemistry",
"Technology",
"Engineering",
"Environmental_science"
] | 623 | [
"Ocean currents",
"Hydrology",
"Oceanographic instrumentation",
"Applied and interdisciplinary physics",
"Oceanography",
"Measuring instruments",
"Physical oceanography",
"Fluid dynamics"
] |
11,076,344 | https://en.wikipedia.org/wiki/Hemoperfusion | Hemoperfusion or hæmoperfusion (see spelling differences) is a method of filtering the blood extracorporeally (that is, outside the body) to remove a toxin. As with other extracorporeal methods, such as hemodialysis (HD), hemofiltration (HF), and hemodiafiltration (HDF), the blood travels from the patient into a machine, gets filtered, and then travels back into the patient, typically by venovenous access (out of a vein and back into a vein).
In hemoperfusion, the blood perfuses a filter composed of artificial cells filled with activated carbon or another microporous material. Small molecules in solution within the serum (such as the toxin) cross the membranes into the microporous material (and get trapped therein), but formed elements (the blood cells) brush past the artificial cells just as they brush past each other. In this way, the microporous material's filtering ability can be used without destroying the blood cells.
First introduced in the 1940s, hemoperfusion was refined during the 1950s through 1970s, and then introduced clinically for the treatment of poisoning in the 1970s and 1980s. It is sometimes used to treat drug overdose, sometimes in conjunction with the other extracorporeal techniques previously mentioned.
The US Food and Drug Administration (FDA) defines sorbent hemoperfusion as follows:
″(a) Identification. A sorbent hemoperfusion system is a prescription device that consists of an extracorporeal blood system similar to that identified in the hemodialysis system and accessories (876.5820) and a container filled with adsorbent material that removes a wide range of substances, both toxic and normal, from blood flowing through it. The adsorbent materials are usually activated-carbon or resins which may be coated or immobilized to prevent fine particles entering the patient's blood. The generic type of device may include lines and filters specifically designed to connect the device to the extracorporeal blood system. The device is used in the treatment of poisoning, drug overdose, hepatic coma, or metabolic disturbances.″
Hemoperfusion is also used in the treatment of specific intoxications, such as valproic acid, theophylline, and meprobamate.
Despite its availability, this technique is only infrequently utilized to remove toxic substances from a patient's blood.
Types
Two types of hemoperfusion are commonly used:
Charcoal hemoperfusion, which has been used to treat liver (hepatic) failure, various types of poisoning, and certain autoimmune diseases when coated with antigens or antibodies.
Resin hemoperfusion; certain resins (the polystyrene XAD series) are frequently more efficient at clearing lipid-soluble drugs than charcoal hemoperfusion.
Complications
Complications of hemoperfusion may include thrombocytopenia, leucopenia, hypoglycemia, and some reduction in clotting factors, with recovery typically occurring in 1–2 days. Risk of bleeding is also higher because of the high heparin dose and reduction in platelets and clotting factors.
Indications for use
During hemoperfusion, the blood passes through a column with adsorptive properties aimed at removing specific toxic substances from the patient's blood. It especially targets small- to medium-sized molecules that tend to be more difficult to remove by conventional hemodialysis. The adsorbent substances most commonly used in hemoperfusion are resins and activated carbon. Hemoperfusion is an extracorporeal form of treatment because the blood is pumped through a device outside the patient's body.
Its major uses include removing drugs or poisons from the blood in emergency situations, removing waste products from the blood in patients with kidney failure, and as a supportive treatment for patients before and after liver transplantation.
References
Renal dialysis
Toxicology treatments
Transfusion medicine | Hemoperfusion | [
"Environmental_science"
] | 824 | [
"Toxicology treatments",
"Toxicology"
] |
11,076,807 | https://en.wikipedia.org/wiki/C%C3%A9a%27s%20lemma | Céa's lemma is a lemma in mathematics. Introduced by Jean Céa in his Ph.D. dissertation, it is an important tool for proving error estimates for the finite element method applied to elliptic partial differential equations.
Lemma statement
Let $V$ be a real Hilbert space with the norm $\|\cdot\|.$ Let $a\colon V \times V \to \mathbb{R}$ be a bilinear form with the properties
$|a(v,w)| \le \gamma \|v\|\,\|w\|$ for some constant $\gamma > 0$ and all $v, w$ in $V$ (continuity)
$a(v,v) \ge \alpha \|v\|^2$ for some constant $\alpha > 0$ and all $v$ in $V$ (coercivity or $V$-ellipticity).
Let $L\colon V \to \mathbb{R}$ be a bounded linear operator. Consider the problem of finding an element $u$ in $V$ such that
$a(u,v) = L(v)$ for all $v$ in $V.$
Consider the same problem on a finite-dimensional subspace $V_h$ of $V,$ so, $u_h$ in $V_h$ satisfies
$a(u_h,v) = L(v)$ for all $v$ in $V_h.$
By the Lax–Milgram theorem, each of these problems has exactly one solution. Céa's lemma states that
$\|u - u_h\| \le \frac{\gamma}{\alpha} \|u - v\|$ for all $v$ in $V_h.$
That is to say, the subspace solution $u_h$ is "the best" approximation of $u$ in $V_h,$ up to the constant $\gamma/\alpha.$
The proof is straightforward:
$\alpha \|u - u_h\|^2 \le a(u - u_h,\, u - u_h) = a(u - u_h,\, u - v) + a(u - u_h,\, v - u_h) = a(u - u_h,\, u - v) \le \gamma \|u - u_h\|\,\|u - v\|$ for all $v$ in $V_h.$
We used the $a$-orthogonality of $u - u_h$ and $V_h,$
$a(u - u_h,\, v) = 0$ for all $v$ in $V_h,$
which follows directly from
$a(u,v) = L(v) = a(u_h,v)$ for all $v$ in $V_h$.
Note: Céa's lemma holds on complex Hilbert spaces also; one then uses a sesquilinear form $a(\cdot,\cdot)$ instead of a bilinear one. The coercivity assumption then becomes $|a(v,v)| \ge \alpha \|v\|^2$ for all $v$ in $V$ (notice the absolute value sign around $a(v,v)$).
Error estimate in the energy norm
In many applications, the bilinear form $a(\cdot,\cdot)$ is symmetric, so
$a(v,w) = a(w,v)$ for all $v, w$ in $V.$
This, together with the above properties of this form, implies that $a(\cdot,\cdot)$ is an inner product on $V.$ The resulting norm
$\|v\|_a = \sqrt{a(v,v)}$
is called the energy norm, since it corresponds to a physical energy in many problems. This norm is equivalent to the original norm $\|\cdot\|.$
Using the $a$-orthogonality of $u - u_h$ and $V_h$ and the Cauchy–Schwarz inequality,
$\|u - u_h\|_a^2 = a(u - u_h,\, u - u_h) = a(u - u_h,\, u - v) \le \|u - u_h\|_a \cdot \|u - v\|_a$ for all $v$ in $V_h.$
Hence, in the energy norm, the inequality in Céa's lemma becomes
$\|u - u_h\|_a \le \|u - v\|_a$ for all $v$ in $V_h$
(notice that the constant $\gamma/\alpha$ on the right-hand side is no longer present).
This states that the subspace solution $u_h$ is the best approximation to the full-space solution $u$ with respect to the energy norm. Geometrically, this means that $u_h$ is the projection of the solution $u$ onto the subspace $V_h$ with respect to the inner product $a(\cdot,\cdot).$
Using this result, one can also derive a sharper estimate in the norm $\|\cdot\|$. Since
$\alpha \|u - u_h\|^2 \le a(u - u_h,\, u - u_h) = \|u - u_h\|_a^2 \le \|u - v\|_a^2 \le \gamma \|u - v\|^2$ for all $v$ in $V_h,$
it follows that
$\|u - u_h\| \le \sqrt{\frac{\gamma}{\alpha}}\, \|u - v\|$ for all $v$ in $V_h.$
An application of Céa's lemma
We will apply Céa's lemma to estimate the error of calculating the solution to an elliptic differential equation by the finite element method.
Consider the problem of finding a function $u = u(x)$ satisfying the conditions
$-u''(x) = f(x) \text{ for } x \in (a,b), \qquad u(a) = u(b) = 0,$
where $f$ is a given continuous function.
Physically, the solution $u$ to this two-point boundary value problem represents the shape taken by a string under the influence of a force such that at every point $x$ between $a$ and $b$ the force density is $f(x)\,\mathbf{e}$ (where $\mathbf{e}$ is a unit vector pointing vertically, while the endpoints of the string are on a horizontal line). For example, that force may be the gravity, when $f$ is a constant function (since the gravitational force is the same at all points).
Let the Hilbert space $V$ be the Sobolev space $H_0^1(a,b),$ which is the space of all square-integrable functions $v$ defined on $[a,b]$ that have a weak derivative $v'$ on $[a,b]$ with $v'$ also being square integrable, and $v$ satisfies the conditions $v(a) = v(b) = 0.$ The inner product on this space is
$(v,w) = \int_a^b v'(x)\, w'(x)\, dx$ for all $v$ and $w$ in $V.$
After multiplying the original boundary value problem by $v$ in this space and performing an integration by parts, one obtains the equivalent problem
$a(u,v) = L(v)$ for all $v$ in $V,$
with
$a(u,v) = \int_a^b u'(x)\, v'(x)\, dx,$
and
$L(v) = \int_a^b f(x)\, v(x)\, dx.$ It can be shown that the bilinear form $a(\cdot,\cdot)$ and the operator $L$ satisfy the assumptions of Céa's lemma.
In order to determine a finite-dimensional subspace $V_h$ of $V,$ consider a partition
$a = x_0 < x_1 < \cdots < x_{n-1} < x_n = b$
of the interval $[a,b],$ and let $V_h$ be the space of all continuous functions that are affine on each subinterval in the partition (such functions are called piecewise-linear). In addition, assume that any function in $V_h$ takes the value 0 at the endpoints of $[a,b].$ It follows that $V_h$ is a vector subspace of $V$ whose dimension is $n-1$ (the number of points in the partition that are not endpoints).
Let $u_h$ be the solution to the subspace problem
$a(u_h,v) = L(v)$ for all $v$ in $V_h,$
so one can think of $u_h$ as a piecewise-linear approximation to the exact solution $u.$ By Céa's lemma, there exists a constant $C > 0$ dependent only on the bilinear form $a(\cdot,\cdot),$ such that
$\|u - u_h\| \le C \|u - v\|$ for all $v$ in $V_h.$
To explicitly calculate the error between $u$ and $u_h,$ consider the function $\pi u$ in $V_h$ that has the same values as $u$ at the nodes of the partition (so $\pi u$ is obtained by linear interpolation on each interval $[x_i, x_{i+1}]$ from the values of $u$ at the interval's endpoints). It can be shown using Taylor's theorem that there exists a constant $K$ that depends only on the endpoints $a$ and $b,$ such that
$|u'(x) - (\pi u)'(x)| \le K h \|u''\|_{L^2(a,b)}$ for all $x$ in $(a,b),$ where $h$ is the largest length of the subintervals $[x_i, x_{i+1}]$ in the partition, and the norm on the right-hand side is the L2 norm.
This inequality then yields an estimate for the error
$\|u - \pi u\|.$
Then, by substituting $v = \pi u$ in Céa's lemma it follows that
$\|u - u_h\| \le C h \|u''\|_{L^2(a,b)},$
where $C$ is a different constant from the above (it depends only on the bilinear form, which implicitly depends on the interval $[a,b]$).
This result is of fundamental importance, as it states that the finite element method can be used to approximately calculate the solution of our problem, and that the error in the computed solution decreases proportionally to the partition size $h.$ Céa's lemma can be applied along the same lines to derive error estimates for finite element problems in higher dimensions (here the domain of $u$ was one-dimensional), and while using higher order polynomials for the subspace $V_h.$
References
(Original work from J. Céa)
Numerical differential equations
Hilbert spaces
Lemmas in analysis | Céa's lemma | [
"Physics",
"Mathematics"
] | 1,137 | [
"Theorems in mathematical analysis",
"Quantum mechanics",
"Lemmas in mathematical analysis",
"Hilbert spaces",
"Lemmas"
] |
11,077,605 | https://en.wikipedia.org/wiki/Submillimeter%20Wave%20Astronomy%20Satellite | Submillimeter Wave Astronomy Satellite (SWAS, also Explorer 74 and SMEX-3) is a NASA submillimetre astronomy satellite, and is the fourth spacecraft in the Small Explorer program (SMEX). It was launched on 6 December 1998, at 00:57:54 UTC, from Vandenberg Air Force Base aboard a Pegasus XL launch vehicle. The telescope was designed by the Smithsonian Astrophysical Observatory (SAO) and integrated by Ball Aerospace, while the spacecraft was built by NASA's Goddard Space Flight Center (GSFC). The mission's principal investigator is Gary J. Melnick.
History
The Submillimeter Wave Astronomy Satellite mission was approved on 1 April 1989. The project began with the Mission Definition Phase, officially starting on 29 September 1989, and running through 31 January 1992. During this time, the mission underwent a conceptual design review on 8 June 1990, and a demonstration of the Schottky receivers and acousto-optical spectrometer concept was performed on 8 November 1991.
Development
The mission's Development Phase ran from February 1992, through May 1996. The Submillimeter Wave Telescope underwent a preliminary design review on 13 May 1992, and a critical design review (CDR) on 23 February 1993. Ball Aerospace was responsible for the construction of and integration of components into the telescope. The University of Cologne delivered the acousto-optical spectrometer to Ball for integration into the telescope on 2 December 1993, while Millitech Corporation delivered the Schottky receivers to Ball on 20 June 1994. Ball delivered the finished telescope to Goddard Space Flight Center on 20 December 1994. GSFC, which was responsible for construction of the spacecraft bus, conducted integration of spacecraft and instruments from January through March 1995. Spacecraft qualification and testing took place between 1 April 1995, and 15 December 1995. After this, SWAS was placed into storage until 1 September 1998, when launch preparation was begun.
Mission
SWAS was designed to study the chemical composition, energy balance and structure of interstellar clouds, both galactic and extragalactic, and investigate the processes of stellar and planetary formation. Its sole instrument is a telescope operating in the submillimeter wavelengths of far infrared and microwave radiation. The telescope is composed of three main components: an elliptical off-axis Cassegrain reflector with a beam width of 4 arcminutes at operating frequencies, two Schottky diode receivers, and an acousto-optical spectrometer. The system is sensitive to frequencies between 487–557 GHz (538–616 μm), which allows it to focus on the spectral lines of molecular oxygen (O2) at 487.249 GHz; neutral carbon (C I) at 492.161 GHz; isotopic water (H218O) at 548.676 GHz; isotopic carbon monoxide (13CO) at 550.927 GHz; and water (H2O) at 556.936 GHz. Detailed 1° x 1° maps of giant molecular and dark cloud cores are generated from a grid of measurements taken at 3.7 arcminute spacings. SWAS's submillimeter radiometers are a pair of passively cooled subharmonic Schottky diode receivers, with receiver noise figures of 2500–3000 K. An acousto-optical spectrometer (AOS) was provided by the University of Cologne, in Germany. Outputs of the two SWAS receivers are combined to form a final intermediate frequency, which extends from 1.4 to 2.8 GHz and is dispersed into 1400 1-MHz channels by the AOS. SWAS is designed to make pointed observations stabilized on three axes, with a position accuracy of about 38 arcseconds, and jitter of about 24 arcseconds. Attitude information is obtained from gyroscopes whose drift is corrected via a star tracker. Momentum wheels are used to maneuver the spacecraft.
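As a quick consistency check (taking $c \approx 2.998 \times 10^{8}$ m/s), converting the quoted band edges from frequency to wavelength via $\lambda = c/\nu$ reproduces the stated 538–616 μm range:

$$\lambda = \frac{c}{\nu}: \qquad \frac{2.998\times10^{8}\ \mathrm{m\,s^{-1}}}{557\ \mathrm{GHz}} \approx 538\ \mu\mathrm{m}, \qquad \frac{2.998\times10^{8}\ \mathrm{m\,s^{-1}}}{487\ \mathrm{GHz}} \approx 616\ \mu\mathrm{m}.$$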
Experiment
Submillimeter Wave Telescope
The SWAS instrument is a submillimeter-wave telescope that incorporates dual heterodyne radiometers and an acousto-optical spectrometer. SWAS will measure water, molecular oxygen, atomic carbon, and isotopic carbon monoxide spectral line emissions from galactic interstellar clouds in the wavelength range 540-616 micrometres. Such submillimetre wave radiation cannot be detected from the ground because of atmospheric attenuation. The SWAS measurements will provide new information about the physical conditions (density and temperature) and chemistry in star-forming molecular clouds.
Launch
The spacecraft was delivered to Orbital Sciences Corporation at Vandenberg Air Force Base on 2 November 1998, for integration onto their Pegasus XL launch vehicle. Launch occurred on 6 December 1998, at 00:57:54 UTC, from Orbital Sciences' Stargazer L-1011 TriStar mothership. Its initial orbit was near-circular, with an inclination of 69.90°.
SWAS was originally scheduled to launch in June 1995 but was delayed due to back-to-back launch failures of the Pegasus XL launch vehicle in June 1994 and June 1995. A launch opportunity in January 1997 was again canceled due to a Pegasus XL launch failure in November 1996.
The commissioning phase of the mission lasted until 19 December 1998, when the telescope began producing useful science data. The SWAS mission had a planned duration of two years and a cost estimate of US$60 million, but mission extensions allowed for five and a half years of continuous science operations. During this time, data was taken on more than 200 astronomical objects. The decision was made to end science and spacecraft operations on 21 July 2004, at which time the spacecraft was placed into hibernation.
Deep Impact mission
To support the Deep Impact mission at comet 9P/Tempel, SWAS was brought out of hibernation on 1 June 2005. Vehicle check-out was completed on 5 June 2005 with no discernible degradation of equipment found. SWAS observations of the comet focused on isotopic water output both before and after the Deep Impact impactor struck the comet's nucleus on 4 July 2005. While water output was found to naturally vary by more than a factor of three during the observation campaign, SWAS data showed that there was no excessive release of water due to the impact event. After three months of observation, SWAS was once again placed into hibernation on 1 September 2005.
SWAS remains in Earth orbit on stand-by.
See also
Explorer program
References
Further reading
External links
SWAS website by the Center for Astrophysics Harvard & Smithsonian
SWAS data archive by NASA's Legacy Archive for Microwave Background Data Analysis
SWAS data archive by the NASA/IPAC Infrared Science Archive
SWAS website (archive) by NASA's Goddard Space Flight Center
Satellites orbiting Earth
Explorers Program
Space telescopes
Spacecraft launched in 1998
Submillimetre telescopes
Spacecraft launched by Pegasus rockets | Submillimeter Wave Astronomy Satellite | [
"Astronomy"
] | 1,405 | [
"Space telescopes"
] |
11,077,711 | https://en.wikipedia.org/wiki/Freenex | Freenex Co, Ltd. is a Korean company that supplies navigation systems for electronics and automotive applications. It is headquartered in Gil-dong, Gangdong-gu, Seoul, Korea, and was established in 2002. Freenex develops consumer and aviation technologies employing the Global Positioning System. Freenex also creates OEM products for BMW, Hyundai Autonet and WIA-brand navigation in automotive markets, and for Vitas.
Products include television, navigated teletext, digital maps and navigation. Its primary competitors are Hyundai Autonet and Garmin. Freenex's CEO is Lee Woo Yeol (이우열).
Products
L-Vision model: 520
H-Vision models: 2200, 700
D-Vision models: 700, 750, 720G
DM-720CL
DXM-760
DMB-100
Major competitors
Hyundai Autonet
Thinkway
Exroad
Garmin
See also
Automotive navigation system
References
External links
Freenex Homepage (Korean)
Navigation system companies
Auto parts suppliers of South Korea
Manufacturing companies based in Seoul
Electronics companies of South Korea
South Korean brands
Automotive companies established in 2002
South Korean companies established in 2002
Global Positioning System | Freenex | [
"Technology",
"Engineering"
] | 233 | [
"Global Positioning System",
"Wireless locating",
"Aircraft instruments",
"Aerospace engineering"
] |
11,077,785 | https://en.wikipedia.org/wiki/Elixir%20sulfanilamide | Elixir sulfanilamide was an improperly prepared sulfonamide antibiotic that caused mass poisoning in the United States in 1937. It is believed to have killed 107 people. The public outcry caused by this incident and other similar disasters led to the passing of the 1938 Federal Food, Drug, and Cosmetic Act, which significantly increased the Food and Drug Administration's powers to regulate drugs.
History
Aside from the Pure Food and Drug Act of 1906 and the Harrison Act of 1914 banning the sale of some narcotic drugs, there was no federal regulatory control in the United States of America for drugs until Congress enacted the 1938 Food, Drug, and Cosmetic Act in response to the elixir sulfanilamide poisonings.
In 1937, S. E. Massengill Company, a pharmaceutical manufacturer, created an oral preparation of sulfanilamide using diethylene glycol (DEG) as the solvent or excipient, and called the preparation "Elixir Sulfanilamide". DEG is poisonous to humans and other mammals, but Harold Watkins, the company's chief pharmacist and chemist, was not aware of this. (Although the first case of a fatality from the related ethylene glycol occurred in 1930 and studies had been published in medical journals stating DEG could cause kidney damage or failure, its toxicity was not widely known prior to the incident.) Watkins simply mixed raspberry flavoring into the powdered drug and then dissolved the mixture in DEG. Animal testing was not required by law, and Massengill performed none; there were no regulations at the time requiring premarket safety testing of drugs.
The company started selling and distributing the medication in September 1937. By October 11, the American Medical Association received a report of several deaths caused by the medication. The Food and Drug Administration was notified, and an extensive search was conducted to recover the distributed medicine. Frances Oldham Kelsey assisted on a research project that verified that the DEG solvent was responsible for the fatal adverse effects. At least 100 deaths were blamed on the medication.
The owner of the company, when pressed to admit some measure of culpability, infamously answered, "We have been supplying a legitimate professional demand and not once could have foreseen the unlooked-for results. I do not feel that there was any responsibility on our part." Watkins, the chemist, committed suicide while awaiting trial.
A woman wrote to U.S. President Roosevelt and described the death of her daughter:
"The first time I ever had occasion to call in a doctor for [Joan] and she was given Elixir of Sulfanilamide. All that is left to us is the caring for her little grave. Even the memory of her is mixed with sorrow for we can see her little body tossing to and fro and hear that little voice screaming with pain and it seems as though it would drive me insane. ... It is my plea that you will take steps to prevent such sales of drugs that will take little lives and leave such suffering behind and such a bleak outlook on the future as I have tonight."
Congress responded to public outrage by passing the 1938 Food, Drug, and Cosmetic Act, which required companies to perform animal safety tests on their proposed new drugs and submit the data to the FDA before being allowed to market their products. The Massengill Company paid a minimum fine under provisions of the 1906 Pure Food and Drugs Act, which prohibited labeling the preparation an "elixir" if it contained no ethanol.
See also
Human subject research legislation in the United States
List of medicine contamination incidents
1985 Austrian diethylene glycol wine scandal
References
Adulteration
Drug safety
Food and Drug Administration
Health disasters in the United States
Mass poisoning
Medical scandals in the United States
Pharmaceuticals policy
Poisons
Sulfonamide antibiotics
United States federal health legislation
1937 disasters in the United States
1937 health disasters | Elixir sulfanilamide | [
"Chemistry",
"Environmental_science"
] | 798 | [
"Poisons",
"Adulteration",
"Toxicology",
"Drug safety"
] |
11,079,376 | https://en.wikipedia.org/wiki/WHEE-LO | WHEE-LO is a trademark for a handheld toy that propels a plastic wheel along both sides of a metal track with magnets built into the wheel. As the track is tilted up and down, the wheel rolls the length of the track, top and bottom, and then again on the opposite side of the wire. In this way, the wheel always keeps in contact with the track, and can be continually propelled on its cyclical course. With proper timing, the wheel can be brought to a great speed. The trademark was first registered in 1958, but the toy had been marketed in 1953 by the Maggie Magnetic company in New York City. It included six colorful cardboard cutout discs ("Whee-lets") that attached to the wheel and created optical illusions as it spun.
In his autobiography, The Stringless Yo-yo, Harvey Matusow states that he invented the toy and sold the rights to the Maggie Magnetic company in the early 1950s.
Over the years, several other companies have marketed related toys with different names. As of 2024, the Schylling company manufactures a "Magnetic gyro wheel" toy. The Ipidi company manufactures a toy titled "Retro Magic Rail Twirler".
A plastic piece at one end of the track serves as both a handhold for the toy and an adjustable slider that sets the width of the track. The narrower the track, the faster the wheel goes, because the wheel then rides on a thicker part of its axle and covers more distance per rotation. Widen the track, and the wheel goes slower.
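The speed claim follows from rolling geometry; the contact radii below are illustrative assumptions. Per wheel revolution, the wheel advances one circumference of the axle at the contact point, $d = 2\pi r,$ so a thicker contact radius means more track covered per spin:

$$r = 2\ \mathrm{mm} \Rightarrow d = 2\pi r \approx 12.6\ \mathrm{mm}, \qquad r = 3\ \mathrm{mm} \Rightarrow d \approx 18.8\ \mathrm{mm}.$$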
Cultural References
In S4E25 of Family Guy, news anchor Tom Tucker says, "and now this!" and begins playing with a WHEE-LO during a broadcast. He continues, "Look at that. In the thirties, they called this an Uncle Spinny Dervish," to which co-anchor Diane Simmons replies, "Really?" Tom answers, "I don't know, I'm just bored."
References
Magnetic devices
Mechanical toys | WHEE-LO | [
"Physics",
"Technology"
] | 417 | [
"Physical systems",
"Machines",
"Mechanical toys"
] |
11,080,599 | https://en.wikipedia.org/wiki/Mallee%20%28habit%29 | Mallee are trees or shrubs, mainly certain species of eucalypts, which grow with multiple stems springing from an underground lignotuber, usually to a height of no more than . The term is widely used for trees with this growth habit across southern Australia, in the states of Western Australia, South Australia, New South Wales and Victoria, and has given rise to other uses of the term, including the ecosystems where such trees predominate, specific geographic areas within some of the states and as part of various species' names.
Etymology
The word is thought to originate from the word mali, meaning water, in the Wemba Wemba language, an Aboriginal Australian language of southern New South Wales and Victoria. The word is also used in the closely related Woiwurrung language and other Aboriginal languages of Victoria, South Australia, and southern New South Wales.
Overview
The term mallee is used to describe various species of trees or woody plants, mainly of the genus Eucalyptus, which grow with multiple stems springing from an underground bulbous woody structure called a lignotuber, or mallee root, usually to a height of no more than . The term is widely used for trees with this growth habit across southern Australia, in the states of Western Australia, South Australia, New South Wales and Victoria. The term is also applied to other eucalypts with a similar growth habit, in particular those in the closely related genera Corymbia and Angophora.
Some of the species grow as single-stemmed trees initially, but recover in mallee form if burnt to the ground by bushfire.
Over 50 per cent of eucalypt species are mallees, and they are mostly slow-growing and tough. The lignotuber enables the plant to regenerate after fire, wind damage or other type of trauma.
Range
Mallees are the dominant vegetation throughout semi-arid areas of Australia with reliable winter rainfall. Within this area, they form extensive woodlands and shrublands covering over in New South Wales, north-western Victoria, southern South Australia and southern Western Australia, with the greatest extent being in South Australia.
There are also some species found in the Northern Territory, namely Eucalyptus gamophylla (blue mallee), Eucalyptus pachycarpa and Eucalyptus setosa.
Farming on mallee land
Grubbing the land of mallee stumps for agricultural purposes was difficult for early settler farmers, as the land could not be easily ploughed and sown even after the trees were removed. In the colony of South Australia in the late 19th century, legislation which encouraged closer settlement made it even tougher for farmers to make a living.
Grubbing the mallee lands was a laborious and expensive task estimated at £2–7 per acre, and the government offered a £200 reward for the invention of an effective machine that would remove the stumps. To assist with the challenges of farming on mallee lands, some settlers turned their minds to the invention of technologies that could make some of the tasks easier. First the scrub or mallee roller was invented, which flattened the stumps and other vegetation, after which it would all be burnt and crops sown. The technique became known as "mullenising", as the invention of the device was attributed to a farmer called Mullen.
A few years later the stump jump plough was invented on the Yorke Peninsula by Richard Bowyer Smith and perfected by his brother, Clarence Herbert Smith. This machine had individually movable ploughshares, enabling the whole plough to move over stumps rather than having to steer around them, and proved a great success.
Uses of the term
The term is applied to both the tree itself and the whole plant community in which it predominates, giving rise to the classification of mallee woodlands and shrublands as one of Australia's major vegetation groups.
Several common names of eucalypt species have "mallee" in them, such as the Blue Mountains mallee (Eucalyptus stricta) and blue mallee (E. gamophylla and E. polybractea).
The term is used in the phrase strong as a mallee bull, and is colloquially used for any remote or isolated area, or as a synonym for outback.
Species
Widespread mallee species include:
E. dumosa (white mallee)
E. socialis (red mallee)
E. gracilis (yorrell)
E. oleosa (red mallee)
E. incrassata (ridge-fruited mallee)
E. diversifolia (soap mallee)
The following four Western Australian species can be found in the Waite Arboretum in Adelaide, and are suitable for gardens:
Eucalyptus pleurocarpa, or tallerack
Eucalyptus pyriformis, or dowerin rose
Eucalyptus preissiana, or bell-fruited mallee
Eucalyptus grossa, or coarse-leaved mallee
See also
Coppice
References
Further reading
Eucalyptus
Habit
Mediterranean forests, woodlands, and scrub in Australia
Flora of Australia
Plant common names
Plant life-forms
Plant morphology
Australian Aboriginal words and phrases | Mallee (habit) | [
"Biology"
] | 1,037 | [
"Common names of organisms",
"Plants",
"Plant morphology",
"Plant common names",
"Plant life-forms"
] |
11,081,176 | https://en.wikipedia.org/wiki/Mind%E2%80%93body%20problem | The mind–body problem is a philosophical problem concerning the relationship between thought and consciousness in the human mind and body.
It is not obvious how the concept of the mind and the concept of the body relate. For example, feelings of sadness (which are mental events) cause people to cry (which is a physical state of the body). Finding a joke funny (a mental event) causes one to laugh (another bodily state). Feelings of pain (in the mind) cause avoidance behaviours (in the body), and so on.
Similarly, changing the chemistry of the body (and the brain especially) via drugs (such as antipsychotics, SSRIs, or alcohol) can change one's state of mind in nontrivial ways. Alternatively, therapeutic interventions like cognitive behavioral therapy can change cognition in ways that have downstream effects on bodily health.
In general, the existence of these mind–body connections seems unproblematic. Issues arise, however, once one considers what exactly we should make of these relations from a metaphysical or scientific perspective. Such reflections quickly raise a number of questions like:
Are the mind and body two distinct entities, or a single entity?
If the mind and body are two distinct entities, do the two of them causally interact?
Is it possible for these two distinct entities to causally interact?
What is the nature of this interaction?
Can this interaction ever be an object of empirical study?
If the mind and body are a single entity, then are mental events explicable in terms of physical events, or vice versa?
Is the relation between mental and physical events something that arises de novo at a certain point in development?
These and other questions that discuss the relation between mind and body are questions that all fall under the banner of the 'mind–body problem'.
Mind–body interaction and mental causation
Philosophers David L. Robb and John F. Heil introduce mental causation in terms of the mind–body problem of interaction:
Contemporary neurophilosopher Georg Northoff suggests that mental causation is compatible with classical formal and final causality.
Biologist, theoretical neuroscientist and philosopher, Walter J. Freeman, suggests that explaining mind–body interaction in terms of "circular causation" is more relevant than linear causation.
In neuroscience, much has been learned about correlations between brain activity and subjective, conscious experiences. Many suggest that neuroscience will ultimately explain consciousness:
"...consciousness is a biological process that will eventually be explained in terms of molecular signaling pathways used by interacting populations of nerve cells..." However, this view has been criticized because consciousness has yet to be shown to be a process, and the "hard problem" of relating consciousness directly to brain activity remains elusive.
Since the 1927 Solvay Conference in Brussels, physicists have recognized that interpreting their experiments with light and electricity required a new theory to explain why light behaves both as a wave and as a particle. The implications were profound: the usual empirical model of explaining natural phenomena could not account for this duality of matter and non-matter. In a significant way, this has brought back the conversation on mind–body duality.
Neural correlates
The neural correlates of consciousness "are the smallest set of brain mechanisms and events sufficient for some specific conscious feeling, as elemental as the color red or as complex as the sensual, mysterious, and primeval sensation evoked when looking at [a] jungle scene..." Neuroscientists use empirical approaches to discover neural correlates of subjective phenomena.
Neurobiology and neurophilosophy
A science of consciousness must explain the exact relationship between subjective conscious mental states and brain states formed by electrochemical interactions in the body, the so-called hard problem of consciousness. Neurobiology studies the connection scientifically, as do neuropsychology and neuropsychiatry. Neurophilosophy is the interdisciplinary study of neuroscience and philosophy of mind. In this pursuit, neurophilosophers, such as Patricia Churchland, Paul Churchland and Daniel Dennett, have focused primarily on the body rather than the mind. In this context, neuronal correlates may be viewed as causing consciousness, where consciousness can be thought of as an undefined property that depends upon this complex, adaptive, and highly interconnected biological system. However, it is unknown whether discovering and characterizing neural correlates may eventually provide a theory of consciousness that can explain the first-person experience of these "systems", and determine whether other systems of equal complexity lack such features.
The massive parallelism of neural networks allows redundant populations of neurons to mediate the same or similar percepts. Nonetheless, it is assumed that every subjective state will have associated neural correlates, which can be manipulated to artificially inhibit or induce the subject's experience of that conscious state. The growing ability of neuroscientists to manipulate neurons using methods from molecular biology in combination with optical tools was achieved by the development of behavioral and organic models that are amenable to large-scale genomic analysis and manipulation. Non-human analysis such as this, in combination with imaging of the human brain, have contributed to a robust and increasingly predictive theoretical framework.
Arousal and content
There are two common but distinct dimensions of the term consciousness, one involving arousal and states of consciousness and the other involving content of consciousness and conscious states. To be conscious of something, the brain must be in a relatively high state of arousal (sometimes called vigilance), whether awake or in REM sleep. Brain arousal level fluctuates in a circadian rhythm but these natural cycles may be influenced by lack of sleep, alcohol and other drugs, physical exertion, etc. Arousal can be measured behaviorally by the signal amplitude required to trigger a given reaction (for example, the sound level that causes a subject to turn and look toward the source). High arousal states involve conscious states that feature specific perceptual content, planning and recollection or even fantasy. Clinicians use scoring systems such as the Glasgow Coma Scale to assess the level of arousal in patients with impaired states of consciousness such as the comatose state, the persistent vegetative state, and the minimally conscious state. Here, "state" refers to different amounts of externalized, physical consciousness: ranging from a total absence in coma, persistent vegetative state and general anesthesia, to a fluctuating, minimally conscious state, such as sleep walking and epileptic seizure.
Many nuclei with distinct chemical signatures in the thalamus, midbrain and pons must function for a subject to be in a sufficient state of brain arousal to experience anything at all. These nuclei therefore belong to the enabling factors for consciousness. Conversely it is likely that the specific content of any particular conscious sensation is mediated by particular neurons in the cortex and their associated satellite structures, including the amygdala, thalamus, claustrum and the basal ganglia.
Theoretical frameworks
A variety of approaches have been proposed. Most are either dualist or monist. Dualism maintains a rigid distinction between the realms of mind and matter. Monism maintains that there is only one unifying reality, substance or essence, in terms of which everything can be explained.
Each of these categories contains numerous variants. The two main forms of dualism are substance dualism, which holds that the mind is formed of a distinct type of substance not governed by the laws of physics, and property dualism, which holds that mental properties involving conscious experience are fundamental properties, alongside the fundamental properties identified by a completed physics. The three main forms of monism are physicalism, which holds that the mind consists of matter organized in a particular way; idealism, which holds that only thought truly exists and matter is merely a representation of mental processes; and neutral monism, which holds that both mind and matter are aspects of a distinct essence that is itself identical to neither of them. Psychophysical parallelism is a third possible alternative regarding the relation between mind and body, between interaction (dualism) and one-sided action (monism).
Several philosophical perspectives that have sought to escape the problem by rejecting the mind–body dichotomy have been developed. The historical materialism of Karl Marx and subsequent writers, itself a form of physicalism, held that consciousness was engendered by the material contingencies of one's environment. An explicit rejection of the dichotomy is found in French structuralism, and is a position that generally characterized post-war Continental philosophy.
An ancient model of the mind known as the Five-Aggregate Model, described in the Buddhist teachings, explains the mind as continuously changing sense impressions and mental phenomena. Considering this model, it is possible to understand that it is the constantly changing sense impressions and mental phenomena (i.e., the mind) that experience/analyze all external phenomena in the world as well as all internal phenomena including the body anatomy, the nervous system as well as the organ brain. This conceptualization leads to two levels of analyses: (i) analyses conducted from a third-person perspective on how the brain works, and (ii) analyzing the moment-to-moment manifestation of an individual's mind-stream (analyses conducted from a first-person perspective). Considering the latter, the manifestation of the mind-stream is described as happening in every person all the time, even in a scientist who analyzes various phenomena in the world, including analyzing and hypothesizing about the organ brain.
Dualism
The following is a very brief account of some contributions to the mind–body problem.
Interactionism
The viewpoint of interactionism suggests that the mind and body are two separate substances, but that each can affect the other. This interaction between the mind and body was first put forward by the philosopher René Descartes. Descartes believed that the mind was non-physical and permeated the entire body, but that the mind and body interacted via the pineal gland. This theory has changed throughout the years, and in the 20th century its main adherents were the philosopher of science Karl Popper and the neurophysiologist John Carew Eccles. A more recent and popular version of interactionism is the viewpoint of emergentism. This perspective states that mental states are a result of brain states, and that mental events can then influence the brain, resulting in two-way communication between the mind and body.
The absence of an empirically identifiable meeting point between the non-physical mind (if there is such a thing) and its physical extension (if there is such a thing) has been raised as a criticism of interactionist dualism. This criticism has led many modern philosophers of mind to maintain that the mind is not something separate from the body. These approaches have been particularly influential in the sciences, particularly in the fields of sociobiology, computer science, evolutionary psychology, and the neurosciences.
Epiphenomenalism
The viewpoint of epiphenomenalism suggests that the physical brain can cause mental events in the mind, but that the mind cannot interact with the brain at all; mental occurrences are simply a side effect of the brain's processes. On this view, while one's body may react to feelings of joy, fear, or sadness, the emotion does not cause the physical response. Rather, joy, fear, sadness, and all bodily reactions are caused by chemicals and their interaction with the body.
Psychophysical parallelism
The viewpoint of psychophysical parallelism suggests that the mind and body are entirely independent of one another. Furthermore, this viewpoint states that both mental and physical stimuli and reactions are experienced simultaneously by both the mind and body; however, there is no interaction or communication between the two.
Double aspectism
Double aspectism is an extension of psychophysical parallelism which also suggests that the mind and body cannot interact, nor can they be separated. Baruch Spinoza and Gustav Fechner were two notable proponents of double aspectism; however, Fechner later expanded upon it to form the branch of psychophysics in an attempt to prove the relationship of the mind and body.
Pre-established harmony
The viewpoint of pre-established harmony is another offshoot of psychophysical parallelism which suggests that mental events and bodily events are separate and distinct, but that they are both coordinated by an external agent, such as God. A notable adherent of the idea of pre-established harmony is Gottfried Wilhelm von Leibniz in his theory of Monadology. His explanation of pre-established harmony relied heavily upon God as the external agent who coordinated the mental and bodily events of all things in the beginning.
Gottfried Wilhelm Leibniz's theory of pre-established harmony () is a philosophical theory about causation under which every "substance" affects only itself, but all the substances (both bodies and minds) in the world nevertheless seem to causally interact with each other because they have been programmed by God in advance to "harmonize" with each other. Leibniz's term for these substances was "monads", which he described in a popular work (Monadology §7) as "windowless".
The concept of pre-established harmony can be understood by considering an event with both seemingly mental and physical aspects. For example, consider saying 'ouch' after stubbing one's toe. There are two general ways to describe this event: in terms of mental events (where the conscious sensation of pain caused one to say 'ouch') and in terms of physical events (where neural firings in one's toe, carried to the brain, are what caused one to say 'ouch'). The main task of the mind–body problem is figuring out how these mental events (the feeling of pain) and physical events (the nerve firings) relate. Leibniz's pre-established harmony attempts to answer this puzzle, by saying that mental and physical events are not genuinely related in any causal sense, but only seem to interact due to psycho-physical fine-tuning.
Leibniz's theory is best known as a solution to the mind–body problem of how mind can interact with the body. Leibniz rejected the idea of physical bodies affecting each other, and explained all physical causation in this way.
Under pre-established harmony, the preprogramming of each mind must be extremely complex, since only it causes its own thoughts or actions, for as long as it exists. To appear to interact, each substance's "program" must contain a description of either the entire universe, or of how the object behaves at all times during all interactions that appear to occur.
An example:
An apple falls on Alice's head, apparently causing the experience of pain in her mind. In fact, the apple does not cause the pain—the pain is caused by some previous state of Alice's mind. If Alice then seems to shake her hand in anger, it is not actually her mind that causes this, but some previous state of her hand.
Note that if a mind behaves as a windowless monad, there is no need for any other object to exist to create that mind's sense perceptions, leading to a solipsistic universe that consists only of that mind. Leibniz seems to admit this in his Discourse on Metaphysics, section 14. However, he claims that his principle of harmony, according to which God creates the best and most harmonious world possible, dictates that the perceptions (internal states) of each monad "expresses" the world in its entirety, and the world expressed by the monad actually exists. Although Leibniz says that each monad is "windowless", he also claims that it functions as a "mirror" of the entire created universe.
On occasion, Leibniz styled himself as "the author of the system of pre-established harmony".
Immanuel Kant's professor Martin Knutzen regarded pre-established harmony as "the pillow for the lazy mind".
In his sixth Metaphysical Meditation, Descartes talked about a "coordinated disposition of created things set up by God", shortly after having identified "nature in its general aspect" with God himself. His conception of the relationship between God and his normative nature actualized in the existing world recalls both the pre-established harmony of Leibniz and the Deus sive Natura of Baruch Spinoza.
Occasionalism
The viewpoint of occasionalism is another offshoot of psychophysical parallelism; however, the major difference is that the mind and body have some indirect interaction. Occasionalism suggests that the mind and body are separate and distinct, but that they interact through divine intervention. Nicolas Malebranche was one of the main contributors to this idea, using it as a way to address his disagreements with Descartes' view of the mind–body problem. In Malebranche's occasionalism, thoughts are a wish for the body to move, which is then fulfilled by God causing the body to act.
Historical background
The problem was popularized by René Descartes in the 17th century, resulting in Cartesian dualism, but it had also been addressed by pre-Aristotelian philosophers, in Avicennian philosophy, and in earlier Asian traditions.
The Buddha
The Buddha (480–400 B.C.E.), founder of Buddhism, described the mind and the body as depending on each other in the way that two sheaves of reeds stand leaning against one another, and taught that the world consists of mind and matter which work together, interdependently. Buddhist teachings describe the mind as manifesting from moment to moment, one thought moment at a time as a fast flowing stream. The components that make up the mind are known as the five aggregates (i.e., material form, feelings, perception, volition, and sensory consciousness), which arise and pass away continuously. The arising and passing of these aggregates in the present moment is described as being influenced by five causal laws: biological laws, psychological laws, physical laws, volitional laws, and universal laws. The Buddhist practice of mindfulness involves attending to this constantly changing mind-stream.
Ultimately, the Buddha's philosophy is that both mind and forms are conditionally arising qualities of an ever-changing universe in which, when nirvāna is attained, all phenomenal experience ceases to exist. According to the anattā doctrine of the Buddha, the conceptual self is a mere mental construct of an individual entity and is basically an impermanent illusion, sustained by form, sensation, perception, thought and consciousness. The Buddha argued that mentally clinging to any views will result in delusion and stress, since, according to the Buddha, a real self (conceptual self, being the basis of standpoints and views) cannot be found when the mind has clarity.
Plato
Plato (429–347 B.C.E.) believed that the material world is a shadow of a higher reality that consists of concepts he called Forms. According to Plato, objects in our everyday world "participate in" these Forms, which confer identity and meaning to material objects. For example, a circle drawn in the sand would be a circle only because it participates in the concept of an ideal circle that exists somewhere in the world of Forms. He argued that, as the body is from the material world, the soul is from the world of Forms and is thus immortal. He believed the soul was temporarily united with the body and would only be separated at death, when it, if pure, would return to the world of Forms; otherwise, reincarnation follows. Since the soul does not exist in time and space, as the body does, it can access universal truths. For Plato, ideas (or Forms) are the true reality, and are experienced by the soul. The body is for Plato empty in that it cannot access the abstract reality of the world; it can only experience shadows. This is determined by Plato's essentially rationalistic epistemology.
Aristotle
For Aristotle (384–322 BC) mind is a faculty of the soul. Regarding the soul, he said:
In the end, Aristotle saw the relation between soul and body as uncomplicated, in the same way that it is uncomplicated that a cubical shape is a property of a toy building block. The soul is a property exhibited by the body, one among many. Moreover, Aristotle proposed that when the body perishes, so does the soul, just as the shape of a building block disappears with destruction of the block.
Medieval Aristotelianism
Working in the Aristotelian-influenced tradition of Thomism, Thomas Aquinas (1225–1274), like Aristotle, believed that the mind and the body are one, like a seal and wax; therefore, it is pointless to ask whether or not they are one. However, (referring to "mind" as "the soul") he asserted that the soul persists after the death of the body in spite of their unity, calling the soul "this particular thing". Since his view was primarily theological rather than philosophical, it is impossible to fit it neatly within either the category of physicalism or dualism.
Influences of Eastern monotheistic religions
In religious philosophy of Eastern monotheism, dualism denotes a binary opposition of an idea that contains two essential parts. The first formal concept of a "mind–body" split may be found in the divinity–secularity dualism of the ancient Persian religion of Zoroastrianism around the mid-fifth century BC. Gnosticism is a modern name for a variety of ancient dualistic ideas inspired by Judaism that were popular in the first and second centuries AD. These ideas later seem to have been incorporated into Galen's "tripartite soul" that fed into both the Christian sentiments expressed in the later Augustinian theodicy and Avicenna's Platonism in Islamic philosophy.
Descartes
René Descartes (1596–1650) believed that mind exerted control over the brain via the pineal gland:
His posited relation between mind and body is called Cartesian dualism or substance dualism. He held that mind was distinct from matter, but could influence matter. How such an interaction could be exerted remains a contentious issue.
Kant
For Immanuel Kant (1724–1804) beyond mind and matter there exists a world of a priori forms, which are seen as necessary preconditions for understanding. Some of these forms, space and time being examples, today seem to be pre-programmed in the brain.
Kant views the mind–body interaction as taking place through forces that may be of different kinds for mind and body.
Huxley
For Thomas Henry Huxley (1825–1895) the conscious mind was a by-product of the brain that has no influence upon the brain, a so-called epiphenomenon.
Whitehead
Alfred North Whitehead advocated a sophisticated form of panpsychism that has been called by David Ray Griffin panexperientialism.
Popper
For Karl Popper (1902–1994) there are three aspects of the mind–body problem: the worlds of matter, mind, and of the creations of the mind, such as mathematics. In his view, the third-world creations of the mind could be interpreted by the second-world mind and used to affect the first-world of matter. An example might be radio, an example of the interpretation of the third-world (Maxwell's electromagnetic theory) by the second-world mind to suggest modifications of the external first world.
Ryle
With his 1949 book, The Concept of Mind, Gilbert Ryle "was seen to have put the final nail in the coffin of Cartesian dualism".
In the chapter "Descartes' Myth," Ryle introduces "the dogma of the Ghost in the machine" to describe the philosophical concept of the mind as an entity separate from the body: "I hope to prove that it is entirely false, and false not in detail but in principle. It is not merely an assemblage of particular mistakes. It is one big mistake and a mistake of a special kind. It is, namely, a category mistake."
Searle
For John Searle (b. 1932) the mind–body problem is a false dichotomy; that is, mind is a perfectly ordinary aspect of the brain. Searle proposed biological naturalism in 1980.
See also
Binding problem
Bodymind
Chinese room
Cognitive closure (philosophy)
Cognitive neuroscience
Connectionism
Consciousness in animals
Downward causation
Descartes' Error
Embodied cognition
Existentialism
Explanatory gap
Free will
Ideasthesia
Namarupa (Buddhist concept)
Neuroscience of free will
Philosophical zombie
Philosophy of artificial intelligence
Pluralism
Problem of mental causation
Problem of other minds
Qualia
Reductionism
Sacred–profane dichotomy
Sentience
Strange loop (self-reflective thoughts)
The Mind's I (book on the subject)
Turing test
Vertiginous question
William H. Poteat
References
Bibliography
Kim, J. (1995). "Mind–Body Problem", Oxford Companion to Philosophy. Ted Honderich (ed.). Oxford: Oxford University Press.
Massimini, M.; Tononi, G. (2018). Sizing up Consciousness: Towards an Objective Measure of the Capacity for Experience. Oxford University Press.
Turner, Bryan S. (1996). The Body and Society: Exploration in Social Theory.
External links
Plato's Middle Period Metaphysics and Epistemology - The Stanford Encyclopedia of Philosophy
The Mind/Body Problem, BBC Radio 4 discussion with Anthony Grayling, Julian Baggini & Sue James (In Our Time, Jan. 13, 2005)
Baruch Spinoza
Cognition
Consciousness
Dichotomies
Enactive cognition
History of psychology
Metaphysics of mind
Neuroscience
Philosophical problems
René Descartes | Mind–body problem | [
"Biology"
] | 5,323 | [
"Neuroscience"
] |
11,081,541 | https://en.wikipedia.org/wiki/Windowed%20envelope | A windowed envelope is a conventional envelope with a transparent window (typically PET or BOPS (bi-oriented polystyrene) plastic film) to allow the recipient's address to be printed on the paper contained within.
History
Americus F. Callahan of Chicago, Illinois, in the United States, received the first patent for a windowed envelope on 10 June 1902. Originally called the "outlook envelop", the patent initially anticipated using thin rice paper as the transparent material forming the window. That material was soon replaced by glassine and by the end of the century, clear plastics; some uses omit the film entirely, leaving an open cut-out. The design has otherwise remained nearly unchanged.
The design and patent letter were completed on 15 November 1901, with the patent filing occurring on 9 December 1901. The United States patent number for Callahan's design is 701,839.
Callahan specifically recommended the use of Manila paper, which is considerably cheaper than thicker writing paper and also provides an opaque background for secure covering of the letter within. Callahan also recommends the use of black paper, which would likewise provide an opaque background whilst simultaneously increasing the contrast with the white address blocks.
Advantages and disadvantages
The window permits text on the letter itself to be used simultaneously as the address of the recipient and the return-address of the sender, reducing the need to print the addresses onto the envelope itself, which at the time of Callahan's invention was done with the aid of a typewriter. This arguably amounts to a savings in materials, particularly through the reduction in ink usage; but on the other hand the window has to be cut out and the opening sometimes covered with an additional material called a patch (originally glassine but now plastic). This makes the envelope more expensive. There is also the argument that the paper of the envelope can be substituted with lesser-quality paper, as the envelope no longer must be written upon; this was perhaps more relevant at the time of Callahan's invention but is a somewhat specious claim today. Over time the quality of paper has generally improved. Satisfactorily strong envelopes for business and general-purpose domestic correspondence can be, and are, in fact made out of paper of various qualities. In Britain by the 1940s, during World War II, envelopes were made out of newspaper because of the paper shortage.
Additional savings can be achieved by removing the time spent inscribing additional addresses upon the envelope. At the time, large business offices—particularly within the telegram industry—employed corps of envelope addressers who wrote the addresses upon envelopes. In addition to the labor costs, this method was prone to mismatches, where the address on the letter header within would not be the same as the address upon the envelope.
Providing a windowed envelope for returning a bill payment forces the payee to return the bill's tear-off stub with the payment, making it easier to ensure the payment is credited to the correct account.
Owing to the benefits in both time, cost, and quality, the windowed envelope design has become nearly ubiquitous among modern commercial mailings.
Regarding recycling after use: plastic windows are not normally a problem for paper mills as the window can usually be easily screened out during the manufacturing process.
References
Envelopes
Postal systems
American inventions | Windowed envelope | [
"Technology"
] | 669 | [
"Transport systems",
"Postal systems"
] |
11,081,803 | https://en.wikipedia.org/wiki/Phase%20retrieval | Phase retrieval is the process of algorithmically finding solutions to the phase problem. Given a complex spectrum $F(k)$, of amplitude $|F(k)|$, and phase $\psi(k)$:

$F(k) = |F(k)|\, e^{i \psi(k)} = \int f(x)\, e^{-2\pi i k \cdot x}\, dx$,

where x is an M-dimensional spatial coordinate and k is an M-dimensional spatial frequency coordinate. Phase retrieval consists of finding the phase $\psi(k)$ that satisfies a set of constraints for a measured amplitude $|F(k)|$. Important applications of phase retrieval include X-ray crystallography, transmission electron microscopy and coherent diffractive imaging, for which $M = 2$. Uniqueness theorems for both 1-D and 2-D cases of the phase retrieval problem, including the phaseless 1-D inverse scattering problem, were proven by Klibanov and his collaborators (see References).
Problem formulation
Here we consider the 1-D discrete Fourier transform (DFT) phase retrieval problem. The DFT of a complex signal $x = (x[0], \ldots, x[N-1])$ is given by

$Y[k] = \sum_{n=0}^{N-1} x[n]\, e^{-2\pi i k n / N}, \quad k = 0, \ldots, N-1$,

and the oversampled DFT of $x$ is given by

$Z[k] = \sum_{n=0}^{N-1} x[n]\, e^{-2\pi i k n / M}, \quad k = 0, \ldots, M-1$,

where $M \geq 2N - 1$.

Since the DFT operator is bijective, recovering $x$ from $Y$ is equivalent to recovering the phase of $Y[k]$. It is common to recover a signal from its autocorrelation sequence instead of its Fourier magnitude. That is, denote by $\tilde{x}$ the vector $x$ after padding with $N-1$ zeros. The autocorrelation sequence of $\tilde{x}$ is then defined as

$a[m] = \sum_{n} \tilde{x}[n+m]\, \overline{\tilde{x}[n]}, \quad m = 0, \ldots, M-1$ (indices taken modulo $M$),

and the DFT of $a$, denoted by $\hat{a}$, satisfies $\hat{a}[k] = |Z[k]|^2$.
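The equivalence between Fourier magnitudes and the autocorrelation can be checked numerically. The following NumPy sketch, with arbitrary test data, verifies that the inverse DFT of the squared oversampled DFT magnitudes reproduces the autocorrelation sequence defined above.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
M = 2 * N - 1                                   # oversampling length M >= 2N - 1
x = rng.normal(size=N) + 1j * rng.normal(size=N)
x_pad = np.concatenate([x, np.zeros(M - N)])    # x padded with N - 1 zeros

# a_hat[k] = |Z[k]|^2, so the autocorrelation is the inverse DFT of |Z|^2.
a = np.fft.ifft(np.abs(np.fft.fft(x_pad)) ** 2)

# Direct evaluation of a[m] = sum_n x_pad[n + m] * conj(x_pad[n]) (mod M).
direct = np.array([np.sum(np.roll(x_pad, -m) * np.conj(x_pad)) for m in range(M)])
assert np.allclose(a, direct)
```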
Methods
Error reduction algorithm
The error reduction is a generalization of the Gerchberg–Saxton algorithm. It solves for $f(x)$ from measurements of $|F(u)|$ by iterating a four-step process. For the $k$th iteration the steps are as follows:

Step (1): $g_k(x)$, $\phi_k(u)$, and $G_k(u)$ are estimates of, respectively, $f(x)$, $\psi(u)$ and $F(u)$. In the first step we calculate the Fourier transform of $g_k(x)$:

$G_k(u) = |G_k(u)|\, e^{i \phi_k(u)} = \mathcal{F}[g_k(x)]$

Step (2): The experimental value of $|F(u)|$, calculated from the diffraction pattern via the signal equation, is then substituted for $|G_k(u)|$, giving an estimate of the Fourier transform:

$G'_k(u) = |F(u)|\, e^{i \phi_k(u)}$,

where the ' denotes an intermediate result that will be discarded later on.

Step (3): the estimate of the Fourier transform is then inverse Fourier transformed:

$g'_k(x) = \mathcal{F}^{-1}[G'_k(u)]$

Step (4): $g'_k(x)$ then must be changed so that the new estimate of the object, $g_{k+1}(x)$, satisfies the object constraints. $g_{k+1}(x)$ is therefore defined piecewise as:

$g_{k+1}(x) = \begin{cases} g'_k(x), & x \notin \gamma \\ 0, & x \in \gamma \end{cases}$

where $\gamma$ is the domain in which $g'_k(x)$ does not satisfy the object constraints. A new estimate $g_{k+1}(x)$ is obtained and the four step process is repeated.
This process is continued until both the Fourier constraint and object constraint are satisfied. Theoretically, the process will always converge, but the large number of iterations needed to produce a satisfactory image (generally >2000) makes the error-reduction algorithm by itself unsuitable for practical applications.
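As a concrete illustration, here is a minimal NumPy sketch of the four-step loop for a real, nonnegative 2-D object whose support is known (given as a boolean mask). The function and variable names are illustrative, not from any particular library.

```python
import numpy as np

def error_reduction(measured_mag, support, n_iter=2000, seed=0):
    """Error reduction with support and nonnegativity as the object constraints."""
    rng = np.random.default_rng(seed)
    g = rng.random(support.shape) * support              # initial object estimate
    for _ in range(n_iter):
        G = np.fft.fft2(g)                               # step 1: Fourier transform
        G = measured_mag * np.exp(1j * np.angle(G))      # step 2: impose measured modulus
        g_prime = np.fft.ifft2(G).real                   # step 3: inverse transform
        g = np.where(support, np.clip(g_prime, 0, None), 0.0)  # step 4: constraints
    return g
```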
Hybrid input-output algorithm
The hybrid input-output algorithm is a modification of the error-reduction algorithm - the first three stages are identical. However, $g_k(x)$ no longer acts as an estimate of $f(x)$, but as the input function corresponding to the output function $g'_k(x)$, which is an estimate of $f(x)$. In the fourth step, when the function $g'_k(x)$ violates the object constraints, the value of $g_{k+1}(x)$ is forced towards zero, but optimally not to zero. The chief advantage of the hybrid input-output algorithm is that the function $g_{k+1}(x)$ contains feedback information concerning previous iterations, reducing the probability of stagnation. It has been shown that the hybrid input-output algorithm converges to a solution significantly faster than the error reduction algorithm. Its convergence rate can be further improved through step size optimization algorithms.

$g_{k+1}(x) = \begin{cases} g'_k(x), & x \notin \gamma \\ g_k(x) - \beta\, g'_k(x), & x \in \gamma \end{cases}$

Here $\beta$ is a feedback parameter which can take a value between 0 and 1. For most applications, $\beta \approx 0.9$ gives optimal results (Scientific Reports, volume 8, article number 6436 (2018)).
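A sketch of the corresponding update, under the same assumptions (real, nonnegative object with a known boolean support mask) and with the same illustrative naming as the error-reduction sketch above:

```python
import numpy as np

def hio(measured_mag, support, beta=0.9, n_iter=1000, seed=0):
    """Hybrid input-output: feedback update where object constraints are violated."""
    rng = np.random.default_rng(seed)
    g = rng.random(support.shape) * support
    for _ in range(n_iter):
        G = measured_mag * np.exp(1j * np.angle(np.fft.fft2(g)))  # steps 1-2
        g_prime = np.fft.ifft2(G).real                            # step 3
        violated = (~support) | (g_prime < 0)                     # the domain gamma
        g = np.where(violated, g - beta * g_prime, g_prime)       # step 4 with feedback
    return g
```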
Shrinkwrap
For a two-dimensional phase retrieval problem, there is a degeneracy of solutions, as $f(x)$ and its conjugate twin $f^{*}(-x)$ have the same Fourier modulus. This leads to "image twinning", in which the phase retrieval algorithm stagnates, producing an image with features of both the object and its conjugate. The shrinkwrap technique periodically updates the estimate of the support by low-pass filtering the current estimate of the object amplitude (by convolution with a Gaussian) and applying a threshold, leading to a reduction in the image ambiguity.
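The support update itself is short; a sketch using SciPy's Gaussian filter, where the smoothing width and threshold fraction are typical but arbitrary choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shrinkwrap_support(g, sigma=3.0, frac=0.2):
    """Blur the current object amplitude and threshold it to get a new support mask."""
    blurred = gaussian_filter(np.abs(g), sigma)
    return blurred > frac * blurred.max()
```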
Semidefinite relaxation-based algorithm for short time Fourier transform
Phase retrieval is an ill-posed problem. To uniquely identify the underlying signal, in addition to methods that add prior information, such as the Gerchberg–Saxton algorithm, another way is to add magnitude-only measurements, such as those of the short-time Fourier transform (STFT).

The method introduced below is mainly based on the work of Jaganathan et al.
Short time Fourier transform
Given a discrete signal $x = (x[0], \ldots, x[N-1])$ which is sampled from $f(x)$, we use a window $w = (w[0], \ldots, w[W-1])$ of length $W$ to compute the STFT of $x$, denoted by $Y$:

$Y[m, r] = \sum_{n=0}^{N-1} x[n]\, w[rL - n]\, e^{-2\pi i m n / N}$

for $0 \leq m \leq N-1$ and $0 \leq r \leq R-1$, where the parameter $L$ denotes the separation in time between adjacent short-time sections and the parameter $R = \left\lceil \frac{N + W - 1}{L} \right\rceil$ denotes the number of short-time sections considered.

The other interpretation (called the sliding window interpretation) of the STFT can be used with the help of the discrete Fourier transform (DFT). Let $w_r$ denote the window element obtained from the shifted and flipped window $w$, that is, $w_r[n] = w[rL - n]$. Then we have

$Y[\cdot, r] = \mathrm{DFT}_N(x \circ w_r)$, where $\circ$ denotes element-wise multiplication.
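A sketch of how such magnitude-square STFT measurements might be generated in NumPy, under the flipped-window convention above; the window, hop and lengths are arbitrary test values.

```python
import numpy as np

def stft_mag_sq(x, w, L):
    """|STFT|^2 with Y[m, r] = DFT_N(x * w_r)[m], where w_r[n] = w[r*L - n]."""
    N, W = len(x), len(w)
    R = int(np.ceil((N + W - 1) / L))           # number of short-time sections
    Z = np.zeros((N, R))
    for r in range(R):
        n = np.arange(N)
        idx = r * L - n                          # argument of the flipped window
        w_r = np.where((idx >= 0) & (idx < W), w[np.clip(idx, 0, W - 1)], 0.0)
        Z[:, r] = np.abs(np.fft.fft(x * w_r)) ** 2
    return Z

x = np.random.default_rng(0).normal(size=32)
print(stft_mag_sq(x, np.hanning(8), L=4).shape)  # (32, 10)
```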
Problem definition
Let $Z[m, r] = |Y[m, r]|^2$ be the measurements corresponding to the magnitude-square of the STFT of $x$, and let $W_r$ be the diagonal matrix with diagonal elements $(w_r[0], \ldots, w_r[N-1])$. STFT phase retrieval can be stated as:

Find $x$ such that $Z[m, r] = |f_m^{*} W_r x|^2$ for $0 \leq m \leq N-1$ and $0 \leq r \leq R-1$, where $f_m$ is the $m$-th column of the $N$-point inverse DFT matrix.
Intuitively, the computational complexity growing with makes the method impractical. In fact, however, in most practical cases we only need to consider the measurements corresponding to , for any parameter satisfying .
To be more specific, if both the signal and the window are non-vanishing, that is, $x[n] \neq 0$ for all $n$ and $w[n] \neq 0$ for all $0 \leq n \leq W-1$, the signal $x$ can be uniquely identified from its STFT magnitude if the following requirements are satisfied:
,
.
The proof can be found in Jaganathan's work, which reformulates STFT phase retrieval as the following least-squares problem:

$\min_{x} \sum_{m=0}^{N-1} \sum_{r=0}^{R-1} \left( Z[m, r] - |f_m^{*} W_r x|^2 \right)^2.$
The algorithm, although without theoretical recovery guarantees, empirically converges to the global minimum when there is substantial overlap between adjacent short-time sections.
Semidefinite relaxation-based algorithm
To establish recovery guarantees, one way is to formulate the problem as a semidefinite program (SDP), by embedding it in a higher-dimensional space using the transformation $X = x x^{*}$ and relaxing the rank-one constraint to obtain a convex program. The problem reformulated is stated below:

Obtain $\hat{X}$ by solving

$\min_{X \succeq 0} \; \mathrm{trace}(X) \quad \text{subject to} \quad Z[m, r] = f_m^{*} W_r X W_r^{*} f_m$

for $0 \leq m \leq N-1$ and $0 \leq r \leq R-1$.

Once $\hat{X}$ is found, we can recover the signal $x$ by its best rank-one approximation.
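A sketch of this lift-and-relax idea for generic magnitude measurements $b_i = |\langle a_i, x \rangle|^2$, of which the STFT measurements above are a special case, using the CVXPY modeling library. The trace objective and eigenvector rounding follow the standard PhaseLift recipe; an SDP-capable solver such as SCS is assumed to be installed, and noiseless measurements are assumed so that the equality constraints are feasible.

```python
import numpy as np
import cvxpy as cp

def sdp_phase_retrieval(A, b):
    """Relax b_i = |a_i^* x|^2 over X = x x^*, then round to rank one."""
    n = A.shape[1]
    X = cp.Variable((n, n), hermitian=True)
    constraints = [X >> 0]                        # positive semidefinite
    for a_i, b_i in zip(A, b):
        a_i = a_i.reshape(-1, 1)
        constraints.append(cp.real(cp.trace((a_i @ a_i.conj().T) @ X)) == b_i)
    cp.Problem(cp.Minimize(cp.real(cp.trace(X))), constraints).solve(solver=cp.SCS)
    vals, vecs = np.linalg.eigh(X.value)          # best rank-one approximation
    return np.sqrt(max(vals[-1], 0)) * vecs[:, -1]
```

The recovered vector is unique only up to a global phase, which is the inherent ambiguity of the problem.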
Applications
Phase retrieval is a key component of coherent diffraction imaging (CDI). In CDI, the intensity of the diffraction pattern scattered from a target is measured. The phase of the diffraction pattern is then obtained using phase retrieval algorithms and an image of the target is constructed. In this way, phase retrieval allows for the conversion of a diffraction pattern into an image without an optical lens.
Using phase retrieval algorithms, it is possible to characterize complex optical systems and their aberrations. For example, phase retrieval was used to diagnose and repair the flawed optics of the Hubble Space Telescope.
Other applications of phase retrieval include X-ray crystallography and transmission electron microscopy.
See also
Phase problem
Crystallography
X-ray crystallography
Coherent diffraction imaging
Transport-of-Intensity Equation
Phase correlation
References
Crystallography
Mathematical physics
Mathematical chemistry
Inverse problems | Phase retrieval | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,483 | [
"Drug discovery",
"Applied mathematics",
"Theoretical physics",
"Materials science",
"Molecular modelling",
"Mathematical chemistry",
"Crystallography",
"Theoretical chemistry",
"Condensed matter physics",
"Inverse problems",
"Mathematical physics"
] |
11,082,896 | https://en.wikipedia.org/wiki/Medical%20statistics | Medical statistics (also health statistics) deals with applications of statistics to medicine and the health sciences, including epidemiology, public health, forensic medicine, and clinical research. Medical statistics has been a recognized branch of statistics in the United Kingdom for more than 40 years, but the term has not come into general use in North America, where the wider term 'biostatistics' is more commonly used. However, "biostatistics" more commonly connotes all applications of statistics to biology. Medical statistics is a subdiscipline of statistics. It is the science of summarizing, collecting, presenting and interpreting data in medical practice, and using them to estimate the magnitude of associations and test hypotheses. It has a central role in medical investigations. It not only provides a way of organizing information on a wider and more formal basis than relying on the exchange of anecdotes and personal experience, but also takes into account the intrinsic variation inherent in most biological processes.
Use in medical hypothesis testing
In medical hypothesis testing, the medical research is often evaluated by means of the confidence interval, the P value, or both.
Confidence interval
Frequently reported in medical research studies is the confidence interval (CI), which indicates the consistency and variability of the medical results of repeated medical trials. In other words, the confidence interval gives the range of values within which the true value of the estimate would be expected to fall if the study were repeated many times.
Most biomedical research is not able to use a total population for a study. Instead, samples of the total population are used. From the sample, inferences can be made about the total population by means of a sample statistic and an estimate of its error, presented as a range of values.
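For instance, a 95% confidence interval for a sample mean can be computed from the t distribution; the figures below are simulated, not from a real study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=120, scale=15, size=50)   # e.g. 50 systolic blood pressures

mean = sample.mean()
sem = stats.sem(sample)                            # standard error of the mean
low, high = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)
print(f"mean = {mean:.1f} mmHg, 95% CI = ({low:.1f}, {high:.1f})")
```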
P value
Frequently used in medical studies is the statistical significance of P < 0.05.
The P value is the probability, under the assumption of no effect or no difference (the null hypothesis), of obtaining a result equal to or more extreme than what was actually observed. The P stands for probability and measures how likely it is that any observed difference between groups is due to chance. The P value ranges between 0 and 1. The closer it is to 0, the less likely the results are due to chance; the closer it is to 1, the more consistent the results are with chance alone.
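As a toy illustration, a two-sample t-test comparing simulated treatment and control groups yields a P value directly; all numbers here are made up.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treatment = rng.normal(loc=118, scale=15, size=60)   # blood pressure on the new drug
control = rng.normal(loc=124, scale=15, size=60)     # blood pressure on placebo

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")        # P < 0.05 is the usual threshold
```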
Pharmaceutical statistics
Pharmaceutical statistics is the application of statistics to matters concerning the pharmaceutical industry. This can be from issues of design of experiments, to analysis of drug trials, to issues of commercialization of a medicine.
There are many professional bodies concerned with this field including:
European Federation of Statisticians in the Pharmaceutical Industry
Statisticians In The Pharmaceutical Industry
Clinical biostatistics
Clinical biostatistics is concerned with research into the principles and methodology used in the design and analysis of clinical research and to apply statistical theory to clinical medicine.
Clinical biostatistics is taught in postgraduate biostatistical and applied statistical degrees, for example as part of the BCA Master of Biostatistics program in Australia.
Basic concepts
For describing situations
Incidence (epidemiology) vs. Prevalence vs. Cumulative incidence
Many medical tests (such as pregnancy tests) have two possible results: positive or negative. However, tests will sometimes yield incorrect results in the form of false positives or false negatives. False positives and false negatives can be described by the statistical concepts of type I and type II errors, respectively, where the null hypothesis is that the patient will test negative. The precision of a medical test is usually calculated in the form of positive predictive values (PPVs) and negative predictive values (NPVs). PPVs and NPVs of medical tests depend on intrinsic properties of the test as well as the prevalence of the condition being tested for (a worked example follows this list). For example, if any pregnancy test was administered to a population of individuals who were biologically incapable of becoming pregnant, then the test's PPV will be 0% and its NPV will be 100% simply because true positives and false negatives cannot exist in this population.
Mortality rate vs. standardized mortality ratio vs. age-standardized mortality rate
Pandemic vs. epidemic vs. endemic vs. syndemic
Serial interval vs. incubation period
Cancer cluster
Sexual network
Years of potential life lost
Maternal mortality rate
Perinatal mortality rate
Low birth weight ratio
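As noted above, a small sketch of how PPV and NPV follow from a test's sensitivity, specificity and the prevalence; the 99%/99%/0.1% figures are illustrative only.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """PPV and NPV of a test from its intrinsic properties and the prevalence."""
    tp = sensitivity * prevalence                 # true positives per person tested
    fp = (1 - specificity) * (1 - prevalence)     # false positives
    tn = specificity * (1 - prevalence)           # true negatives
    fn = (1 - sensitivity) * prevalence           # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# A good test (99% sensitive, 99% specific) at 0.1% prevalence still has a poor PPV:
ppv, npv = predictive_values(0.99, 0.99, 0.001)
print(f"PPV = {ppv:.3f}, NPV = {npv:.6f}")        # PPV is only about 0.09
```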
For assessing the effectiveness of an intervention
Absolute risk reduction (see the worked example after this list)
Control event rate
Experimental event rate
Number needed to harm
Number needed to treat
Odds ratio
Relative risk reduction
Relative risk
Relative survival
Minimal clinically important difference
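Several of the measures listed above can be computed directly from a two-group trial's event counts; a sketch with hypothetical counts:

```python
def intervention_measures(events_exp, n_exp, events_ctrl, n_ctrl):
    """ARR, NNT, RR, RRR and OR from a two-group trial's adverse-event counts."""
    eer = events_exp / n_exp                      # experimental event rate
    cer = events_ctrl / n_ctrl                    # control event rate
    arr = cer - eer                               # absolute risk reduction
    return {
        "ARR": arr,
        "NNT": 1 / arr,                           # number needed to treat
        "RR": eer / cer,                          # relative risk
        "RRR": arr / cer,                         # relative risk reduction
        "OR": (events_exp * (n_ctrl - events_ctrl))
              / (events_ctrl * (n_exp - events_exp)),
    }

# Hypothetical trial: 10/100 events with treatment vs 20/100 with control.
print(intervention_measures(10, 100, 20, 100))    # ARR = 0.10, NNT = 10, RR = 0.5, ...
```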
Related statistical theory
Survival analysis
Proportional hazards models
Active control trials: clinical trials in which a new treatment is compared with another active agent rather than a placebo.
ADLS (activities of daily living scale): a scale designed to measure physical ability/disability that is used in investigations of a variety of chronic disabling conditions, such as arthritis. This scale is based on scoring responses to questions about self-care, grooming, etc.
Actuarial statistics: the statistics used by actuaries to calculate liabilities, evaluate risks and plan the financial course of insurance, pensions, etc.
See also
Herd immunity
False positives and false negatives
Rare disease
Hilda Mary Woods – the first author (with William Russell) of the first British textbook of medical statistics, published in 1931
References
Further reading
External links
Health-EU Portal EU health statistics
Biostatistics
Medical specialties
Applied statistics
Pharmaceutical statistics
Clinical research | Medical statistics | [
"Mathematics"
] | 1,097 | [
"Applied mathematics",
"Applied statistics"
] |
11,083,346 | https://en.wikipedia.org/wiki/Penitrem%20A | Penitrem A (tremortin) is an indole-diterpenoid mycotoxin produced by certain species of Aspergillus, Claviceps, and Penicillium, which can be found growing on various plant species such as ryegrass. Penitrem A is one of many secondary metabolites following the synthesis of paxilline in Penicillium crustosum. Penitrem A poisoning in humans and animals usually occurs through the consumption of foods contaminated by mycotoxin-producing species; the toxin is then distributed through the body by the bloodstream. It bypasses the blood-brain barrier to exert its toxicological effects on the central nervous system. In humans, penitrem A poisoning has been associated with severe tremors, hyperthermia, nausea/vomiting, diplopia, and bloody diarrhea. In animals, penitrem A poisoning has been associated with symptoms ranging from tremors, seizures, and hyperthermia to ataxia and nystagmus.
Roquefortine C has been commonly detected in documented cases of penitrem A poisoning, making it a possible biomarker for diagnoses.
Mechanism of action
Penitrem A impairs GABAergic amino acid neurotransmission and antagonizes high-conductance Ca2+-activated potassium channels in both humans and animals. Impairment of the GABAergic amino acid neurotransmission comes with the spontaneous release of the excitatory amino acids glutamate and aspartate as well as the inhibitory neurotransmitter γ-aminobutyric acid (GABA). The sudden release of these neurotransmitters results in imbalanced GABAergic signalling, which gives rise to neurological disorders such as the tremors associated with penitrem A poisoning.
Penitrem A also induces the production of reactive oxygen species (ROS) in the neutrophil granulocytes of humans and animals. Increased ROS production results in tissue damage in the brain and other afflicted organs as well as hemorrhages in acute poisonings.
Synthesis
In Penicillium crustosum, synthesis of penitrem A and other secondary metabolites follows the synthesis of paxilline. Synthesis of penitrem A involves six oxidative-transformation enzymes (four cytochrome P450 monooxygenases and two flavin adenine dinucleotide (FAD)-dependent monooxygenases), two acetyltransferases, one oxidoreductase, and one prenyltransferase. These enzymes are encoded by a cluster of genes used in paxilline synthesis and penitrem A-F synthesis. The pathway is described below:
Oxidoreductase catalyzes the reduction of paxilline's ketone and also adds a dimethylallyl group to its aromatic ring.
Acetyltransferases catalyze the removal of the intermediate's lower right-hand hydroxyl group and the conversion of one of the nearby methyl groups to a methylene group.
Oxidative-transformation enzyme catalyzes the addition of a hydroxyl group to the intermediate's dimethylallyl group. The dimethylallyl's double bond migrates down one carbon.
Prenyltransferase catalyzes the formation of a dimethyl-cyclopentane and a cyclobutane using the intermediate's aromatic ring-alcohol group.
Oxidative-transformation enzyme catalyzes the formation of a methylenecyclohexane using the intermediate's dimethyl-cyclopentane, forming secopenitrem D.
Oxidative-transformation enzyme catalyzes the formation of a cyclooctane using cyclobutane's alcohol group and the carbon joining secopenitrem D's cyclohexane and cyclopentane, forming penitrem D.
Oxidative-transformation enzyme catalyzes the addition of a chlorine atom to penitrem D's aromatic ring, forming penitrem C.
Oxidative-transformation enzyme catalyzes the formation of an epoxide ring at penitrem C's oxane-double bond, forming penitrem F.
Oxidative-transformation enzyme catalyzes the addition of a hydroxyl group at the carbon joining penitrem F's methylenecyclohexane and cyclobutane, forming penitrem A.
See also
Paxilline
Roquefortine C
References
Indole alkaloids
Neurotoxins
Penicillium
Cell communication
Chloroarenes
Alcohols
Halogen-containing natural products
Cyclobutanes
Mycotoxins | Penitrem A | [
"Chemistry",
"Biology"
] | 1,011 | [
"Cell communication",
"Indole alkaloids",
"Alkaloids by chemical classification",
"Cellular processes",
"Neurochemistry",
"Neurotoxins"
] |
11,083,602 | https://en.wikipedia.org/wiki/Omapatrilat | Omapatrilat (INN, proposed trade name Vanlev) is an experimental antihypertensive agent that was never marketed. It inhibits both neprilysin (neutral endopeptidase, NEP) and angiotensin-converting enzyme (ACE). NEP inhibition results in elevated natriuretic peptide levels, promoting natriuresis, diuresis, vasodilation, and reductions in preload and ventricular remodeling.
It was discovered and developed by Bristol-Myers Squibb but failed in clinical trials as a potential treatment for congestive heart failure due to safety concerns about its causing angioedema.
The angioedema caused by omapatrilat was attributed to its dual mechanism of action: it inhibits both angiotensin-converting enzyme (ACE) and neprilysin (neutral endopeptidase), the two enzymes responsible for the metabolism of bradykinin, which causes vasodilation, angioedema, and airway obstruction.
See also
Gemopatrilat
Cilazapril
Sacubitril
References
Further reading
ACE inhibitors
Heterocyclic compounds with 2 rings
Carboxylic acids
Lactams
Propionamides
Thiols
Nitrogen heterocycles
Sulfur heterocycles
Abandoned drugs | Omapatrilat | [
"Chemistry"
] | 272 | [
"Carboxylic acids",
"Thiols",
"Drug safety",
"Functional groups",
"Organic compounds",
"Abandoned drugs"
] |
11,083,676 | https://en.wikipedia.org/wiki/Myriad%20year%20clock | The myriad year clock (Man-nen Jimeisho) was a universal clock designed by the Japanese inventor Hisashige Tanaka in 1851. It belongs to the category of Japanese clocks called Wadokei. This clock is designated as an Important Cultural Property and a Mechanical Engineering Heritage by the Japanese government.
The clock is driven by a spring. Once fully wound, it can run for one year without another winding. It can show the time in seven ways (such as usual time, the day of the week, month, moon phase, Japanese time, and solar term). Since Japan at that time used the temporal hour system, day and night were each divided into six equal parts, each part counting as one hour, so a full day comprised twelve hours whose lengths changed with the seasons. The time dial was therefore automatically movable, and it was linked with the other six faces, making the mechanism extremely complicated. It also rings chimes every hour. It consists of more than 1,000 parts to realize these complex functions, and it is said that Tanaka made all the parts by himself with simple tools such as files and saws. It took more than three years for him to finish the assembly.
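To see why the dial must move with the seasons, consider how temporal hours stretch and shrink with day length; the sunrise and sunset times below are hypothetical.

```python
def temporal_hours(sunrise, sunset):
    """Length of one daytime and one nighttime temporal hour, in modern hours."""
    daylight = sunset - sunrise            # daytime duration, split into six hours
    return daylight / 6, (24 - daylight) / 6

for season, rise, set_ in [("summer", 4.5, 19.5), ("winter", 7.0, 17.0)]:
    day_h, night_h = temporal_hours(rise, set_)
    print(f"{season}: day-hour = {day_h:.2f} h, night-hour = {night_h:.2f} h")
```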
In 2004 the Japanese government funded a project aimed at making a copy of this clock. More than 100 engineers joined the project, and the work took more than six months even with the latest industrial technologies. Even then it was not possible to make exact copies of some parts, such as the brass plate used as its spring, before the presentation at Expo 2005. The original clock is displayed at the National Museum of Nature and Science, while a copy is at Toshiba Corporation.
The clock was listed in the Japanese Mechanical Engineering Heritage as item No. 22 in 2007.
Notes
External links
National Project to Restore Man-nen Jimeisho
Individual clocks
Japanese inventions
Astronomical clocks | Myriad year clock | [
"Astronomy"
] | 380 | [
"Time in astronomy",
"Astronomical clocks",
"Astronomical instruments"
] |
11,083,753 | https://en.wikipedia.org/wiki/Turbo-compound%20engine | A turbo-compound engine is a reciprocating engine that employs a turbine to recover energy from the exhaust gases. Instead of using that energy to drive a turbocharger as found in many high-power aircraft engines, the energy is instead sent to the output shaft to increase the total power delivered by the engine. The turbine is usually mechanically connected to the crankshaft, as on the Wright R-3350 Duplex-Cyclone, but electric and hydraulic power recovery systems have been investigated as well.
As this recovery process does not increase fuel consumption, it has the effect of reducing the specific fuel consumption, the ratio of fuel use to power. Turbo-compounding was used for commercial airliners and similar long-range, long-endurance roles before the introduction of turbojet engines. Examples using the Duplex-Cyclone include the Douglas DC-7B and Lockheed L-1049 Super Constellation, while other designs did not see production use.
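To illustrate the effect on specific fuel consumption: if the recovery turbine returns extra shaft power at unchanged fuel flow, SFC falls in proportion. All figures in the sketch below are illustrative, not data for any particular engine.

```python
fuel_flow = 900.0                  # kg/h, assumed fuel burn
piston_power = 2500.0              # kW from the piston engine alone
recovered = 0.15 * piston_power    # e.g. 15% extra power from the blowdown turbine

sfc_plain = fuel_flow / piston_power
sfc_compound = fuel_flow / (piston_power + recovered)
print(f"SFC: {sfc_plain:.3f} -> {sfc_compound:.3f} kg/kWh "
      f"({1 - sfc_compound / sfc_plain:.1%} lower)")
```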
Concept
Most piston engines produce a hot exhaust that still contains considerable undeveloped energy that could be used for propulsion if extracted. A turbine is often used to extract energy from such a stream of gases. A conventional gas turbine is fed high-pressure, high-velocity gas, extracts energy from it, and discharges a lower-pressure, slower-moving stream. This action has the side-effect of increasing the upstream pressure, which makes it undesirable for use with a piston engine as it increases the back-pressure in the engine, which decreases scavenging of the exhaust gas from the cylinders and thereby lowers the efficiency of the piston portion of a compound engine.
Through the late 1930s and early 1940s one solution to this problem was the introduction of "jet stack" exhaust manifolds. These were simply short sections of metal pipe attached to the exhaust ports, shaped so that they would interact with the airstream to produce a jet of air that produced forward thrust. Another World War II introduction was the use of the Meredith effect to recover heat from the radiator system to provide additional thrust.
By the late-war era, turbine development had improved dramatically and led to a new turbine design known as the "blowdown turbine" or "power-recovery turbine". This design extracts energy from the momentum of the moving exhaust, but does not appreciably increase back-pressure. This means it does not have the undesirable effects of conventional designs when connected to the exhaust of a piston engine, and a number of manufacturers began studying the design.
History
The first aircraft engine to be tested with a power-recovery turbine was the Rolls-Royce Crecy. This was used primarily to drive a geared centrifugal supercharger, although it was also coupled to the crankshaft and gave an extra 15 to 35 percent fuel economy.
Blowdown turbines became relatively common features in the late- and post-war era, especially for engines designed for long overwater flights. Turbo-compounding was used on several airplane engines after World War II, including the Napier Nomad and the Wright R-3350. The exhaust restriction imparted by the three blowdown turbines used on the Wright R-3350 is equal to that of a well-designed jet stack system used on a conventional radial engine, while recovering additional power at METO (maximum continuous except for take-off) power. In the case of the R-3350, maintenance crews sometimes nicknamed the turbine the parts recovery turbine due to its negative effect on engine reliability. Turbo-compound versions of the Napier Deltic, Rolls-Royce Crecy, Rolls-Royce Griffon, and Allison V-1710 were constructed, but none was developed beyond the prototype stage. It was realized that in many cases the power produced by the simple turbine was approaching that of the enormously complex and maintenance-intensive piston engine to which it was attached. As a result, turbo-compound aero engines were soon supplanted by turboprop and turbojet engines.
Some modern heavy truck diesel manufacturers have incorporated turbo-compounding into their designs. Examples include the Detroit Diesel DD15 and Scania engines, the latter in production since 1991.
Starting with the 2014 season, Formula One switched to a new 1.6 liter turbocharged V6 formula that uses turbo-compounding. The engines use a single turbocharger that is connected to an electric motor/generator called the MGU-H. The MGU-H uses a turbine to drive a generator, converting waste heat from the exhaust into electrical energy that is either stored in a battery or sent directly to an electric motor in the car's powertrain.
List of types
Detroit Diesel
DD15
Napier
Napier Nomad
Wright Aeronautical
Wright R-3350: The turbo-compound version was the only turbo-compound aero-engine to see mass production and widespread usage.
Dobrynin
Dobrynin VD-4K
Zvezda
Zvezda M503: Soviet-built 42 cylinder diesel naval engine used in the Osa-class missile boat
Renault
Renault Energy F1: 1.6 liter turbocharged V6 engine built for Formula 1. Unlike its contemporaries, it still uses a wastegate as an emergency measure to control boost pressure in case the turbo-compounding with the MGU-H fails.
Ferrari
Ferrari 059: 1.6 liter turbocharged V6 engine built for Formula 1 for the Ferrari F14 T as well as the Sauber C33.
Mercedes-Benz
Mercedes PU106: 1.6 liter turbocharged V6 engine built for Mercedes-Benz Formula 1 programme.
Honda
Honda RA615H: 1.6 liter turbocharged V6 engine built for Formula 1 for the McLaren MP4-30.
See also
Motorjet
Turbosteamer
Cogeneration
Turbocharger
Gas turbine
Electric turbo-compound
References
Turbo-compound engines
Turbochargers
Engine technology
Internal combustion engine
Piston engines | Turbo-compound engine | [
"Technology",
"Engineering"
] | 1,174 | [
"Internal combustion engine",
"Engines",
"Piston engines",
"Combustion engineering",
"Engine technology"
] |
8,908,740 | https://en.wikipedia.org/wiki/Occurrences%20of%20Grandi%27s%20series | This article lists occurrences of the paradoxical infinite "sum" +1 -1 +1 -1 ... , sometimes called Grandi's series.
Parables
Guido Grandi illustrated the series with a parable involving two brothers who share a gem.
Thomson's lamp is a supertask in which a hypothetical lamp is turned on and off infinitely many times in a finite time span. One can think of turning the lamp on as adding 1 to its state, and turning it off as subtracting 1. Instead of asking the sum of the series, one asks the final state of the lamp.
One of the best-known classic parables to which infinite series have been applied, Achilles and the tortoise, can also be adapted to the case of Grandi's series.
Numerical series
The Cauchy product of Grandi's series with itself is 1 − 2 + 3 − 4 + · · ·.
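As a quick numerical illustration, the convolution can be checked directly. The minimal sketch below is not from the article, and the helper name cauchy_product is invented for illustration.

```python
def cauchy_product(a, b, n_terms):
    """Coefficients c_n = sum_{k=0..n} a_k * b_{n-k} of the Cauchy product."""
    return [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(n_terms)]

grandi = [(-1) ** n for n in range(10)]            # 1, -1, 1, -1, ...
print(cauchy_product(grandi, grandi, 10))
# [1, -2, 3, -4, 5, -6, 7, -8, 9, -10], i.e. 1 - 2 + 3 - 4 + ...
```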
Several series resulting from the introduction of zeros into Grandi's series have interesting properties; for these see Summation of Grandi's series#Dilution.
Grandi's series is just one example of a divergent geometric series.
The rearranged series 1 − 1 − 1 + 1 + 1 − 1 − 1 + · · · occurs in Euler's 1775 treatment of the pentagonal number theorem as the value of the Euler function at q = 1.
Power series
The power series most famously associated with Grandi's series is its ordinary generating function, f(x) = 1 − x + x² − x³ + x⁴ − · · · = 1/(1 + x).
Fourier series
Hyperbolic sine
In his 1822 Théorie Analytique de la Chaleur, Joseph Fourier obtains what is currently called a Fourier sine series for a scaled version of the hyperbolic sine function.
He finds that the general coefficient of sin nx in the series is (−1)^(n+1) (1/n − 1/n³ + 1/n⁵ − · · ·).
For n > 1 the above series converges, while the coefficient of sin x appears as 1 − 1 + 1 − 1 + · · · and so is expected to be 1⁄2. In fact, this is correct, as can be demonstrated by directly calculating the Fourier coefficient from the defining integral.
Dirac comb
Grandi's series occurs more directly in another important series, cos x + cos 2x + cos 3x + · · ·.
At x = π, the series reduces to −1 + 1 − 1 + 1 − · · · and so one might expect it to meaningfully equal −1⁄2. In fact, Euler held that this series obeyed the formal relation Σ cos kx = −1⁄2, while d'Alembert rejected the relation, and Lagrange wondered if it could be defended by an extension of the geometric series similar to Euler's reasoning with Grandi's numerical series.
Euler's claim suggests that 1⁄2 + cos x + cos 2x + cos 3x + · · · = 0
for all x. This series is divergent everywhere, while its Cesàro sum is indeed 0 for almost all x. However, the series diverges to infinity at x = 2πn in a significant way: it is the Fourier series of a Dirac comb. The ordinary, Cesàro, and Abel sums of this series involve limits of the Dirichlet, Fejér, and Poisson kernels, respectively.
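The Cesàro behavior is easy to probe numerically. In the minimal sketch below (not part of the article; the function name is invented), the (C, 1) means of the partial sums of 1⁄2 + Σ cos kx tend to 0 away from the singular points and grow without bound at x = 0.

```python
import math

def cesaro_means_cos(x, n_max):
    """(C, 1) means of the partial sums of 1/2 + cos x + cos 2x + ..."""
    partial, running, means = 0.5, 0.0, []
    for n in range(1, n_max + 1):
        partial += math.cos(n * x)
        running += partial
        means.append(running / n)
    return means

print(cesaro_means_cos(1.0, 2000)[-1])   # ~0 for x not a multiple of 2*pi
print(cesaro_means_cos(0.0, 2000)[-1])   # ~n/2: diverges at the Dirac comb spikes
```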
Dirichlet series
Multiplying the terms of Grandi's series by 1/n^z yields the Dirichlet series η(z) = 1 − 1/2^z + 1/3^z − 1/4^z + · · ·,
which converges only for complex numbers z with a positive real part. Grandi's series is recovered by letting z = 0.
Unlike the geometric series, the Dirichlet series for η is not useful for determining what 1 − 1 + 1 − 1 + · · · "should" be. Even on the right half-plane, η(z) is not given by any elementary expression, and there is no immediate evidence of its limit as z approaches 0. On the other hand, if one uses stronger methods of summability, then the Dirichlet series for η defines a function on the whole complex plane — the Dirichlet eta function — and moreover, this function is analytic. For z with real part > −1 it suffices to use Cesàro summation, and so η(0) = 1⁄2 after all.
The function η is related to a more famous Dirichlet series and function: η(z) = (1 − 2^(1−z)) ζ(z),
where ζ is the Riemann zeta function. Keeping Grandi's series in mind, this relation explains why ζ(0) = −1⁄2; see also 1 + 1 + 1 + 1 + · · ·. The relation also implies a much more important result. Since η(z) and (1 − 2^(1−z)) are both analytic on the entire plane and the latter function's only zero is a simple zero at z = 1, it follows that ζ(z) is meromorphic with only a simple pole at z = 1.
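Both values can be checked numerically with the mpmath library, whose altzeta function implements the Dirichlet eta function; the snippet below is an illustrative check, not part of the article.

```python
from mpmath import mp, altzeta, zeta, mpf

mp.dps = 25
print(altzeta(0))    # 0.5: eta(0), the value assigned to Grandi's series
print(zeta(0))       # -0.5
for z in (mpf('0.5'), mpf(2), mpf(-3)):
    print(altzeta(z) - (1 - 2**(1 - z)) * zeta(z))   # ~0: the functional relation
```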
Euler characteristics
Given a CW complex S containing one vertex, one edge, one face, and generally exactly one cell of every dimension, Euler's formula for the Euler characteristic of S returns 1 − 1 + 1 − 1 + · · ·, which is Grandi's series. There are a few motivations for defining a generalized Euler characteristic for such a space that turns out to be 1/2.
One approach comes from combinatorial geometry. The open interval (0, 1) has an Euler characteristic of −1, so its power set 2^(0, 1) should have an Euler characteristic of 2^(−1) = 1/2. The appropriate power set to take is the "small power set" of finite subsets of the interval, which consists of the union of a point (the empty set), an open interval (the set of singletons), an open triangle, and so on. So the Euler characteristic of the small power set is 1 − 1 + 1 − 1 + · · ·. James Propp defines a regularized Euler measure for polyhedral sets that, in this example, replaces 1 − 1 + 1 − 1 + · · · with 1 − t + t² − t³ + · · ·, sums the series for |t| < 1, and analytically continues to t = 1, essentially finding the Abel sum of 1 − 1 + 1 − 1 + · · ·, which is 1⁄2. Generally, he finds χ(2^A) = 2^χ(A) for any polyhedral set A, and the base of the exponent generalizes to other sets as well.
Infinite-dimensional real projective space RP∞ is another structure with one cell of every dimension and therefore an Euler characteristic of 1 − 1 + 1 − 1 + · · ·. This space can be described as the quotient of the infinite-dimensional sphere by identifying each pair of antipodal points. Since the infinite-dimensional sphere is contractible, its Euler characteristic is 1, and its 2-to-1 quotient should have an Euler characteristic of 1/2.
This description of RP∞ also makes it the classifying space of Z2, the cyclic group of order 2. Tom Leinster gives a definition of the Euler characteristic of any category which bypasses the classifying space and reduces to 1/|G| for any group when viewed as a one-object category. In this sense the Euler characteristic of Z2 is itself 1⁄2.
In physics
Grandi's series, and generalizations thereof, occur frequently in many branches of physics; most typically in the discussions of quantized fermion fields (for example, the chiral bag model), which have both positive and negative eigenvalues; although similar series occur also for bosons, such as in the Casimir effect.
The general series is discussed in greater detail in the article on spectral asymmetry, whereas methods used to sum it are discussed in the articles on regularization and, in particular, the zeta function regulator.
In art
The Grandi series has been applied to ballet by Benjamin Jarvis in The Invariant journal (PDF: https://invariants.org.uk/assets/TheInvariant_HT2016.pdf). The noise artist Jliat has a 2000 musical single Still Life #7: The Grandi Series advertised as "conceptual art"; it consists of nearly an hour of silence.
Notes
References
Grandi's series, Occurrences of
Grandi's series | Occurrences of Grandi's series | [
"Mathematics"
] | 1,624 | [
"Grandi's series",
"Mathematical problems",
"Mathematical paradoxes"
] |
8,908,895 | https://en.wikipedia.org/wiki/Thioacetic%20acid | Thioacetic acid is an organosulfur compound with the molecular formula . It is a thioic acid: the sulfur analogue of acetic acid (), as implied by the thio- prefix. It is a yellow liquid with a strong thiol-like odor. It is used in organic synthesis for the introduction of thiol groups () in molecules.
Synthesis and properties
Thioacetic acid is prepared by the reaction of acetic anhydride with hydrogen sulfide: (CH3CO)2O + H2S → CH3COSH + CH3CO2H.
It has also been produced by the action of phosphorus pentasulfide on glacial acetic acid, followed by distillation.
Thioacetic acid is typically contaminated by acetic acid.
The compound exists exclusively as the thiol tautomer, consistent with the strength of the C=O double bond. Reflecting the influence of hydrogen bonding, the boiling point (93 °C) and melting point are 20 and 75 K lower, respectively, than those of acetic acid.
Reactivity
Acidity
With a pKa near 3.4, thioacetic acid is about 15 times more acidic than acetic acid. The conjugate base is thioacetate: CH3COSH ⇌ CH3COS⁻ + H⁺.
In neutral water, thioacetic acid is fully ionized.
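The Henderson–Hasselbalch equation makes the claim quantitative. The short sketch below is illustrative only; the pKa values used (3.4 for thioacetic acid, as stated above, and roughly 4.76 for acetic acid) are approximate.

```python
def fraction_ionized(pka, ph):
    """Fraction of a monoprotic acid present as its conjugate base
    (Henderson-Hasselbalch)."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

print(fraction_ionized(3.4, 7.0))    # ~0.9997: thioacetic acid, essentially fully ionized
print(fraction_ionized(4.76, 7.0))   # ~0.994: acetic acid for comparison
```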
Reactivity of thioacetate
Most of the reactivity of thioacetic acid arises from the conjugate base, thioacetate. Salts of this anion, e.g. potassium thioacetate, are used to generate thioacetate esters. Thioacetate esters undergo hydrolysis to give thiols. A typical method for preparing a thiol from an alkyl halide using thioacetic acid proceeds in four discrete steps, some of which can be conducted sequentially in the same flask: deprotonation of thioacetic acid, alkylation of the resulting thioacetate by the alkyl halide RX (where X = Cl, Br, I) to give the thioester CH3COSR, hydrolysis of the thioester, and finally protonation to release the thiol RSH.
In an application that illustrates the use of its radical behavior, thioacetic acid is used with AIBN in a free-radical-mediated addition to an exocyclic alkene, forming a thioester.
Reductive acetylation
Potassium thioacetate can be used to convert nitroarenes to aryl acetamides in one step. This is particularly useful in the preparation of pharmaceuticals, e.g., paracetamol from 4-nitrophenol or 4-nitroanisole.
References
Thiocarboxylic acids
Reagents for organic chemistry
Foul-smelling chemicals | Thioacetic acid | [
"Chemistry"
] | 503 | [
"Reagents for organic chemistry"
] |
8,908,943 | https://en.wikipedia.org/wiki/Nordic%20Institute%20for%20Theoretical%20Physics | The Nordic Institute for Theoretical Physics, or NORDITA, or Nordita (), is an international organisation for research in theoretical physics. It was established as Nordisk Institut for Teoretisk Atomfysik in 1957 by Niels Bohr and the Swedish physicist Torsten Gustafson. Nordita was originally located at the Niels Bohr Institute in Copenhagen (Denmark), but moved to the AlbaNova University Centre in Stockholm (Sweden) on 1 January 2007. The main research areas at Nordita are astrophysics, hard and soft condensed matter physics, and high-energy physics.
Research
Since Nordita's establishment in 1957 the original focus on research in atomic and nuclear physics has been broadened.
Research carried out by Nordita's academic staff presently includes astrophysics, biological physics, hard condensed matter physics and materials physics, soft condensed matter physics, cosmology, statistical physics and complex systems, high-energy physics, and gravitational physics and cosmology. The in-house research forms the backbone of Nordita activities and complements the more service oriented functions. By mission, Nordita has the task of facilitating interactions between physicists in the Nordic countries as well as with the international community; therefore the comparably small institute has a large number of visitors, conferences and scientific programs that last several weeks.
Notable former or present researchers at Nordita include Alexander V. Balatsky, Holger Bech Nielsen, Axel Brandenburg, Gerald E. Brown, Paolo Di Vecchia, James Hamilton, John Hertz, Sabine Hossenfelder, Alan Luther, Ben Roy Mottelson, Christopher J. Pethick, Leon Rosenfeld, Kim Sneppen, John Wettlaufer, and Konstantin Zarembo.
Organization
Nordita is governed by a board consisting of one representative and one alternate member from each Nordic country, headed by a chair person. The board appoints a number of research committees which evaluate proposals and advise the board on scientific and educational matters.
The Nordita board nominates a director who is appointed by the president of KTH Royal Institute of Technology and the vice-chancellor of Stockholm University. The director, currently Niels Obers, is responsible for the day-to-day administration of the institute and provides scientific leadership.
Funding
Nordita is funded jointly by the Nordic countries via the Nordic Council of Ministers, the Swedish Research Council, and the host universities KTH Royal Institute of Technology, Stockholm University and Uppsala University.
References
See also
Institute for Theoretical Physics (disambiguation)
Center for Theoretical Physics (disambiguation)
External links
Nordita homepage
Nordita Information Brochure 2012
Theoretical physics
Physics research institutes
Research institutes in Sweden
International research institutes
Theoretical physics institutes | Nordic Institute for Theoretical Physics | [
"Physics"
] | 555 | [
"Theoretical physics",
"Theoretical physics institutes"
] |
8,909,028 | https://en.wikipedia.org/wiki/Summation%20of%20Grandi%27s%20series |
General considerations
Stability and linearity
The formal manipulations that lead to 1 − 1 + 1 − 1 + · · · being assigned a value of 1⁄2 include:
Adding or subtracting two series term-by-term,
Multiplying through by a scalar term-by-term,
"Shifting" the series with no change in the sum, and
Increasing the sum by adding a new term to the series' head.
These are all legal manipulations for sums of convergent series, but 1 − 1 + 1 − 1 + · · · is not a convergent series.
Nonetheless, there are many summation methods that respect these manipulations and that do assign a "sum" to Grandi's series. Two of the simplest methods are Cesàro summation and Abel summation.
Cesàro sum
The first rigorous method for summing divergent series was published by Ernesto Cesàro in 1890. The basic idea is similar to Leibniz's probabilistic approach: essentially, the Cesàro sum of a series is the average of all of its partial sums. Formally one computes, for each n, the average σn of the first n partial sums, and takes the limit of these Cesàro means as n goes to infinity.
For Grandi's series, the sequence of arithmetic means is
1, 1⁄2, 2⁄3, 2⁄4, 3⁄5, 3⁄6, 4⁄7, 4⁄8, …
or, more suggestively,
(1⁄2+1⁄2), 1⁄2, (1⁄2+1⁄6), 1⁄2, (1⁄2+1⁄10), 1⁄2, (1⁄2+1⁄14), 1⁄2, …
where σn = 1⁄2 for even n and σn = 1⁄2 + 1⁄(2n) for odd n.
This sequence of arithmetic means converges to 1⁄2, so the Cesàro sum of Σak is 1⁄2. Equivalently, one says that the Cesàro limit of the sequence 1, 0, 1, 0, … is 1⁄2.
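The convergence of the means is easy to reproduce numerically; the following minimal sketch (function name invented, not from the article) prints exactly the sequence displayed above.

```python
def cesaro_means(terms):
    """Running (C, 1) means sigma_n of the partial sums of a series."""
    partial, running, means = 0, 0.0, []
    for n, a in enumerate(terms, start=1):
        partial += a
        running += partial
        means.append(running / n)
    return means

print(cesaro_means([(-1) ** n for n in range(8)]))
# [1.0, 0.5, 0.666..., 0.5, 0.6, 0.5, 0.571..., 0.5] -> tends to 1/2
```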
The Cesàro sum of 1 + 0 − 1 + 1 + 0 − 1 + · · · is 2⁄3. So the Cesàro sum of a series can be altered by inserting infinitely many 0s as well as infinitely many brackets.
The series can also be summed by the more general fractional (C, a) methods.
Abel sum
Abel summation is similar to Euler's attempted definition of sums of divergent series, but it avoids Callet's and N. Bernoulli's objections by precisely constructing the function to use. In fact, Euler likely meant to limit his definition to power series, and in practice he used it almost exclusively in a form now known as Abel's method.
Given a series a0 + a1 + a2 + · · ·, one forms a new series a0 + a1x + a2x² + · · ·. If the latter series converges for 0 < x < 1 to a function with a limit as x tends to 1, then this limit is called the Abel sum of the original series, after Abel's theorem which guarantees that the procedure is consistent with ordinary summation. For Grandi's series one has 1 − x + x² − x³ + · · · = 1/(1 + x) for 0 < x < 1, and hence the Abel sum is the limit of 1/(1 + x) as x tends to 1, namely 1⁄2.
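Numerically, the truncated power series already shows the limit; the sketch below (illustrative, with an invented helper name) compares truncated sums against the closed form 1/(1 + x) as x approaches 1.

```python
def truncated_series(x, n_terms=100_000):
    """Truncation of 1 - x + x**2 - x**3 + ... for 0 < x < 1."""
    return sum((-1) ** n * x ** n for n in range(n_terms))

for x in (0.9, 0.99, 0.999):
    print(x, truncated_series(x), 1.0 / (1.0 + x))   # -> 1/2 as x -> 1
```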
Related series
The corresponding calculation that the Abel sum of 1 + 0 − 1 + 1 + 0 − 1 + · · · is 2⁄3 involves the function (1 + x)/(1 + x + x²).
Whenever a series is Cesàro summable, it is also Abel summable and has the same sum. On the other hand, taking the Cauchy product of Grandi's series with itself yields a series which is Abel summable but not Cesàro summable:
1 − 2 + 3 − 4 + · · ·
has Abel sum 1⁄4.
Dilution
Alternating spacing
That the ordinary Abel sum of 1 + 0 − 1 + 1 + 0 − 1 + · · · is 2⁄3 can also be phrased as the (A, λ) sum of the original series 1 − 1 + 1 − 1 + · · · where (λn) = (0, 2, 3, 5, 6, …). Likewise the (A, λ) sum of 1 − 1 + 1 − 1 + · · · where (λn) = (0, 1, 3, 4, 6, …) is 1⁄3.
Power-law spacing
Exponential spacing
The summability of 1 − 1 + 1 − 1 + · · · can be frustrated by separating its terms with exponentially longer and longer groups of zeros. The simplest example to describe is the series where (−1)^n appears at rank 2^n:
0 + 1 − 1 + 0 + 1 + 0 + 0 + 0 − 1 + 0 + 0 + 0 + 0 + 0 + 0 + 0 + 1 + 0 + · · ·.
This series is not Cesàro summable. After each nonzero term, the partial sums spend enough time lingering at either 0 or 1 to bring the average partial sum halfway to that point from its previous value. Over the interval following a (−1) term, the nth arithmetic means vary over a range from roughly 2⁄3 down to roughly 1⁄3, so they never settle on a limit.
In fact, the exponentially spaced series is not Abel summable either. Its Abel sum is the limit as x approaches 1 of the function
F(x) = 0 + x − x² + 0 + x⁴ + 0 + 0 + 0 − x⁸ + 0 + 0 + 0 + 0 + 0 + 0 + 0 + x¹⁶ + 0 + · · ·.
This function satisfies the functional equation F(x) = x − F(x²).
This functional equation implies that F(x) roughly oscillates around 1⁄2 as x approaches 1. To prove that the amplitude of oscillation is nonzero, it helps to separate F into an exactly periodic and an aperiodic part, F = Φ + Ψ, where Φ satisfies the same functional equation as F. The difference Ψ then obeys Ψ(x) = −Ψ(x²) = Ψ(x⁴), so Ψ is a periodic function of log log(1/x). Since the Φ part has a limit of 1⁄2, F oscillates as well.
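The oscillation can be observed numerically. In the illustrative sketch below, x is sampled as exp(−t) with t repeatedly doubled; since Ψ(x²) = −Ψ(x), each doubling of t advances log log(1/x) by half of Ψ's period, so successive values of F land alternately slightly above and slightly below their mean near 1⁄2.

```python
import math

def F(x, n_max=60):
    """F(x) = x - x**2 + x**4 - x**8 + x**16 - ...  for 0 < x < 1."""
    return sum((-1) ** n * math.exp(2 ** n * math.log(x)) for n in range(n_max))

t = 1e-6                       # sample x = exp(-t), then double t each step
for _ in range(8):
    x = math.exp(-t)
    print(f"t = {t:.1e}   F(x) = {F(x):.6f}")
    t *= 2
```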
Separation of scales
Given any function φ(x) such that φ(0) = 1 and whose derivative is integrable over (0, +∞), the generalized φ-sum of Grandi's series exists and is equal to 1⁄2: φ(0) − φ(h) + φ(2h) − φ(3h) + · · · → 1⁄2 as h → 0⁺.
The Cesàro or Abel sum is recovered by letting φ be a triangular or exponential function, respectively. If φ is additionally assumed to be continuously differentiable, then the claim can be proved by applying the mean value theorem and converting the sum into an integral.
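A numerical check of the φ-sum, here with the Gaussian φ(u) = exp(−u²) (which satisfies φ(0) = 1 with integrable derivative), shows the truncated sums approaching 1⁄2 as h shrinks. The sketch is illustrative and the helper name is invented.

```python
import math

def phi_sum(phi, h, n_terms=100_000):
    """Truncated generalized sum: sum_{n>=0} (-1)**n * phi(n*h)."""
    return sum((-1) ** n * phi(n * h) for n in range(n_terms))

gaussian = lambda u: math.exp(-u * u)    # phi(0) = 1, derivative integrable
for h in (1.0, 0.1, 0.01):
    print(h, phi_sum(gaussian, h))       # -> 1/2 as h -> 0
```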
Euler transform and analytic continuation
Borel sum
The Borel sum of Grandi's series is again 1⁄2, since its Borel transform is 1 − t + t²/2! − t³/3! + · · · = e^(−t)
and the Borel sum is then the integral of e^(−t) · e^(−t) = e^(−2t) over (0, ∞), which equals 1⁄2.
The series can also be summed by generalized (B, r) methods.
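The Borel integral can be confirmed with ordinary numerical quadrature; the snippet below is an illustrative check using SciPy.

```python
import numpy as np
from scipy.integrate import quad

# Borel transform of Grandi's series: sum_k (-1)**k t**k / k! = exp(-t);
# the Borel sum is then the integral of exp(-t) * exp(-t) over [0, inf).
value, err = quad(lambda t: np.exp(-2.0 * t), 0.0, np.inf)
print(value)   # 0.5
```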
Spectral asymmetry
The entries in Grandi's series can be paired to the eigenvalues of an infinite-dimensional operator on Hilbert space. Giving the series this interpretation gives rise to the idea of spectral asymmetry, which occurs widely in physics. The value that the series sums to depends on the asymptotic behaviour of the eigenvalues of the operator. Thus, for example, let {λn} be a sequence of both positive and negative eigenvalues. Grandi's series corresponds to the formal sum Σn sgn(λn),
where sgn(λn) is the sign of the eigenvalue. The series can be given concrete values by considering various limits. For example, the heat kernel regulator leads to the sum Σn sgn(λn) e^(−t|λn|),
which, for many interesting cases, is finite for non-zero t, and converges to a finite value in the limit t → 0.
Methods that fail
The integral function method with pn = exp(−cn²) and c > 0.
The moment constant method with a suitable weight function and k > 0.
Geometric series
The geometric series in x, 1 + x + x² + x³ + · · · = 1/(1 − x),
is convergent for |x| < 1. Formally substituting x = −1 would give 1 − 1 + 1 − 1 + · · · = 1⁄2.
However, x = −1 is outside the radius of convergence, |x| < 1, so this conclusion cannot be made.
Notes
References
Grandi's series, Summation of
Grandi's series | Summation of Grandi's series | [
"Mathematics"
] | 1,720 | [
"Grandi's series",
"Mathematical problems",
"Mathematical paradoxes"
] |
8,909,414 | https://en.wikipedia.org/wiki/Gadget%20%28computer%20science%29 | In computational complexity theory, a gadget is a subunit of a problem instance that simulates the behavior of one of the fundamental units of a different computational problem. Gadgets are typically used to construct reductions from one computational problem to another, as part of proofs of NP-completeness or other types of computational hardness. The component design technique is a method for constructing reductions by using gadgets.
The use of gadgets has been traced to a 1954 paper in graph theory by W. T. Tutte, in which Tutte provided gadgets for reducing the problem of finding a subgraph with given degree constraints to a perfect matching problem. However, the "gadget" terminology has a later origin, and does not appear in Tutte's paper.
Example
Many NP-completeness proofs are based on many-one reductions from 3-satisfiability, the problem of finding a satisfying assignment to a Boolean formula that is a conjunction (Boolean and) of clauses, each clause being the disjunction (Boolean or) of three terms, and each term being a Boolean variable or its negation. A reduction from this problem to a hard problem on undirected graphs, such as the Hamiltonian cycle problem or graph coloring, would typically be based on gadgets in the form of subgraphs that simulate the behavior of the variables and clauses of a given 3-satisfiability instance. These gadgets would then be glued together to form a single graph, a hard instance for the graph problem in consideration.
For instance, the problem of testing 3-colorability of graphs may be proven NP-complete by a reduction from 3-satisfiability of this type. The reduction uses two special graph vertices, labeled as "Ground" and "False", that are not part of any gadget. As shown in the figure, the gadget for a variable x consists of two vertices connected in a triangle with the ground vertex; one of the two vertices of the gadget is labeled with x and the other is labeled with the negation of x. The gadget for a clause consists of six vertices, connected to each other, to the vertices representing the terms t0, t1, and t2, and to the ground and false vertices by the edges shown. Any 3-CNF formula may be converted into a graph by constructing a separate gadget for each of its variables and clauses and connecting them as shown.
In any 3-coloring of the resulting graph, one may designate the three colors as being true, false, or ground, where false and ground are the colors given to the false and ground vertices (necessarily different, as these vertices are made adjacent by the construction) and true is the remaining color not used by either of these vertices. Within a variable gadget, only two colorings are possible: the vertex labeled with the variable must be colored either true or false, and the vertex labeled with the variable's negation must correspondingly be colored either false or true. In this way, valid assignments of colors to the variable gadgets correspond one-for-one with truth assignments to the variables: the behavior of the gadget with respect to coloring simulates the behavior of a variable with respect to truth assignment.
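The variable gadget's behavior can be verified mechanically. The brute-force sketch below (not from the article; vertex names are invented for illustration) checks every 3-coloring of the triangle {ground, x, not_x} and confirms that x and its negation always receive the two non-ground colors and always disagree. The clause gadget is not reproduced here, since its exact edges come from a figure not included in the text.

```python
from itertools import product

# Triangle {ground, x, not_x}: the variable gadget described above.
vertices = ["ground", "x", "not_x"]
edges = [("ground", "x"), ("ground", "not_x"), ("x", "not_x")]

for coloring in product(range(3), repeat=3):
    c = dict(zip(vertices, coloring))
    if all(c[u] != c[v] for u, v in edges):
        # x and not_x avoid the ground color and always disagree,
        # so exactly one of them can later be read as "true".
        assert c["x"] != c["ground"] != c["not_x"]
        assert c["x"] != c["not_x"]
print("variable gadget behaves as claimed")
```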
Each clause gadget has a valid 3-coloring if at least one of its adjacent term vertices is colored true, and cannot be 3-colored if all of its adjacent term vertices are colored false. In this way, the clause gadget can be colored if and only if the corresponding truth assignment satisfies the clause, so again the behavior of the gadget simulates the behavior of a clause.
Restricted reductions
Agrawal et al. considered what they called "a radically simple form of gadget reduction", in which each bit describing part of a gadget may depend only on a bounded number of bits of the input, and used these reductions to prove an analogue of the Berman–Hartmanis conjecture stating that all NP-complete sets are polynomial-time isomorphic.
The standard definition of NP-completeness involves polynomial time many-one reductions: a problem in NP is by definition NP-complete if every other problem in NP has a reduction of this type to it, and the standard way of proving that a problem in NP is NP-complete is to find a polynomial time many-one reduction from a known NP-complete problem to it. But (in what Agrawal et al. called "a curious, often observed fact") all sets known to be NP-complete at that time could be proved complete using the stronger notion of AC0 many-one reductions: that is, reductions that can be computed by circuits of polynomial size, constant depth, and unbounded fan-in. Agrawal et al. proved that every set that is NP-complete under AC0 reductions is complete under an even more restricted type of reduction, NC0 many-one reductions, using circuits of polynomial size, constant depth, and bounded fan-in. In an NC0 reduction, each output bit of the reduction can depend only on a constant number of input bits.
The Berman–Hartmanis conjecture is an unsolved problem in computational complexity theory stating that all NP-complete problem classes are polynomial-time isomorphic. That is, if A and B are two NP-complete problem classes, there is a polynomial-time one-to-one reduction from A to B whose inverse is also computable in polynomial time. Agrawal et al. used their equivalence between AC0 reductions and NC0 reductions to show that all sets complete for NP under AC0 reductions are AC0-isomorphic.
Optimization of gadgets
One application of gadgets is in proving hardness of approximation results, by reducing a problem that is known to be hard to approximate to another problem whose hardness is to be proven. In this application, one typically has a family of instances of the first problem in which there is a gap in the objective function values, and in which it is hard to determine whether a given instance has an objective function that is on the low side or on the high side of the gap. The reductions used in these proofs, and the gadgets used in the reductions, must preserve the existence of this gap, and the strength of the inapproximability result derived from the reduction will depend on how well the gap is preserved.
Trevisan et al. formalize the problem of finding gap-preserving gadgets, for families of constraint satisfaction problems in which the goal is to maximize the number of satisfied constraints. They give as an example a classical reduction from 3-satisfiability to 2-satisfiability, in which the gadget representing a 3-SAT clause consists of ten 2-SAT clauses, and in which a truth assignment that satisfies a 3-SAT clause also satisfies at least seven clauses in the gadget, while a truth assignment that fails to satisfy a 3-SAT clause also fails to satisfy more than six clauses of the gadget. Using this gadget, and the fact that (unless P = NP) there is no polynomial-time approximation scheme for maximizing the number of 3-SAT clauses that a truth assignment satisfies, it can be shown that there is similarly no approximation scheme for MAX 2-SAT.
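The seven-versus-six property can be verified exhaustively. The sketch below uses the ten-clause gadget in the form commonly presented for this reduction, with unit clauses counted as degenerate 2-SAT clauses and one auxiliary variable d; the specific clause list is an assumption reproduced from the standard presentation, not a quotation of the paper.

```python
from itertools import product

def gadget_satisfied(a, b, c, d):
    """Count satisfied clauses in the ten-clause 2-SAT gadget for (a or b or c),
    with auxiliary variable d (unit clauses counted as 2-SAT clauses)."""
    clauses = [a, b, c, d,
               not a or not b, not a or not c, not b or not c,
               a or not d, b or not d, c or not d]
    return sum(clauses)

for a, b, c in product([False, True], repeat=3):
    best = max(gadget_satisfied(a, b, c, d) for d in (False, True))
    print(a, b, c, "->", best)   # 7 whenever (a or b or c) holds, 6 otherwise
```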
Trevisan et al. show that, in many cases of the constraint satisfaction problems they study, the gadgets leading to the strongest possible inapproximability results may be constructed automatically, as the solution to a linear programming problem. The same gadget-based reductions may also be used in the other direction, to transfer approximation algorithms from easier problems to harder problems. For instance, Trevisan et al. provide an optimal gadget for reducing 3-SAT to a weighted variant of 2-SAT (consisting of seven weighted 2-SAT clauses) that is stronger than the classical one; using it, together with known semidefinite programming approximation algorithms for MAX 2-SAT, they provide an approximation algorithm for MAX 3-SAT with approximation ratio 0.801, better than previously known algorithms.
References
Reduction (complexity)
Proof techniques | Gadget (computer science) | [
"Mathematics"
] | 1,643 | [
"Functions and mappings",
"Proof techniques",
"Mathematical objects",
"Reduction (complexity)",
"Mathematical relations"
] |
8,909,871 | https://en.wikipedia.org/wiki/Transition%20%28linguistics%29 | A transition or linking word is a word or phrase that shows the relationship between paragraphs or sections of a text or speech. Transitions provide greater cohesion by making it more explicit or signaling how ideas relate to one another. Transitions are, in fact, "bridges" that "carry a reader from section to section". Transitions guide a reader/listener through steps of logic, increments of time, or through physical space. Transitions "connect words and ideas so that [...] readers don't have to do the mental work for [themselves]."
Transitions reveal the internal structure of an author's reasoning. While they are used primarily for rhetoric, they are also used in a strictly grammatical sense for structural composition, reasoning, and comprehension. Indeed, they are an essential part of any language.
In simple terms, a transition word demonstrates the relationship between two portions of a text or spoken language. By using these words, people can better build a sentence and convey what they are trying to say in a more concise manner.
Categories
Transition words and phrases categories include: Conclusion, Continuation, Contrast, Emphasis, Evidence, Illustration and Sequence.
Each category serves its own function, as do the keywords inside of a given category.
Coordinating transitions
Elements in a coordinate relationship are equal in rank, quality, or significance. They help to show a link between equal elements.
To show similarity or reinforce: also, and, as well as, by the same token, comparatively, correspondingly, coupled with, equally, equally important, furthermore, identically, in the light of, in the same fashion/way, likewise, moreover, not only... but also, not to mention, similarly, to say nothing of, together with, too, uniquely
To introduce an opposing point: besides, but, however, in contrast, neither, nevertheless, nor, on the contrary, on the other hand, still, yet
To signal a restatement: in other words, in simpler terms, indeed, that is, to put it differently
Subordinating transitions
To introduce an item in a series: finally, first, for another, for one thing, in addition, in the first place, in the second place, last, next, second, then
To introduce an example: for example, for instance, in particular, namely, specifically, that is
To show causality: accordingly, as a result, because, consequently, for, hence, since, so, then, therefore, thus
To introduce a summary or conclusion: actually, all in all, altogether, clearly, evidently, finally, in conclusion, of course, to sum up
To signal a concession: certainly, granted, it is true, naturally, of course, to be sure
To resume main argument after a concession: all the same, even though, nevertheless, nonetheless, still
Temporal transitions
To show frequency: again and again, day after day, every so often, frequently, hourly, now and then, occasionally, often
To show duration: briefly, during, for a long time, minute by minute, while
To show a particular time: at six o'clock, at that time, first thing in the morning, in 1999, in the beginning of August, in those days, last Sunday, next Christmas, now, then, two months ago, when
To introduce a beginning: at first, before then, in the beginning, since
To introduce a middle: as it was happening, at that moment, at the same time, in the meantime, meanwhile, next, simultaneously, then
To signal an end (or beyond): afterward/afterwards, at last, eventually, finally, in the end, later
Spatial transitions
To show closeness: adjacent to, alongside, close to, facing, near, next to, side by side
To show long distance: away, beyond, far, in the distance, there
To show direction: above, across, along, away from, behind, below, down, in front of, inside, outside, sideways, to the left, to the right, toward/towards, up
Transition words of agreement, addition, or similarity
The transition words, such as also, in addition, and likewise, add information, reinforce ideas, and express agreement with preceding material.
additionally
again
also
and
as
as a matter of fact
as well as
by the same token
comparatively
correspondingly
coupled with
equally
equally important
first
furthermore
identically
in addition
in like manner
in the first place
in the light of
in the same fashion/way
like
likewise
moreover
not only ... but also
not to mention
of course
second
similarly
then
third
to
to say nothing of
together with
too
uniquely
what's more
See also
Conjunction
Level of measurement
Concept map
Notes
References
Parts of speech
Plain English
writing
Rhetoric | Transition (linguistics) | [
"Technology"
] | 959 | [
"Parts of speech",
"Components"
] |
8,910,249 | https://en.wikipedia.org/wiki/Fontana%20dei%20Dioscuri | The Fontana dei Dioscuri is the fountain set opposite the Palazzo del Quirinale, the official residence of the President of the Italian Republic in the Piazza del Quirinale.
The original fountain, which no longer exists, was commissioned by Pope Sixtus V in 1588. The statues of the Dioscuri, Castor and Pollux, taken from the Baths of Constantine (a site thought to have been nearby), were moved to the piazza to flank it.
In the late 1780s, Pope Pius VI commissioned Antinori to give the piazza a better layout; Antinori moved the fountain and the Dioscuri, and incorporated into the design the large obelisk that had been moved from the Campus Martius.
Some time in between, the original fountain was lost. In 1818 a new one, which can be seen today, was commissioned by Pope Pius VII and designed by Raffaele Stern, using an ancient Roman granite shell, found in the 16th century, supported on top of a large basin. It was sited in front of the two statues, with the obelisk between them.
In 1810, the Rome-based sculptor Paolo Triscornia sent the reduced copies of the Dioscuri to Saint Petersburg for his compatriot Giacomo Quarenghi. They were put up in front of the Saint Petersburg Manege six years later.
See also
Fountains in Rome
1588 establishments in Italy
Buildings and structures completed in 1588
Dioscuri
Rome R. II Trevi
Sculptures of horses
Animal sculptures in Italy
Castor and Pollux
Pope Sixtus V | Fontana dei Dioscuri | [
"Astronomy"
] | 330 | [
"Castor and Pollux",
"Astronomical myths"
] |
8,910,528 | https://en.wikipedia.org/wiki/Zonal%20polynomial | In mathematics, a zonal polynomial is a multivariate symmetric homogeneous polynomial. The zonal polynomials form a basis of the space of symmetric polynomials. Zonal polynomials appear in special functions with matrix argument which on the other hand appear in matrixvariate distributions such as the Wishart distribution when integrating over compact Lie groups. The theory was started in multivariate statistics in the 1960s and 1970s in a series of papers by Alan Treleven James and his doctorial student Alan Graham Constantine.
They appear as zonal spherical functions of the Gelfand pairs (S₂ₙ, Hₙ) (here, Hₙ is the hyperoctahedral group) and (GLₙ(ℝ), Oₙ), which means that they describe canonical bases of the corresponding double coset algebras.
The zonal polynomials are the α = 2 case of the C normalization of the Jack function.
References
Literature
Robb Muirhead, Aspects of Multivariate Statistical Theory, John Wiley & Sons, Inc., New York, 1984.
Homogeneous polynomials
Symmetric functions
Multivariate statistics | Zonal polynomial | [
"Physics",
"Mathematics"
] | 193 | [
"Algebra",
"Symmetric functions",
"Algebra stubs",
"Symmetry"
] |
8,910,573 | https://en.wikipedia.org/wiki/Mechanical%20amplifier | A mechanical amplifier or a mechanical amplifying element is a linkage mechanism that amplifies the magnitude of mechanical quantities such as force, displacement, velocity, acceleration and torque in linear and rotational systems. In some applications, mechanical amplification induced by nature or unintentional oversights in man-made designs can be disastrous, causing situations such as the 1940 Tacoma Narrows Bridge collapse. When employed appropriately, it can help to magnify small mechanical signals for practical applications.
No additional energy can be created from any given mechanical amplifier due to conservation of energy. Claims of using mechanical amplifiers for perpetual motion machines are false, due to either a lack of understanding of the working mechanism or a simple hoax.
Generic mechanical amplifiers
Amplifiers, in the most general sense, are intermediate elements that increase the magnitude of a signal. These include mechanical amplifiers, electrical/electronic amplifiers, hydraulic/fluidic amplifiers, pneumatic amplifiers, optical amplifiers and quantum amplifiers. The purpose of employing a mechanical amplifier is generally to magnify the mechanical signal fed into a given transducer such as gear trains in generators or to enhance the mechanical signal output from a given transducer such as diaphragm in speakers and gramophones.
Electrical amplifiers increase the power of the signal with energy supplied from an external source. This is generally not the case with most devices described as mechanical amplifiers; all the energy is provided by the original signal and there is no power amplification. For instance a lever can amplify the displacement of a signal, but the force is proportionately reduced. Such devices are more correctly described as transformers, at least in the context of mechanical–electrical analogies.
Transducers are devices that convert energy from one form to another, such as mechanical-to-electrical or vice versa; and mechanical amplifiers are employed to improve the efficiency of this energy conversion from mechanical sources. Mechanical amplifiers can be broadly classified as resonating/oscillating amplifiers (such as diaphragms) or non-resonating/oscillating amplifiers (such as gear trains).
Resonating amplifiers
Any mechanical body that is not infinitely rigid can exhibit vibration upon experiencing an external forcing. Most vibrating elements can be represented by a second-order mass-spring-damper system governed by the following second-order differential equation: m d²x/dt² + c dx/dt + kx = F(t),
where x is the displacement, m is the effective mass, c is the damping coefficient, k is the spring constant of the restoring force, and F(t) is external forcing as a function of time.
"A mechanical amplifier is basically a mechanical resonator that resonates at the operating frequency and magnifies the amplitude of the vibration of the transducer at anti-node location."
Resonance is the physical phenomenon where the amplitude of oscillation (output) exhibit a buildup over time when the frequency of the external forcing (input) is in the vicinity of a resonant frequency. The output thus achieved is generally larger than the input in terms of displacement, velocity or acceleration. Although resonant frequency is generally used synonymously with natural frequency, there is in fact a distinction. While resonance can be achieved at the natural frequency, it can also be achieved at several other modes such as flexural modes. Therefore, the term resonant frequency encompasses all frequency bandwidths where some forms of resonance can be achieved; and this includes the natural frequency.
Direct resonators
All mechanical vibrating systems possess a natural frequency fn, which in its most basic form is fn = (1/2π) √(k/m).
When an external forcing is applied directly (parallel to the plane of the oscillatory displacement) to the system around the frequency of its natural frequency, then the fundamental mode of resonance can be achieved. The oscillatory amplitude outside this frequency region is typically smaller than the resonant peak and the input amplitude. The amplitude of the resonant peak and the bandwidth of resonance is dependent on the damping conditions and is quantified by the dimensionless quantity Q factor. Higher resonant modes and resonant modes at different planes (transverse, lateral, rotational and flexural) are usually triggered at higher frequencies. The specific frequency vicinity of these modes depends on the nature and boundary conditions of each mechanical system. Additionally, subharmonics, superharmonics or subsuperharmonics of each mode can also be excited at the right boundary conditions.
“As a model for a detector we note that if you hang a weight on a spring and then move the upper end of the spring up and down, the amplitude of the weight will be much larger than the driving amplitude if you are at the resonant frequency of the mass and spring assembly. It is essentially a mechanical amplifier and serves as a good candidate for a sensitive detector."
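A direct simulation of the mass-spring-damper equation above illustrates the resonant gain. The sketch below is illustrative (the parameter values and helper name are invented): with Q = √(mk)/c = 20, the steady-state amplitude at the natural frequency is about 20 times the static deflection, while driving well below resonance yields roughly the static response.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k, c = 1.0, 1.0, 0.05          # natural frequency 1 rad/s, Q = sqrt(m*k)/c = 20
wn = np.sqrt(k / m)

def steady_amplitude(w, f0=1.0):
    """Late-time amplitude of m x'' + c x' + k x = f0 sin(w t), from rest."""
    rhs = lambda t, y: [y[1], (f0 * np.sin(w * t) - c * y[1] - k * y[0]) / m]
    sol = solve_ivp(rhs, (0.0, 400.0), [0.0, 0.0], max_step=0.05)
    return np.abs(sol.y[0][sol.t > 300.0]).max()   # discard the transient

print(steady_amplitude(wn))        # ~20: resonant gain Q * (f0 / k)
print(steady_amplitude(0.3 * wn))  # ~1.1: close to the static deflection f0 / k
```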
Parametric resonators
Parametric resonance is the physical phenomenon where an external excitation, at a specific frequency and typically orthogonal to the plane of displacement, introduces a periodic modulation in one of the system parameters, resulting in a buildup in oscillatory amplitude. It is governed by the Mathieu equation. The following is a damped Mathieu equation: d²x/dt² + c dx/dt + (δ + 2ε cos 2t) x = 0,
where δ is the square of the natural frequency and ε is the amplitude of the parametric excitation.
The first order or the principal parametric resonance is achieved when the driving/excitation frequency is twice the natural frequency of a given system. Higher orders of parametric resonance are observed either at, or at submultiples of, the natural frequency. For direct resonance, the response frequency always matches the excitation frequency. However, regardless of which order of parametric resonance is activated, the response frequency of parametric resonance is always in the vicinity of the natural frequency. Parametric resonance has the ability to exhibit higher mechanical amplification than direct resonance when operating at favourable conditions, but usually has a longer build-up/transient state.
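The threshold character of parametric resonance can be seen by integrating the damped Mathieu equation above. In the illustrative sketch below (parameter values invented), pumping at twice the natural frequency makes a tiny initial disturbance decay when ε is below roughly the damping level and grow by orders of magnitude when ε is well above it.

```python
import numpy as np
from scipy.integrate import solve_ivp

def mathieu_peak(eps, c=0.05, delta=1.0, t_end=200.0):
    """Peak |x| for x'' + c x' + (delta + 2 eps cos 2t) x = 0, seeded at 1e-6.

    The cos(2t) pump runs at twice the natural frequency sqrt(delta): the
    principal parametric resonance condition described above.
    """
    rhs = lambda t, y: [y[1],
                        -c * y[1] - (delta + 2.0 * eps * np.cos(2.0 * t)) * y[0]]
    sol = solve_ivp(rhs, (0.0, t_end), [1e-6, 0.0], max_step=0.01)
    return np.abs(sol.y[0]).max()

print(mathieu_peak(eps=0.02))   # below threshold (eps < c): stays near 1e-6
print(mathieu_peak(eps=0.20))   # above threshold: grows by orders of magnitude
```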
“The parametric resonator provides a very useful instrument that has been developed by a number of researchers, in part because a parametric resonator can serve as a mechanical amplifier, over a narrow band of frequencies.”
Swing analogy
Direct resonance can be equated to someone pushing a child on a swing. If the frequency of the pushing (external forcing) matches the natural frequency of the child-swing system, direct resonance can be achieved. Parametric resonance, on the other hand, is the child shifting his/her own weight with time (twice the frequency of the natural frequency) and building up the oscillatory amplitude of the swing without anyone helping to push. In other words, there is an internal transfer of energy (instead of simply dissipating all available energy) as the system parameter (child's weight) modulates and changes with time.
Other resonators/oscillators
Other means of signal enhancement, applicable to both mechanical and electrical domains, exist. These include chaos theory, stochastic resonance and many other nonlinear or vibrational phenomena. No new energy is created. However, through mechanical amplification, more of the available power spectrum can be utilised at a more optimal efficiency rather than dissipated.
Non-resonating amplifiers
Levers and gear trains are classical tools used to achieve mechanical advantage MA, which is a measure of mechanical amplification.
Lever
Lever can be used to change the magnitude of a given mechanical signal, such as force or displacement. Levers are widely used as mechanical amplifiers in actuators and generators.
It is a mechanism that usually consists of a rigid beam/rod fixed about a pivot. Levers are balanced when there is a balance of moment or torque about the pivot. Three major classifications exist, depending on the position of the pivot, input and output forces. The fundamental principle of the lever mechanism is governed by the following ratio, dating back to Archimedes: FA · a = FB · b,
where FA is a force acting on point A on the rigid lever beam, FB is a force acting on point B on the rigid lever beam, and a and b are the respective distances from points A and B to the pivot point.
If FB is the output force and FA is the input force, then the mechanical advantage MA is given by the ratio of output force to input force: MA = FB/FA = a/b.
Gear train
Gear trains are usually formed by the meshing engagement of two or more gears on a frame to form a transmission. This can provide translation (linear motion) or rotation as well as mechanically alter displacement, speed, velocity, acceleration, direction and torque depending on the type of gears employed, transmission configuration and gearing ratio.
The mechanical advantage of a gear train is given by the ratio of the output torque TB to the input torque TA, which is also the same as the ratio of the number of teeth of the output gear NB to the number of teeth of the input gear NA: MA = TB/TA = NB/NA.
Therefore, torque can be amplified if the number of teeth of the output gear is larger than that of the input gear.
The ratio of the number of gear teeth is also related to the gear velocities ωA and ωB as follows: NB/NA = ωA/ωB.
Therefore, if the number of teeth of the output gear is less than that of the input, the output velocity is amplified.
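As a worked example of these two relations (illustrative numbers, invented helper name): a 20-tooth pinion driving a 60-tooth gear triples the torque and cuts the speed to a third.

```python
def gear_pair(torque_in, speed_in, n_in, n_out):
    """Ideal (lossless) gear pair: torque scales by N_out/N_in, speed by its inverse."""
    ratio = n_out / n_in
    return torque_in * ratio, speed_in / ratio

# 20-tooth input gear driving a 60-tooth output gear
print(gear_pair(torque_in=10.0, speed_in=3000.0, n_in=20, n_out=60))
# -> (30.0, 1000.0): torque tripled, speed reduced to a third
```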
Others
The above-mentioned mechanical quantities can also be amplified and/or converted either through a combination of above or other iterations of mechanical transmission systems, such as, cranks, cam, torque amplifiers, hydraulic jacks, mechanical comparator such as Johansson Mikrokator and many more.
References
See also
Amplifier (disambiguation)
Mechanical advantage device
Resonator | Mechanical amplifier | [
"Technology"
] | 1,946 | [
"Mechanical amplifiers",
"Amplifiers"
] |
8,911,269 | https://en.wikipedia.org/wiki/Franz%20Dischinger | Franz Dischinger (8 October 1887 - 9 January 1953) was a pioneering German civil and structural engineer, responsible for the development of the modern cable-stayed bridge. He was also a pioneer of the use of prestressed concrete, patenting the technique of external prestressing (where the prestressing bars or tendons are not encased in the concrete) in 1934.
After completing Gymnasium in Karlsruhe, Germany, Dischinger went to the Technische Hochschule Karlsruhe (now Karlsruhe Institute of Technology), where he studied and received a degree in building engineering. After getting his degree in 1913, he started working for Dyckerhoff & Widmann A.G., an engineering firm in Germany. In 1928 Dischinger went back to school and received his doctorate at the Technische Hochschule Dresden (now TU Dresden), Germany.
In 1922, he designed the Zeiss Planetarium in Jena with Walther Bauersfeld, using a thin-shell concrete roof in the shape of a hemisphere. Their system was subsequently patented, and Dischinger published a paper on the relevant mathematics in 1928.
Since, in Dischinger's opinion, the previous stayed and cable bridges were both technically flawed and visually disturbing, he decided to publish his own cable-stayed bridge design. This design has been used ever since; more than 100 such cable-stayed bridges have been built.
For the 1938 design of a rail suspension bridge (not built), he had studied historical bridges incorporating inclined stay elements, such as those by Ferdinand Arnodin and John Roebling. He went on to design the 183 m span Strömsund Bridge in Sweden, completed in 1955 and generally considered the first of the modern tradition of cable-stayed bridges, although there had been many isolated examples of the bridge form before then. This employed a steel deck and cables, with large spacings between the stays typical of the early designs. It appears in Strömsund's coat of arms.
Other key works include:
Großmarkthalle, Basel, Switzerland, 1929 (dome roof)
Market Hall, Leipzig, Germany, 1930 (polygonal dome roofs)
Koblenz bridge, Germany, 1935 (three arch concrete bridge)
Aue bridge, Germany, 1936
Cologne Rodenkirchen Bridge, Cologne, Germany, 1954 (with others, including Fritz Leonhardt)
Notes
External links
Cable-Stayed Bridges from ASCE journal library
Bridge engineers
Structural engineers
German civil engineers
1887 births
1953 deaths
Members of the German Academy of Sciences at Berlin
Academic staff of Technische Universität Berlin
20th-century German engineers
Engineers from Karlsruhe
Engineers from the German Empire | Franz Dischinger | [
"Engineering"
] | 541 | [
"Structural engineering",
"Structural engineers"
] |
8,911,332 | https://en.wikipedia.org/wiki/Miguel%20%C3%81ngel%20Virasoro%20%28physicist%29 | Miguel Ángel Virasoro (;9 May 1940 – 23 July 2021) was an Argentine (naturalized Italian) mathematician and theoretical physicist. Virasoro worked in Argentina, Israel, the United States, and France, but he spent most of his professional career in Italy at La Sapienza University of Rome. He shared a name with his father, the philosopher Miguel Ángel Virasoro. He was known for his foundational work in string theory, the study of spin glasses, and his research in other areas of mathematical and statistical physics. The Virasoro–Shapiro amplitude, the Virasoro algebra, the super Virasoro algebra, the Virasoro vertex operator algebra, the Virasoro group, the Virasoro conjecture, the Virasoro conformal block, and the Virasoro minimal model are all named after him.
Biography
Early life in Argentina
Miguel Ángel Virasoro was born in Buenos Aires, Argentina, on May 9, 1940. He shared a name with his father, a noted Argentinian philosopher who founded dialectical existentialism. The younger Virasoro studied physics at the University of Buenos Aires (UBA) from 1958 to 1966. He received his Licenciate degree in 1962 and his PhD in 1966.
Research in Israel and the United States
In 1966, Virasoro left Argentina after La Noche de los Bastones Largos, a violent dislodging of students and teachers from UBA who opposed the military government of Argentinian General Juan Carlos Onganía. The military dictatorship of Onganía would last from 1966 to 1970. After leaving Argentina, Virasoro worked as a postdoctoral researcher at the Weizmann Institute of Science in Rehovot, Israel until 1968. He then worked at the University of Wisconsin-Madison (UW-M) in the United States until 1969. After his time at UW-M, Virasoro spent another year as a postdoc in the United States at the University of California, Berkeley.
Return to Argentina
Virasoro returned to Argentina after the end of Juan Carlos Onganía's dictatorship in 1970. In 1971, he accepted a professorship at his alma mater UBA. Virasoro remained at UBA until 1975, at which time he accepted a year-long position at the Institute for Advanced Study in Princeton, New Jersey. Then in 1976, General Jorge Rafael Videla came to power in Argentina and established another military dictatorship. As a result, Virasoro was unable to return to his home country after his year in the United States and instead moved to Europe.
Professional career in Europe
In Europe, Virasoro took a temporary position at the École normale supérieure in Paris, France in 1976. Virasoro then moved to Italy in 1977 where he worked as a professor at the Istituto Nazionale di Fisica Nucleare at the University of Turin from 1977 until 1981. He then moved to La Sapienza University of Rome, where he remained for thirty years until his Italian retirement and his return to Argentina in 2011. At La Sapienza, Virasoro performed research in mathematical physics, string theory, and statistical mechanics and taught courses on electromagnetism and on physical-mathematical models for economics. He was also a director of the Abdus Salam International Centre for Theoretical Physics (ICTP) in Trieste, Italy from 1995 until 2002.
Later years and death
In his later years, Virasoro received several awards, honors, and appointments. In 1987, he was awarded a Guggenheim fellowship from the John Simon Guggenheim Memorial Foundation. In 1993, he was awarded the Rammal Award by the French Physical Society. In 1998, he was elected as an international honorary member of the American Academy of Arts and Sciences. In 2009, he was awarded the Enrico Fermi Prize of the Italian Physical Society, which he shared with Greek physicist Dimitri Nanopoulos, for "the discovery of an infinite-dimensional algebra of primary importance for the construction of string theories." In 2020, he was awarded the Dirac Medal of the ICTP, which he shared with French physicists André Neveu and Pierre Ramond, "for their pioneering contributions to the inception and formulation of string theory which introduced new bosonic and fermionic symmetries into physics."
From 2011 until his death, Virasoro was an honorary professor at the Universidad Nacional de General Sarmiento in his home country of Argentina. Virasoro died on July 23, 2021, at the age of 81.
Research
String theory
Much of Virasoro's early work helped found a branch of theoretical particle physics which would later be understood as string theory. In 1968 while Virasoro was in Israel, his colleague Gabriele Veneziano discovered a formula (the Veneziano amplitude) which described the scattering of open strings. Then in 1969 during his time at University of Wisconsin-Madison, Virasoro successfully generalized Veneziano's theory and discovered a formula (the Virasoro-Shapiro amplitude) which described the scattering of closed strings. At the time, the formulas of Veneziano and Virasoro were understood in terms of so-called dual resonance models. Only later was their work understood to describe strings.
Soon after his discovery of the Virasoro-Shapiro amplitude, Virasoro introduced what became known as the Virasoro algebra. The Virasoro algebra is an infinite-dimensional Lie algebra which describes the conformal symmetry of the worldsheet of a string embedded in spacetime. A supersymmetric generalization of this algebra, the super Virasoro algebra, describes the super conformal symmetry of the worldsheet of a supersymmetric string (or superstring). Pedagogical introductions to the Virasoro-Shapiro amplitude and the Virasoro algebra may be found in David Tong's introductory lectures on string theory.
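For reference, the defining commutation relations of the Virasoro algebra, in the normalization standard in the physics literature, are:

```latex
% Virasoro algebra: generators L_n (n \in \mathbb{Z}) and a central
% element c (the central charge) commuting with every L_n.
[L_m, L_n] = (m - n)\,L_{m+n} + \frac{c}{12}\,m\,(m^2 - 1)\,\delta_{m+n,0}
```

Setting the central charge c to zero recovers the Witt algebra of vector fields on the circle, of which the Virasoro algebra is the unique nontrivial central extension.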
Several mathematical concepts related to Lie algebras and conformal field theory are named after Virasoro. These include the Virasoro vertex operator algebra, the Virasoro group, the Virasoro conjecture, the Virasoro conformal block, and the Virasoro minimal model.
Spin glasses
While working in Italy, Virasoro studied spin glasses and other systems in statistical mechanics. Together with Italian physicist Giorgio Parisi and French physicist Marc Mézard, Virasoro discovered the ultrametric organization of low-temperature spin glass states in infinite dimensions.
References
1940 births
2021 deaths
Theoretical physicists
String theorists
Argentine scientists
Argentine physicists
Scientists from Buenos Aires
Academic staff of the Sapienza University of Rome
University of Buenos Aires alumni
Enrico Fermi Award recipients
Italian physicists | Miguel Ángel Virasoro (physicist) | [
"Physics"
] | 1,351 | [
"Theoretical physics",
"Theoretical physicists"
] |
8,911,499 | https://en.wikipedia.org/wiki/1999%20South%20Dakota%20Learjet%20crash | On October 25, 1999, a chartered Learjet 35 business jet was scheduled to fly from Orlando, Florida, United States to Dallas, Texas, United States. Early in the flight, the aircraft, which was climbing to its assigned altitude on autopilot, lost cabin pressure, and all six on board were incapacitated by hypoxia, a lack of oxygen in the brain and body. The aircraft continued climbing past its assigned altitude, then failed to make the westward turn toward Dallas over North Florida and continued on its northwestern course, flying over the southern and midwestern United States for almost four hours and . The plane ran out of fuel over South Dakota and crashed into a field near Aberdeen after an uncontrolled descent, killing all six on board.
The two pilots were Michael Kling and Stephanie Bellegarrigue. The four passengers on board were PGA golfer Payne Stewart; his agent, and former Alabama football quarterback, Robert Fraley; president of the Leader Enterprises sports management agency, Van Ardan; and Bruce Borland, a golf architect with the Jack Nicklaus golf course design company.
History of the flight
Note: all times are presented in 24-hour format. Because the flight took place in both the Eastern Time zone – Eastern Daylight Time (EDT) – and the Central Time zone – Central Daylight Time (CDT) – all times are given in this article in Coordinated Universal Time (which is indicated by the time followed by the letters UTC).
Departure
On October 25, 1999, a Learjet 35, registration N47BA, operated by Sunjet Aviation of Sanford, Florida, departed Orlando Sanford International Airport at 13:19 UTC (09:19 EDT) on a two-day, five-flight trip. Before departure, the aircraft had been fueled with of Jet A, enough for four hours and 45 minutes of flight. On board were two pilots and four passengers.
At 13:27:13 UTC, the air traffic controller from the Jacksonville Air Route Traffic Control Center (ARTCC) instructed the pilot to climb and maintain flight level (FL) 390 ( above sea level). At 13:27:18 UTC (09:27:18 EDT), the pilot acknowledged the clearance by stating, "three nine zero bravo alpha." This was the last known radio transmission from the airplane, and occurred while the aircraft was passing through . The next attempt to contact the aircraft occurred six minutes and twenty seconds later, fourteen minutes after departure, with the aircraft at . The controller's message went unacknowledged. The controller attempted to contact N47BA five more times in the next minutes, again with no answer.
First interception
About 14:54 UTC (now 09:54 CDT in the Central Time zone), Colonel Olson, a United States Air Force F-16 test pilot from the 40th Flight Test Squadron at Eglin Air Force Base in western Florida, who happened to be in the air nearby, was directed by controllers to intercept N47BA. When the fighter was about from the Learjet, at an altitude of about , Olson made two radio calls to N47BA, but did not receive a response. The F-16 pilot made a visual inspection of the Lear, finding no visible damage to the airplane. Both engines were running and the plane's red, rotating anti-collision beacon was on, which is standard operation for aircraft in flight.
Olson could not see inside the passenger section of the airplane, because the windows seemed to be dark. He stated that the entire right cockpit windshield was opaque, as if condensation or ice covered the inside. He indicated that the left cockpit windshield was opaque, although several sections of the center of the windshield seemed to be only thinly covered by condensation or ice. A small rectangular section of the windshield was clear, with only a small section of the glare shield visible through this area. He did not see any flight control movement. At about 15:12 UTC, Olson concluded his inspection of N47BA and broke formation, proceeding to Scott Air Force Base in southwestern Illinois.
Second interception
At 16:13 UTC, almost three hours into the flight of the unresponsive Learjet, two F-16s from the 138th Fighter Wing of the Oklahoma Air National Guard, flying under the call-sign "TULSA 13 flight," were directed by the Minneapolis ARTCC to intercept the Learjet. The TULSA 13 lead pilot reported that he could not see any movement in the cockpit, that the windshield was dark and that he could not tell if the windshield was iced. A few minutes later, a TULSA 13 pilot reported, "We're not seeing anything inside, could be just a dark cockpit though... he is not reacting, moving or anything like that he should be able to have seen us by now." At 16:39 UTC, TULSA 13 left to rendezvous with a tanker for refueling.
The aircraft reached a maximum altitude of .
Third interception and escort
About 16:50 UTC, two F-16s from the 119th Wing of the North Dakota Air National Guard with the identification "NODAK 32" were directed to intercept N47BA. TULSA 13 flight also returned from refueling and all three fighters maneuvered close to the Lear. The TULSA 13 lead pilot reported, "We've got two visuals on it. It's looking like the cockpit window is iced over and there's no displacement in any of the control surfaces as far as the ailerons or trims." About 17:01 UTC, TULSA 13 flight returned to the tanker again, while NODAK 32 remained with N47BA.
Officials at the Pentagon denied that a shoot down of the Learjet was considered to prevent a possible crash in a heavily populated area, indicating that the fighter jets were not armed with air-to-air missiles.
Canadian Prime Minister Jean Chrétien authorized the Canadian Forces Air Command to shoot down the plane if it entered Canadian airspace without making contact. He writes in his 2018 memoirs, "The plane was heading toward the city of Winnipeg and the air traffic controllers feared that it would crash into the Manitoba capital. I was asked to give permission for the military to bring down the plane if that became necessary. With a heavy heart, I authorized the procedure. Shortly after I made my decision, I learned that the plane had crashed in South Dakota."
Crash
The Learjet's cockpit voice recorder (CVR), which was recovered from the wreckage, contained an audio recording of the last thirty minutes of the flight. It was an older model, which only recorded thirty minutes of audio. The aircraft was not equipped with a flight data recorder. At 17:10:41 UTC, the Learjet's engines can be heard winding down on the CVR recording, indicating that the plane's fuel had been exhausted. In addition, sounds of the stick shaker and the disconnection of the autopilot can be heard. With the engines powered down, the autopilot would have attempted to maintain altitude, causing the plane's airspeed to drop until it approached stall speed, at which point the stick shaker would have automatically engaged to warn the pilot and the autopilot would have switched itself off.
At 17:11:01 UTC, the Lear began a right turn and descent. NODAK 32 remained to the west, while TULSA 13 broke away from the tanker and followed N47BA down. At 17:11:26 UTC, the NODAK 32 lead pilot reported, "The target is descending and he is doing multiple rolls, looks like he's out of control... in a severe descent, request an emergency descent to follow target." The TULSA 13 pilot reported, "It's soon to impact the ground; he is in a descending spiral."
Impact occurred approximately 17:13 UTC, or 12:13 local, after a total flight time of 3 hours, 54 minutes, with the aircraft hitting the ground at nearly supersonic speed and at an extreme angle. The Learjet crashed in South Dakota, just outside Mina in Edmunds County, on relatively flat ground and left a crater long, wide, and deep. The aircraft was destroyed.
Passengers and crew
In addition to Payne Stewart and three others, there were two pilots on board:
The 42-year-old captain, Michael Kling, held an airline transport pilot certificate and type ratings for the Boeing 707, Boeing 737, and Learjet 35. He had Air Force experience flying the KC-135 and Boeing E-3 Sentry. Kling was also an instructor pilot on the KC-135E in the Maine Air National Guard. According to Sunjet Aviation records, the captain had accumulated a total of 4,280 hours of flight time, military and commercial. He had flown a total of 60 hours with Sunjet, 38 as a Learjet pilot-in-command, and 22 as a Learjet second-in-command.
The first officer, 27-year-old Stephanie Bellegarrigue, held a commercial pilot certificate and type ratings for the Learjet and Cessna Citation 500. She was a certified flight instructor. She had accumulated a total of 1,751 hours of flight time, of which 251 hours were with Sunjet Aviation as a second-in-command, 99 of them as a Learjet second-in-command.
Investigation
The National Transportation Safety Board (NTSB) has several levels of investigation, of which the highest is a "major" investigation. Because of the extraordinary circumstances in this crash, a major investigation was performed.
The NTSB determined that the probable cause of the accident was incapacitation of the flight crew members as a result of their failure to receive supplemental oxygen following a loss of cabin pressurization, for undetermined reasons.
The Board added a commentary on the possible reasons why the crew did not obtain supplemental oxygen in time.
The NTSB report showed that the plane had several instances of maintenance work related to cabin pressure in the months leading up to the crash. The NTSB was unable to determine whether they stemmed from a common problem: replacements and repairs were documented, but not the pilot discrepancy reports that prompted them or the frequency of such reports. The report criticized Sunjet Aviation because this practice would have made any underlying problem harder to identify, track, and resolve, and because in at least one instance the plane was flown with an unauthorized maintenance deferral for cabin pressure problems.
Aftermath
Stewart was ultimately headed to Houston for the 1999 Tour Championship but planned a stop in Dallas for discussions with the athletic department of his alma mater, Southern Methodist University, about building a new home course for the school's golf program. Stewart was memorialized at the Tour Championship with a lone bagpipe player playing at the first hole at Champions Golf Club prior to the beginning of the first day of play.
The owner of the crash site, after consulting the wives of Stewart and several other victims, created a memorial on about of the site. At its center is a rock pulled from the site inscribed with the names of the victims and a Bible passage.
The 2000 U.S. Open, held at Pebble Beach Golf Links, began with a golf version of a 21-gun salute when 21 of Stewart's fellow players simultaneously hit balls into the Pacific Ocean.
In 2001, Stewart was posthumously inducted into the World Golf Hall of Fame.
On June 8, 2005, a Florida state court jury in Orlando found that Learjet was not liable for the deaths of Stewart and his agents.
Documentaries
The documentary series Mayday features this incident in the first episode of its 16th season. The episode, titled "Deadly Silence", was first aired on June 7, 2016.
See also
Bo Rein – another US sportsman who died in a similar aircraft accident
2000 Australia Beechcraft King Air crash
Helios Airways Flight 522
2022 Baltic Sea Cessna crash
2023 Virginia plane crash
References
External links
National Transportation Safety Board Aircraft Accident Brief
Learjet aircraft
Aviation accidents and incidents in South Dakota
Aviation accidents and incidents in the United States in 1999
Decompression accidents and incidents
Edmunds County, South Dakota
Learjet crash
October 1999 events in the United States
Airliner accidents and incidents caused by pilot incapacitation
Airliner accidents and incidents caused by fuel exhaustion
Accidents and incidents involving the Learjet 35 family | 1999 South Dakota Learjet crash | [
"Chemistry"
] | 2,455 | [
"Decompression accidents and incidents",
"Pressure vessels"
] |
8,911,974 | https://en.wikipedia.org/wiki/History%20of%20the%20steel%20industry%20%281850%E2%80%931970%29 | Before 1800 A.D., the iron and steel industry was located where raw material, power supply and running water were easily available. After 1950, the iron and steel industry began to be located on large areas of flat land near sea ports. The history of the modern steel industry began in the late 1850s. Since then, steel has become a staple of the world's industrial economy. This article is intended only to address the business, economic and social dimensions of the industry, since the bulk production of steel began as a result of Henry Bessemer's development of the Bessemer converter, in 1857. Previously, steel was very expensive to produce, and was only used in small, expensive items, such as knives, swords and armor.
Technology
Steel is an alloy composed of between 0.2 and 2.0 percent carbon, with the balance being iron. From prehistory through the creation of the blast furnace, iron was produced from iron ore as wrought iron, 99.82–100 percent Fe, and the process of making steel involved adding carbon to iron, usually in a serendipitous manner, in the forge, or via the cementation process. The introduction of the blast furnace reversed the problem. A blast furnace produces pig iron, an alloy of approximately 90 percent iron and 10 percent carbon. When the process of steel-making is started with pig iron, instead of wrought iron, the challenge is to remove enough carbon to bring it into the 0.2–2.0 percent range required for steel.
Before about 1860, steel was an expensive product, made in small quantities and used mostly for swords, tools and cutlery; all large metal structures were made of wrought or cast iron. Steelmaking was centered in Sheffield and Middlesbrough, Britain, which supplied the European and American markets. The introduction of cheap steel was due to the Bessemer and the open hearth processes, two technological advances made in England. In the Bessemer process, molten pig iron is converted to steel by blowing air through it after it is removed from the furnace. The air blast burned the carbon and silicon out of the pig iron, releasing heat and causing the temperature of the molten metal to rise. Henry Bessemer demonstrated the process in 1856 and had a successful operation going by 1864. By 1870 Bessemer steel was widely used for ship plate. By the 1850s, the speed, weight, and quantity of railway traffic were limited by the strength of the wrought iron rails in use. The solution was to turn to steel rails, which the Bessemer process made competitive in price. Experience quickly proved steel had much greater strength and durability and could handle the increasingly heavy and faster engines and cars.
After 1890 the Bessemer process was gradually supplanted by open-hearth steelmaking and by the middle of the 20th century was no longer in use. The open-hearth process originated in the 1860s in Germany and France. The usual open-hearth process used pig iron, ore, and scrap, and became known as the Siemens-Martin process. Its process allowed closer control over the composition of the steel; also, a substantial quantity of scrap could be included in the charge. The crucible process remained important for making high-quality alloy steel into the 20th century. By 1900 the electric arc furnace was adapted to steelmaking and by the 1920s, the falling cost of electricity allowed it to largely supplant the crucible process for specialty steels.
Britain
19th century
Britain led the world's Industrial Revolution with its early commitment to coal mining, steam power, textile mills, machinery, railways, and shipbuilding. Britain's demand for iron and steel, combined with ample capital and energetic entrepreneurs, made it the world leader in the first half of the 19th century. Steel played a vital role in the Industrial Revolution.
In 1875, Britain accounted for 47% of world production of pig iron, a third of which came from the Middlesbrough area, and almost 40% of steel. 40% of British output was exported to the U.S., which was rapidly building its rail and industrial infrastructure. Two decades later in 1896, however, the British share of world production had plunged to 29% for pig iron and 22.5% for steel, and little was sent to the U.S. The U.S. was now the world leader and Germany was catching up to Britain. Britain had lost its American market, and was losing its role elsewhere; indeed, American products were now underselling British steel in Britain.
The growth of pig iron output was dramatic. Britain went from 1.3 million tons in 1840 to 6.7 million in 1870 and 10.4 in 1913. The US started from a lower base, but grew faster; from 0.3 million tons in 1840, to 1.7 million in 1870, and 31.5 million in 1913. Germany went from 0.2 million tons in 1859 to 1.6 in 1871 and 19.3 in 1913. France, Belgium, Austria-Hungary, and Russia, combined, went from 2.2 million tons in 1870 to 14.1 million tons in 1913, on the eve of the First World War. During the war the demand for artillery shells and other supplies caused a spurt in output and a diversion to military uses.
20th century
Abé (1996) explores the record of iron and steel firms in Victorian England by analyzing Bolckow Vaughan & Company. It was wedded for too long to obsolescent technology and was a very late adopter of the open hearth furnace method. Abé concludes that the firm—and the British steel industry—suffered from a failure of entrepreneurship and planning.
Blair (1997) explores the history of the British Steel industry since the Second World War to evaluate the impact of government intervention in a market economy. Entrepreneurship was lacking in the 1940s; the government could not persuade the industry to upgrade its plants. For generations the industry had followed a patchwork growth pattern which proved inefficient in the face of world competition. In 1946 the first steel development plan was put into practice with the aim of increasing capacity; the Iron and Steel Act 1949 meant nationalization of the industry in the form of the Iron and Steel Corporation of Great Britain. However, the reforms were dismantled by the Conservative Party governments in the 1950s. In 1967, under Labour Party control again, the industry was again nationalized. But by then twenty years of political manipulation had left companies such as the British Steel Corporation with serious problems: a complacency with existing equipment, plants operating under capacity (low efficiency), poor quality assets, outdated technology, government price controls, higher coal and oil costs, lack of funds for capital improvement, and increasing world market competition. By the 1970s the Labour government had its main goal to keep employment high in the declining industry. Since British Steel was a main employer in depressed regions, it had kept many mills and facilities that were operating at a loss. In the 1980s, Conservative Prime Minister Margaret Thatcher re-privatized BSC as British Steel plc.
Australia
There were various iron-making ventures during the 19th Century, and steel was made but only on a very small scale.
The first commercial scale production of steel in Australia was by William Sandford Limited at the Eskbank Ironworks at Lithgow, New South Wales, in 1901. The plant became Australia's first integrated iron and steel works in 1907. It was later expanded by Charles Hoskins. The first steel rails rolled in Australia were rolled there in 1911. Between 1928 and 1932, the operations at Lithgow were transferred, under the management of Cecil Hoskins, to a new plant at Port Kembla, still the site of most of Australia's steel production today.
The Minister for Public Works, Arthur Hill Griffith, had consistently advocated for the greater industrialization of Newcastle, then, under William Holman, personally negotiated the establishment of a steelworks with G. D. Delprat of BHP. Griffith was also the architect of the Walsh Island establishment.
In 1915, BHP ventured into steel manufacturing with its Newcastle Steelworks, which was closed in 1999. The 'long products' side of the steel business was spun off to form OneSteel in 2000. BHP's decision to move from mining ore to open a steelworks at Newcastle was precipitated by the technical limitations in recovering value from mining the 'lower-lying sulphide ores'. The discovery of Iron Knob and Iron Monarch near the western shore of the Spencer Gulf in South Australia, combined with the development by the BHP metallurgist Archibald Drummond Carmichael of a technique for 'separating zinc sulphides from the accompanying earth and rock', led BHP 'to implement the startlingly simple and cheap process for liberating vast amounts of valuable metals out of sulphide ores', including huge heaps of tailings and slimes.
Germany
The Ruhr Valley provided an excellent location for the German iron and steel industry because of the availability of raw materials, coal, transport, a skilled labor force, nearby markets, and an entrepreneurial spirit that led to the creation of many firms, often in close conjunction with coal mines. By 1850 the Ruhr had 50 iron works with 2,813 full-time employees. The first modern furnace was built in 1849. The unification of Germany in 1871 gave further impetus to rapid growth, as the German Empire started to catch up with Britain. From 1880 to World War I, the industry of the Ruhr area consisted of numerous enterprises, each working on a separate level of production. Mixed enterprises could unite all levels of production through vertical integration, thus lowering production costs. Technological progress brought new advantages as well. These developments set the stage for the creation of combined business concerns.
The leading firm was Friedrich Krupp AG run by the Krupp family. Many diverse, large-scale family firms such as Krupp's reorganized in order to adapt to the changing conditions and meet the economic depression of the 1870s, which reduced the earnings in the German iron and steel industry. Krupp reformed his accounting system to better manage his growing empire, adding a specialized bureau of calculation as well as a bureau for the control of times and wages. The rival firm GHH quickly followed, as did Thyssen AG, which had been founded by August Thyssen in 1867. Germany became Europe's leading steel-producing nation in the late 19th century, thanks in large part to the protection from American and British competition afforded by tariffs and cartels.
By 1913 American and German exports dominated the world steel market, and Britain slipped to third place. German steel production grew explosively from 1 million metric tons in 1885 to 10 million in 1905 and peaked at 19 million in 1918. In the 1920s Germany produced about 15 million tons, but output plunged to 6 million in 1933. Under Nazi rule, steel output peaked at 22 million tons in 1940, then dipped to 18 million in 1944 under Allied bombing.
The merger of four major firms into the German Steel Trust (Vereinigte Stahlwerke) in 1926 was modeled on the U.S. Steel corporation in the U.S. The goal was to move beyond the limitations of the old cartel system by incorporating advances simultaneously inside a single corporation. The new company emphasized rationalization of management structures and modernization of the technology; it employed a multi-divisional structure and used return on investment as its measure of success. It represented the "Americanization" of the German steel industry in its internal structure, management methods, use of technology, and emphasis on mass production. The chief difference was that consumer capitalism as an industrial strategy did not seem plausible to German steel industrialists.
In iron and steel and other industries, German firms avoided cut-throat competition and instead relied on trade associations. Germany was a world leader because of its prevailing "corporatist mentality", its strong bureaucratic tradition, and the encouragement of the government. These associations regulated competition and allowed small firms to function in the shadow of much larger companies.
With the need to rebuild the bombed-out infrastructure after the Second World War, the Marshall Plan (1948–51) enabled West Germany to rebuild and modernize its mills. It produced 3 million tons of steel in 1947, 12 million in 1950, 34 million in 1960 and 46 million in 1970. East Germany produced about a tenth as much.
France
The French iron industry lagged behind Britain and Belgium in the early 19th century. After 1850 it also lagged behind Germany and Luxembourg. Its industry comprised too many small, inefficient firms. 20th century growth was not robust, due more to traditional social and economic attitudes than to inherent geographic, population, or resource factors. Despite a high national income level, the French steel industry remained laggard. The industry was based on large supplies of coal and iron ore, and was dispersed across the country. The greatest output came in 1929, at 10.4 million metric tons. The industry suffered sharply during the Great Depression and World War II. Prosperity returned by mid-1950s, but profits came largely from strong domestic demand rather than competitive capacity. Late modernization delayed the development of powerful unions and collective bargaining.
Italy
In Italy a shortage of coal led the steel industry to specialize in the use of hydro-electrical energy, exploiting ideas pioneered by Ernesto Stassano from 1898 (the Stassano furnace). Despite periods of innovation (1907–14), growth (1915–18), and consolidation (1918–22), early expectations were only partly realized. Steel output in the 1920s and 1930s averaged about 2.1 million metric tons. Per capita consumption was much lower than the average of Western Europe. Electrical processes were an important substitute, yet did not improve competitiveness or reduce prices. Instead, they reinforced the dualism of the sector and initiated a vicious circle that prevented market expansion. Italy modernized its industry in the 1950s and 1960s and it grew rapidly, becoming second only to West Germany in the 1970s. Strong labour unions kept employment levels high. Troubles multiplied after 1980, however, as foreign competition became stiffer. In 1980 the largest producer, Nuova Italsider (now Ilva), lost 746 billion lire in its inefficient operations. In the 1990s the Italian steel industry, then mostly state-owned, was largely privatised. Today the country is the world's seventh-largest steel exporter.
United States
From 1875 to 1920 American steel production grew from 380,000 tons to 60 million tons annually, making the U.S. the world leader. The annual growth rates in steel 1870–1913 were 7.0% for the US; 1.0% for Britain; 6.0% for Germany; and 4.3% for France, Belgium, and Russia, the other major producers. This explosive American growth rested on solid technological foundations and the continuous rapid expansion of urban infrastructures, office buildings, factories, railroads, bridges and other sectors that increasingly demanded steel. The use of steel in automobiles and household appliances came in the 20th century.
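These figures are compound annual growth rates. As a rough consistency check (a back-of-envelope calculation, not taken from the sources cited here), the US pig iron figures quoted in the section on Britain above, 1.7 million tons in 1870 rising to 31.5 million tons in 1913, a span of 43 years, imply

$$\left(\frac{31.5}{1.7}\right)^{1/43} - 1 \approx 0.070,$$

that is, about 7.0% per year, matching the quoted US rate.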
Some key elements in the growth of steel production included the easy availability of iron ore and coal. Iron ore of fair quality was abundant in the eastern states, but the Lake Superior region contained huge deposits of exceedingly rich ore; the Marquette Iron Range was discovered in 1844, and operations began in 1846. Other ranges were opened by 1910, including the Menominee, Gogebic, Vermilion, Cuyuna, and, greatest of all, (in 1892) the Mesabi range in Minnesota. This iron ore was shipped through the Great Lakes to ports such as Chicago, Detroit, Cleveland, Erie and Buffalo for shipment by rail to the steel mills. Abundant coal was available in Pennsylvania, West Virginia, and Ohio. Manpower was short. Few native-born Americans wanted to work in the mills, but immigrants from Britain and Germany (and later from Eastern Europe) arrived in great numbers.
In 1869 iron was already a major industry, accounting for 6.6% of manufacturing employment and 7.8% of manufacturing output. By then the central figure was Andrew Carnegie, who made Pittsburgh the center of the industry. He sold his operations to US Steel in 1901, which became the world's largest steel corporation for decades.
In the 1880s, the transition from wrought iron puddling to mass-produced Bessemer steel greatly increased worker productivity. Highly skilled workers remained essential, but the average level of skill declined. Nevertheless, steelworkers earned much more than ironworkers despite their fewer skills. Workers in an integrated, synchronized mass production environment wielded greater strategic power, for the greater cost of mistakes bolstered workers' status. The experience demonstrated that the new technology did not decrease worker bargaining leverage by creating an interchangeable, unskilled workforce.
Alabama
In Alabama, industrialization was generating a ravenous appetite for the state's coal and iron ore. Production was booming, and unions were attempting to organize unincarcerated miners. Convicts provided an ideal captive work force: cheap, usually docile, unable to organize, and available when unincarcerated laborers went on strike. The Southern agrarian economy did not accommodate convict leasing as well as the industrial economy did, whose jobs were often unappealing or dangerous, offering hard labor and low pay. The competition, expansion, and growth of mining and steel companies also created a high demand for labor, but union labor posed a threat to expanding companies. As unions bargained for higher wages and better conditions, often organizing strikes in order to achieve their goals, the growing companies would be forced to agree to union demands or face abrupt halts in production. The rate companies paid for convict leases, which paid the laborer nothing, was regulated by government and state officials who entered the labor contracts with companies. "The companies built their own prisons, fed and clothed the convicts, and supplied guards as they saw fit" (Blackmon 2001). Alabama's use of convict leasing was extensive; 51 of its 67 counties regularly leased convicts serving for misdemeanors at a rate of about $5–20 per month, equal to about $160–500 in 2015. Although the influence of labor unions forced some states to move away from the profitable convict lease agreements and run traditional prisons, many companies continued substituting convict labor in their operations into the twentieth century. "The biggest user of forced labor in Alabama at the turn of the century was Tennessee Coal, Iron & Railroad Co., [of] U.S. Steel."
Carnegie
Andrew Carnegie, a Scottish immigrant, advanced the cheap and efficient mass production of steel rails for railroad lines by adopting the Bessemer process. After an early career in railroads, Carnegie foresaw the potential for steel to amass vast profits. He asked his cousin, George Lauder, to join him in America from Scotland. Lauder was a leading mechanical engineer who had studied under Lord Kelvin. Lauder devised several new systems for the Carnegie Steel Company, including the process for washing and coking dross from coal mines, which resulted in a significant increase in scale, profits, and enterprise value.
Carnegie's first mill was the Edgar Thomson Works in Braddock, PA, just outside of Pittsburgh. In 1888, he bought the rival Homestead Steel Works, which included an extensive plant served by tributary coal and iron fields, a 425-mile (685 km) long railway, and a line of lake steamships. He would also add the Duquesne Works to his empire. These three mills on the Monongahela River would make Pittsburgh the steel capital of the world. In the late 1880s, the Carnegie Steel Company was the largest manufacturer of pig iron, steel rails, and coke in the world, with a capacity to produce approximately 2,000 tons of pig iron per day. A consolidation of Carnegie's assets and those of his associates occurred in 1892 with the launching of the Carnegie Steel Company.
Lauder would go on to lead the development of the use of steel in armor and armaments for the Carnegie Steel Company, spending significant time at the Krupp factory in Germany in 1886 before returning to build the massive armor plate mill at the Homestead Steel Works that would revolutionize naval warfare.
By 1889, the U.S. output of steel exceeded that of Britain, and Andrew Carnegie owned a large part of it. When Carnegie sold out in 1901, the sale valued the Carnegie operations at $480,000,000, with $225,000,000 being Carnegie's share.
Carnegie, through Keystone, supplied the steel for and owned shares in the landmark Eads Bridge project across the Mississippi River in St. Louis, Missouri (completed 1874). This project was an important proof-of-concept for steel technology which marked the opening of a new steel market.
The Homestead Strike was a violent labor dispute in 1892 that involved an attack by strikers against private security guards. The governor called in the National Guard. The strike failed and the union collapsed. The dispute took place at Carnegie's Homestead Steel Works between the Amalgamated Association of Iron and Steel Workers and the Carnegie Steel Company. The final result was a major defeat for the union and a setback for efforts to unionize steelworkers.
Carnegie sold all his steel holdings in 1901; they were merged into U.S. Steel and it was non-union until the late 1930s.
US Steel
By 1900 the US was the largest producer and also the lowest cost producer, and demand for steel seemed inexhaustible. Output had tripled since 1890, but customers, not producers, mostly benefitted. Productivity-enhancing technology encouraged faster and faster rates of investment in new plants. However, during recessions, demand fell sharply taking down output, prices, and profits. Charles M. Schwab of Carnegie Steel proposed a solution: consolidation. Financier J. P. Morgan arranged the buyout of Carnegie and most other major firms, and put Elbert Gary in charge. The massive Gary Works steel mill on Lake Michigan was for many years the largest steel producing facility in the world.
US Steel combined finishing firms (American Tin Plate (controlled by William Henry "Judge" Moore), American Steel and Wire, and National Tube) with two major integrated companies, Carnegie Steel and Federal Steel. It was capitalized at $1.466 billion, and included 213 manufacturing mills, one thousand miles of railroad, and 41 mines. In 1901, it accounted for 66% of America's steel output, and almost 30% of the world's. During World War I, its annual production exceeded the combined output of all German and Austrian firms.
The Steel Strike of 1919 disrupted the entire industry for months, but the union lost and its membership sharply declined. Rapid growth of cities made the 1920s boom years. President Harding and social reformers forced it to end the 12-hour day in 1923.
Earnings were recorded at $2.650 billion for 2016.
Bethlehem Steel
Charles M. Schwab (1862–1939) and Eugene Grace (1876–1960) made Bethlehem Steel the second-largest American steel company by the 1920s. Schwab had been the operating head of Carnegie Steel and US Steel. In 1903 he purchased the small firm Bethlehem Steel, and in 1916 made Grace president. Innovation was the keynote at a time when U.S. Steel under Judge Elbert Henry Gary moved slowly. Bethlehem concentrated on government contracts, such as ships and naval armor, and on construction beams, especially for skyscrapers and bridges. Its subsidiary Bethlehem Shipbuilding Corporation operated 15 shipyards in World War II. It produced 1,121 ships, more than any other builder during the war and nearly one-fifth of the U.S. Navy's fleet. Its peak employment was 180,000 workers, out of a company-wide wartime peak of 300,000. After 1945 Bethlehem doubled its steel capacity, a measure of the widespread optimism in the industry. However the company ignored the new technologies then being developed in Europe and Japan. Seeking labor peace in order to avoid strikes, Bethlehem like the other majors agreed to large wage and benefits increases that kept its costs high. After Grace retired the executives concentrated on short term profits and postponed innovations that led to long-term inefficiency. It went bankrupt in 2001.
Republic Steel
Cyrus Eaton (1883–1979) in 1925 purchased the small Trumbull Steel Company of Warren, Ohio, for $18 million. In the late 1920s he purchased undervalued steel and rubber companies. In 1930, Eaton consolidated his steel holdings into Republic Steel, based in Cleveland; it became the third-largest steel producer in the U.S., after US Steel and Bethlehem Steel.
Unions
The American Federation of Labor (AFL) tried and failed to organize the steelworkers in 1919. Although the strike gained widespread middle-class support because of its demand to end the 12-hour day, it failed, and unionization was postponed until the late 1930s. The mills ended the 12-hour day in the early 1920s.
The second surge of unionization came under the auspices of the militant Congress of Industrial Organizations in the late 1930s, when it set up the Steel Workers Organizing Committee. The SWOC focused almost exclusively on the achievement of a signed contract with "Little Steel" (the major producers except for US Steel). At the grassroots, however, women of the steel auxiliaries, workers on the picket line, and middle-class liberals from across Chicago sought to transform the strike into something larger than a showdown over union recognition. In Chicago, the Little Steel strike raised the possibility that steelworkers might embrace the ‘civic unionism’ that animated the left-led unions of the era. The effort failed, and while the strike was won, the resulting powerful United Steelworkers of America union suppressed grassroots opinions.
Apogee and decline
Integration was the watchword as the various processes were brought together by large corporations, from mining the iron ore to shipping the finished product to wholesalers. The typical steelworks was a giant operation, including blast furnaces, Bessemer converters, open-hearth furnaces, rolling mills, coke ovens and foundries, as well as supported transportation facilities. The largest ones were operated in the region from Chicago to St. Louis to Baltimore, Philadelphia and Buffalo. Smaller operations appeared in Birmingham, Alabama, and in California.
The industry grew slowly but other industries grew even faster, so that by 1967, as the downward spiral began, steel accounted for 4.4% of manufacturing employment and 4.9% of manufacturing output. After 1970 American steel producers could no longer compete effectively with low-wage producers elsewhere. Imports and local mini-mills undercut sales.
Per-capita steel consumption in the U.S. peaked in 1977, then fell by half before staging a modest recovery to levels well below the peak.
Most mills were closed. Bethlehem went bankrupt in 2001. In 1984, Republic merged with Jones and Laughlin Steel Company; the new firm went bankrupt in 2001. US Steel diversified into oil (Marathon Oil was spun off in 2001). Finally US Steel reemerged in 2002 with plants in three American locations (plus one in Europe) that employed fewer than one-tenth the 168,000 workers of 1902. By 2001 steel accounted for only 0.8% of manufacturing employment and 0.8% of manufacturing output.
The world steel industry peaked in 2007. That year, ThyssenKrupp spent $12 billion to build the two most modern mills in the world, in Alabama and Brazil. The worldwide Great Recession starting in 2008, however, with its heavy cutbacks in construction, sharply lowered demand, and prices fell 40%. ThyssenKrupp lost $11 billion on its two new plants, which sold steel below the cost of production. Finally in 2013, ThyssenKrupp offered the plants for sale at under $4 billion.
Legacy
The President of the United States is authorized to declare each May "Steelmark Month" to recognize the contribution made by the steel industry to the United States.
Asia
Japan
Yonekura shows the steel industry was central to the economic development of Japan. The nation's sudden transformation from feudal to modern society in the late nineteenth century, its heavy industrialization and imperialist war ventures in 1900–1945, and the post-World War II high-economic growth, all depended on iron and steel. The other great Japanese industries, such as shipbuilding, automobiles, and industrial machinery are closely linked to steel. From 1850 to 1970, the industry increased its crude steel production from virtually nothing to 93.3 million tons (the third largest in the world).
The government's activist Ministry of International Trade and Industry (MITI) played a major role in coordination. The transfer of technology from the West and the establishment of competitive firms involved far more than buying foreign hardware. MITI located steel mills and organized a domestic market; it sponsored Yawata Steel Company. Japanese engineers and entrepreneurs internally developed the necessary technological and organizational capabilities, planned the transfer and adoption of technology, and gauged demand and sources of raw materials and finances.
India
The Bengal Iron Works was founded at Kulti, Bengal, in 1870 and began production in 1874. The Tata Iron and Steel Company (TISCO) was established by Dorabji Tata in 1907, as part of his father's conglomerate. By 1939 it operated the largest steel plant in the British Empire. The company launched a major modernization and expansion program in 1951.
Prime Minister Jawaharlal Nehru, a believer in socialism, decided that the technological revolution in India needed maximization of steel production. He therefore formed a government-owned company, Hindustan Steel Limited (HSL), and set up three steel plants in the 1950s.
The Indian steel industry began expanding into Europe in the 21st century. In January 2007 India's Tata Steel made a successful $11.3 billion offer to buy European steel maker Corus Group. In 2006, Mittal Steel Company (based in London but with Indian management) merged with Arcelor after a takeover bid for $34.3 billion to become ArcelorMittal (based in Luxembourg City), with 10% of the world's output.
China
Communist party Chairman Mao Zedong disdained the cities and put his faith in the Chinese peasantry for a Great Leap Forward. Mao saw steel production as the key to overnight economic modernization, promising that within 15 years China's steel production would surpass that of Britain. In 1958 he decided that steel production would double within the year, using backyard steel furnaces run by inexperienced peasants. The plan was a fiasco, as the small amounts of steel produced were of very poor quality, and the diversion of resources out of agriculture produced a massive famine in 1959–61 that killed millions.
With economic reforms brought in by Deng Xiaoping, who led China from 1978 to 1992, China began to develop a modern steel industry by building new steel plants and recycling scrap metal from the United States and Europe. As of 2013 China produced 779 million metric tons of steel each year, making it by far the largest steel producing country in the world. This is compared to 165 for the European Union, 110 for Japan, 87 for the United States and 81 for India. China's 2013 steel production was equivalent to an average of 3.14 cubic meters of steel per second.
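The cubic-meters-per-second figure can be verified with a short back-of-envelope calculation. The Python sketch below assumes a typical steel density of about 7,850 kg per cubic meter; the density is an assumed value, not taken from the article:

# Rough check of the "3.14 cubic meters of steel per second" figure for 2013.
# The density of steel (7,850 kg/m^3) is an assumed typical value.
seconds_per_year = 365.25 * 24 * 3600
annual_output_kg = 779e6 * 1000        # 779 million metric tons, in kilograms
volume_m3 = annual_output_kg / 7850.0  # total volume of steel produced
print(volume_m3 / seconds_per_year)    # prints roughly 3.14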
See also
American Iron and Steel Institute
British Steel Corporation
Dominion Steel and Coal Corporation, in Canada
Steelmaking
References
Bibliography
Ashton, T. S. Iron and Steel in the Industrial Revolution (2nd edn., 1951).
Bernal, John Desmond, Science and Industry in the Nineteenth Century, Indiana University Press, 1970.
D’Costa, Anthony P. The Global Restructuring of the Steel Industry: Innovations, Institutions, and Industrial Change London: Routledge, 1999
Hasegawa, Harukiyu. The Steel Industry in Japan: A Comparison with Britain 1996
Landes, David S., The Unbound Prometheus: Technical Change and Industrial Development in Western Europe from 1750 to the Present (2nd ed. Cambridge University Press, 2003)
Pounds, Norman J. G., and William N. Parker; Coal and Steel in Western Europe; the Influence of Resources and Techniques on Production (Indiana University Press, 1957)
Singer, Charles Joseph, ed. A history of technology: vol 4: The Industrial Revolution c 1750–c 1860 (1960) ch 4, and vol 5: The Late Nineteenth Century, c 1850–c 1900, ch 3; online at ACLS e-books
Stoddard, Brooke C. Steel: From Mine to Mill, the Metal that Made America (2015) short, global popular history excerpt
Woytinsky, W. S., and E. S. Woytinsky. World Population and Production Trends and Outlooks (1953) pp 1098–1143, with many tables and maps on the worldwide steel industry
Yonekura, Seiichiro. The Japanese iron and steel industry: Continuity and discontinuity, 1850–1970 (1994) excerpt and text search
Britain
Birch, Alan. Economic History of the British Iron and Steel Industry (Routledge, 2013).
Burn, D. L. “Recent Trends in the History of the Steel Industry.” Economic History Review, 17#2 1947, pp. 95–102. online.
Burn, Duncan. The Steel Industry, 1939–1959: A Study in Competition and Planning (1961)
Burn, Duncan. The Economic History of Steelmaking, 1867–1939: A Study in Competition. Cambridge University Press, 1961
Carr, J. C. and W. Taplin; History of the British Steel Industry Harvard University Press, 1962
Tweedale, Geoffrey. Steel City: Entrepreneurship, Strategy, and Technology in Sheffield, 1743–1993. (Oxford U.P. 1995)
Vaizy, John. The history of British steel (1974), well illustrated
Warren, Kenneth. British Iron and Sheet Steel Industry since 1840 (1970) Economic geography.
United States
Hoerr, John P. And the Wolf Finally Came: The Decline of the American Steel Industry (1988) excerpt and text search
Hogan, William T. Economic History of the Iron and Steel Industry in the United States (5 vol 1971) monumental detail
Ingham, John N. The Iron Barons: A Social Analysis of an American Urban Elite, 1874-1965 (1978)
Krass, Peter. Carnegie (2002). .
Livesay, Harold C. Andrew Carnegie and the Rise of Big Business, 2nd Edition (1999). .
Misa, Thomas J. A Nation of Steel: The Making of Modern America, 1865–1925 (1995) Chapter 1 "The Dominance of Rails"
Nasaw, David. Andrew Carnegie (The Penguin Press, 2006).
Paskoff, Paul F. Iron and Steel in the Nineteenth Century (Encyclopedia of American Business History and Biography) (1989) 385 pp; biographies and brief corporate histories
Rogers, Robert P. An Economic History of the American Steel Industry (2009) excerpt and text search
Scamehorn, H. Lee. Mill & Mine: The Cf&I in the Twentieth Century University of Nebraska Press, 1992
Scheuerman, William. The Steel Crisis: The Economics and Politics of a Declining Industry (1986)
Skrabec Jr, Quentin R. The Carnegie Boys: The Lieutenants of Andrew Carnegie that Changed America (McFarland, 2012).
Seely, Bruce E., ed The Iron and Steel Industry in the 20th Century (1994) (Encyclopedia of American Business History and Biography)
Temin, Peter. Iron and Steel in Nineteenth-Century America, An Economic Inquiry (1964)
Wall, Joseph Frazier. Andrew Carnegie (1989). .
Warren, Kenneth, Big Steel: The First Century of the United States Steel Corporation, 1901–2001. (University of Pittsburgh Press, 2001) online review
Warren, Kenneth. Bethlehem Steel: Builder and Arsenal of America (2010) excerpt and text search
Warren, Kenneth. The American Steel Industry, 1850–1970: A Geographical Interpretation (1973) ()
Whaples, Robert. "Andrew Carnegie", EH.Net Encyclopedia of Economic and Business History online
U.S. Steel's History of U.S. Steel
Urofsky, Melvin I. Big Steel and the Wilson Administration: A Study in Business-Government Relations (1969)
U.S. Labor
Brody, David. Labor in Crisis: The Steel Strike of 1919 (1965)
Mary Margaret Fonow; Union Women: Forging Feminism in the United Steelworkers of America (University of Minnesota Press, 2003)
Primary sources
U.S. Commissioner of Corporations. Report on the Steel Industry (1913).
Warne, Colston E. ed. The Steel Strike of 1919 (1963), primary and secondary documents
History of Steel Industry
Industrial Revolution
History of metallurgy
steel industry
Amalgamated Association of Iron and Steel Workers
History of the United Steelworkers
U.S. Steel | History of the steel industry (1850–1970) | [
"Chemistry",
"Materials_science"
] | 7,538 | [
"Metallurgy",
"History of metallurgy"
] |
8,912,200 | https://en.wikipedia.org/wiki/P2Y%20receptor | P2Y receptors are a family of purinergic G protein-coupled receptors, stimulated by nucleotides such as adenosine triphosphate, adenosine diphosphate, uridine triphosphate, uridine diphosphate and UDP-glucose. To date, 8 P2Y receptors have been cloned in humans: P2Y1, P2Y2, P2Y4, P2Y6, P2Y11, P2Y12, P2Y13 and P2Y14.
P2Y receptors are present in almost all human tissues where they exert various biological functions based on their G-protein coupling. P2Y receptors mediate responses including vasodilation, blood clotting, and immune response. Due to their ubiquity and variety in function, they are a common biological target in pharmacological development.
Structure
P2Y receptors are membrane proteins belonging to the class A family of G protein-coupled receptors (GPCRs). P2Y receptor proteins display large-scale structural domains typical of GPCRs, consisting of seven hydrophobic transmembrane helices connected by three short extracellular loops and three variably sized intracellular loops; an extracellular N-terminus; and an intracellular C-terminus. The extracellular regions interact with the receptor ligands, while the intracellular regions activate the G protein, control receptor internalization, and mediate dimerization. Similar to other GPCRs, P2Y receptors can form both homodimers and heterodimers. These dimeric forms may vary significantly in their biochemical and pharmacological properties from the monomeric receptor.
In addition to the structural domains typical of all GPCRs, some structural elements are common across P2Y receptor subtypes. All P2Y receptors contain four extracellular cysteine residues which can form two disulfide bridges, one between the N-terminus domain and the proximal extracellular loop and another between the two remaining extracellular loops. These disulfide bonds have been shown to be involved in ligand binding and signal transduction. In addition, several polar residues found within the transmembrane helices are highly conserved across both species and receptor subtypes. Mutational analysis has suggested that these residues are integral to the ligand-binding mechanism of P2Y receptors. Outside of these conserved regions, the P2Y receptor family exhibits unusually high diversity in primary structure, with P2Y1 sharing only 19% of its primary structure with P2Y12. Despite this, the individual P2Y subtypes are highly conserved across species, with human and mouse P2Y receptors sharing 95% of amino acids.
The ligand-binding mechanisms of P2Y receptors are not currently well established. The binding complex of P2Y receptors with ATP is of significant interest, as no P2Y receptor contains amino acids sequences similar to any of the many established ATP-binding sites. Recent x-ray crystallography of the human P2Y12 receptor has shown several structural irregularities in regions that are typically highly conserved across GPCRs.
In contrast to the unusual structure and behavior of the extracellular ligand binding domains, P2Y intracellular domains appear to be structurally and mechanistically similar to other GPCRs.
Signal transduction
P2Y receptors respond either positively or negatively to the presence of nucleotides in extracellular solution. Nucleotides may be divided into two categories: purines and pyrimidines. Individual P2Y receptor species may respond to only purines, only pyrimidines, or both. The principal physiological agonists of the eight known human P2Y receptors are: ADP for P2Y1, P2Y12 and P2Y13; ATP and UTP for P2Y2; UTP for P2Y4; UDP for P2Y6; ATP for P2Y11; and UDP-glucose and other UDP-sugars for P2Y14.
The activity of P2Y receptors is linked to a signal cascade originating in regulation of the flow of Ca2+ and K+ ions by the receptor's interactions with G proteins, modulating access to Ca2+ and K+ channels, though the exact behavior is dependent upon individual receptor species. Voltage-independent Ca2+ channels allow for the free flow of Ca2+ ions from the cell activated by P2Y receptors. Oscillation of Ca2+ concentration is directly affected by the signal-transduction activity of P2Y1; specifically, through protein kinase C phosphorylation of Thr339 in the carboxy terminus of the P2Y1 receptor.
Changes in the concentration of Ca2+ have many important ramifications for the cell, including regulation of cell metabolism (e.g. autophagy initiation and regulation), ATP production (through Ca2+ entering via the outer mitochondrial membrane and stimulating mitochondrial dehydrogenases, leading to the production of ATP), and the possibility of triggering apoptosis. Both autophagy and apoptosis are cell stress responses that play significant roles in cells' overall life cycles, though autophagy seeks to preserve the viability of the cell by recycling parts of organelles, while apoptosis acts in the interest of the whole organism at the expense of the cell undergoing apoptosis.
Pharmacology
Many commonly prescribed medications target P2Y receptors, and active research is being conducted into developing new drugs targeting these receptors. The most commonly prescribed drug targeting P2Y receptors is clopidogrel, an antiplatelet medication which acts on the P2Y12 receptor in a manner shared with other thienopyridines. Other pharmaceutical applications include denufosol, which targets P2Y2 and is being investigated for the treatment of cystic fibrosis, and diquafosol, a P2Y2 agonist used in the treatment of dry eye disease.
P2Y6 receptors have been shown to play a role in cerebral vasodilation. UDP-analogs which bind to this receptor have been investigated as possible treatments for migraines.
P2Y11 is a regulator of immune response, and a common polymorphism carried by almost 20% of North European Caucasians gives an increased risk of myocardial infarction, making P2Y11 an interesting drug target candidate for treatment of myocardial infarction.
In addition to established uses, pharmaceutical research has been conducted into the role of P2Y receptors in osteoporosis, diabetes, and cardio-protection.
Coupling
The biological effects of P2Y receptor activation depend on how the receptors couple to downstream signalling pathways, via Gi, Gq/11 or Gs G proteins. Human P2Y receptors have the following G protein coupling: P2Y1, P2Y2, P2Y4 and P2Y6 couple to Gq/11; P2Y11 couples to both Gq/11 and Gs; and P2Y12, P2Y13 and P2Y14 couple to Gi.
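As an illustration only, the agonist and coupling information above can be collected into a small lookup table. The following Python sketch uses illustrative names and mirrors the lists given in this article rather than any pharmacological database:

# Human P2Y subtypes with principal agonists and G-protein coupling,
# as summarized in this article (illustrative data structure only).
P2Y = {
    "P2Y1":  (["ADP"],         "Gq/11"),
    "P2Y2":  (["ATP", "UTP"],  "Gq/11"),
    "P2Y4":  (["UTP"],         "Gq/11"),
    "P2Y6":  (["UDP"],         "Gq/11"),
    "P2Y11": (["ATP"],         "Gq/11 and Gs"),
    "P2Y12": (["ADP"],         "Gi"),
    "P2Y13": (["ADP"],         "Gi"),
    "P2Y14": (["UDP-glucose"], "Gi"),
}

def receptors_for(nucleotide):
    # Return the subtypes that list the given nucleotide as an agonist.
    return [name for name, (agonists, _) in P2Y.items() if nucleotide in agonists]

print(receptors_for("ADP"))  # ['P2Y1', 'P2Y12', 'P2Y13']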
The gaps in P2Y receptor numbering arise because several receptors (P2Y3, P2Y5, P2Y7, P2Y8, P2Y9, P2Y10) were thought to be P2Y receptors when they were cloned, but in fact are not.
See also
Receptor (biochemistry)
Purinergic signalling
Membrane protein
Receptor theory
References
External links
Ivar von Kügelgen: Pharmacology of mammalian P2X- and P2Y-receptors, BIOTREND Reviews No. 03, September 2008, © 2008 BIOTREND Chemicals AG
G protein-coupled receptors | P2Y receptor | [
"Chemistry"
] | 1,487 | [
"G protein-coupled receptors",
"Signal transduction"
] |
8,912,313 | https://en.wikipedia.org/wiki/Flower%20mantis | Flower mantises are praying mantises that use a special form of camouflage referred to as aggressive mimicry, which they use not only to attract prey but also to avoid predators. These insects have specific colorations and behaviors that mimic flowers in their surrounding habitats.
This strategy has been observed in other mantises, including the stick mantis and dead-leaf mantis. The observed behavior of these mantises includes positioning themselves on a plant, either within the inflorescence or on the foliage, until a prey insect comes within range.
Many species of flower mantises are popular as pets. The flower mantises form a diurnal group with a single ancestry (a clade); the majority of the known species belong to the family Hymenopodidae.
Example species: Orchid mantis
The orchid mantis, Hymenopus coronatus of southeast Asia mimics orchid flowers. There is no evidence that suggests that they mimic a specific orchid, but their bodies are often white with pink markings and green eyes. These insects display different body morphologies depending on their life stage; juveniles are able to bend their abdomens upwards, allowing them to easily resemble a flower. However, the adult's wings are too large, inhibiting their ability to bend as the juveniles do. This dichotomy suggests that there must be other processes involved to attract insect prey species. Since Hymenopus coronatus do not mimic one orchid in particular, their colorations often do not match the coloration of a single orchid species.
Antipredator behaviour
One mechanism displayed by the orchid mantis to attract prey is the ability to absorb UV light the same way that flowers do. This makes the mantis appear flower-like to UV-sensitive insects who are often pollinators. To an insect, the mantis and the surrounding flowers appear blue; this contrasts against the foliage in the background that appears red.
In his 1940 book Adaptive Coloration in Animals, Hugh Cott quotes an account by Nelson Annandale, saying that the mantis hunts on the flowers of the "Straits Rhododendron", Melastoma polyanthum. The nymph has what Cott calls "special alluring coloration" (aggressive mimicry), where the animal itself is the "decoy". The insect is pink and white, with flattened limbs with "that semiopalescent, semicrystalline appearance that is caused in flower petals by a purely structural arrangement of liquid globules or empty cells". The mantis climbs up the twigs of the plant and stands imitating a flower and waits for its prey patiently. It then sways from side to side, and soon small flies land on and around it, attracted by the small black spot on the end of its abdomen, which resembles a fly. When a larger dipteran fly, as big as a house fly, landed nearby, the mantis at once seized and ate it. More recently (2015), the orchid mantis's coloration has been shown to mimic tropical flowers effectively, attracting pollinators and catching them.
Juvenile mantises secrete a mixture of the chemicals 3HOA and 10HDA, attracting their top prey species, the oriental honey bee. This method of deception is aggressive chemical mimicry, imitating the chemical composition of the bee's pheromones. The chemicals are stored in the mandibles and released when H. coronatus is hunting. Adult mantises do not produce these chemicals.
Taxonomic range
The flower mantises include species from several genera, many of which are popularly kept as pets. Seven of the genera are in the Hymenopodidae.
See also
List of mantis genera and species
References
Further reading
Wickler, Wolfgang (1968). Mimicry in plants and animals. McGraw-Hill, New York.
Mantodea
Mimicry
Insect common names | Flower mantis | [
"Biology"
] | 804 | [
"Mimicry",
"Biological defense mechanisms"
] |
8,912,350 | https://en.wikipedia.org/wiki/Discrete%20measure | In mathematics, more precisely in measure theory, a measure on the real line is called a discrete measure (in respect to the Lebesgue measure) if it is concentrated on an at most countable set. The support need not be a discrete set. Geometrically, a discrete measure (on the real line, with respect to Lebesgue measure) is a collection of point masses.
Definition and properties
Given two (positive) σ-finite measures $\mu$ and $\nu$ on a measurable space $(X, \Sigma)$, $\mu$ is said to be discrete with respect to $\nu$ if there exists an at most countable subset $S \subseteq X$ such that
1. All singletons $\{s\}$ with $s \in S$ are measurable (which implies that any subset of $S$ is measurable)
2. $\nu(S) = 0$
3. $\mu(X \setminus S) = 0$
A measure $\mu$ on $(X, \Sigma)$ is discrete (with respect to $\nu$) if and only if $\mu$ has the form
$\mu = \sum_{i=1}^{\infty} a_i \delta_{s_i}$
with $a_i \in \mathbb{R}_{>0}$ and Dirac measures $\delta_{s_i}$ on the set $S = \{s_i : i \in \mathbb{N}\}$ defined as
$\delta_{s_i}(B) = \begin{cases} 1 & \text{if } s_i \in B \\ 0 & \text{if } s_i \notin B \end{cases}$
for all measurable sets $B$ and all $i$.
One can also define the concept of discreteness for signed measures. Then, instead of conditions 2 and 3 above, one should ask that $\nu$ be zero on all measurable subsets of $S$ and $\mu$ be zero on all measurable subsets of $X \setminus S$.
Example on $\mathbb{R}$
A measure $\mu$ defined on the Lebesgue measurable sets of the real line with values in $[0, \infty]$ is said to be discrete if there exists a (possibly finite) sequence of numbers
$s_1, s_2, \dots$
such that
$\mu(\mathbb{R} \setminus \{s_1, s_2, \dots\}) = 0.$
Notice that the first two requirements in the previous section are always satisfied for an at most countable subset of the real line if $\nu$ is the Lebesgue measure.
The simplest example of a discrete measure on the real line is the Dirac delta function $\delta$. One has $\delta(\mathbb{R} \setminus \{0\}) = 0$ and $\delta(\{0\}) = 1$.
More generally, one may prove that any discrete measure on the real line has the form
$\mu = \sum_{i} a_i \delta_{s_i}$
for an appropriately chosen (possibly finite) sequence $s_1, s_2, \dots$ of real numbers and a sequence $a_1, a_2, \dots$ of numbers in $[0, \infty]$ of the same length.
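As a concrete worked instance (an added illustration, not one of the article's original examples), the Poisson distribution with parameter $\lambda > 0$ defines a discrete measure on the real line:
$\mu = \sum_{k=0}^{\infty} e^{-\lambda} \frac{\lambda^k}{k!} \delta_k.$
It is concentrated on $S = \{0, 1, 2, \dots\}$, satisfies $\mu(\mathbb{R} \setminus S) = 0$, and has total mass $\sum_{k \geq 0} e^{-\lambda} \lambda^k / k! = 1$.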
See also
References
External links
Measures (measure theory) | Discrete measure | [
"Physics",
"Mathematics"
] | 352 | [
"Measures (measure theory)",
"Quantity",
"Physical quantities",
"Size"
] |
8,912,941 | https://en.wikipedia.org/wiki/Turner%20Controversy | The Turner Controversy was a dispute within the Socialist Party of Great Britain (SPGB), instigated by party member Tony Turner, regarding the nature of socialism. The dispute ultimately led to an exodus of members who formed the short-lived Movement for Social Integration.
When membership and activity was at a peak in the period after the Second World War, Turner began giving lectures for the party on what he envisioned socialism would be like. The content of these lectures led him to develop a position that caused controversy in the party by the early to mid-1950s and which was elaborated by Turner and his supporters in articles in the party’s internal discussion journal of the time, Forum.
Three interlocking propositions underpinned the ‘Turnerite’ viewpoint:
that the society of mass consumerism and automated labour which capitalism had become had to be swept away in its entirety if alienation was to be abolished and a truly human community created. This meant a return to pre-industrial methods of production, on lines inspired by Tolstoy and William Morris’ News From Nowhere.
that the creation of the new socialist society was not simply in the interests of the working class but was in the interests of the whole of humanity, irrespective of class, a proposition they thought it essential for the Party to recognise in its everyday propaganda, and
the means of creating the new peaceful and cooperative society had to be entirely peaceful, indeed pacifist (and in the view of some, possibly even gradual).
This view was in direct contradiction to the party's 'Declaration of Principles', which identifies socialism as being the product of class struggle and which claims that the socialist movement will organise for the capture of political power, including power over the state's coercive machinery, which can be wielded to repress those who resist the imposition of socialism.
A series of acrimonious disputes between the ‘Turnerites’ and the majority of the party culminated in a party referendum and then a resolution being carried at the 1955 party conference to the effect that all members not in agreement with the Declaration of Principles be asked to resign. Turner, having survived a previous attempt to expel him, promptly did so, along with a number of other members including Joan Lestor (later to become a Labour minister) and the psychologist John Rowan. Some of these ex-members formed a short-lived Movement for Social Integration, though the impact the dispute had on the party as a whole was almost entirely disruptive and negative.
See also
Socialist Party of Great Britain breakaway groups#The Movement for Social Integration
Luddism
References
Bibliography
Socialist Party of Great Britain
Political controversies in Europe
Deep ecology | Turner Controversy | [
"Biology",
"Environmental_science"
] | 536 | [
"Biological hypotheses",
"Deep ecology",
"Biophilia hypothesis",
"Environmental ethics"
] |
8,912,956 | https://en.wikipedia.org/wiki/Programming%20by%20demonstration | In computer science, programming by demonstration (PbD) is an end-user development technique for teaching a computer or a robot new behaviors by directly demonstrating the task to be transferred, instead of programming it through machine commands.
The terms programming by example (PbE) and programming by demonstration (PbD) appeared in software development research as early as the mid 1980s to describe a way of specifying a sequence of operations without having to learn a programming language. The usual distinction in the literature between these terms is that in PbE the user gives a prototypical product of the computer execution, such as a row in the desired results of a query, while in PbD the user performs a sequence of actions that the computer must repeat, generalizing it so it can be used on different data sets.
At first the two terms were used interchangeably, but PbE then tended to be adopted mostly by software development researchers while PbD tended to be adopted by robotics researchers. Today, PbE refers to an entirely different concept, supported by new programming languages that are similar to simulators. This framework can be contrasted with Bayesian program synthesis.
Robot programming by demonstration
The PbD paradigm was first attractive to the robotics industry due to the costs involved in the development and maintenance of robot programs. In this field, the operator often has implicit knowledge of the task to achieve (he/she knows how to do it), but does not usually have the programming skills (or the time) required to reconfigure the robot. Demonstrating how to achieve the task through examples thus makes it possible to teach the skill without explicitly programming each detail.
The first PbD strategies proposed in robotics were based on teach-in, guiding or play-back methods that consisted basically in moving the robot (through a dedicated interface or manually) through a set of relevant configurations that the robot should adopt sequentially (position, orientation, state of the gripper). The method was then progressively improved by focusing principally on teleoperation control and by using different interfaces such as vision.
However, these PbD methods still used direct repetition, which was useful in industry only when conceiving an assembly line using exactly the same product components. To apply this concept to products with different variants, or to apply the programs to new robots, generalization became a crucial issue. To address it, the first attempts at generalizing the skill were mainly based on the help of the user, through queries about the user's intentions. Then, different levels of abstraction were proposed to resolve the generalization issue, basically dichotomized into learning methods at a symbolic level or at a trajectory level.
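A minimal sketch of the trajectory-level approach is to time-normalize a handful of demonstrations and average them; everything below (the synthetic demonstrations, the function names, and the simple resample-and-average strategy) is illustrative rather than any specific published method.

import numpy as np

def resample(traj, n=100):
    # Time-normalize one demonstration (a T x D array) to n samples
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n)
    return np.array([np.interp(t_new, t_old, traj[:, d])
                     for d in range(traj.shape[1])]).T

def generalize(demos, n=100):
    # Average the aligned demonstrations; the per-step standard
    # deviation can serve as a tolerance band around the mean motion
    aligned = np.stack([resample(d, n) for d in demos])
    return aligned.mean(axis=0), aligned.std(axis=0)

# Three synthetic demonstrations of the same 1-DOF reaching movement
demos = [np.cumsum(np.random.normal(0.01, 0.002, (k, 1)), axis=0)
         for k in (80, 95, 110)]
mean_traj, tolerance = generalize(demos)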
The development of humanoid robots naturally brought a growing interest in robot programming by demonstration. As a humanoid robot is by its nature supposed to adapt to new environments, not only is a human-like appearance important, but the algorithms used for its control also require flexibility and versatility. Due to continuously changing environments and to the huge variety of tasks that a robot is expected to perform, the robot requires the ability to continuously learn new skills and adapt existing skills to new contexts.
Research in PbD also progressively departed from its original purely engineering perspective to adopt an interdisciplinary approach, taking insights from neuroscience and social sciences to emulate the process of imitation in humans and animals. With the increasing consideration of this body of work in robotics, the notion of Robot programming by demonstration (also known as RPD or RbD) was also progressively replaced by the more biological label of Learning by imitation.
Neurally-imprinted Stable Vector Fields (NiVF)
Neurally-imprinted Stable Vector Fields (NiVF) were introduced as a novel learning scheme at ESANN 2013 and show how to imprint vector fields into neural networks such as extreme learning machines (ELMs) in a guaranteed stable manner; the paper won the best student paper award. The networks represent movements, where asymptotic stability is incorporated through constraints derived from Lyapunov stability theory. It is shown that this approach successfully performs stable and smooth point-to-point movements learned from human handwriting movements.
It is also possible to learn the Lyapunov candidate that is used for stabilization of the dynamical system. For this, a neural learning scheme that estimates stable dynamical systems from demonstrations in a two-stage process is needed: first, a data-driven Lyapunov function candidate is estimated; second, stability is incorporated by means of a method that respects local constraints in the neural learning. This allows stable dynamics to be learned while sustaining the accuracy of the dynamical system and robustly generating complex movements.
Diffeomorphic Transformations
Diffeomorphic transformations turn out to be particularly suitable for substantially increasing the learnability of dynamical systems for robotic motions. The stable estimator of dynamical systems (SEDS) is an interesting approach to learning time-invariant systems for controlling robotic motions; however, it is restricted to dynamical systems with quadratic Lyapunov functions. The newer approach Tau-SEDS overcomes this limitation in a mathematically elegant manner.
Parameterized skills
After a task has been demonstrated by a human operator, the trajectory is stored in a database. Parameterized skills provide easier access to this raw data: a skill queries the database and generates a trajectory. For example, the skill "opengripper(slow)" is first sent to the motion database, and in response the stored movement of the robot arm is returned. The parameters of a skill allow the policy to be modified to fulfill external constraints.
A skill is an interface between task names, given in natural language, and the underlying spatiotemporal movement in 3D space, which consists of points. Individual skills can be combined into a task to define longer motion sequences from a high-level perspective. For practical applications, the different actions are stored in a skill library; a minimal sketch of such a library is given below. To raise the abstraction level further, skills can be converted into dynamic movement primitives (DMPs), which generate a robot trajectory on the fly that was unknown at the time of the demonstration. This increases the flexibility of the solver.
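The following sketch shows one possible shape of a parameterized skill library; the skill function, the motion database dictionary, and the speed parameter handling are all hypothetical, invented for illustration rather than taken from any published PbD system.

from typing import Dict, List

# Hypothetical motion database: task name -> recorded joint-space trajectory
MOTION_DB: Dict[str, List[List[float]]] = {
    "opengripper": [[0.0], [0.3], [0.6], [1.0]],
}

def skill(name: str, speed: str = "normal") -> List[List[float]]:
    # A parameterized skill looks up the demonstrated trajectory and
    # adapts it to the requested speed by duplicating or skipping samples
    traj = MOTION_DB[name]
    if speed == "slow":        # play every sample twice
        return [p for p in traj for _ in (0, 1)]
    if speed == "fast":        # keep every second sample
        return traj[::2]
    return traj

# Skills are composed into a task from a high-level perspective
task = skill("opengripper", "slow")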
Non-robotic use
For end users who want to automate a workflow in a complex tool (e.g. Photoshop), the simplest case of PbD is the macro recorder.
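A macro recorder reduces to recording a sequence of actions and replaying it later; the sketch below, whose select/fill actions are invented stand-ins for a real tool's commands, only shows the principle.

from typing import Any, Callable, List, Tuple

class MacroRecorder:
    # Records every action the user performs so the whole
    # sequence can be replayed later (the core PbD idea)
    def __init__(self) -> None:
        self.tape: List[Tuple[Callable, tuple]] = []

    def do(self, action: Callable, *args: Any) -> None:
        self.tape.append((action, args))   # record the action
        action(*args)                      # and execute it immediately

    def replay(self) -> None:
        for action, args in self.tape:
            action(*args)

# Invented example actions standing in for a real tool's commands
def select(region): print(f"select {region}")
def fill(color): print(f"fill {color}")

rec = MacroRecorder()
rec.do(select, "background")
rec.do(fill, "red")
rec.replay()   # repeats the demonstrated workflow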
See also
Programming by example
Intentional programming
Inductive programming
Macro recorder
Supervised learning
References
External links
Review papers
Robots that imitate humans, Cynthia Breazeal and Brian Scassellati, Trends in Cognitive Sciences, 6:1, 2002, pp. 481–87
Videos
A robot that learns to cook an omelet.
A robot that learns to unscrew a bottle of coke.
User interfaces
Programming paradigms | Programming by demonstration | [
"Technology"
] | 1,397 | [
"User interfaces",
"Interfaces"
] |
8,913,472 | https://en.wikipedia.org/wiki/OSSEC | OSSEC (Open Source HIDS SECurity) is a free, open-source host-based intrusion detection system (HIDS). It performs log analysis, integrity checking, Windows registry monitoring, rootkit detection, time-based alerting, and active response. It provides intrusion detection for most operating systems, including Linux, OpenBSD, FreeBSD, OS X, Solaris and Windows. OSSEC has a centralized, cross-platform architecture allowing multiple systems to be easily monitored and managed. OSSEC has a log analysis engine that is able to correlate and analyze logs from multiple devices and formats.
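As a toy illustration of log-based rule matching (this is not OSSEC's actual rule engine, whose rules are written in XML, and the patterns below are invented), a minimal analysis pass over log lines might look like:

import re

# Invented rules: (pattern, alert level, description)
RULES = [
    (re.compile(r"Failed password for (\w+) from ([\d.]+)"), 5,
     "SSH authentication failure"),
    (re.compile(r"session opened for user root"), 3,
     "Root session opened"),
]

def analyze(log_lines):
    # Match each log line against the rule set and emit alerts
    for line in log_lines:
        for pattern, level, description in RULES:
            if pattern.search(line):
                yield {"level": level, "rule": description, "log": line}

logs = ["Jan 1 12:00:00 host sshd[42]: Failed password for admin from 10.0.0.5"]
for alert in analyze(logs):
    print(alert)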
History
In June 2008, the OSSEC project and all the copyrights owned by Daniel B. Cid, the project leader, were acquired by Third Brigade, Inc. They promised to continue to contribute to the open source community and to extend commercial support and training to the OSSEC open source community.
In May 2009, Trend Micro acquired Third Brigade and the OSSEC project, with promises to keep it open source and free.
In 2018, Trend released the domain name and source code to the OSSEC Foundation.
The OSSEC project is currently maintained by Atomicorp, which stewards the free and open source version and also offers a commercial version.
Characteristics
OSSEC consists of a main application, an agent, and a web interface.
Manager (or server), which is required for distributed network or stand-alone installations.
Agent, a small program installed on the systems to be monitored.
Agentless mode, which can be used to monitor firewalls, routers, and even Unix systems.
Features
Log based Intrusion Detection (LID): Actively monitors and analyzes data from multiple log data points in real-time.
Rootkit and Malware Detection: Process and file level analysis to detect malicious applications and rootkits.
Active Response: Respond to attacks and changes on the system in real time through multiple mechanisms including firewall policies, integration with 3rd parties such as CDN's and support portals, as well as self-healing actions.
Compliance Auditing: Application and system level auditing for compliance with many common standards such as PCI-DSS, and CIS benchmarks.
File Integrity Monitoring (FIM): For both files and Windows registry settings, in real time; it not only detects changes to the system but also maintains a forensic copy of the data as it changes over time (a minimal sketch of the idea follows this list).
System Inventory: Collects system information, such as installed software, hardware, utilization, network services, listeners and other information.
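The following is a minimal sketch of the file-integrity-monitoring idea, reduced to a hash-baseline comparison; the watched path is an assumption, and OSSEC's real syscheck engine is considerably more elaborate.

import hashlib
from pathlib import Path

def snapshot(paths):
    # Build a baseline: file path -> SHA-256 digest of its contents
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def diff(baseline, current):
    # Report files whose digest changed since the baseline was taken
    return [p for p, digest in current.items() if baseline.get(p) != digest]

watched = ["/etc/hostname"]   # assumed to exist on the monitored host
baseline = snapshot(watched)
# ... later, on the next scheduled scan ...
changed = diff(baseline, snapshot(watched))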
See also
Host-based intrusion detection system comparison
References
External links
Computer network security
Free network-related software
Free security software
Intrusion detection systems
Linux security software
Internet Protocol based network software | OSSEC | [
"Engineering"
] | 551 | [
"Cybersecurity engineering",
"Computer networks engineering",
"Computer network security"
] |
8,914,599 | https://en.wikipedia.org/wiki/Helix%E2%80%93coil%20transition%20model | Helix–coil transition models are formalized techniques in statistical mechanics developed to describe conformations of linear polymers in solution. The models are usually but not exclusively applied to polypeptides as a measure of the relative fraction of the molecule in an alpha helix conformation versus turn or random coil. The main attraction in investigating alpha helix formation is that one encounters many of the features of protein folding but in their simplest version. Most of the helix–coil models contain parameters for the likelihood of helix nucleation from a coil region, and helix propagation along the sequence once nucleated; because polypeptides are directional and have distinct N-terminal and C-terminal ends, propagation parameters may differ in each direction.
The two states are
helix state: characterized by a common rotating pattern kept together by hydrogen bonds, (see alpha-helix).
coil state: conglomerate of randomly ordered sequence of atoms (see random coil).
Common transition models include the Zimm–Bragg model and the Lifson–Roig model, and their extensions and variations.
The energy of a host poly-alanine helix in aqueous solution is expressed as a function of m, the number of residues in the helix.
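The Zimm–Bragg model mentioned above can be evaluated with a 2×2 transfer matrix. The sketch below uses the standard statistical weights (propagation parameter s, nucleation parameter σ) with illustrative numerical values to compute the partition function and the mean fraction of helical residues.

import numpy as np

def zimm_bragg_Z(m, s, sigma):
    # Partition function via the 2x2 transfer matrix over (helix, coil)
    # states: weight s for helix propagation, sigma*s for nucleation
    M = np.array([[s, 1.0],
                  [sigma * s, 1.0]])
    v = np.array([0.0, 1.0])   # the chain starts in the coil state
    u = np.array([1.0, 1.0])   # sum over the final state
    return v @ np.linalg.matrix_power(M, m) @ u

def helicity(m, s, sigma, eps=1e-6):
    # Mean helical fraction: theta = (s / m) * d ln Z / d s (numerical derivative)
    lnZ = lambda x: np.log(zimm_bragg_Z(m, x, sigma))
    return s * (lnZ(s + eps) - lnZ(s - eps)) / (2 * eps) / m

print(helicity(m=40, s=1.1, sigma=1e-3))   # illustrative parameter values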
References
Protein structure
Statistical mechanics
Thermodynamic models | Helix–coil transition model | [
"Physics",
"Chemistry"
] | 249 | [
"Statistical mechanics stubs",
"Thermodynamic models",
"Thermodynamics",
"Structural biology",
"Statistical mechanics",
"Protein structure"
] |
8,915,376 | https://en.wikipedia.org/wiki/Displacement%20activity | Displacement activities occur when an animal or human experiences high motivation for two or more conflicting behaviours: the resulting displacement activity is usually unrelated to the competing motivations. Birds, for example, may peck at grass when uncertain whether to attack or flee from an opponent; similarly, a human may scratch their head when they do not know which of two options to choose. Displacement activities may also occur when animals are prevented from performing a single behaviour for which they are highly motivated. Displacement activities often involve actions which bring comfort to the animal such as scratching, preening, drinking or feeding.
In the assessment of animal welfare, displacement activities are sometimes used as evidence that an animal is highly motivated to perform a behaviour that the environment prevents. One example is that when hungry hens are trained to eat from a particular food dispenser and then find the dispenser blocked, they often begin to pace and preen themselves vigorously. These actions have been interpreted as displacement activities, and similar pacing and preening can be used as evidence of frustration in other situations.
Psychiatrist and primatologist Alfonso Troisi proposed that displacement activities can be used as non-invasive measures of stress in primates. He noted that various non-human primates perform self-directed activities such as grooming and scratching in situations likely to involve anxiety and uncertainty, and that these behaviours are increased by anxiogenic (anxiety-producing) drugs and reduced by anxiolytic (anxiety-reducing) drugs. In humans, he noted that similar self-directed behaviour, together with aimless manipulation of objects (chewing pens, twisting rings), can be used as indicators of "stressful stimuli and may reflect an emotional condition of negative affect".
More recently the term 'displacement activity' has been widely adopted to describe a form of procrastination. It is commonly used in the context of what someone does intentionally to keep themselves busy whilst, at the same time, avoiding doing something else that would be a better use of their time.
History of the concept
In 1940, two Dutch researchers Kortlandt and Tinbergen independently identified what was to become known as displacement activities. The subsequent development of research on displacement activities arose from Konrad Lorenz's works on instincts.
Tinbergen in 1952 noted, for example, that "two skylarks engaged in furious combat [may] suddenly peck at the ground as if they were feeding", or birds on the point of mating may suddenly begin to preen themselves. Tinbergen adopted the term "displacement activities" because the behaviour appeared to be displaced from one behavioural system into another.
In 1902, in The Little White Bird, J. M. Barrie refers to sheep in Kensington Gardens nibbling the grass in nervous agitation immediately after being shorn, and to Solomon, the wise crow, drinking water when he was frustrated and outwitted in an argument with other birds. Another bird encourages him to drink in order to compose himself. These references to displacement activities in a work of literature indicate that the phenomenon was well recognized at the turn of the twentieth century. A further early description of a displacement activity (though not the use of the term) is by Julian Huxley in 1914.
See also
Displacement (psychology)
Ethogram
Procrastination
Vacuum activity
References
External links
Cats International page about displacement activities in cats.
Ethology | Displacement activity | [
"Biology"
] | 677 | [
"Behavioural sciences",
"Ethology",
"Behavior"
] |
8,915,378 | https://en.wikipedia.org/wiki/Persin | Persin is a fungicidal toxin present in the avocado. Persin is an oil-soluble compound structurally similar to a fatty acid, a colourless oil, and it leaches into the body of the fruit from the seeds.
The relatively low concentrations of persin in the ripe pulp of the avocado fruit are generally considered harmless to humans. Negative effects in humans occur primarily in allergic individuals. When persin is consumed by domestic animals through the leaves or bark of the avocado tree, or the skins and seeds of the avocado fruit, it is toxic and dangerous.
Presence in the avocado plant
All parts of the avocado — the fruit, leaves, stems, and seeds — contain the toxin. The leaves are the most dangerous part.
Toxicity
Consumption of the leaves and bark of the avocado tree, or the skin and pit of the avocado fruit, has been shown to have the following effects:
In birds, which are particularly sensitive to the avocado toxin, the symptoms are: increased heart rate, myocardial tissue damage, subcutaneous edema of the neck and pectoral regions, labored breathing, disordered plumage, unrest, weakness, apathy and anorexia. High doses cause acute respiratory syndrome (asphyxia), with death approximately 12 to 48 hours after consumption. Caged birds seem to be more sensitive to the effects of persin, whereas, for example, turkeys and chickens seem more resistant.
Lactating rabbits and mice: non-infectious mastitis and agalactia after consumption of leaves or bark.
Rabbits: cardiac arrhythmia, submandibular edema and death after consumption of leaves.
Cows and goats: mastitis, decreased milk production after consumption of leaves or bark. Goats develop severe mastitis after ingesting 20 g/kg of leaves, and 30 g/kg of leaves usually results in cardiac injury.
Horses: clinical effects occur mainly in mares, and includes noninfectious mastitis, as well as occasional gastritis and colic. Swelling of the head, tongue, and brisket may also be present.
Cats, dogs: mild stomach upset may occur, with potential to cause heart damage. Dogs might be more resistant.
Hares, pigs, rats, sheep, ostriches, chickens, turkeys and fish: symptoms of intoxication similar to those described above. The lethal dose is not known; the effect is different depending upon the animal species.
Mice: non-fatal injury to the lactating mammary gland from 60 to 100 mg/kg of persin. Necrosis of myocardial fibres with 100 mg/kg of persin. 200 mg/kg of persin is lethal.
Diagnosis
Diagnosis of avocado toxicosis relies on history of exposure and clinical signs. There are no readily available specific tests that confirm diagnosis.
Treatment
NSAIDs, pain relievers, medications for congestive heart failure.
Additional pharmacology
Animal studies show that exposure to persin leads to apoptosis in certain types of breast cancer cells. It has also been shown to enhance the cytotoxic effect of tamoxifen in vitro. Persin is however highly insoluble in aqueous solutions and more research will be needed to put it into a soluble tablet form.
References
Plant toxins
Veterinary toxicology
Acetate esters | Persin | [
"Chemistry",
"Environmental_science"
] | 697 | [
"Veterinary toxicology",
"Toxicology",
"Chemical ecology",
"Plant toxins"
] |
8,917,050 | https://en.wikipedia.org/wiki/Journal%20of%20Environmental%20Psychology | The Journal of Environmental Psychology is a peer-reviewed academic journal published by Elsevier. Its founding editors, in 1981, were David Canter (University of Liverpool) and Kenneth Craik (University of California, Berkeley). From 2004 to 2016, Robert Gifford (University of Victoria) was the editor-in-chief. In 2017 and 2018, Florian G. Kaiser (Otto-von-Guericke University) and Jeffrey Joireman (Washington State University) were the co-chief editors. From 2019 to 2021 Sander van der Linden (University of Cambridge) was the editor-in-chief. Since 2021, Lindsay J. McCunn and Wesley Schultz have co-edited the journal.
The journal is the primary outlet for academic research in environmental psychology and reports scientific research on all human interactions with the built, social, and natural environment, with an emphasis on the individual and small-group level of analysis. The journal is published in association with the International Association of Applied Psychology (IAAP).
According to the Journal Citation Reports, Journal of Environmental Psychology had a 2020 impact factor of 5.192. This increased in 2021 to 7.649.
References
External links
Elsevier academic journals
English-language journals
Social psychology journals
Environmental social science journals
Academic journals established in 1980
Quarterly journals
Environmental psychology | Journal of Environmental Psychology | [
"Environmental_science"
] | 267 | [
"Environmental psychology",
"Environmental social science journals",
"Environmental science journals",
"Environmental social science stubs",
"Environmental science journal stubs",
"Environmental social science"
] |
8,917,371 | https://en.wikipedia.org/wiki/Aircraft%20maintenance | Aircraft maintenance is the performance of tasks required to ensure the continuing airworthiness of an aircraft or aircraft part, including overhaul, inspection, replacement, defect rectification, and the embodiment of modifications, compliance with airworthiness directives and repair.
Regulation
The maintenance of aircraft is highly regulated, in order to ensure safe and correct functioning during flight. In civil aviation national regulations are coordinated under international standards, established by the International Civil Aviation Organization (ICAO). The ICAO standards have to be implemented by local airworthiness authorities to regulate the maintenance tasks, personnel and inspection system. Maintenance staff must be licensed for the tasks they carry out.
Major airworthiness regulatory authorities include the US Federal Aviation Administration (FAA), European Union Aviation Safety Agency (EASA), Australian Transport Safety Bureau (ATSB), Transport Canada (TC) and Indian Directorate General of Civil Aviation.
Aircraft maintenance organization
Scheduled maintenance checks
Aircraft maintenance in civil aviation is generally organized using maintenance checks or blocks, which are packages of maintenance tasks that have to be done on an aircraft after a certain amount of time or usage. Packages are constructed by dividing the maintenance tasks into convenient, bite-size chunks to minimize the time the aircraft is out of service, to keep the maintenance workload level, and to maximize the use of maintenance facilities.
Pre-emptive engine change
An engine failure can significantly impact operations and revenue. A programme of calculated pre-emptive engine changes, sometimes referred to as "power by the hour", provides budget predictability, avoids installing a loan unit during repairs when an aircraft part fails and enrolled aircraft may have a better value and liquidity.
This concept was initially introduced for aircraft engines to mitigate unscheduled engine failures. The term was coined by Bristol Siddeley in 1962 to support the Viper engines of British Aerospace 125 business jets for a fixed sum per flying hour. A complete engine and accessory replacement service was provided, allowing the operator to forecast this cost accurately and relieving him from purchasing stocks of engines and accessories.
In the 1980s, Rolls-Royce plc reinstated the program to provide the operator with a fixed engine maintenance cost over an extended period of time. Operators are assured of an accurate cost projection and avoid the breakdowns costs; the term is trademarked by Rolls-Royce but is the common name in the industry. It is an option for operators of several Rolls-Royce aircraft engines. Other aircraft engine manufacturers such as General Electric and Pratt & Whitney offer similar programs.
Jet Support Services provides hourly cost maintenance programs independently of the manufacturers. GEMCO also offers a similar program for piston engines in general aviation aircraft. Bombardier Aerospace offers its Smart Services program, covering parts and maintenance by the hour.
Maintenance release
At the completion of any maintenance task a person authorized by the national airworthiness authority or delegated organization signs a maintenance release stating that maintenance has been performed in accordance with the applicable airworthiness requirements. A maintenance release is sometimes called a certificate of release to service (CRS).
In the case of a certified aircraft this may be a licensed aircraft maintenance engineer, Designated Airworthiness Representative – Maintenance (DAR-T) or holder of an EASA Part-66 Aircraft Maintenance License (AML), while for amateur-built aircraft this may be the owner or builder of the aircraft.
In some countries the Secretary of State may authorise a maintenance organization to grant the certification privilege to staff on their behalf.
Maintenance personnel
The ICAO defines the licensed or rated role of aircraft maintenance by a technician, engineer or mechanic, allowing each contracting state to use whichever of these terms it prefers. Although aircraft maintenance technicians, engineers and mechanics all perform essentially the same role, different countries may use these terms in different ways to define their individual levels of qualification and responsibilities.
Most national and international licensing bodies make a division between the roles of carrying out repair and maintenance on the one hand, and certifying the vehicle or subsystem or component as flightworthy, on the other. ICAO requires that the certification privilege be a delegated function of the nation's responsible Secretary of State. The Secretary of State may authorize another organization to grant the certification privilege to staff on their behalf.
In Europe, licensing is governed by EASA Part-66. A person directly licensed to certify flightworthiness is a holder of a Part-66 AML (Aircraft Maintenance License).
In many other countries, including Australia, Bangladesh, Canada, India, New Zealand and South Africa, a person directly granted the privilege of certification is a qualified AME (Aircraft Maintenance Engineer) or Licensed AME, also written as LAME or L-AME. (Unlicensed mechanics or tradespersons are sometimes informally referred to as "unlicensed AMEs".)
In the US and elsewhere in the Americas, a person rated for aircraft repair and maintenance is a qualified AMT (aircraft maintenance technician), or, colloquially, Airframe and Powerplant (A&P). A person directly designated to exercise the privilege of certification for the work is a DAR-T (Designated Airworthiness Representative – Maintenance).
Roles may be further divided up. In Europe aircraft maintenance personnel must comply with Part 66, Certifying Staff, issued by the European Aviation Safety Agency (EASA). This regulation establishes four levels of authorization:
Level 1: General Familiarisation, Unlicensed
Level 2: Ramp and Transit, Category A
can only certify his/her own work, performed for tasks for which he/she has received documented training
Level 3: Line Certifying Staff and Base Maintenance Supporting Staff, Category B1 (electromechanic) and/or B2(Avionics)
can certify all work performed on an aircraft/engine for which he/she is type rated excluding base maintenance (generally up to and including A-Check)
Level 4: Base Maintenance Certifying Staff, Category C
can certify all work performed on an aircraft/engine for which he/she is type rated, but only if it is base maintenance (additional level-3 staff necessary)
this authorization does not automatically include any level 2 or level 3 license.
Market
Aircraft
The Maintenance, Repair, Overhaul (MRO) Market was US$135.1 billion in 2015, three quarters of the $180.3 billion aircraft production market. Of this, 60% is for civil aviation: air transport 48%, business and general aviation 9%, rotorcraft 3%; and military aviation is 40%: fixed wing 27% and rotary 13%. Of the $64.3 billion air transport MRO market, 40% is for engines, 22% for components, 17% for line, 14% for airframe and 7% for modifications. It is projected to grow at 4.1% per annum until 2025 to $96 billion.
Airliner MRO should reach $74.3 billion in 2017: 51% ($B) single-aisles, 21% ($B) long-range twin-aisles, 8% ($B) medium-range twin-aisles, 7% ($B) large aircraft, 6% ($B) regional jets as turboprop regional airliners and 1% ($B) short range twin-aisles.
Over the 2017–2026 decade, the worldwide market should reach over $900 billion, led by 23% in North America, 22% in Western Europe, and 19% in Asia Pacific.
In 2017, of the $70 billion spent by airlines on maintenance, repair and overhaul (MRO), 31% were for engines, 27% for components, 24% for line maintenance, 10% for modifications and 8% for the airframe; 70% were for mature airliners (Airbus A320 and A330, Boeing 777 and 737NG), 23% were for “sunset” aircraft (McDonnell Douglas MD-80, Boeing 737 Classic, 747 or 757) and 7% was spent on modern models (Boeing 787, Embraer E-Jet, Airbus A350XWB and A380).
In 2018, the commercial aviation industry expended $88 billion for MRO, while military aircraft required $79.6 billion, including field maintenance.
Airliner MRO is forecast to reach $115 billion by 2028, a 4% compound annual growth rate from $77.4 billion in 2018.
Major airframe manufacturers Airbus, Boeing and Embraer entered the market, increasing concerns about intellectual property sharing. Shared data-supported predictive maintenance can reduce operational disruptions. Among other factors, prognostics helped Delta Air Lines reduce maintenance cancellations by 98% from 5,600 in 2010 to 78 in 2017.
Insourced maintenance can be inefficient for small airlines with a fleet below 50–60 aircraft. They have to either outsource it or sell their MRO services to other carriers for better resource utilization.
For example, Spain's Air Nostrum operates 45 Bombardier CRJs and ATR 72s and its 300-person maintenance department provides line, base maintenance and limited component repair for other airlines 20% of the time.
Airframe heavy maintenance is worth $6 billion in 2019: $2.9 billion for C checks and $3.1 billion for D checks, Aviation Week & Space Technology forecasts a growth to $7.5 billion in 2028 — $3.1 billion C and $4.2 billion D — for $70 billion over 10 years, 10% of the overall market compared to 40% for the engines.
Engines
The commercial aviation engine MRO market is anticipated by Aviation Week & Space Technology to be $25.9 billion in 2018, a 2.5 billion increase from 2017, led by 21% for the Boeing 737NG' CFM56-7B and the A320's CFM56-5B and IAE V2500 (also on the MD-90) tied for second, followed by the mature widebody engines: the GE90 then the Trent 700.
Over the 2017–2026 decade, the largest markets for turbofans will be the B737NG's CFM56-7 with 23%, the V2500-A5 with 21%, the General Electric GE90-115B with 13%, the A320's CFM56-5B with 13%, the PW1000G with 7%, the Rolls-Royce Trent 700 with 6%, the CF6-80C2 with 5%, the CFM LEAP with 5% and the General Electric CF34-8 with 4%.
Between 2018 and 2022, the largest MRO demand will be for CFM engines with 36%, followed by GE with 24%, Rolls with 13%, IAE with 12% and Pratt with 7%.
As an aircraft gets older, a greater percentage of its value is represented by its engines. Over the course of the engine life it is possible to put value back in by repair and overhaul, to sell it for its remaining useful time, or to disassemble it and sell the used parts, to extract its remaining value.
Its maintenance value includes the value of life-limited parts (LLPs) and the time before overhaul.
The core value is the value of its data plate and non-life-limited-parts. Engine makers deeply discount their sales, up to 90%, to win the multi-year stream of spares and services, resembling the razor and blades model.
Engines installed on a new aircraft are discounted by at least 40% while spare engine values closely follow list prices.
Accounting for 80% of a shop visit cost, prices escalate to recoup the original discount, until engine availability increase with aircraft teardowns.
Between 2001 and 2018 for the Airbus A320 or the Boeing 737-800, their CFM56 value increased from 27–29% to 48–52% of the aircraft value.
The 777-200ER's Pratt & Whitney PW4000 and the A330-300's Rolls-Royce Trent 700 engines rose from a share of 18–25% in 2001 to 29–40% in 2013. For the Airbus A320neo and Boeing 737 MAX, between 52% and 57% of their value lies in their engines: this could rise to 80–90% after ten years, while new Airbus A350 or Boeing 787 engines are worth 36–40% of the aircraft. After some time the maintenance reserves exceed the aircraft lease.
Between 2019 and 2038, 5,200 spare airliner engines will be required with at least half leased.
See also
Groundcrew
Line-replaceable unit
Maintenance Resource Management
Professional Aviation Maintenance Association
RAMS
Shop-replaceable unit
References
External links
Aerospace engineering
Aircraft engines
Aircraft finance
Maintenance | Aircraft maintenance | [
"Technology",
"Engineering"
] | 2,613 | [
"Engines",
"Aircraft maintenance",
"Maintenance",
"Aerospace engineering",
"Mechanical engineering",
"Aircraft engines"
] |
8,918,305 | https://en.wikipedia.org/wiki/Dell%20PowerConnect | The current portfolio of PowerConnect switches is now being offered as part of the Dell Networking brand: the information on this page is an overview of all current and past PowerConnect switches as of August 2013, but any updates on the current portfolio will be detailed on the Dell Networking page.
PowerConnect was a Dell series of network switches. The PowerConnect "classic" switches are based on Broadcom or Marvell Technology Group fabric and firmware. Dell acquired Force10 Networks in 2011 to expand its data center switch products.
Dell also offers the PowerConnect M-series, which are switches for the M1000e blade-server enclosure, and the PowerConnect W-series, which is a Wi-Fi platform based on Aruba Networks technology.
Starting in 2013 Dell will re-brand their networking portfolio to Dell Networking which covers both the legacy PowerConnect products as well as the Force10 products.
Product line
The Dell PowerConnect line is marketed for business computer networking. They connect computers and servers in small to medium-sized networks using Ethernet. The brand name was first announced in July 2001, as traditional personal computer sales were declining.
By September 2002 Cisco Systems cancelled a reseller agreement with Dell.
Networking platforms, previously under storage business general manager Darren Thomas, were placed under Dario Zamarian, who was named to head them within Dell in September 2010.
PowerConnect switches are available in pre-configured web-managed models as well as more expensive managed models.
There is not a single underlying operating system: the models with a product number up to 5500 run a proprietary OS made by Marvell, while the Broadcom-powered switches run an OS based on VxWorks. With the introduction of the 8100 series, the switches run DNOS, the Dell Networking Operating System, which is based on a Linux kernel for DNOS 5.x and 6.x.
Via PowerConnect W-series Dell offers a range of Aruba WiFi products.
The PowerConnect-J (Juniper Networks) and -B (Brocade) series are no longer sold, except for the B8000e/PCM8428-K full FCoE switches. Most of these products are now replaced by Force10 models.
Legacy devices
This page is about the legacy Dell PowerConnect switches that are no longer for sale. For the current portfolio please see the Dell Networking page.
All Dell switches will be rebranded to Dell Networking and will then have a different naming system based on a letter and a four-digit number. The current PowerConnect rack-switches will keep their current name until they reach end of sales, except for the (new) 8100 series: these will be renamed to the DN N-40xx series. The existing Force10 switches will mainly keep their current name and numbering (e.g. DN S-4800 series, DN S-5000, DN-Z9000, etc.).
2200 and 2300 series
Models 2216 and 2224 were unmanaged, 10/100 Mbit/s Ethernet over twisted pair switches, with 16 and 24 ports each, respectively. They were discontinued.
The PowerConnect 2324 was similar to 2224, but includes 2 Gigabit Ethernet ports for uplink or server purposes.
2600 series
The PowerConnect 2600 series included the 2608, 2616 and 2624. They are un-managed Gigabit Ethernet workgroup switches with all ports at 10/100/1000 MBit/s.
The 2624 model features an SFP port for fiber uplinks. They too are discontinued.
2700 series
The PowerConnect 27xx series of switches were web-managed all-Gigabit workgroup switches (10/100/1000) with eight, 16, 24, or 48 ports respectively. Switches shipped in a plug-n-play unmanaged mode and can be managed via a graphical user interface. They are discontinued.
2800 series
see Dell Networking - PowerConnect 2800 for details
PowerConnect 2808, 2816, 2824, and 2848 are dual-mode unmanaged or web-managed all-Gigabit workgroup switches (10/100/1000). 8, 16, 24, or 48 ports respectively. On the 2824 and 2848, there are an additional 2 small form-factor pluggable transceiver (SFP) modules, for fiber-optic connectivity. Switches ship in a plug-n-play unmanaged mode and can be managed via a graphical user interface.
3400 series
PowerConnect 3424, 3424P, 3448, and 3448P were fully managed 10/100 switches with gigabit uplinks. All have four Gigabit ports, two copper and two SFP modular, all of which may be used at once. The 3424 and 3424P have 24 10/100 ports, the 3448 and 3448P have 48. The 3424P and 3448P provide power over Ethernet on all 10/100 ports (PowerConnect 3448P requires EPS-470 for full 15.4 W on all ports simultaneously). The switches are stackable using the copper Gigabit ports. They were discontinued.
3500 series
see Dell Networking - PowerConnect 3500 for details
PowerConnect 3524(P) and 3548(P) are managed 10/100 switches with gigabit uplinks and Power over Ethernet options for applications such as Voice over Internet Protocol (VoIP), denoted on the models with a "P" at the end of the part number. All switches in this family support resilient stacking and have management and security capabilities.
5200 series
The PowerConnect 5200 series of managed switches comprised the 5212 (12 copper Gb ports and 4 SFP ports) and the 5224 (24 copper Gb ports and 4 SFP ports)
5300 series
The PowerConnect 5316M was similar in software and function to other 53xx series switches but physically designed to fit one of the four IO bays in the 1855/1955 blade chassis. 16 ports, 10 of which are allocated to the 10 blade slots in the chassis, 6 are accessible via the back panel of the switch. It was discontinued.
The PowerConnect 5324 was a 24 port, all-Gigabit, fully managed switch. The last 4 ports are SFP capable. Generally very similar to the 3400 series. It was discontinued.
5400 series
The PowerConnect 5400 series reached 'end of sales' in 2011 and are followed up with the 5500 series. Many features of the 5400 are now available on the 5500 series, but where the 5400 were certified for use with EqualLogic iSCSI arrays, the 5500 never passed the acceptance tests.
The 5400 series offered 4 SFP ports for 1G fiber uplinks to a core or distribution layer and the 24 or 48 RJ45 ports are either normal 1G Ethernet or 1G Ethernet with PoE option.
VoIP optimization and auto-configuration
iSCSI optimization and auto-configuration
IEEE 802.1X (Dot1x) port-authentication
IEEE 802.1Q (Dot1q) VLANs, trunking and -tagging.
5500 series
see also Dell Networking - PowerConnect 5500 for details on current portfolio.
PowerConnect 5500 series switches, the successor of the 5400 series, are based on Marvell technology. They offer 24 or 48 Gigabit Ethernet ports with (-P series) or without PoE. The 5500 have built-in stacking ports, using HDMI cables and the range offers standard two 10 Gb SFP+ ports for uplinks to core or distribution layer. All 5500 series models (5524, 5524P, 5548 and 5548P) can be combined in a single stack with up to 8 units per stack. The 5500 series uses standard HDMI cables (specification 1.4 category 2 or better) to stack with a total bandwidth of 40 Gbit/s per switch.
The 5500 series are often used as Top Of Rack (TOR) switches and client access switches in wiring cabinets in offices or campus networks. The 5500P series are mainly client access switches connecting VOIP phones and (daisy chained or directly connected) workstations. The -P series are also used to power other devices than phones, such as WiFi access points, IP cameras or thin clients.
The 5500 series are stackable to combine several 5500-series switches into one virtual switch.
The 5500 series switches are mainly designed to be pure layer 2 switches, but they have some very basic layer 3 capabilities. Other standard features are enhanced VOIP support, where the switch automatically recognizes connected VOIP devices and configures VOIP quality of service and a VOIP VLAN; this feature will only work optimally in small VOIP networks. There is also iSCSI optimization and auto-configuration, though Dell does not support them with their EqualLogic family of storage arrays. The switch also supports IEEE 802.1X (Dot1x) port authentication. Although they were meant as the follow-up for the EqualLogic-certified 5400 switches, the 5500s never passed the acceptance tests: problems with latency, especially in stacked setups, prevented certification. Although they can work in small EQL (and other iSCSI) SAN networks, they should be seen as campus-access switches and not as SAN switches.
6200 series (Kinnick and Kinnick 2)
see also Dell Networking - Managed Multi-layer Gigabit Ethernet switches for details on current portfolio.
The PowerConnect 6024 with 24 Gigabit Ethernet over twisted pair ports was announced in early 2004.
The PowerConnect 6024F was a 24 port, layer 3, all-Gigabit, fiber-optimized switch. It had 24 SFP ports, eight of which doubled as copper ports. This switch was capable of routing, with static routes, Routing Information Protocol (RIP), and Open Shortest Path First (OSPF). It was replaced by the PowerConnect 6224/6248 models.
The 6224 and 6248 switch was introduced in early 2007 as the logical follow-up of the 6024 switch. It had 24 (6224) or 48 (6248) Gigabit Ethernet ports and two sockets for 10 Gigabit Ethernet modules (with two ports per module) or stacking.
Many of the Dell PowerConnect switches offered "combination" ports: the last 4 ports on the switch are either copper (UTP/RJ-45) or fiber (SFP) ports. For example, the PC6224 offers 20 copper-only interfaces, while ports 21-24 can be found twice on the front of the switch: as standard UTP ports and also as SFP slots, of which only one can be used at a time. By default the UTP port is enabled, but when one inserts an SFP module in port 21-24 it switches over to fibre mode and the UTP link goes down. Combo ports are always the highest 4 standard ports (21-24 on the PCxx24 models and 45–48 on PCxx48 models).
Features include 24/48 ports, Layer 3 routing, all-gigabit, fully managed (web+cli), stackable switch with up to 4 10 Gb ports. High availability routing, edge connectivity, traffic aggregation and VOIP applications all supported in the 62xx series. Flexible, high-speed stacking, fiber support and MS NSP Certification included.
The 6224 (P and F) and 6248P series switches are end of development: new firmware versions will only repair bugs, no new features are being developed for these switches. While other PowerConnect switches based on Broadcom hardware have firmware versions 4.x, the 6200 series continue to run on version 3.x and features introduced in the 4.x firmware are not available for the 6200 series switches.(note: this does NOT apply to the PCM 6220 blade-switch)
7000 series
see also Dell Networking - Managed multi-layer Gigabit Ethernet switches for details on current portfolio.
The PowerConnect 7024 and 7048 were introduced April 1, 2011. They had the same ports as the 6224/6248, QoS features for iSCSI, and incorporate 802.3az Energy Efficient Ethernet. The 7000 series offer the same types of ports: the 1G ports on the front as well as optional 10G and stacking modules on the rear side. Unlike the 6200 series, firmware for the 7000 series does support new functionality via versions 4.x and 5.x, like their 10G brothers in the 8024 and 8100 series.
A variant with reversible air flow is available for top of rack data center applications. The 6000 series remained on the market. The PCT7000 series also offer an out-of-band management Ethernet interface. One can configure the switch to allow both in-band and out-of-band management. By default the oob interface allows management per web GUI (http) and telnet, but https and ssh can also be enabled on both in-band and out-of-band management. The PCT7000 series can be stacked with other PCT7000s but also with the PCM6348 blade switch.
8000 series
The PowerConnect 8024 and 8024F were rack-mounted switches offering 10 Gigabit Ethernet on copper or optical fiber, using enhanced small form-factor pluggable transceiver (SFP+) modules on the 8024F. On the 8024 the last 4 ports (21-24) are combo ports with the option of using the 4 SFP+ slots for fiber-optic connections for longer-distance uplinks to core switches. On the 8024F the 4 combo ports offer 10GBASE-T copper ports. The PCT8024 series also offers out-of-band management via an extra Ethernet port. This port only gives access to the management of the switch: it does not allow (other) traffic to be routed or switched over this connection. It is possible to configure the switch to allow both out-of-band and in-band management: when one assigns an IP address to a VLAN interface, one can manage the switch via that address. The oob interface allows http, https, telnet or ssh access.
The 8024 can be used as pure layer-2 switch or as a layer-3 switch.
These switches were introduced in early 2010, in the same single rack unit (1U) size.
The rack-models reached end-of-sale in January 2013.
With firmware 4.2.0.4 and later, available from December 2011, the PowerConnect 8024(-F) and the blade-version PCM8024-k (thus NOT the discontinued PCM8024) can be stacked. Stacking is done by assigning (multiple) 10 Gb Ethernet ports as stacking ports. Up to six 80xx series switches can be stacked. Note that stacking is not supported using the 10GBASE-T combo ports on the PCT8024F. One can mix PCT8024 and PCT8024-F in one stack, but it can't be combined with a PCM8024-k blade. The original PCM8024 did not support stacking.
Also with the introduction of that firmware the switches now also support Fibre Channel over Ethernet via the added functionality FIP Snooping.
8100 series
see also Dell Networking - PowerConnect 8100 for details on current portfolio.
The PowerConnect 8100 series switches announced in 2012 offered 24 or 48 ports on 10 Gb and 0 or 2 built-in ports for 40 Gb QSFP ports. All models also have one extension-module slot with either two QSFP 40 Gb ports, 4 SFP+ 10 Gb ports or 4 10GbaseT ports. It is a small (1U) switch with a high port density and can be used as distribution or (collapsed)core switch for campus networks and for use in the data center it offers features such as lossless Ethernet for iSCSI and FCoE, data center bridging (DCB) and iSCSI Auto-configure
The PCT8100 series is a multi-layer switch that can be used as either a pure layer-2 Ethernet switch or as a "layer-3" switch with extensive IP routing functions. Most routing is done in hardware and can be done at (near) wire speed. Management can be done via the out-of-band Ethernet interface or in-band by connecting to one of the VLAN IP addresses. Management is possible via HTTP(s), telnet, SSH or even serial console cable.
Up to 6 units in the 8100 series can be stacked to form one logical switch and any type of interface (10 Gb or 40 Gb, fiber-optical or utp copper) can be used for stacking. Similar to the rack-switches PCT7000 and PCT8024 series the switch offers an out-of-band Fast Ethernet port for management as well as a serial console connection, required for initial configuration. The switch is built around the Broadcom Trident+ ASIC: the same ASIC as can be found in Cisco Nexus 5000 switches or Force10 models. The PowerConnect 8100 is initially released with firmware 5.0 of the switch-firmware which offers the same features as the PowerConnect 7000 and 8024 rack-switches and the different M-series Ethernet switches.
The underlying operating system of the PCT8100 is based on Linux 2.6 where all other 'Broadcom powered' PowerConnects run on VxWorks.
M-series
The PowerConnect M-series switches are classic Ethernet switches based on Broadcom fabric for use in the blade server enclosure M1000e.
All M-series switches offer a combination of internal interfaces and external interfaces. The internal interfaces connect to the mezzanine card of the blade servers in the M1000e enclosure. These internal interfaces can be one or 10 Gbit/s. The M6220 offers a total of 20 interfaces: 16 internal 1 Gb interfaces and 4 external 1 Gb ports. Optionally two extension modules can be installed that offers 10 Gb Ethernet or stacking interfaces to stack multiple M6220 switches together as one virtual switch.
The M6348 has 32 internal ports (2 per blade) and 16 external 1 Gbit/s ports. The M6348 can be stacked via CX4 modules and/or create 10 Gbit/s SFP+ uplinks.
The M8024 series offer 10 Gb on all interfaces: the M8024-k has 16 internal 10 Gb ports, 4 SFP+ slots and the option to install a FlexIO module that can offer 4 SFP+ ports, 3 CX-4 or 2 10GBaseT copper interfaces.
The M8024-k was announced in May 2011.
With the firmware upgrade announced in December 2011, the M8024-k is also a Fibre Channel over Ethernet (FCoE) transit switch that can extend an available FCoE fabric. The M8024-k uses FCoE Initialization Protocol (FIP) to perform functions of FC_BB_E device discovery, initialization and maintenance. The FIP snooping feature enables the M8024-k to link Dell M-Series blades to an external top-of-rack or end-of-row FCoE forwarding switch and provides FCoE connectivity to the servers. This feature, along with Internet SCSI optimization, iSCSI TLV, and DCBx discovery and monitoring, enables seamless 10GbE performance in an end-to-end data center solution. In addition, with the recent firmware release, up to 6 M8024-k switches can now be stacked and managed through a single IP. Stacking is done by assigning (multiple) 10 Gb Ethernet ports as stacking-ports.
Also under the M-series name Dell offers the Brocade Communications Systems M5424 8 Gb Fibre Channel switch and the M8428-k Converged 10GbE Switch which offers 10GbE performance, FCoE switching, and low-latency 8 Gb Fibre Channel (FC) switching and connectivity.
Other switches and I/O modules for the M1000e blade enclosure are the Cisco Systems 3032 and 3130 switches and several pass-through modules that bring the internal interfaces from the midplane to the exterior, connecting the server NICs to external switches.
Firmware
Dell PowerConnect switches are based on either Marvell or Broadcom hardware, and each offers an entirely different firmware type. All layer-2 Ethernet switches have a family or model number below 6000 and are based on Marvell hardware. Each model has its own family of firmware with a different CLI and GUI (the PCT5500 series offers very limited layer-3 options, but is mainly a layer-2 switch).
The switches with a model number above 6000 are based on Broadcom hardware. Although each switch has its own firmware images, the options and capabilities of these switches are the same or similar. The PCT6200 (rack) series continues to run major release 3: new features or capabilities released in the other switches under firmware 4 or 5 are not available for the PCT6200 series.
All other switches that are based on Broadcom hardware run on major release 4 or 5.
Bootcode
All PowerConnect switches, including the 8100 series, have both bootcode and operational code, and one can run older firmware versions on later bootcode. Bootcode is generally backward compatible: one can run firmware of a lower version than the bootcode version. When upgrading firmware one does not need to upgrade the bootcode unless specifically directed in the release notes or by Dell support, and when downgrading one leaves the newer bootcode in place.
On the layer-2 switches (Marvell-based), the bootcode is delivered as a separate file that needs to be copied to the switch. The multi-layer switches, however, have the bootcode and operational code distributed in one file: one downloads and (prepares to) activate the newer firmware, and with a special command (update bootcode) the switch builds the new bootcode from the operational-code image. It is always possible to run newer operational code on a previous bootcode unless specifically noted in the release notes; in that case, the upgrade will automatically update the bootcode during the upgrade. When updating a stack, it is best practice to update the stack members individually using the update bootcode <unit> command. Letting each unit completely rejoin the stack before updating any other unit is strongly advised, although updating the units individually (not as part of a stack) is always the safest route.
Firmware releases are backwards compatible, and switches of the same model can run on different firmware levels within one network. Configuration files of older firmware can be used on the newest switches, except for some small changes in configuration behavior that were introduced with firmware 4.0 and later versions.
Features
The Broadcom-based multi-layer switches offer a wide range of layer-2 and layer-3 options, and new features are added regularly. Until version 4.2 it was not possible to stack multiple 10 Gb switches, and converged Ethernet (FCoE) capabilities for the 10 Gb switches were added in firmware version 4.1.
In release 5.0 the switches started to support private VLANs and Unidirectional Link Detection. On the management level, TACACS+ accounting was added.
Management
All PowerConnect switches can be configured via either a built-in web GUI or the command-line interface (CLI), except for the entry-level layer-2 PowerConnects, which only offer web-based configuration or can run in unmanaged mode. The web GUI uses HTTP on port 80 by default, but the switches can be configured to support HTTPS and/or a different HTTP port number. For the CLI one can use the serial console cable and, for the M-series blade switches, a virtual console via the CMC.
By default the CLI is also accessible via telnet, with an option to support SSH. Authentication can be done via the local user database, RADIUS or TACACS.
All switches also support SNMP, and an up-to-date MIB is published with each firmware release. Any SNMP-based management tool can be used, but the company also releases its own management platform, OpenManage Network Manager, which is a limited edition of Dorado's RedCell application. A free version, which only allows the user to manage up to 10 network devices, is available on the firmware page of the respective switches.
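As a minimal illustration (not Dell documentation), the following sketch reads a switch's sysDescr object with the third-party pysnmp library; the switch address and community string are placeholder assumptions, not values from this article.

```python
# Query one SNMP object from a managed switch with pysnmp (4.x hlapi).
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity, getCmd,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData('public'),                  # assumed read community
        UdpTransportTarget(('192.0.2.10', 161)),  # assumed switch address
        ContextData(),
        ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)),
    )
)

if error_indication:
    print(error_indication)
else:
    for name, value in var_binds:
        print(f'{name} = {value}')
```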
B-, J- and W-series
The B-series and J-series were Dell-branded versions of switches from Brocade Communications Systems and Juniper Networks respectively. The W-series is the wireless range from Aruba Networks.
Due to the acquisition of Force10, all of the PowerConnect J-series and nearly all of the PowerConnect B-series have been discontinued. Only the Brocade-based PC-B8000e (and the blade version PCM8428-k) full FCoE switches are still available, as are pure Fibre Channel switches. The Cisco Catalyst 3xxx switches for the M1000e enclosure also remain available.
The normal (data center) Ethernet switches from Juniper and Brocade have been replaced by Dell Force10 models.
The W-series, which is the Aruba product line of enterprise-class WiFi switches, will continue to be available, and the range of available models will be extended in the near future.
Force10
In July 2011 Dell announced the acquisition of Force10 Networks, another company that designed and marketed Ethernet switches.
The deal completed on August 26, 2011.
This led to speculation that relationships with other vendors, such as Cisco and Brocade, would change because their products overlap in the data center market.
In September 2011 Dell announced plans to expand the staff from Force10 in San Jose, California and Chennai, India.
References
External links
Networking on the Dell official website
Dell Tech Center
Computer networking
PowerConnect
PowerConnect | Dell PowerConnect | [
"Technology",
"Engineering"
] | 5,514 | [
"Computer networking",
"Computer science",
"Computer engineering"
] |
8,918,323 | https://en.wikipedia.org/wiki/Conservative%20system | In mathematics, a conservative system is a dynamical system which stands in contrast to a dissipative system. Roughly speaking, such systems have no friction or other mechanism to dissipate the dynamics, and thus, their phase space does not shrink over time. Precisely speaking, they are those dynamical systems that have a null wandering set: under time evolution, no portion of the phase space ever "wanders away", never to be returned to or revisited. Alternately, conservative systems are those to which the Poincaré recurrence theorem applies. An important special case of conservative systems are the measure-preserving dynamical systems.
Informal introduction
Informally, dynamical systems describe the time evolution of the phase space of some mechanical system. Commonly, such evolution is given by some differential equations, or quite often in terms of discrete time steps. However, in the present case, instead of focusing on the time evolution of discrete points, one shifts attention to the time evolution of collections of points. One such example would be Saturn's rings: rather than tracking the time evolution of individual grains of sand in the rings, one is instead interested in the time evolution of the density of the rings: how the density thins out, spreads, or becomes concentrated. Over short time-scales (hundreds of thousands of years), Saturn's rings are stable, and are thus a reasonable example of a conservative system and more precisely, a measure-preserving dynamical system. It is measure-preserving, as the number of particles in the rings does not change, and, per Newtonian orbital mechanics, the phase space is incompressible: it can be stretched or squeezed, but not shrunk (this is the content of Liouville's theorem).
Formal definition
Formally, a measurable dynamical system is conservative if and only if it is non-singular, and has no wandering sets.
A measurable dynamical system (X, Σ, μ, τ) is a Borel space (X, Σ) equipped with a sigma-finite measure μ and a transformation τ. Here, X is a set, and Σ is a sigma-algebra on X, so that the pair (X, Σ) is a measurable space. μ is a sigma-finite measure on the sigma-algebra. The space X is the phase space of the dynamical system.
A transformation (a map) τ: X → X is said to be Σ-measurable if and only if, for every σ ∈ Σ, one has τ⁻¹(σ) ∈ Σ. The transformation is a single "time-step" in the evolution of the dynamical system. One is interested in invertible transformations, so that the current state of the dynamical system came from a well-defined past state.
A measurable transformation τ is called non-singular when μ(τ⁻¹(σ)) = 0 if and only if μ(σ) = 0. In this case, the system (X, Σ, μ, τ) is called a non-singular dynamical system. The condition of being non-singular is necessary for a dynamical system to be suitable for modeling (non-equilibrium) systems. That is, if a certain configuration of the system is "impossible" (i.e. μ(σ) = 0) then it must stay "impossible" (was always impossible: μ(τ⁻¹(σ)) = 0), but otherwise, the system can evolve arbitrarily. Non-singular systems preserve the negligible sets, but are not required to preserve any other class of sets. The sense of the word singular here is the same as in the definition of a singular measure, in that no portion of μ is singular with respect to μ ∘ τ⁻¹ and vice versa.
A non-singular dynamical system for which μ(τ⁻¹(σ)) = μ(σ) for all σ ∈ Σ is called invariant, or, more commonly, a measure-preserving dynamical system.
A non-singular dynamical system is conservative if, for every set σ ∈ Σ of positive measure and for every n ∈ ℕ, there is some integer p > n such that μ(τ⁻ᵖ(σ) ∩ σ) > 0. Informally, this can be interpreted as saying that the current state of the system revisits or comes arbitrarily close to a prior state; see Poincaré recurrence for more.
A non-singular transformation τ is incompressible if, whenever one has τ⁻¹(σ) ⊆ σ, then μ(σ ∖ τ⁻¹(σ)) = 0.
Properties
For a non-singular transformation τ: X → X, the following statements are equivalent:
τ is conservative.
τ is incompressible.
Every wandering set of τ is null.
For all sets σ of positive measure, μ(σ ∖ ⋃n≥1 τ⁻ⁿ(σ)) = 0.
The above implies that, if μ(X) < ∞ and τ is measure-preserving, then the dynamical system is conservative. This is effectively the modern statement of the Poincaré recurrence theorem. A sketch of a proof of the equivalence of these four properties is given in the article on the Hopf decomposition.
Suppose that μ(X) < ∞ and that τ is measure-preserving. Let σ be a wandering set of τ. By the definition of wandering sets and since τ preserves μ, X would contain a countably infinite union of pairwise disjoint sets that all have the same μ-measure as σ. Since it was assumed that μ(X) < ∞, it follows that σ is a null set, and so all wandering sets must be null sets.
This argumentation fails for even the simplest examples if μ(X) = ∞. Indeed, consider for instance (ℝ, ℬ(ℝ), μ, τ), where μ denotes the Lebesgue measure and τ is the shift operator τ: x ↦ x + 1. Since the Lebesgue measure is translation-invariant, τ is measure-preserving. However, τ is not conservative. In fact, every interval of length strictly less than 1 contained in ℝ is wandering. In particular, ℝ can be written as a countable union of wandering sets.
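By contrast, on a finite measure space recurrence is guaranteed. A brief numerical sketch (illustrative only; the rotation angle, starting point and target interval are arbitrary choices) uses an irrational rotation of the circle, which preserves Lebesgue measure on [0, 1), so Poincaré recurrence forces the orbit to re-enter any interval of positive measure:

```python
# Watch an orbit of the measure-preserving map tau(x) = x + alpha mod 1
# return to a small interval, as Poincare recurrence guarantees.
import math

alpha = math.sqrt(2) - 1      # irrational rotation angle
x = 0.2                       # starting point in [0, 1)
target = (0.2, 0.21)          # a "set of positive measure"

returns = []
for n in range(1, 20000):
    x = (x + alpha) % 1.0     # one time-step of the rotation
    if target[0] <= x < target[1]:
        returns.append(n)
        if len(returns) == 5:
            break

print("first return times:", returns)
```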
Hopf decomposition
The Hopf decomposition states that every measure space with a non-singular transformation can be decomposed into an invariant conservative set and a wandering (dissipative) set. A commonplace informal example of Hopf decomposition is the mixing of two liquids (some textbooks mention rum and coke): The initial state, where the two liquids are not yet mixed, can never recur again after mixing; it is part of the dissipative set. Likewise any of the partially-mixed states. The result, after mixing (a cuba libre, in the canonical example), is stable, and forms the conservative set; further mixing does not alter it. In this example, the conservative set is also ergodic: if one added one more drop of liquid (say, lemon juice), it would not stay in one place, but would come to mix in everywhere. One word of caution about this example: although mixing systems are ergodic, ergodic systems are not in general mixing systems! Mixing implies an interaction which may not exist. The canonical example of an ergodic system that does not mix is the Bernoulli process: it is the set of all possible infinite sequences of coin flips (equivalently, the set of infinite strings of zeros and ones); each individual coin flip is independent of the others.
Ergodic decomposition
The ergodic decomposition theorem states, roughly, that every conservative system can be split up into components, each component of which is individually ergodic. An informal example of this would be a tub, with a divider down the middle, with liquids filling each compartment. The liquid on one side can clearly mix with itself, and so can the other, but, due to the partition, the two sides cannot interact. Clearly, this can be treated as two independent systems; leakage between the two sides, of measure zero, can be ignored. The ergodic decomposition theorem states that all conservative systems can be split into such independent parts, and that this splitting is unique (up to differences of measure zero). Thus, by convention, the study of conservative systems becomes the study of their ergodic components.
Formally, every ergodic system is conservative. Recall that an invariant set σ ∈ Σ is one for which τ(σ) = σ. For an ergodic system, the only invariant sets are those with measure zero or with full measure (are null or are conull); that they are conservative then follows trivially from this.
When τ is ergodic, the following statements are equivalent:
τ is conservative and ergodic
For all measurable sets σ of positive measure, μ(X ∖ ⋃n≥1 τ⁻ⁿ(σ)) = 0; that is, σ "sweeps out" all of X.
For all sets σ of positive measure and for almost every x ∈ X, there exists a positive integer n such that τⁿ(x) ∈ σ.
For all sets σ and σ′ of positive measure, there exists a positive integer n such that μ(τ⁻ⁿ(σ) ∩ σ′) > 0.
If τ⁻¹(σ) ⊆ σ, then either σ or its complement has zero measure: μ(σ) = 0 or μ(X ∖ σ) = 0.
See also
KMS state, a description of thermodynamic equilibrium in quantum mechanical systems; dual to modular theories for von Neumann algebras.
Notes
References
Further reading
Ergodic theory
Dynamical systems | Conservative system | [
"Physics",
"Mathematics"
] | 1,733 | [
"Mechanics",
"Ergodic theory",
"Dynamical systems"
] |
8,918,557 | https://en.wikipedia.org/wiki/Ambush%20predator | Ambush predators or sit-and-wait predators are carnivorous animals that capture their prey via stealth, luring or by (typically instinctive) strategies utilizing an element of surprise. Unlike pursuit predators, who chase to capture prey using sheer speed or endurance, ambush predators avoid fatigue by staying in concealment, waiting patiently for the prey to get near, before launching a sudden overwhelming attack that quickly incapacitates and captures the prey.
The ambush is often opportunistic, and may be set by hiding in a burrow, by camouflage, by aggressive mimicry, or by the use of a trap (e.g. a web). The predator then uses a combination of senses to detect and assess the prey, and to time the strike. Nocturnal ambush predators such as cats and snakes have vertical slit pupils helping them to judge the distance to prey in dim light. Different ambush predators use a variety of means to capture their prey, from the long sticky tongues of chameleons to the expanding mouths of frogfishes.
Ambush predation is widely distributed in the animal kingdom, spanning some members of numerous groups such as the starfish, cephalopods, crustaceans, spiders, insects such as mantises, and vertebrates such as many snakes and fishes.
Strategy
Ambush predators usually remain motionless (sometimes hidden) and wait for prey to come within ambush distance before pouncing. Ambush predators are often camouflaged, and may be solitary. Pursuit predation becomes a better strategy than ambush predation when the predator is faster than the prey. Many intermediate strategies exist: for example, when a pursuit predator is faster than its prey over a short distance, but not in a long chase, then either stalking or ambush becomes necessary as part of the strategy.
Bringing the prey within range
Concealment
Ambush often relies on concealment, whether by staying out of sight or by means of camouflage.
Burrows
Ambush predators such as trapdoor spiders and Australian crab spiders on land and mantis shrimps in the sea rely on concealment, constructing and hiding in burrows. These provide effective concealment at the price of a restricted field of vision.
Trapdoor spiders excavate a burrow and seal the entrance with a web trapdoor hinged on one side with silk. The best-known is the thick, bevelled "cork" type, which neatly fits the burrow's opening. The other is the "wafer" type; it is a basic sheet of silk and earth. The door's upper side is often effectively camouflaged with local materials such as pebbles and sticks. The spider spins silk fishing lines, or trip wires, that radiate out of the burrow entrance. When the spider is using the trap to capture prey, its chelicerae (protruding mouthparts) hold the door shut on the end furthest from the hinge. Prey make the silk vibrate, and alert the spider to open the door and ambush the prey.
Camouflage
Many ambush predators make use of camouflage so that their prey can come within striking range without detecting their presence. Among insects, coloration in ambush bugs closely matches the flower heads where they wait for prey. Among fishes, the warteye stargazer buries itself nearly completely in the sand and waits for prey. The devil scorpionfish typically lies partially buried on the sea floor or on a coral head during the day, covering itself with sand and other debris to further camouflage itself. The tasselled wobbegong is a shark whose adaptations as an ambush predator include a strongly flattened and camouflaged body with a fringe that breaks up its outline. Among amphibians, the Pipa pipa's brown coloration blends in with the murky waters of the Amazon Rainforest which allows for this species to lie in wait and ambush its prey.
Aggressive mimicry
Many ambush predators actively attract their prey towards them before ambushing them. This strategy is called aggressive mimicry, using the false promise of nourishment to lure prey. The alligator snapping turtle is a well-camouflaged ambush predator. Its tongue bears a conspicuous pink extension that resembles a worm and can be wriggled around; fish that try to eat the "worm" are themselves eaten by the turtle. Similarly, some reptiles such as Elaphe rat snakes employ caudal luring (tail luring) to entice small vertebrates into striking range.
The zone-tailed hawk, which resembles the turkey vulture, flies among flocks of turkey vultures, then suddenly breaks from the formation and ambushes one of them as its prey. There is however some controversy about whether this is a true case of wolf in sheep's clothing mimicry.
Flower mantises are aggressive mimics, resembling flowers convincingly enough to attract prey that come to collect pollen and nectar. The orchid mantis actually attracts its prey, pollinator insects, more effectively than flowers do. Crab spiders, similarly, are coloured like the flowers they habitually rest on, but again, they can lure their prey even away from flowers.
Traps
Some ambush predators build traps to help capture their prey. Lacewings are flying insects in the order Neuroptera. In some species, the larval form, known as the antlion, is an ambush predator. Eggs are laid in the earth, often in caves or under a rocky ledge. The juvenile digs a small, crater-shaped trap and hides under a light cover of sand or earth. When an ant, beetle or other prey slides into the trap, the antlion grabs it with its powerful jaws.
Some but not all web-spinning spiders are sit-and-wait ambush predators. The sheetweb spiders (Linyphiidae) tend to stay with their webs for long periods and so resemble sit-and-wait predators, whereas the orb-weaving spiders (such as the Araneidae) tend to move frequently from one patch to another (and thus resemble active foragers).
Detection and assessment
Ambush predators must time their strike carefully. They need to detect the prey, assess it as worth attacking, and strike when it is in exactly the right place. They have evolved a variety of adaptations that facilitate this assessment. For example, pit vipers prey on small birds, choosing targets of the right size for their mouth gape: larger snakes choose larger prey. They prefer to strike prey that is both warm and moving; their pit organs between the eye and the nostril contain infrared (heat) receptors, enabling them to find and perhaps judge the size of their small, warm-blooded prey.
The deep-sea tripodfish Bathypterois grallator uses tactile and mechanosensory cues to identify food in its low-light environment. The fish faces into the current, waiting for prey to drift by.
Several species of Felidae (cats) and snakes have vertically elongated (slit) pupils, advantageous for nocturnal ambush predators as it helps them to estimate the distance to prey in dim light; diurnal and pursuit predators in contrast have round pupils.
Capturing the prey
Ambush predators often have adaptations for seizing their prey rapidly and securely. The capturing movement has to be rapid to trap the prey, given that the attack is not modifiable once launched. Zebra mantis shrimp capture agile prey such as fish primarily at night while hidden in burrows, striking very hard and fast, with a mean strike duration of 24.98 ms.
Chameleons (family Chamaeleonidae) are highly adapted as ambush predators. They can change colour to match their surroundings and often climb through trees with a swaying motion, probably to mimic the movement of the leaves and branches they are surrounded by. All chameleons are primarily insectivores and feed by ballistically projecting their tongues, often twice the length of their bodies, to capture prey. The tongue is projected in as little as 0.07 seconds, and is launched at an acceleration of over 41 g. The power with which the tongue is launched, over 3000 W·kg−1, is more than muscle can produce, indicating that energy is stored in an elastic tissue for sudden release.
All fishes face a basic problem when trying to swallow prey: opening their mouth may pull food in, but closing it will push the food out again. Frogfishes capture their prey by suddenly opening their jaws, with a mechanism which enlarges the volume of the mouth cavity up to 12-fold and pulls the prey (crustaceans, molluscs and other whole fishes) into the mouth along with water; the jaws close without reducing the volume of the mouth cavity. The attack can be as fast as 6 milliseconds.
Taxonomic range
Ambush predation is widely distributed across the animal kingdom. It is found in many vertebrates including fishes such as the frogfishes (anglerfishes) of the sea bottom, and the pikes of freshwater; reptiles including crocodiles, snapping turtles, the mulga dragon, and many snakes such as the black mamba; mammals such as the cats; and birds such as the anhinga (darter). The strategy is found in several invertebrate phyla including arthropods such as mantises, purseweb spiders, and some crustaceans; cephalopod molluscs such as the colossal squid; and starfish such as Leptasterias tenera.
References
External links
Predation lecture University of Washington
Ethology
Predation
Articles containing video clips | Ambush predator | [
"Biology"
] | 1,935 | [
"Behavioural sciences",
"Ethology",
"Behavior"
] |
8,919,157 | https://en.wikipedia.org/wiki/Icerya%20purchasi | Icerya purchasi (common name: cottony cushion scale) is a scale insect that feeds on more than 80 families of woody plants, most notably on Citrus and Pittosporum. Originally described in 1878 from specimens collected in New Zealand as pests of kangaroo acacia and named by W.M. Maskell "after the Rev. Dr. Purchas who, [he] believe[d], first found it", it is now found worldwide where citrus crops are grown. The cottony cushion scale originates from Australia.
Life cycle
This scale infests twigs and branches. The mature hermaphrodite is oval in shape, reddish-brown with black hairs, and 5 mm long. When mature, the insect remains stationary, attaches itself to the plant by waxy secretions, and produces a white egg sac, extruded along grooves in the body, which encases hundreds of red eggs. The egg sac will grow to be two to three times as long as the body. Newly hatched nymphs are the primary dispersal stage, with dispersion known to occur by wind and by crawling. Early-stage nymphs feed from the midrib veins of leaves and small twigs, and do the bulk of the damage. At each molt, they leave behind at the old feeding point the former skin and the waxy secretions in which they had covered themselves and from which their common name is derived. Unlike many other scale insects, they retain legs and a limited mobility in all life stages. Older nymphs migrate to larger twigs and eventually, as adults, to branches and the trunk. Their life cycle is highly temperature-dependent: the length of each life stage is longer at low temperatures than at high temperatures.
In addition to the direct damage from sap sucking, the insects also secrete honeydew, on which sooty mold often grows and causes further damage to the host plant. Some ants will also consume this honeydew.
Reproduction
Males are rare in hermaphroditic species of Icerya. Males are haploid while females are diploid. Females have an ovitestis that is capable of producing both sperm and oocytes which fertilize internally to produce diploid offspring (females) through a form of hermaphroditism. The cells of the ovitestis are haploid and are derived from excess sperm during matings with males. This has been termed as 'parasitic tissue' and theoretical studies have examined this as a form of sexual conflict and have examined the possible fates and fitness consequences since females can produce daughters by mating with males or using their parasitic male cell lines. Females that lack ovitestes may preferentially invest in producing sons while females with parasitic tissue should prefer to pass on the genetic material through daughters.
True males are uncommon to rare overall, and in many infestations are not present. Pure females are unknown. Self-fertilization by a hermaphrodite will produce only hermaphrodites. Matings of a male and hermaphrodite will produce both males and hermaphrodites.
Biological control
Icerya purchasi is important as one of the first major successes of biological control. Importations of the vedalia ladybird (Novius cardinalis) in 1888–1889 by C. V. Riley, later head of the USDA's Division of Entomology, resulted in swift reductions of I. purchasi populations, saving the burgeoning Californian citrus industry from this destructive pest. However, following the introduction of insecticides such as DDT and malathion in the 1950s, further outbreaks occurred due to resurgence, thought to be caused by drift from airplane spraying during the early spring months.
A second biological control agent, the parasitic fly Cryptochetum iceryae, was also introduced to California at around the same time.
While there is an apparent rivalry between the two natural enemies of Icerya purchasi, the competition does not affect the efficacy of the control measures when both the beetle and the fly are introduced. Biological control remains the most effective measure to manage Icerya purchasi infestations. Use of insecticides as control is recommended only if no biological control species is present. Imidacloprid is especially contraindicated, since it has no effect on this species, but is very toxic to Novius cardinalis.
References
Further reading
(originally published as 20th Century Insect Control in the July 1992 issue of Agricultural Research magazine)
External links
Monophlebidae
Citrus pests
Agricultural pest insects
Insects described in 1878
Hemiptera of Australia
Incestuous animals | Icerya purchasi | [
"Biology"
] | 949 | [
"Organisms by adaptation",
"Incestuous animals"
] |
8,919,339 | https://en.wikipedia.org/wiki/Environmental%20statistics | Environment statistics is the application of statistical methods to environmental science. It covers procedures for dealing with questions concerning the natural environment in its undisturbed state, the interaction of humanity with the environment, and urban environments. The field of environmental statistics has seen rapid growth in the past few decades as a response to increasing concern over the environment in the public, organizational, and governmental sectors.
The United Nations' Framework for the Development of Environment Statistics (FDES) defines the scope of environment statistics as follows:
The scope of environment statistics covers biophysical aspects of the environment and those aspects of the socioeconomic system that directly influence and interact with the environment.
The scope of environment, social and economic statistics overlap. It is not easy – or necessary – to draw a clear line dividing these areas. Social and economic statistics that describe processes or activities with a direct impact on, or direct interaction with, the environment are used widely in environment statistics. They are within the scope of the FDES.
Uses
Statistical analysis is essential to the field of environmental sciences, allowing researchers to gain an understanding of environmental issues through researching and developing potential solutions to the issues they study. The applications of statistical methods to environmental sciences are numerous and varied. Environmental statistics are used in many fields including; health and safety organizations, standard bodies, research institutes, water and river authorities, meteorological organizations, fisheries, protection agencies, and in risk, pollution, regulation and control concerns.
Environmental statistics is especially pertinent and widely used in the academic, governmental, regulatory, technological, and consulting industries.
Specific applications of statistical analysis within the field of environmental science include earthquake risk analysis, environmental policymaking, ecological sampling planning, and environmental forensics.
Within the scope of environmental statistics, there are two main categories of their uses.
Descriptive statistics is not used to make inferences about data, but simply to describe its characteristics.
Inferential statistics is used to make inferences about data, test hypotheses or make predictions.
Types of studies covered in environmental statistics include:
Baseline studies to document the present state of an environment to provide background in case of unknown changes in the future;
Targeted studies to describe the likely impact of changes being planned or of accidental occurrences;
Regular monitoring to attempt to detect changes in the environment.
Sources
Sources of data for environmental statistics are varied and include surveys related to human populations and the environment, records from agencies managing environmental resources, maps and images, equipment used to examine the environment, and research studies around the world. A primary component of the data is direct observation, although most environmental statistics use a variety of sources.
Methods
Methods of statistical analysis in environmental sciences are as numerous as its applications. While there is a basis for the methods used in other fields, many of these methods must be adapted to suit the needs or limitations of data in environmental science. Linear regression models, generalized linear models, and non-linear models are some methods of statistical analysis that are widely used within environmental science to study relationships between variables.
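As a minimal illustration of these two modes of analysis (a hypothetical sketch: the data and variable names are invented, and SciPy's linregress is just one of many suitable tools):

```python
# Descriptive summary plus an inferential linear-regression fit between
# two made-up environmental variables.
import numpy as np
from scipy import stats

temperature_c = np.array([12.1, 14.3, 15.0, 16.8, 18.2, 19.9, 21.5])
ozone_ppb     = np.array([28.0, 31.5, 33.1, 36.0, 39.2, 41.8, 45.3])

# Descriptive statistics: characterize the sample.
print("mean temperature:", temperature_c.mean())
print("std of ozone:", ozone_ppb.std(ddof=1))

# Inferential statistics: estimate the relationship and test its slope.
result = stats.linregress(temperature_c, ozone_ppb)
print(f"slope={result.slope:.2f} ppb/degC, r^2={result.rvalue**2:.3f}, "
      f"p-value={result.pvalue:.2g}")
```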
See also
Coordination of Information on the Environment
Environmental studies
Statistics
References
External links
https://www.oecd-ilibrary.org/environment/data/oecd-environment-statistics_env-data-en
https://unstats.un.org/unsd/envstats/qindicators.cshtml
http://www.jenvstat.org/
https://cfpub.epa.gov/si/si_public_record_report.cfm?dirEntryId=137230&Lab=NERL
https://web.ma.utexas.edu/users/mks/envstat.html
https://www.umass.edu/landeco/teaching/ecodata/schedule/statistics.pdf
https://unstats.un.org/unsd/environmentgl/default.asp
Applied statistics | Environmental statistics | [
"Mathematics"
] | 800 | [
"Applied mathematics",
"Applied statistics"
] |
1,047,828 | https://en.wikipedia.org/wiki/Meal%20powder | Meal powder is the fine dust left over when black powder (gunpowder) is corned and screened to separate it into different grain sizes. It is used extensively in various pyrotechnic procedures, usually to prime other compositions. It can also be used in many fireworks to add power and substantially increasing the height of the firework. The powder has occasionally been used as a synonym for Serpentine powder, which it physically resembles.
'Mill meal' powder is a mixture of potassium nitrate, charcoal and sulfur in the correct proportions (75% potassium nitrate:15% charcoal:10% sulfur) which has been ball-milled to mix it intimately. It is used in the same way as commercial meal powder or can be pressed and corned to produce true black powder.
Meal powder is made by mixing the ingredients by mass, rather than by volume. These ingredients are processed in a ball mill, basically a rotating drum containing non-sparking ceramic or lead balls. The more time the mixture spends in the mill, the more effective the powder will be. One main reason to ball mill, as opposed to using other mixing methods, is that milling presses the sulfur and potassium nitrate into the porous charcoal.
References
Pyrotechnic compositions
Gunpowder
de:Schwarzpulver#Mehlpulver | Meal powder | [
"Chemistry"
] | 255 | [
"Pyrotechnic compositions"
] |
1,047,865 | https://en.wikipedia.org/wiki/PWM%20rectifier | PWM rectifier (Pulse-width modulation rectifier) is an AC to DC power converter, that is implemented using forced commutated power electronic semiconductor switches. Conventional PWM converters are used for wind turbines that have a permanent-magnet alternator.
Today, insulated-gate bipolar transistors (IGBTs) are the typical switching devices. In contrast to diode bridge rectifiers, PWM rectifiers achieve bidirectional power flow; in frequency converters this property makes it possible to perform regenerative braking. PWM rectifiers are also used in distributed power generation applications, such as microturbines, fuel cells and windmills.
The major advantage of the pulse-width modulation technique is the reduction of higher-order harmonics. It also makes it possible to control the magnitude of the output voltage and to improve the power factor by forcing the switches to follow the input voltage waveform using a phase-locked loop (PLL), thus reducing the total harmonic distortion (THD).
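As an illustration of the underlying idea (a generic sine-triangle modulator, not a specific rectifier controller; the frequencies and modulation index are assumed values), the following sketch generates a PWM gating signal whose short-term duty cycle tracks a sinusoidal reference, which is why low-order harmonics are suppressed:

```python
# Sine-triangle PWM: the switch command is high whenever the sinusoidal
# reference exceeds a high-frequency triangular carrier.
import numpy as np

f_ref, f_carrier = 50.0, 2000.0      # assumed mains and switching frequencies
t = np.linspace(0, 0.04, 20000)      # two cycles of a 50 Hz reference

reference = 0.8 * np.sin(2 * np.pi * f_ref * t)   # modulation index 0.8

# Triangular carrier in [-1, 1], built from a sawtooth phase.
phase = (f_carrier * t) % 1.0
carrier = 4 * np.abs(phase - 0.5) - 1

gate = (reference > carrier).astype(float)        # PWM gating signal (0 or 1)

# Averaged over a carrier period, the duty cycle follows the reference.
print("mean duty cycle:", gate.mean())
```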
Types of PWM rectifiers
Warsaw rectifier (invented 1992)
Vienna rectifier (invented 1993)
References
Electronic circuits | PWM rectifier | [
"Engineering"
] | 242 | [
"Electronic engineering",
"Electronic circuits"
] |
1,047,866 | https://en.wikipedia.org/wiki/Green%20mix | Green mix is an early step in the manufacturing of black powder for explosives. It is a rough mixture of potassium nitrate, charcoal and sulfur in the correct proportions (75:15:10) for black powder, but is not milled, pressed or corned. It burns much more slowly than black powder, when it chooses to burn at all, can still explode if ignited in a confined place; the deflagration is usually characterized by short, uneven sizzling followed by relatively long periods of smoulder.
Usage
Green mix is merely an unfinished product and is not generally used itself in any pyrotechnic or projectile applications.
See also
Meal powder
References
External links
Pyroguide article on Greenmix
Pyrotechnic compositions
Gunpowder | Green mix | [
"Chemistry"
] | 153 | [
"Pyrotechnic compositions"
] |
1,047,942 | https://en.wikipedia.org/wiki/Muirhead%27s%20inequality | In mathematics, Muirhead's inequality, named after Robert Franklin Muirhead, also known as the "bunching" method, generalizes the inequality of arithmetic and geometric means.
Preliminary definitions
a-mean
For any real vector
define the "a-mean" [a] of positive real numbers x1, ..., xn by
where the sum extends over all permutations σ of { 1, ..., n }.
When the elements of a are nonnegative integers, the a-mean can be equivalently defined via the monomial symmetric polynomial ma(x1, ..., xn) as

[a] = (k1! k2! ⋯ kℓ! / n!) · ma(x1, ..., xn),
where ℓ is the number of distinct elements in a, and k1, ..., kℓ are their multiplicities.
Notice that the a-mean as defined above only has the usual properties of a mean (e.g., the mean of equal numbers is equal to them) if a1 + a2 + ⋯ + an = 1. In the general case, one can consider instead [a]^(1/(a1 + ⋯ + an)), which is called a Muirhead mean.
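A brute-force sketch of this definition (illustrative only: it enumerates all n! permutations, so it is practical only for small n, and the function name is ours):

```python
# Compute the a-mean [a] by averaging the monomial over all permutations
# of the variables, directly following the definition above.
from itertools import permutations
from math import factorial, prod

def a_mean(a, x):
    """[a] for exponent vector a and positive reals x."""
    n = len(x)
    total = sum(
        prod(xi ** ai for xi, ai in zip(perm, a))
        for perm in permutations(x)
    )
    return total / factorial(n)

# For a = (1, 0, ..., 0) this reduces to the arithmetic mean:
print(a_mean((1, 0, 0), (2.0, 3.0, 4.0)))        # 3.0
# For a = (1/n, ..., 1/n) it is the geometric mean:
print(a_mean((1/3, 1/3, 1/3), (2.0, 3.0, 4.0)))  # ~2.884
```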
Examples
For a = (1, 0, ..., 0), the a-mean is just the ordinary arithmetic mean of x1, ..., xn.
For a = (1/n, ..., 1/n), the a-mean is the geometric mean of x1, ..., xn.
For a = (x, 1 − x), the a-mean is the Heinz mean.
The Muirhead mean for a = (−1, 0, ..., 0) is the harmonic mean.
Doubly stochastic matrices
An n × n matrix P is doubly stochastic precisely if both P and its transpose PT are stochastic matrices. A stochastic matrix is a square matrix of nonnegative real entries in which the sum of the entries in each column is 1. Thus, a doubly stochastic matrix is a square matrix of nonnegative real entries in which the sum of the entries in each row and the sum of the entries in each column is 1.
Statement
Muirhead's inequality states that [a] ≤ [b] for all x such that xi > 0 for every i ∈ { 1, ..., n } if and only if there is some doubly stochastic matrix P for which a = Pb.
Furthermore, in that case we have [a] = [b] if and only if a = b or all xi are equal.
The latter condition can be expressed in several equivalent ways; one of them is given below.
The proof makes use of the fact that every doubly stochastic matrix is a weighted average of permutation matrices (Birkhoff-von Neumann theorem).
Another equivalent condition
Because of the symmetry of the sum, no generality is lost by sorting the exponents into decreasing order: a1 ≥ a2 ≥ ⋯ ≥ an and b1 ≥ b2 ≥ ⋯ ≥ bn.
Then the existence of a doubly stochastic matrix P for which a = Pb is equivalent to the following system of inequalities:

a1 ≤ b1
a1 + a2 ≤ b1 + b2
a1 + a2 + a3 ≤ b1 + b2 + b3
⋮
a1 + ⋯ + an−1 ≤ b1 + ⋯ + bn−1
a1 + ⋯ + an = b1 + ⋯ + bn
(The last one is an equality; the others are weak inequalities.)
The sequence b1, ..., bn is then said to majorize the sequence a1, ..., an.
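A direct sketch of this majorization test (the function name, tolerance, and example checks are choices made here for illustration):

```python
# b majorizes a when every prefix sum of the descending-sorted b
# dominates that of a, and the totals are equal.
def majorizes(b, a, tol=1e-12):
    b = sorted(b, reverse=True)
    a = sorted(a, reverse=True)
    sb = sa = 0.0
    for bi, ai in zip(b, a):
        sb += bi
        sa += ai
        if sa > sb + tol:
            return False
    return abs(sa - sb) <= tol  # totals must match

print(majorizes((2, 0), (1, 1)))        # True: x^2 + y^2 >= 2xy
print(majorizes((3, 0, 0), (1, 1, 1)))  # True: x^3 + y^3 + z^3 >= 3xyz
```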
Symmetric sum notation
It is convenient to use a special notation for the sums. Success in reducing an inequality to this form means that the only remaining condition to test is whether one exponent sequence majorizes the other.
This notation requires developing every permutation, producing an expression made of n! monomials; for instance:

∑sym x³y²z⁰ = x³y² + x³z² + y³x² + y³z² + z³x² + z³y²
Examples
Arithmetic-geometric mean inequality
Let

aA = (1, 0, ..., 0)

and

aG = (1/n, ..., 1/n).

We have that aA majorizes aG: every partial sum of aA dominates the corresponding partial sum of aG, with equal totals. Then

[aA] ≥ [aG],

which is

(x1 + x2 + ⋯ + xn)/n ≥ (x1 x2 ⋯ xn)^(1/n),

yielding the inequality.
Other examples
We seek to prove that x² + y² ≥ 2xy by using bunching (Muirhead's inequality).
We transform it into the symmetric-sum notation:

∑sym x²y⁰ ≥ ∑sym x¹y¹.
The sequence (2, 0) majorizes the sequence (1, 1), thus the inequality holds by bunching.
Similarly, we can prove the inequality

x³ + y³ + z³ ≥ 3xyz

by writing it using the symmetric-sum notation as

∑sym x³y⁰z⁰ ≥ ∑sym x¹y¹z¹,

which is the same as

2(x³ + y³ + z³) ≥ 6xyz.
Since the sequence (3, 0, 0) majorizes the sequence (1, 1, 1), the inequality holds by bunching.
See also
Inequality of arithmetic and geometric means
Doubly stochastic matrix
Maclaurin's inequality
Monomial symmetric polynomial
Newton's inequalities
Notes
References
Combinatorial Theory by John N. Guidi, based on lectures given by Gian-Carlo Rota in 1998, MIT Copy Technology Center, 2002.
Kiran Kedlaya, A < B (A less than B), a guide to solving inequalities
Hardy, G.H.; Littlewood, J.E.; Pólya, G. (1952), Inequalities, Cambridge Mathematical Library (2nd ed.), Cambridge: Cambridge University Press, Section 2.18, Theorem 45.
Inequalities
Means | Muirhead's inequality | [
"Physics",
"Mathematics"
] | 1,006 | [
"Means",
"Point (geometry)",
"Mathematical theorems",
"Mathematical analysis",
"Geometric centers",
"Binary relations",
"Mathematical relations",
"Inequalities (mathematics)",
"Mathematical problems",
"Symmetry"
] |
1,047,944 | https://en.wikipedia.org/wiki/Two-phase%20electric%20power | Two-phase electrical power was an early 20th-century polyphase alternating current electric power distribution system. Two circuits were used, with voltage phases differing by one-quarter of a cycle, 90°. Usually circuits used four wires, two for each phase. Less frequently, three wires were used, with a common wire with a larger-diameter conductor. Some early two-phase generators had two complete rotor and field assemblies, with windings physically offset to provide two-phase power. The generators at Niagara Falls installed in 1895 were the largest generators in the world at that time, and were two-phase machines. Three-phase systems eventually replaced the original two-phase power systems for power transmission and utilization. Active two-phase distribution systems remain in Center City Philadelphia, where many commercial buildings are permanently wired for two-phase, and in Hartford, Connecticut.
Comparison with single-phase power
The advantage of two-phase electrical power over single-phase was that it allowed for simple, self-starting electric motors. In the early days of electrical engineering, it was easier to analyze and design two-phase systems where the phases were completely separated. It was not until the invention of the method of symmetrical components in 1918 that polyphase power systems had a convenient mathematical tool for describing unbalanced load cases. The revolving magnetic field produced with a two-phase system allowed electric motors to provide torque from zero motor speed, which was not possible with a single-phase induction motor (without an additional starting means). Induction motors designed for two-phase operation use a similar winding configuration as capacitor start single-phase motors. However, in a two-phase induction motor, the impedances of the two windings are identical.
Two-phase circuits also have the advantage of constant combined power into an ideal load, whereas power in a single-phase circuit pulsates at twice the line frequency due to the zero crossings of voltage and current.
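A short check of the constant-power claim for a balanced, unity-power-factor load (standard circuit analysis, not from a cited source): with the two phase voltages and currents in quadrature,

p(t) = v1·i1 + v2·i2 = V·I·sin²(ωt) + V·I·sin²(ωt + 90°) = V·I·(sin²ωt + cos²ωt) = V·I,

which is constant in time, whereas each single-phase term alone pulsates at twice the line frequency.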
Comparison with three-phase power
Three-phase electric power requires less conductor mass for the same voltage and overall power, compared with a two-phase four-wire circuit of the same carrying capacity. It has replaced two-phase power for commercial distribution of electrical energy, but two-phase circuits are still found in certain control systems.
Two-phase circuits typically use two separate pairs of current-carrying conductors. Alternatively, three wires may be used, but the common conductor carries the vector sum of the phase currents, which requires a larger conductor. The vector sum of balanced three-phase currents, however, is zero, allowing for the neutral wires to be eliminated. In electrical power distribution, a requirement of only three conductors, rather than four, represented a considerable distribution-wire cost savings due to the expense of conductors and installation.
While both two-phase and three-phase circuits have a constant combined power for an ideal load, practical devices such as motors can suffer from power pulsations in two-phase systems. These power pulsations tend to cause increased mechanical noise in transformer and motor laminations due to magnetostriction and torsional vibration in generator and motor drive shafts.
Two-phase power can be derived from a three-phase source using two transformers in a Scott connection: One transformer primary is connected across two phases of the supply. The second transformer is connected to a center-tap of the first transformer, and is wound for 86.6% of the phase-to-phase voltage on the three-phase system. The secondaries of the transformers will have two phases 90 degrees apart in time, and a balanced two-phase load will be evenly balanced over the three supply phases.
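As a numerical sanity check of the 86.6% figure (a phasor sketch assuming 1.0 per-unit line-to-neutral voltages; not a transformer design):

```python
# Combining one phase-to-phase voltage with the voltage from the third
# phase to the A-B midpoint yields two outputs 90 degrees apart, with
# the teaser voltage at sqrt(3)/2 = 86.6% of the main voltage.
import cmath, math

va = cmath.rect(1.0, 0.0)
vb = cmath.rect(1.0, -2 * math.pi / 3)
vc = cmath.rect(1.0,  2 * math.pi / 3)

main   = va - vb             # primary across phases A and B
teaser = vc - (va + vb) / 2  # from phase C to the A-B midpoint

print(abs(main), math.degrees(cmath.phase(main)))      # ~1.732 at 30 deg
print(abs(teaser), math.degrees(cmath.phase(teaser)))  # ~1.5 at 120 deg
print("ratio:", abs(teaser) / abs(main))               # ~0.866

# Winding the teaser transformer for 86.6% of the phase-to-phase voltage
# therefore equalizes the two outputs, giving two phases in quadrature.
```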
See also
Polyphase system
Rotary converter
Single-phase electric power
Split-phase electric power
Three-phase electric power
References
Notes
Specific references
General references
Donald G. Fink and H. Wayne Beaty, Standard Handbook for Electrical Engineers, Eleventh Edition, McGraw-Hill, New York, 1978,
Edwin J. Houston and Arthur Kennelly, Recent Types of Dynamo-Electric Machinery, copyright American Technical Book Company 1897, published by P. F. Collier and Sons New York, 1902
Electric power
AC power | Two-phase electric power | [
"Physics",
"Engineering"
] | 841 | [
"Power (physics)",
"Electrical engineering",
"Electric power",
"Physical quantities"
] |
1,047,958 | https://en.wikipedia.org/wiki/Split-phase%20electric%20power | A split-phase or single-phase three-wire system is a type of single-phase electric power distribution. It is the alternating current (AC) equivalent of the original Edison Machine Works three-wire direct-current system. Its primary advantage is that, for a given capacity of a distribution system, it saves conductor material over a single-ended single-phase system.
The system is common in North America for residential and light commercial applications. Two 120 V AC lines are supplied to the premises, out of phase by 180 degrees with each other (when both are measured with respect to the neutral), along with a common neutral. The neutral conductor is connected to ground at the transformer center tap. Circuits for lighting and small appliance power outlets are connected between one line and neutral. High-demand applications, such as ovens, are often powered using 240 V AC circuits, connected between the two AC lines. These loads are either hard-wired or use outlets that are deliberately non-interchangeable with the 120 V outlets.
Other applications of a split-phase power system are used to reduce the electric shock hazard or to reduce electromagnetic noise.
Connections
A transformer supplying a three-wire distribution system has a single-phase input (primary) winding. The output (secondary) winding has a center tap connected to a grounded neutral; either end to center carries half the end-to-end voltage. In the phasor diagram of the output voltages, the two line-to-neutral phasors are directly opposed; since two opposed phasors do not define a unique direction of rotation for a revolving magnetic field, a split single-phase system is not a two-phase system.
In the United States and Canada, the practice originated with the DC distribution system developed by Thomas Edison. By connecting pairs of lamps or groups of lamps on the same circuit in series, and doubling the supply voltage, the size of conductors was reduced substantially. Connection of the junction point of each parallel branch of two series lamps to a common neutral, returned to the center tap of the supply voltage, stabilized the branch circuit voltages from changes when loads were switched on and off. The neutral conductor carried only the imbalance of current flowing from one group of loads to the other.
The line to neutral voltage is half the line-to-line voltage. Lighting and small appliances may be connected between a line wire and the neutral. Higher-power appliances, such as cooking equipment, space heating, water heaters, clothes dryers, air conditioners and electric vehicle charging equipment, are connected to the two line conductors. This means that, for the supply of the same amount of power, the current is halved. Smaller conductors may be used than would be needed if the appliances were designed to be supplied by the lower voltage.
If the load were guaranteed to be balanced (the same current drawn from each line), then the neutral conductor would not carry any current and the system would be equivalent to a single-ended system of twice the voltage with the line wires taking half the current. This would not need a neutral conductor at all, but would be impractical for varying loads; just connecting the groups in series would result in excessive voltage and brightness variation as lamps are switched on and off.
By connecting the two lamp groups to a neutral, intermediate in potential between the two live legs, any imbalance of the load will be supplied by a current in the neutral, giving substantially constant voltage across both groups. The total current carried in all three wires (including the neutral) will always be twice the supply current of the most heavily loaded half.
For short wiring runs limited by conductor current carrying capacity, this allows three half-sized conductors to be substituted for two full-sized ones, using 75% of the copper of an equivalent single-phase system.
Long wiring runs are limited by the permitted voltage drop limit in the conductors. Because the supply voltage is doubled, a balanced load can tolerate double the voltage drop, allowing quarter-sized conductors to be used; this uses 3/8 the copper of an equivalent single-phase system.
In practice, some intermediate value is chosen. For example, if the imbalance is limited to 25% of the total load (half of one half) rather than the absolute worst-case 50%, then conductors 3/8 of the single-phase size will guarantee the same maximum voltage drop, totalling 9/8 of one single-phase conductor, 56% of the copper of the two single-phase conductors.
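The copper figures above follow from simple arithmetic over relative conductor cross-sections; a small sketch, normalizing one conductor of the two-wire single-phase system to 1.0 (so the single-phase baseline totals 2.0):

```python
# Total copper of the three-wire system relative to the two-wire baseline.
def copper_ratio(conductor_size, n_wires=3, baseline=2.0):
    return n_wires * conductor_size / baseline

# Voltage-drop-limited run, perfectly balanced: quarter-size conductors.
print(copper_ratio(1 / 4))   # 0.375, the "3/8" figure above

# Allowing a 25% worst-case imbalance: conductors 3/8 of full size,
# totalling 9/8 of one baseline conductor.
print(copper_ratio(3 / 8))   # 0.5625, the "56%" figure above
```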
Balanced power
In a so-called balanced power system, sometimes called "technical power", an isolation transformer with a center tap is used to create a separate supply with conductors at balanced voltages with respect to ground. The purpose of a balanced power system is to minimize the noise coupled into sensitive equipment from the power supply.
Unlike a three-wire distribution system, the grounded neutral is not distributed to the loads; only line-to-line connections at 120 V are used. A balanced power system is used only for specialized distribution in audio and video production studios, sound and television broadcasting, and installations of sensitive scientific instruments.
The U.S. National Electrical Code provides rules for technical power installations. The systems are not to be used for general-purpose lighting or other equipment and may use special sockets to ensure that only approved equipment is connected to the system. Additionally, technical power systems pay special attention to the way the distribution system is grounded.
A risk of using a balanced power system in an installation that also uses "conventional" power in the same rooms is that a user may inadvertently interconnect the power systems together via an intermediate system of audio or video equipment, elements of which might be connected to different power systems. The chance of this happening may be reduced by appropriate labelling of the balanced power outlets and by the use of a type of power outlet socket for the balanced system that is physically different from that of the "conventional" power system to further differentiate them.
Applications
Europe
In Europe, three-phase 230/400 V is most commonly used. However, discontinued 130/225 V three-wire, two-phase systems, designated B1, are used to run old installations in small groups of houses when only two of the three high-voltage phase conductors are used. The phase shift between the two conductors is 120°, as in three-phase current, so the line-to-line voltage is 130 V × √3 ≈ 225 V. A three-phase final step-down transformer is then used: one house gets phases A and B, the next house gets phases B and C, and the third house gets phases A and C. Some installations, such as farms (especially those never subsequently upgraded to three-phase), may be supplied with both phases to the same consumer. Whilst usually metered through two chosen phases of a typical three-phase meter, these two phases will only ever be used individually, not, as in the USA, to provide a higher voltage. Nonetheless they help with situations where a single supply cannot provide enough power for an installation.
In the United Kingdom, electric tools and portable lighting at larger construction and demolition sites are governed by a British Standard, and where possible are recommended to be fed from a centre-tapped system with only 55 V between live conductors and earth (so-called CTE or centre-tap earth, or 55–0–55). This reduced low-voltage system is used with 110 V equipment. No neutral conductor is distributed. In high-hazard locations, additional double-pole RCD protection may be used. The intention is to reduce the shock hazard that may exist when using electrical equipment at a wet or outdoor construction site, and to eliminate the requirement for rapid automatic disconnection to prevent shocks during faults. Portable transformers that transform single-phase 240 V to this 110 V split-phase system are a common piece of construction equipment. Generator sets used for construction sites are equipped to supply it directly. However, a large farm may be given a 230–0–230 (nominal) supply.
An incidental benefit is that the filaments of 110 V incandescent lamps used on such systems are thicker and thus mechanically more rugged than those of 240 V lamps.
North America
This three-wire single-phase system is common in North America for residential and light commercial applications. Circuit breaker panels typically have two live (hot) wires and a neutral, connected at one point to the grounded center tap of a local transformer. Usually, one of the live wires is black and the other one red; the neutral wire is always white. Single-pole circuit breakers feed 120 V circuits from one of the buses within the panel, while two-pole circuit breakers feed 240 V circuits from both buses. 120 V circuits are the most common, and are used to power general-purpose outlets and most residential and light commercial direct-wired lighting circuits. 240 V circuits are used for high-demand applications, such as air conditioners, space heaters, electric stoves, electric clothes dryers, water heaters, and electric vehicle charge points. These use outlets that will not accept 120 V plugs.
Wiring regulations govern the application of split-phase circuits so that the shared neutral can be protected from excess current. A neutral wire can be shared only by two circuits fed from opposite lines of the supply system, using circuit breakers connected by a bar so that both trip simultaneously (NEC 210.4); this prevents power from feeding across circuits.
Railways
In the electric power supply system of railways in Sweden split-phase electric power is also used on some railways. The center tap is grounded and one pole is fed to an overhead wire section, while the other wire is used for another section.
Split-phase distribution is used on Amtrak's traction power system in the Northeast Corridor between New York and Boston. Two separate wires are run along the track, the contact wire for the locomotive and an electrically separate feeder wire. Each wire is fed with 25 kV with respect to ground, with 50 kV between them. Autotransformers along the track balance the loads between the contact and feeder wires, reducing resistive losses.
See also
Shared neutral
References
Electric power distribution
Electric motors
AC power | Split-phase electric power | [
"Technology",
"Engineering"
] | 2,047 | [
"Electrical engineering",
"Engines",
"Electric motors"
] |
1,048,019 | https://en.wikipedia.org/wiki/Metric%20typographic%20units | Metric typographic units have been devised and proposed several times to overcome the various traditional point systems. After the French Revolution of 1789 one popular proponent of a switch to metric was Didot, who had been able to standardise the continental European typographic measurement a few decades earlier. The conversion did not happen, though. The Didot point was metrically redefined as m (≈ 0.376 mm) in 1879 by Berthold.
The advent and success of desktop publishing (DTP) software and word processors for office use, coming mostly from the non-metric United States, sidestepped this metrication process in typography. DTP commonly uses the PostScript point, which is defined as 1⁄72 of an inch (0.3527 mm).
Metric Didot Point
With the introduction of phototypesetting in the 1970s, metric units were increasingly used in typography. The Didot point was redefined once again to 375 μm, exactly 3⁄8 mm.
Quart
Also in the 1970s, the new unit quart (quarter millimetre, abbreviated 'q') of 250 μm (1⁄4 mm) was devised. The German draft standard DIN 16507-2 has suggested that digital typography be specified using millimetres, with sizes in multiples of 0.250 mm. In some special cases where finer resolution is required, sizes in multiples of 0.100 or 0.050 mm (respectively 2.5 and 5 times finer steps) are permitted.
German graphic designer and typographer Otl Aicher (1922 – 1991) strongly advocated the use of the quart, and provided a suggested list of common sizes.
Note that Aicher's font sizes are based on the DIN standard then in development, which uses the H-height, whereas lead typesetting used the larger cap height. Some typographers have proposed using the x-height instead, because the perceived size of type depends more on the size of the lowercase letters.
Device resolutions
The resolution of computer screens is often given as a dot pitch in millimetres, whereas office printers are usually rated reciprocally in dots per inch ('dpi', 'd/in'). Phototypesetters have long used micrometres.
To convert a resolution R in dpi to a dot pitch in micrometres, the formula is pitch = 25400 / R. So for example 96 dpi translates to a pitch of about 265 μm.
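These conversions are easy to check with a short script; the sketch below simply encodes the definitions quoted in this article:

```python
# Conversions between the typographic units discussed above.
MM_PER_INCH = 25.4

def didot_point_mm() -> float:
    """Berthold's 1879 metric Didot point: 1/2660 m, expressed in mm."""
    return 1000 / 2660            # ≈ 0.376 mm

def postscript_point_mm() -> float:
    """DTP (PostScript) point: 1/72 inch, expressed in mm."""
    return MM_PER_INCH / 72       # ≈ 0.3528 mm

def dpi_to_um(dpi: float) -> float:
    """Dot pitch in micrometres for a resolution in dots per inch."""
    return 25400 / dpi

print(round(didot_point_mm(), 3))       # 0.376
print(round(postscript_point_mm(), 4))  # 0.3528
print(round(dpi_to_um(96)))             # 265
```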
The CSS3 media queries draft introduces the unit dots per centimetre (dpcm) for resolution.
See also
Typographic unit
Dots per centimeter, a metric unit of dot density in printing, video or images, proposed to replace dots per inch (DPI).
Pixels per centimeter, a metric unit of pixel density proposed to replace pixels per inch (PPI).
Himetric, a resolution independent, digital, metric, measurement unit
References
External links
Metric typographic units
Typographic measurement
Typography
Units of length | Metric typographic units | [
"Mathematics"
] | 614 | [
"Quantity",
"Units of measurement",
"Units of length"
] |
1,048,049 | https://en.wikipedia.org/wiki/Bastion%20host | A bastion host is a special-purpose computer on a network specifically designed and configured to withstand attacks, so named by analogy to the bastion, a military fortification. The computer generally hosts a single application or process, for example, a proxy server or load balancer, and all other services are removed or limited to reduce the threat to the computer. It is hardened in this manner primarily due to its location and purpose, which is either on the outside of a firewall or inside of a demilitarized zone (DMZ) and usually involves access from untrusted networks or computers. These computers are also equipped with special networking interfaces to withstand high-bandwidth attacks through the internet.
Definitions
The term is generally attributed to a 1990 article discussing firewalls by Marcus J. Ranum, who defined a bastion host as "a system identified by the firewall administrator as a critical strong point in the network security. Generally, bastion hosts will have some degree of extra attention paid to their security, may undergo regular audits, and may have modified software".
It has also been described as "any computer that is fully exposed to attack by being on the public side of the DMZ, unprotected by a firewall or filtering router. Firewalls and routers, anything that provides perimeter access control security can be considered bastion hosts. Other types of bastion hosts can include web, mail, DNS, and FTP servers. Due to their exposure, a great deal of effort must be put into designing and configuring bastion hosts to minimize the chances of penetration".
Placement
There are two common network configurations for the placement of bastion hosts. The first requires two firewalls, with bastion hosts sitting in a DMZ between the first, "outside world" firewall and an inside firewall. Smaller networks often do not have multiple firewalls; if only one firewall exists in a network, bastion hosts are commonly placed outside the firewall.
Use cases
Though securing remote access is the main use case of a bastion host (a minimal sketch of that pattern follows the list below), there are a few more use cases, such as:
Authentication gateway
VPN alternative
Alternative to internal admin tools
Alternative to file transfers
Alternative way to share resource credentials
Intrusion detection
Software inventory management
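The remote-access pattern mentioned above is often called a jump host: an administrator's SSH session terminates on the bastion, which then opens a tunnelled connection to a machine on the internal network. The sketch below shows one way to do this with the third-party paramiko library; the hostnames, addresses, and username are hypothetical:

```python
import paramiko

# Connect to the bastion, the only machine reachable from outside.
bastion = paramiko.SSHClient()
bastion.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only
bastion.connect("bastion.example.com", username="admin")

# Ask the bastion to open a TCP channel to the internal host's SSH port.
channel = bastion.get_transport().open_channel(
    "direct-tcpip", dest_addr=("10.0.0.5", 22), src_addr=("127.0.0.1", 0)
)

# SSH to the internal host through that channel; it is never exposed directly.
target = paramiko.SSHClient()
target.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # demo only
target.connect("10.0.0.5", username="admin", sock=channel)
_, stdout, _ = target.exec_command("hostname")
print(stdout.read().decode())
target.close()
bastion.close()
```

OpenSSH achieves the same effect declaratively with the ProxyJump option (ssh -J bastion internal-host).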
Examples
These are several examples of bastion host systems/services:
DNS (Domain Name System) server
Email server
FTP (File Transfer Protocol) server
Honeypot
Proxy server
VPN (virtual private network) server
Web server
See also
Jump server
References
Internet Protocol based network software
Computer network security | Bastion host | [
"Engineering"
] | 518 | [
"Cybersecurity engineering",
"Computer networks engineering",
"Computer network security"
] |
1,048,084 | https://en.wikipedia.org/wiki/Sunwise | Sunwise, sunward or deasil (sometimes spelled deosil) are terms meaning to go clockwise or in the direction of the sun, as seen from the northern hemisphere. The opposite term is widdershins (Lowland Scots), or tuathal (Scottish Gaelic). In Scottish culture, turning sunwise is considered auspicious, while counter-clockwise motion is considered unlucky.
Irish culture
During the days of Gaelic Ireland and of the Irish clans, the Psalter known as the Cathach of St. Columba was used as both a rallying cry and protector in battle by the Chiefs of Clan O'Donnell. Before a battle it was customary for a chosen monk or holy man (usually attached to Clan McGroarty, and in a state of grace) to wear the Cathach and the cumdach, or book shrine, around his neck and then walk three times sunwise around the warriors of Clan O'Donnell.
According to folklorist Kevin Danaher, on St. John's Eve in Ulster and Connaught, it was customary to light a bonfire at sunset and to walk sunwise around the fire while praying the rosary. Those who could not afford a rosary would keep tally by holding a small pebble during each prayer and throwing it into the bonfire as each prayer was completed.
Similar praying of the rosary or other similar prayers while walking sunwise around Christian pilgrimage shrines or holy wells is also traditional in Irish culture during pattern days.
Scottish culture
This is descriptive of the ceremony observed by the druids of walking round their temples by the south, following the course of the sun, always keeping the temple on their right. This course (diasil or deiseal) was deemed propitious, while the contrary course was perceived as fatal, or at least unpropitious. From this ancient superstition are derived several Gaelic customs which were still observed around the turn of the twentieth century, such as drinking over the left thumb, as Toland expresses it, or according to the course of the sun.
Similarly to the pre-battle use of the Cathach of St. Columba in Gaelic Ireland, the Brecbannoch of St Columba, a reliquary containing the partial human remains of the Saint, was traditionally carried three times sunwise around Scottish armies before they gave battle. The most famous example of this was during the Scottish Wars of Independence, shortly before the Scots under Robert the Bruce faced the English army at the Battle of Bannockburn in 1314.
Martin Martin describes the custom in his A Description of the Western Islands of Scotland (1703).
The use of the sunwise circle was also traditional in the Highlands during Christian pilgrimages in honour of St Máel Ruba, particularly to the shrine where he is said to have established a hermitage upon Isle Maree.
"Deosil" and other spellings
Wicca uses the spelling deosil, which violates the Gaelic orthography principle that a consonant must be surrounded by either broad vowels (a, o, u) or slender vowels (e, i). The Oxford English Dictionary gives precedence to the spelling "deasil", which violates the same principle, but acknowledges "deiseal", "deisal", and "deisul" as well.
Other cultures
This distinction exists in traditional Tibetan religion. Tibetan Buddhists go round their shrines sunwise, but followers of Bonpo go widdershins. The former consider Bonpo to be merely a perversion of their practice, but Bonpo adherents claim that their religion, as the indigenous one of Tibet, was doing this prior to the arrival of Buddhism in the country.
The Hindu pradakshina, the auspicious circumambulation of a temple, is also made clockwise.
See also
Circumambulation
References
Sources
Catholic Church in Ireland
History of Catholicism in Scotland
Irish folklore
Irish mythology
Roman Catholic pilgrimage sites in Ireland
Scottish folklore
Tibet
Orientation (geometry) | Sunwise | [
"Physics",
"Mathematics"
] | 802 | [
"Topology",
"Space",
"Geometry",
"Spacetime",
"Orientation (geometry)"
] |
1,048,266 | https://en.wikipedia.org/wiki/Loo%20of%20the%20Year%20Awards | The Loo of the Year Awards are run to celebrate the best public toilets in the United Kingdom, and promote high standards.
The awards competition receives sponsorship from a number of companies involved in providing products and services to washroom providers.
History
First introduced in 1987, the Loo of the Year Awards competition has run annually, except for 1993.
Eligibility criteria
Any type of public facility ('away from home' toilet) can be nominated for consideration. There are sixty-four award categories. Eligible facilities can be located in England, Scotland, Wales or Ireland.
Anyone can submit a nomination—staff, customers, visitors, managers, owners or contractors—but owners or managers must authorise entries.
Judging criteria
After an unannounced inspection from an authorised Loo of the Year Awards Inspector, nominees are graded on a scale from Bronze to Diamond. Grades are awarded based on 101 criteria, and are judged on both male and female facilities, as well as any baby changing and accessible facilities provided. The criteria include cleanliness, decor, signage, accessibility and customer care.
Awards
Awards are given in one of sixty-three categories. An entrant receiving a Diamond or possibly a Platinum grading will be considered for one of a number of major national awards in each of England, Scotland, Wales and Ireland:
National Category Winners (up to sixty-three categories in each country)
National Accessible Toilet Winner
National (Baby) Changing Facilities Winner
Public Toilet Entries – National Winner
Individual Category Entries – National Winner
Changing Places Toilet Entries – National Winner
Space to Change Toilet Entries – National Winner
Eco Friendly Toilet – National Winner
Local Authority Award – National Winner
Toilets in Education – National Winner
Hygiene Room – National Winner
Other awards include:
Individual UK Loo of the Year Trophy The winner of this award is selected from one of the eight national winners in the Public Toilet Entries and the Individual Category Entries categories above.
UK Corporate Provider Trophy The winner of this award is selected from organizations or authorities entering ten or more different locations and winning five or more five-star awards.
Champions' League – Standards of Excellence Awards This award is presented to organizations or local authorities winning five or more five-star awards and who, in the inspectors' opinion, are maintaining a consistently high standard of management in all their Loo of the Year Awards entries.
Local Authority Premier League Membership in the League is granted to the top twenty local authority public toilet providers.
Washroom Technician of the Year Awards These "people awards" are open to all full-time technicians, as well as employed washroom technicians and retained washroom technician contractors. Separate Washroom Technician of the Year Award Certificates are awarded in England, Scotland, Wales and Northern Ireland with Commended and National Winners for each country.
In addition, the overall UK winner (either an individual, in-house washroom technician teams or external washroom technician contractor teams) is awarded the UK Washroom Technician of The Year Trophy.
Overall "Loo of the Year" trophy winners
The winner of this category is presented with a Golden Loo Seat Trophy.
See also
The Good Loo Guide
References
External links
Loo of the Year Award
British awards
Design awards
Public toilets
1987 establishments in the United Kingdom
Awards established in 1987
Annual events in the United Kingdom | Loo of the Year Awards | [
"Engineering"
] | 645 | [
"Design",
"Design awards"
] |
1,048,371 | https://en.wikipedia.org/wiki/Edward%20Jenner%20Institute%20for%20Vaccine%20Research | The Edward Jenner Institute for Vaccine Research (EJIVR) was an independent research institute named after Edward Jenner, the inventor of vaccination. It was co-located with the Compton Laboratory of the Institute for Animal Health on a campus in the village of Compton in Berkshire, England. After occupying temporary laboratory space at the Institute for Animal Health from 1996, the Institute moved to a newly completed laboratory building in 1998. Funding of the Institute continued until October 2005 when it was closed.
Jenner Institute
A successor institute, formed by a partnership between the University of Oxford and the UK Institute for Animal Health, was established in November 2005. This Jenner Institute is headquartered in Oxford on the Old Road Campus and is supported by a specific charity, the Jenner Vaccine Foundation.
References
Research institutes established in 1996
Research institutes disestablished in 2005
Research institutes in Berkshire
Former research institutes
Medical research institutes in the United Kingdom
Vaccination-related organizations | Edward Jenner Institute for Vaccine Research | [
"Biology"
] | 190 | [
"Vaccination-related organizations",
"Vaccination"
] |
1,048,389 | https://en.wikipedia.org/wiki/Pirbright%20Institute | The Pirbright Institute (formerly the Institute for Animal Health) is a research institute in Surrey, England, dedicated to the study of infectious diseases of farm animals. It forms part of the UK government's Biotechnology and Biological Sciences Research Council (BBSRC). The institute employs scientists, vets, PhD students, and operations staff.
History
It began in 1914 to test cows for tuberculosis. More buildings were added in 1925. Compton was established by the Agricultural Research Council in 1937. Pirbright became a research institute in 1939 and Compton in 1942. The Houghton Poultry Research Station at Houghton, Cambridgeshire was established in 1948. In 1963 Pirbright became the Animal Virus Research Institute and Compton became the Institute for Research on Animal Diseases. The Neuropathogenesis Unit (NPU) was established in Edinburgh in 1981. This became part of the Roslin Institute in 2007.
In 1987, Compton, Houghton and Pirbright became the Institute for Animal Health, being funded by the BBSRC. Houghton closed in 1992, and operations at Compton ended in 2015.
The Edward Jenner Institute for Vaccine Research was sited at Compton until October 2005, when it merged with the vaccine programmes of the University of Oxford and the Institute for Animal Health.
The Pirbright site was implicated in the 2007 United Kingdom foot-and-mouth outbreak, with the Health and Safety Executive (HSE) concluding that a local case of the disease was a result of contaminated effluent release either from the Pirbright Institute or the neighbouring Merial Animal Health laboratory.
Significant investment (over £170 million) took place at Pirbright with the development of new world-class laboratory and animal facilities. The institute has been known as "The Pirbright Institute" since October 2012.
On 14 June 2019 the largest stock of the rinderpest virus was destroyed at the Pirbright Institute.
Directors of note
John Burns Brooksby, director from 1964 until 1980
Structure
The work previously carried out at Compton has either moved out to the university sector, ended, or been transferred to the Pirbright site. The Compton site currently carries out work on endemic (commonplace) animal diseases, including some avian viruses and a small amount of bovine immunology, whilst Pirbright works on exotic (unusual) animal diseases (usually caused by virus outbreaks). Pirbright hosts national and international reference laboratories for several diseases. It is a biosafety level 4 laboratory (commonly referred to as "P4" or BSL-4).
Funding
25% of its income comes from a core grant from the BBSRC of around £11 million. Around 50% comes from research grants from related government organisations, such as DEFRA, or industry and charities (such as the Wellcome Trust). The remaining 25% comes from direct payments for work carried out.
The Bill & Melinda Gates Foundation has provided funding to the institute for research into veterinary infectious diseases and universal flu vaccine development.
Function
The Pirbright Institute carries out research, diagnostics and surveillance of viruses carried predominantly by farm animals, such as foot-and-mouth disease virus (FMDV), African swine fever, bluetongue, lumpy skin disease and avian and swine flu. Understanding of viruses comes from molecular biology.
It carries out surveillance activities on farm animal health and disease movement in the UK.
Services
Arthropod supplies
Diagnostics & Surveillance
Disinfectant testing
Flow cytometry & cell sorting
Products – Includes positive sera, inactivated antigens, diagnostic kits, viral cultures and live midges.
Training courses
Location
The institute had two sites:
Compton in Berkshire – closed in August 2015 with services relocated to new facilities at Pirbright.
Pirbright in Surrey – shared with commercial company Merial
See also
2007 United Kingdom foot-and-mouth outbreak
World Organisation for Animal Health
Bluetongue disease
Veterinary Laboratories Agency (now part of the Animal Health and Veterinary Laboratories Agency)
Animal Health (now part of the Animal Health and Veterinary Laboratories Agency)
Animal Health and Veterinary Laboratories Agency (an Executive Agency of the Department of Environment, Food and Rural Affairs)
References
External links
Agricultural research institutes in the United Kingdom
Agricultural organisations based in England
Animal health in England
Animal research institutes
Animal virology
Biotechnology in the United Kingdom
Biotechnology organizations
Genetics or genomics research institutions
Medical research institutes in the United Kingdom
Microbiology institutes
Research institutes established in 1987
Research institutes in Berkshire
Research institutes in Surrey
Veterinary research institutes
1987 establishments in England
Veterinary medicine in England | Pirbright Institute | [
"Engineering",
"Biology"
] | 906 | [
"Biotechnology organizations",
"Biotechnology in the United Kingdom",
"Biotechnology by country"
] |
1,048,445 | https://en.wikipedia.org/wiki/Jacob%20ben%20Machir%20ibn%20Tibbon | Jacob ben Machir ibn Tibbon (), of the Ibn Tibbon family, also known as Prophatius, was a Jewish astronomer; born, probably at Marseilles, about 1236; died at Montpellier about 1304. He was a grandson of Samuel ben Judah ibn Tibbon. His Provençal name was Don Profiat Tibbon; the Latin writers called him Profatius Judæus. Jacob occupies a considerable place in the history of astronomy in the Middle Ages. His works, translated into Latin, were quoted by Copernicus, Reinhold, and Clavius. He was also highly reputed as a physician, and, according to Jean Astruc ("Mémoires pour Servir à l'Histoire de la Faculté de Médecine de Montpellier," p. 168), Ibn Tibbon was regent of the faculty of medicine of Montpellier.
In the controversy between the Maimonists and the anti-Maimonists, Jacob defended science against the attacks of Abba Mari and his party; the energetic attitude of the community of Montpellier on that occasion was due to his influence.
Works
Jacob became known by a series of Hebrew translations of Arabic scientific and philosophical works, and above all by two original works on astronomy. His translations are:
the Elements of Euclid, divided into fifteen chapters
the treatise of Qusta ibn Luqa on the armillary sphere, in sixty-five chapters
Sefer ha-Mattanot, the Data of Euclid
Autolycus' On the Moving Sphere
Theodosius' Spherics
Menelaus' Spherics
Ma'amar bi-Tekunah, or Sefer 'al Tekunah, in forty-four chapters
a treatise on the use of the astrolabe
compendium of the Almagest of Ptolemy
Iggeret ha-Ma'aseh be-Luaḥ ha-Niḳra Sofiḥah
preface to Abraham bar Ḥiyya's astronomical work
an extract from the Almagest on the arc of a circle
"Ḳiẓẓur mi-Kol Meleket Higgayon," Averroes' compendium of the Organon (Riva di Trento, 1559)
Averroes' paraphrase of books xi–xix of Aristotle's history of animals
Mozene ha-'Iyyunim, falsely attributed to Ghazali, including also a large part deriving from the Encyclopedia of the Brethren of Purity (Rasā’il Ikhwān al-Ṣafā’)
The two original works of Jacob are:
a description of the astronomical instrument called the quadrant (Bibliothèque Nationale, Paris, MS. No. 1054), in sixteen chapters, the last of which shows how to construct this instrument. This was translated several times into Latin (once by Armengaud Blaise)
astronomical tables, beginning with 1 March 1300 (Munich MS. No. 343, 26). These tables were translated into Latin and enjoyed great repute.
See also
Hachmei Provence
Ibn Tibbon, a family list
Jacob's staff
References
External links
1230s births
1300s deaths
Year of birth uncertain
Year of death uncertain
13th-century French Jews
13th-century French scientists
Arabic–Hebrew translators
Medieval French astronomers
Medieval Jewish astronomers
Provençal Jews
Jewish astronomers
13th-century French mathematicians | Jacob ben Machir ibn Tibbon | [
"Astronomy"
] | 686 | [
"Astronomers",
"Jewish astronomers"
] |
1,048,452 | https://en.wikipedia.org/wiki/Flash%20powder | Flash powder is a pyrotechnic composition, a mixture of oxidizer and metallic fuel, which burns quickly (deflagrates) and produces a loud noise regardless of confinement. It is widely used in theatrical pyrotechnics and fireworks (namely salutes, e.g., cherry bombs, M-80s, firecrackers, and cap gun shots) and was once used for flashes in photography.
Different varieties of flash powder are made from different compositions; most common are potassium perchlorate and aluminium powder. Sometimes, sulfur is included in the mixture to increase the sensitivity. Early formulations used potassium chlorate instead of potassium perchlorate.
Flash powder compositions are also used in military pyrotechnics when production of large amount of noise, light, or infrared radiation is required, e.g., missile decoy flares and stun grenades.
History
Lycopodium powder is a yellow-tan dust-like powder historically used as a flash powder. Today, the principal use of the powder is to create flashes or flames that are large and impressive but relatively easy to manage safely in magic acts and for cinema and theatrical special effects.
Mixtures
Normally, flash powder mixtures are compounded to achieve a particular purpose. These mixtures range from extremely fast-burning mixtures designed to produce a maximum audio report, to mixtures designed to burn slowly and provide large amounts of illumination, to mixtures that were formerly used in photography.
Aluminium and chlorate
The combination of aluminium powder and potassium chlorate is unstable, and a poor choice for flash powder that is to be stored for more than a very short period. For that reason, it has been largely replaced by the potassium perchlorate mixtures. Chlorate mixes are still used when cost is the overriding concern because potassium chlorate is less expensive than perchlorate.
The simplest is a two-component chlorate mix, although this is rarely used.
KClO3 + 2Al → Al2O3 + KCl
The composition is approximately 70% KClO3 : 30% Al by weight for the reactants of the above stoichiometrically balanced equation.
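The quoted mass percentages follow from the balanced equation and the molar masses of the reactants. A minimal sketch of the arithmetic (atomic masses rounded to two decimal places; not taken from any pyrotechnic reference):

```python
# Mass fractions of reactants in a stoichiometrically balanced mixture.
ATOMIC_MASS = {"K": 39.10, "Cl": 35.45, "O": 16.00, "Al": 26.98,
               "N": 14.01, "S": 32.06, "Mg": 24.31}

def molar_mass(formula: dict) -> float:
    """Molar mass (g/mol) of a species given as {element: atom count}."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

def mass_fractions(reactants: dict) -> dict:
    """Mass % of each reactant, given {name: (formula, moles)}."""
    masses = {name: molar_mass(f) * n for name, (f, n) in reactants.items()}
    total = sum(masses.values())
    return {name: round(100 * m / total, 1) for name, m in masses.items()}

# KClO3 + 2 Al -> Al2O3 + KCl
print(mass_fractions({"KClO3": ({"K": 1, "Cl": 1, "O": 3}, 1),
                      "Al":    ({"Al": 1},                 2)}))
# {'KClO3': 69.4, 'Al': 30.6} -- the ~70:30 ratio quoted above

# 3 KClO4 + 8 Al -> 3 KCl + 4 Al2O3 (see "Aluminium and perchlorate" below)
print(mass_fractions({"KClO4": ({"K": 1, "Cl": 1, "O": 4}, 3),
                      "Al":    ({"Al": 1},                 8)}))
# {'KClO4': 65.8, 'Al': 34.2}
```

The same function reproduces, to within rounding, the other compositions quoted later in this section.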
It is considered critically important to exclude sulfur and any acidic components from these mixtures. Sulfur oxidises and absorbs moisture to produce sulfuric and thionic acids; any acid in the mixture makes it unstable. Sometimes a few percent of bicarbonate or carbonate buffer is added to the mixture to ensure the absence of acidic impurities.
Sulfur is deliberately added as a third component to this mixture in order to reduce the activation energy. However, this reintroduces the problem of acid production and instability, so these mixtures are generally considered too unstable to be stored and must be mixed immediately before use. Antimony trisulfide may be used as an alternative, and is more stable in storage.
Potassium nitrate, aluminium and sulfur
This composition, usually in a ratio of 5 parts potassium nitrate to 3 parts aluminum powder to 2 parts sulfur, is especially popular with hobbyists. It is not very quick-burning unless exceptionally fine ingredients are used. Although it incorporates sulfur, it is in fact fairly stable, sustaining multiple hits from a hammer onto a hard surface. Adding 2% of its weight in boric acid is reputed to significantly increase stability and shelf life, through resistance to dampening by ambient humidity. Other ratios such as 6 KNO3/3 Al/1 S and 5 KNO3/2 Al/3 S also exist and work. All ratios have similar burn times and strength, although 5 KNO3/3 Al/2 S seems to be dominant.
2 KNO3 + 4 Al + S → K2S + N2 + 2 Al2O3
The composition is approximately 59% KNO3 : 31.6% Al : 9.4% S by weight for the reactants of the above stoichiometrically balanced equation.
For best results, "German Dark" aluminum should be used, with air float sulfur, and finely ball milled pure potassium nitrate. The finished mixture should never be ball milled together.
Aluminium and perchlorate
Aluminium powder and potassium perchlorate are the only two components of the pyrotechnic industry standard flash powder. It provides a great balance of stability and power, and is the composition used in most commercial exploding fireworks.
The balanced equation for the reaction is:-
3 KClO4 + 8 Al → 3 KCl + 4 Al2O3
The stoichiometric ratio is 34.2% aluminum and 65.8% perchlorate by mass.
A ratio of seven parts potassium perchlorate to three parts dark pyro aluminium is the composition used by most pyrotechnicians.
For best results, the aluminium powder should be "Dark Pyro" grade, with a flake particle shape, and a particle size of fewer than 10 micrometres. The KClO4 should be in powder form, free from clumps. It can be sieved through a screen, if necessary, to remove any clumps prior to use. The particle size of the perchlorate is not as critical as that of the aluminium component, as much less energy is required to decompose the KClO4 than is needed to melt the aluminium into the liquid state required for the reaction.
Although this composition is fairly insensitive, it should be treated with care and respect. Hobbyist pyrotechnicians usually use a method called diapering, in which the materials are poured separately onto a large piece of paper, which is then alternately lifted at each corner to roll the composition over itself and mix the components. Some amateur pyrotechnicians choose to mix the composition by shaking in a closed paper container, as this is much quicker and more effective than diapering. Another method is to put the components into the final device, so that handling the device mixes the flash powder. Paper/cardboard is chosen over other materials, such as plastic, as a result of its favorable triboelectric properties.
Large quantities should never be mixed in a single batch, as they are difficult to handle safely and can put bystanders at risk. In the event of accidental ignition, debris from a multiple-pound flash powder explosion can be thrown hundreds of feet with sufficient force to kill or injure. (Note: 3 grams of mixture is enough to explode in open air without constraint other than air pressure.)
No matter the quantity, care must always be taken to prevent any electrostatic discharge or friction during mixing or handling, as these may cause accidental ignition.
Magnesium and nitrate
Another flash composition common among amateurs consists of magnesium powder and potassium nitrate. Other metal nitrates have been used, including barium and strontium nitrates. Compositions using nitrate and magnesium metal have been used as photographic flash powders almost since the invention of photography. Potassium nitrate/magnesium flash powder should be mixed and used immediately and not stored due to its tendency of self-ignition.
If magnesium is not a very fine powder, it can be passivated with linseed oil or potassium dichromate. The passivated magnesium flash powder is stable and generally safe to store.
2 KNO3 + 5 Mg → K2O + N2 + 5 MgO
The composition is 62.4% KNO3 : 37.6% Mg by weight for the reactants of the above stoichiometrically balanced equation. Below is the same reaction but involving barium nitrate.
Ba(NO3)2 + 5 Mg → BaO + N2 + 5 MgO
Mixtures designed to make reports are substantially different from mixtures designed for illumination. A stoichiometric ratio of three parts KNO3 to two parts Mg is close to ideal and provides the most rapid burn. The magnesium powder should be smaller than 200 mesh, though up to 100 mesh will work. The potassium nitrate should be impalpable dust. This mixture is popular in amateur pyrotechnics because it is insensitive and relatively safe as such things go.
For photographic use, mixtures containing magnesium and nitrates are made much more fuel rich. The excess magnesium is volatilized by the reaction and burns in air providing additional light. In addition, the higher concentration of fuel results in a slower burn, providing more of a "poof" and less of a "bang" when ignited. A formula from 1917 specifies 5 parts of magnesium to 6 parts of barium nitrate for a stoichiometry of nine parts fuel to one part oxidizer. Modern recreations of photographic flash powders may avoid the use of barium salts because of their toxic nature. A mixture of five parts 80 mesh magnesium to one part of potassium nitrate provides a good white flash without being too violent. Fuel rich flash powders are also used in theatrical flash pots.
Magnesium based compositions degrade over long periods, meaning the metallic Mg will slowly react with atmospheric oxygen and moisture. In military pyrotechnics involving magnesium fuels, external oxygen can be excluded by using hermetically sealed canisters. Commercial photographic flash powders are sold as two-part mixtures, to be combined immediately before use.
Magnesium and PTFE
A flash composition designed specifically to generate flares that are exceptionally bright in the infrared portion of the spectrum use a mixture of pyro-grade magnesium and powdered polytetrafluoroethylene. These flares are used as decoys from aircraft that might be subject to heat-seeking missile fire.
2n Mg + (C2F4)n → 2n MgF2 (s) + 2n C (s)
Antimony trisulfide and chlorate
This mixture, and similar mixtures sometimes containing pyro aluminium have been used since the early 1900s for small "Black Cat" style paper firecrackers. Its extremely low cost makes it popular among manufacturers of low-grade fireworks in China. Like all mixtures containing chlorates, it is extremely sensitive to friction, impact and electrostatic discharge, and is considered unsafe in pyrotechnic devices that contain more than a few tens of milligrams of the mixture.
3 KClO3 + Sb2S3 → Sb2O3 + 3 SO2 + 3 KCl
This mixture is not highly energetic, and in at least some parts of the United States, firecrackers containing 50 mg or less of this mixture are legal as consumer fireworks.
Safety and handling
Flash powders, even within their intended uses, often release explosive force of deadly capacity. Nearly all widely used flash powder mixtures are sensitive to shock, friction and electrostatic discharge. In certain mixtures, it is not uncommon for this sensitivity to spontaneously change over time, due to changes in the environment, or to other unknowable factors in either the original manufacturing or in real-world storage. Additionally, accidental contaminants such as strong acids or sulfur compounds can sensitise them even more. Because flash powder mixtures are so easy to initiate, there is potentially a high risk of accidental explosions which can inflict severe blast/fragmentation injuries, e.g. blindness, explosive amputation, permanent maiming, or disfigurement. Fatalities have occurred. The various flash powder compositions should therefore not be handled by anyone who is unfamiliar with their properties, or the handling techniques required to maintain safety. Flash powder and flash powder devices pose exceptionally high risks to children, who typically cannot understand the danger and may be less adept with safe handling techniques. As a result, children tend to suffer more severe injuries than adults.
Flash powders—especially those that use chlorate—are often highly sensitive to friction, heat/flame and static electricity. A spark of as little as 0.1–10 millijoules can set off certain mixtures. Certain formulations prominent in the underground press contain both sulfur and potassium chlorate. These mixtures are especially shock and friction sensitive and in many applications should be considered unpredictable. Modern pyrotechnic practices call for never using sulfur in a mix containing chlorate salts.
Some flash powder formulations (those that use single-digit micrometre flake aluminium powder or fine magnesium powder as their fuel) can self-confine and explode in small quantities. This makes flash powder dangerous to handle, as it can cause severe hearing damage and amputation injury even when sitting in the open. Self-confinement occurs when the mass of the pile provides sufficient inertia to allow high pressure to build within it as the mixture reacts. This is referred to as 'inertial confinement', and it is not to be confused with a detonation.
Flash powder of any formulation should not be mixed in large quantities by amateur pyrotechnicians. Beginners should start with sub-gram quantities, and refrain from making large devices. Flash powder should only be made at the site at which it will be used. Additionally, the mixture should be made immediately before use. Once mixed, laws governing transportation, storage, use, and possession, including illegal-device and "firearms" statutes (some carrying felony penalties), may come into effect that do not apply to the unmixed or pre-assembled components.
See also
Pyrotechnic initiator
Sprengel explosive
Thermite
Black powder
Lycopodium powder
References
Explosives
Pyrotechnic compositions
Powders | Flash powder | [
"Physics",
"Chemistry"
] | 2,743 | [
"Pyrotechnic compositions",
"Materials",
"Powders",
"Explosives",
"Explosions",
"Matter"
] |
1,048,518 | https://en.wikipedia.org/wiki/Supramolecular%20chemistry | Supramolecular chemistry refers to the branch of chemistry concerning chemical systems composed of a discrete number of molecules. The strength of the forces responsible for spatial organization of the system range from weak intermolecular forces, electrostatic charge, or hydrogen bonding to strong covalent bonding, provided that the electronic coupling strength remains small relative to the energy parameters of the component. While traditional chemistry concentrates on the covalent bond, supramolecular chemistry examines the weaker and reversible non-covalent interactions between molecules. These forces include hydrogen bonding, metal coordination, hydrophobic forces, van der Waals forces, pi–pi interactions and electrostatic effects.
Important concepts advanced by supramolecular chemistry include molecular self-assembly, molecular folding, molecular recognition, host–guest chemistry, mechanically-interlocked molecular architectures, and dynamic covalent chemistry. The study of non-covalent interactions is crucial to understanding many biological processes that rely on these forces for structure and function. Biological systems are often the inspiration for supramolecular research.
History
The existence of intermolecular forces was first postulated by Johannes Diderik van der Waals in 1873. However, Nobel laureate Hermann Emil Fischer developed supramolecular chemistry's philosophical roots. In 1894, Fischer suggested that enzyme–substrate interactions take the form of a "lock and key", the fundamental principles of molecular recognition and host–guest chemistry. In the early twentieth century non-covalent bonds were understood in gradually more detail, with the hydrogen bond being described by Latimer and Rodebush in 1920.
With the deeper understanding of the non-covalent interactions, for example, the clear elucidation of DNA structure, chemists started to emphasize the importance of non-covalent interactions. In 1967, Charles J. Pedersen discovered crown ethers, which are ring-like structures capable of chelating certain metal ions. Then, in 1969, Jean-Marie Lehn discovered a class of molecules similar to crown ethers, called cryptands. After that, Donald J. Cram synthesized many variations of crown ethers, as well as separate molecules capable of selective interaction with certain chemicals. The three scientists were awarded the Nobel Prize in Chemistry in 1987 for "development and use of molecules with structure-specific interactions of high selectivity". In 2016, Bernard L. Feringa, Sir J. Fraser Stoddart, and Jean-Pierre Sauvage were awarded the Nobel Prize in Chemistry "for the design and synthesis of molecular machines".
The term supermolecule (or supramolecule) was introduced by Karl Lothar Wolf et al. (Übermoleküle) in 1937 to describe hydrogen-bonded acetic acid dimers. The term supermolecule is also used in biochemistry to describe complexes of biomolecules, such as peptides and oligonucleotides composed of multiple strands.
Eventually, chemists applied these concepts to synthetic systems. One breakthrough came in the 1960s with the synthesis of the crown ethers by Charles J. Pedersen. Following this work, other researchers such as Donald J. Cram, Jean-Marie Lehn and Fritz Vögtle reported a variety of three-dimensional receptors, and throughout the 1980s research in the area gathered a rapid pace with concepts such as mechanically interlocked molecular architectures emerging.
The influence of supramolecular chemistry was established by the 1987 Nobel Prize for Chemistry which was awarded to Donald J. Cram, Jean-Marie Lehn, and Charles J. Pedersen in recognition of their work in this area. The development of selective "host–guest" complexes in particular, in which a host molecule recognizes and selectively binds a certain guest, was cited as an important contribution.
Concepts
Molecular self-assembly
Molecular self-assembly is the construction of systems without guidance or management from an outside source (other than to provide a suitable environment). The molecules are directed to assemble through non-covalent interactions. Self-assembly may be subdivided into intermolecular self-assembly (to form a supramolecular assembly), and intramolecular self-assembly (or folding as demonstrated by foldamers and polypeptides). Molecular self-assembly also allows the construction of larger structures such as micelles, membranes, vesicles, liquid crystals, and is important to crystal engineering.
Molecular recognition and complexation
Molecular recognition is the specific binding of a guest molecule to a complementary host molecule to form a host–guest complex. Often, the definition of which species is the "host" and which is the "guest" is arbitrary. The molecules are able to identify each other using non-covalent interactions. Key applications of this field are the construction of molecular sensors and catalysis.
Template-directed synthesis
Molecular recognition and self-assembly may be used with reactive species in order to pre-organize a system for a chemical reaction (to form one or more covalent bonds). It may be considered a special case of supramolecular catalysis. Non-covalent bonds between the reactants and a "template" hold the reactive sites of the reactants close together, facilitating the desired chemistry. This technique is particularly useful for situations where the desired reaction conformation is thermodynamically or kinetically unlikely, such as in the preparation of large macrocycles. This pre-organization also serves purposes such as minimizing side reactions, lowering the activation energy of the reaction, and producing desired stereochemistry. After the reaction has taken place, the template may remain in place, be forcibly removed, or may be "automatically" decomplexed on account of the different recognition properties of the reaction product. The template may be as simple as a single metal ion or may be extremely complex.
Mechanically interlocked molecular architectures
Mechanically interlocked molecular architectures consist of molecules that are linked only as a consequence of their topology. Some non-covalent interactions may exist between the different components (often those that were used in the construction of the system), but covalent bonds do not. Supramolecular chemistry, and template-directed synthesis in particular, is key to the efficient synthesis of the compounds. Examples of mechanically interlocked molecular architectures include catenanes, rotaxanes, molecular knots, molecular Borromean rings and ravels.
Dynamic covalent chemistry
In dynamic covalent chemistry covalent bonds are broken and formed in a reversible reaction under thermodynamic control. While covalent bonds are key to the process, the system is directed by non-covalent forces to form the lowest energy structures.
Biomimetics
Many synthetic supramolecular systems are designed to copy functions of biological systems. These biomimetic architectures can be used to learn about both the biological model and the synthetic implementation. Examples include photoelectrochemical systems, catalytic systems, protein design and self-replication.
Imprinting
Molecular imprinting describes a process by which a host is constructed from small molecules using a suitable molecular species as a template. After construction, the template is removed leaving only the host. The template for host construction may be subtly different from the guest that the finished host binds to. In its simplest form, imprinting uses only steric interactions, but more complex systems also incorporate hydrogen bonding and other interactions to improve binding strength and specificity.
Molecular machinery
Molecular machines are molecules or molecular assemblies that can perform functions such as linear or rotational movement, switching, and entrapment. These devices exist at the boundary between supramolecular chemistry and nanotechnology, and prototypes have been demonstrated using supramolecular concepts. Jean-Pierre Sauvage, Sir J. Fraser Stoddart and Bernard L. Feringa shared the 2016 Nobel Prize in Chemistry for the 'design and synthesis of molecular machines'.
Building blocks
Supramolecular systems are rarely designed from first principles. Rather, chemists have a range of well-studied structural and functional building blocks that they are able to use to build up larger functional architectures. Many of these exist as whole families of similar units, from which the analog with the exact desired properties can be chosen.
Synthetic recognition motifs
The pi-pi charge-transfer interactions of bipyridinium with dioxyarenes or diaminoarenes have been used extensively for the construction of mechanically interlocked systems and in crystal engineering.
The use of crown ether binding with metal or ammonium cations is ubiquitous in supramolecular chemistry.
The formation of carboxylic acid dimers and other simple hydrogen bonding interactions.
The complexation of bipyridines or terpyridines with ruthenium, silver or other metal ions is of great utility in the construction of complex architectures of many individual molecules.
The complexation of porphyrins or phthalocyanines around metal ions gives access to catalytic, photochemical and electrochemical properties in addition to the complexation itself. These units are used a great deal by nature.
Macrocycles
Macrocycles are very useful in supramolecular chemistry, as they provide whole cavities that can completely surround guest molecules and may be chemically modified to fine-tune their properties.
Cyclodextrins, calixarenes, cucurbiturils and crown ethers are readily synthesized in large quantities, and are therefore convenient for use in supramolecular systems.
More complex cyclophanes, and cryptands can be synthesised to provide more tailored recognition properties.
Supramolecular metallocycles are macrocyclic aggregates with metal ions in the ring, often formed from angular and linear modules. Common metallocycle shapes in these types of applications include triangles, squares, and pentagons, each bearing functional groups that connect the pieces via "self-assembly."
Metallacrowns are metallomacrocycles generated via a similar self-assembly approach from fused chelate-rings.
Structural units
Many supramolecular systems require their components to have suitable spacing and conformations relative to each other, and therefore easily employed structural units are required.
Commonly used spacers and connecting groups include polyether chains, biphenyls and triphenyls, and simple alkyl chains. The chemistry for creating and connecting these units is very well understood.
Nanoparticles, nanorods, fullerenes and dendrimers offer nanometer-sized structure and encapsulation units.
Surfaces can be used as scaffolds for the construction of complex systems and also for interfacing electrochemical systems with electrodes. Regular surfaces can be used for the construction of self-assembled monolayers and multilayers.
The understanding of intermolecular interactions in solids has undergone a major renaissance via inputs from different experimental and computational methods in the last decade. This includes high-pressure studies in solids and "in situ" crystallization of compounds which are liquids at room temperature along with the use of electron density analysis, crystal structure prediction and DFT calculations in solid state to enable a quantitative understanding of the nature, energetics and topological properties associated with such interactions in crystals.
Photo-chemically and electro-chemically active units
Porphyrins, and phthalocyanines have highly tunable photochemical and electrochemical activity as well as the potential to form complexes.
Photochromic and photoisomerizable groups can change their shapes and properties, including binding properties, upon exposure to light.
Tetrathiafulvalene (TTF) and quinones have multiple stable oxidation states, and therefore can be used in redox reactions and electrochemistry.
Other units, such as benzidine derivatives, viologens, and fullerenes, are useful in supramolecular electrochemical devices.
Biologically-derived units
The extremely strong complexation between avidin and biotin is instrumental in blood clotting, and has been used as the recognition motif to construct synthetic systems.
The binding of enzymes with their cofactors has been used as a route to produce modified enzymes, electrically contacted enzymes, and even photoswitchable enzymes.
DNA has been used both as a structural and as a functional unit in synthetic supramolecular systems.
Applications
Materials technology
Supramolecular chemistry has found many applications, in particular molecular self-assembly processes have been applied to the development of new materials. Large structures can be readily accessed using bottom-up synthesis as they are composed of small molecules requiring fewer steps to synthesize. Thus most of the bottom-up approaches to nanotechnology are based on supramolecular chemistry. Many smart materials are based on molecular recognition.
Catalysis
A major application of supramolecular chemistry is the design and understanding of catalysts and catalysis. Non-covalent interactions influence the binding reactants.
Medicine
Design based on supramolecular chemistry has led to numerous applications in the creation of functional biomaterials and therapeutics. Supramolecular biomaterials afford a number of modular and generalizable platforms with tunable mechanical, chemical and biological properties. These include systems based on supramolecular assembly of peptides, host–guest macrocycles, high-affinity hydrogen bonding, and metal–ligand interactions.
A supramolecular approach has been used extensively to create artificial ion channels for the transport of sodium and potassium ions into and out of cells.
Supramolecular chemistry is also important to the development of new pharmaceutical therapies by understanding the interactions at a drug binding site. The area of drug delivery has also made critical advances as a result of supramolecular chemistry providing encapsulation and targeted release mechanisms. In addition, supramolecular systems have been designed to disrupt protein–protein interactions that are important to cellular function.
Data storage and processing
Supramolecular chemistry has been used to demonstrate computation functions on a molecular scale. In many cases, photonic or chemical signals have been used in these components, but electrical interfacing of these units has also been shown by supramolecular signal transduction devices. Data storage has been accomplished by the use of molecular switches with photochromic and photoisomerizable units, by electrochromic and redox-switchable units, and even by molecular motion. Synthetic molecular logic gates have been demonstrated on a conceptual level. Even full-scale computations have been achieved by semi-synthetic DNA computers.
See also
Organic chemistry
Nanotechnology
Reading
References
External links
2D and 3D Models of Dodecahedrane and Cuneane Assemblies
Supramolecular Chemistry and Supramolecular Chemistry II – Thematic Series in the Open Access Beilstein Journal of Organic Chemistry
Chemistry | Supramolecular chemistry | [
"Chemistry",
"Materials_science"
] | 3,037 | [
"Nanotechnology",
"nan",
"Supramolecular chemistry"
] |
1,048,678 | https://en.wikipedia.org/wiki/Glued%20laminated%20timber | Glued laminated timber, commonly referred to as glulam, is a type of structural engineered wood product constituted by layers of dimensional lumber bonded together with durable, moisture-resistant structural adhesives so that all of the grain runs parallel to the longitudinal axis. In North America, the material providing the laminations is termed laminating stock or lamstock.
History
The principles of glulam construction are believed to date back to the 1860s, in the assembly room of King Edward VI College, a school in Southampton, England. The first patent however emerged in 1901 when Otto Karl Freidrich Hetzer, a carpenter from Weimar, Germany, patented this method of construction. Approved in Switzerland, Hetzer's patent explored creating a straight beam out of several laminations glued together. In 1906 he received a patent in Germany for curved sections of glulam. Other countries in Europe soon began approving patents and by 1922, glulam had been used in 14 countries.
The technology was first brought to the United States by Max Hanisch Sr., who had been associated with the Hetzer firm in 1906 before emigrating to the United States in 1923. With no financial backing, it was not until 1934 that Hanisch was able to first use glulam in the United States. The project, a school and community gym in Peshtigo, Wisconsin, took time to get started, as manufacturers were hard to find, but eventually the Thompson Brothers Boat Manufacturing Company took it on. The Wisconsin Industrial Commission, however, rejected the arches, as it had no previous experience working with glulam. A compromise was reached in which the arches could be used in conjunction with bolts, lags, metal strapping, and angles to reinforce the structure. Though the reinforcements were unnecessary, ground finally broke in late 1934 on a building featuring four spans of three-hinged clear-span arches. The partnership for this project led to the creation of Unit Structures Inc., a construction firm for glulam owned by both the Hanisch and Thompson families.
In 1936, Unit Structures patented both the forming equipment used to produce glulam arches and the glulam arches themselves. A second project, this time for the Forest Products Laboratory (FPL), gave Unit Structures the opportunity to prove the strength and stiffness of glulam members to architects and engineers. Full-scale load tests, conducted by placing sandbags on the roof, showed the structure exceeded the design specifications by 50%. The measured deflections were also in favor of the system. While the results took some time to get published, the test enabled Unit Structures to continue building with glulam. At this time, I-sections featuring plywood webs and glulam flanges became popular in Europe while rectangular sections became the norm in America. The I-sections saved on lumber, which was beneficial in Europe where lumber costs were high, but they were more labor intensive, which was expensive in the States. The glulam system piqued the interest of firms on the west coast, and many began to engage with it.
In 1942, the introduction of a fully water-resistant phenol-resorcinol adhesive enabled glulam to be used in exposed exterior environments without concern of glue line degradation, expanding its applicable market. During the midst of World War II, glulam construction became more widespread as steel was needed for the war effort. In 1952, leading fabricators of engineered and solid wood joined forces to create the American Institute of Timber Construction (AITC) to help standardize the industry and promote its use. The first U.S. manufacturing standard for glulam was published by the Department of Commerce in 1963. Since then, glulam manufacturing has spread within the United States and into Canada and has been used for other structures, such as bridges, as well. It is currently standardized under ANSI Standard A190.1.
Manufacturing
The manufacturing of glulam is typically broken down into four steps: drying and grading the lumber, joining the lumber to form longer laminations, gluing the layers, and finishing and fabrication. The lumber used to produce glulam may come to the manufacturers pre-dried. A hand-held or in-line moisture meter is used to check the moisture level. Each piece of lumber going into the manufacturing process should have a moisture content between 8% and 14%, in accordance with the adhesive used. Lumber above this threshold is redried.
Knots on the ends of the dried lumber are trimmed. Lumber is then grouped based on grade. To create lengths of glulam longer than those typically available for sawn lumber, the lumber must be end-jointed. The most common joint for this is a finger joint, cut on either end with special cutter heads. A structural resin, typically RF-curing melamine formaldehyde (MF) or PF resin, is applied to the joint between successive boards and cured under end pressure using a continuous RF curing system. After the resins have cured, the lumber is cut to length and planed on each side to ensure smooth surfaces for gluing.
Once planed, a glue extruder spreads the resin onto the lumber. This resin is most often phenol-resorcinol-formaldehyde, but PF resin or melamine-urea-formaldehyde (MUF) resin can also be used. For straight beams, the resinated lumber is stacked in a specific lay-up pattern in a clamping bed where a mechanical or hydraulic system presses the layers together. For curved beams, the lumber is instead stacked in a curved form. These beams are cured at room temperature for 5 to 16 hours before the pressure is released. Combining pressure with RF curing can reduce the time needed for curing.
The wide faces of the beams are sanded or planed to remove resin that was squeezed out between the boards. The narrow top and bottom faces may also be sanded if necessary to achieve the desired appearance. Corners are often rounded as well. Specifications for appearance may require additional finishing such as filling knot holes with putty, finer sanding, and applying sealers, finishes, or primers.
Technological developments
Resin glues
When glued laminated timber was introduced as a building material in the early twentieth century, casein glues (which are waterproof but have lower shear strength) were widely used. Joints with casein glues had detachment failures due to inherent stresses in the wood. Cold-curing synthetic resin glues were invented in 1928. "Kaurit" and other urea-formaldehyde resin glues are inexpensive, easy to use, waterproof and enable high adhesive strength. The development of resin glues contributed to the wide use of glued laminated timber construction. A technique also now exists for gluing green wood (of high moisture content) to fabricate such laminated products.
Finger joints
The use of finger joints with glulam allowed for production of glulam beams and columns on large scale. Glulam finger joints provide a large surface area for gluing. Automatic finger-jointing machines cut the pointed joints, connect and glue them together under pressure, allowing for a strong, durable joint, capable of carrying high loads comparable to natural wood with the same cross-section.
Computer numerical control
Computer numerical control (CNC) allows glued laminated timber to be cut into unusual shapes with a high degree of precision. CNC machine tools can utilize up to five axes, which enables undercutting and hollowing-out processes. Cost-effective CNC machines carve the material using mechanical tools, such as routers.
Advantages
Advantages to using glulam in construction:
Size and shape - By laminating a number of smaller pieces of lumber into a single large structural member, the dimensions of glulam members are limited only by transport and handling rather than by the size of a tree, as sawn lumber is. This also enables the use of smaller trees harvested from second-growth forests and plantations rather than relying on old-growth forests. Glulam can also be manufactured in a variety of shapes, so it offers architects artistic freedom without sacrificing structural requirements.
Versatility - Because the size and shape of glulam members can be so variable, they are able to be used as both beams and columns.
Strength and stiffness - Glulam has a higher strength-to-weight ratio than both concrete and steel. Glulam also reduces the impact that defects in the wood have on the strength of the member, making it stronger than sawn lumber as well. Glulam has also been shown to have a higher resistance to lateral-torsional buckling than steel.
Environmentally friendly - Glulam has much lower embodied energy than reinforced concrete and steel because the laminating process allows the timber to be used for much longer spans, heavier loads, and more complex shapes than reinforced concrete or steel. The embodied energy to produce it is one sixth that of steel of comparable strength. Also, as glulam is a wood product, it naturally sequesters carbon, keeping it from being released into the atmosphere. As long as the wood used to manufacture the glulam members comes from a sustainably managed forest, glulam is a renewable resource.
Fire safety - While glulam is inherently flammable because it is made of wood, if it catches fire, a char layer forms that protects the interior of the member and thus maintains its strength for some time.
Disadvantages
Material cost - Glulam may be more costly than concrete at high axial loads, though this depends on location and the availability/abundance of either material. While glulam beams may be cheaper than HEA steel beams in some cases, the difference is not significant.
Moisture - Glulam, especially when used for bridge projects, is susceptible to changes in moisture which can impact its strength. The bending strength of glulam exposed to a number of wet/dry cycles can decrease dramatically (by 43.5% in one study).
Dimensions - Compared to steel and reinforced concrete, glulam generally requires larger members to support the same load. The cross-sectional area and height of glulam members are significantly greater than those of steel. Compared to concrete, glulam columns will be smaller for small axial loads, but once large axial forces come into play, concrete columns have a smaller cross-sectional area.
Biodegradation - As a wood product, glulam is subject to concern regarding biodegradation. In regions with higher risk, measures to protect the glulam need to be taken.
Applications
Sport structures
Large stadium roofs are a common application for wide-span glulam beams. Advantages are the light weight of the material and the ability to furnish long lengths and large cross-sections. Prefabrication is invariably employed and the structural engineer needs to specify methods for delivery and erection of the large members at an early stage in the design.
The PostFinance Arena is an example of a wide-span sports stadium roof using glulam arches reaching up to 85 metres. The structure was built in Bern in 1967, and has subsequently been refurbished and extended. Eastern Kentucky University's Alumni Coliseum was built in 1963 with what were then the world's largest glued laminated arches.
The roof of the Richmond Olympic Oval, built for speed skating events at the 2010 Winter Olympic Games in Vancouver, British Columbia, features one of the world's largest clearspan wooden structures. The roof includes 2,400 cubic metres of Douglas fir lamstock lumber in glulam beams. A total of 34 yellow cedar glulam posts support the overhangs where the roof extends beyond the walls.
Anaheim Ice rink in Anaheim, California was built in 1995 by Disney Development Company and architect Frank Gehry using large double-curved yellow pine glulam beams.
Bridges
Glulam has been used for pedestrian, forest, highway, and railway bridges.
Pressure-treated glulam timbers or timbers manufactured from naturally durable wood species are well suited for creating bridges and waterfront structures. Wood is naturally resistant to corrosion by salt used for de-icing roadways.
One North American glulam bridge is Keystone Wye in the Black Hills of South Dakota, constructed in 1967. The da Vinci Bridge in Norway, completed in 2001, is almost completely constructed with glulam. The Kingsway Pedestrian Bridge in Burnaby, British Columbia, Canada, is constructed of cast-in-place concrete for the support piers, structural steel and glulam for the arch, a post tensioned precast concrete walking deck, and stainless steel support rods connecting the arch to the walking deck.
Religious buildings
Glulam is used for the construction of multi-use facilities such as churches, school buildings, and libraries. The Cathedral of Christ the Light in Oakland, California, is one such example and uses glulam to enhance the ecological and aesthetic effect. It was built as the replacement of the Cathedral of Saint Francis de Sales, which became unusable after the Loma Prieta earthquake in 1989. The building's frame is a glued-laminated-timber and steel-rod skeleton covered with a glass skin. Compared with the conventional mode of construction using a steel or reinforced-concrete moment frame, this glulam-and-steel combination is regarded as an advanced way to achieve both economy and aesthetics in construction.
As an alternative to newly felled oak trees, glued laminated timber was proposed as the structural material for the replacement spire of Notre-Dame de Paris, destroyed by fire in 2019.
Public buildings
Glulam is used extensively in public facilities due to its ability to span large spaces without the need for intermediate supports. This quality is particularly beneficial in creating open, airy interiors that are both functional and visually striking. One notable application of glulam is in the construction of wedding chapels, such as the Lokameru Sunsetfalls in Salatiga, Indonesia. The use of glulam in these structures provides several advantages:
Aesthetic Appeal: Glulam offers a warm, natural look that enhances the romantic and serene atmosphere of a wedding chapel. The exposed wooden beams can be crafted into elegant arches or intricate patterns, adding to the visual interest of the space.
Structural Strength: Glulam's high strength-to-weight ratio allows for the creation of large, open spaces free of columns or other supports that could obstruct views. This is especially important in wedding chapels, where an unobstructed view of the ceremony is desirable.
Versatility in Design: Glulam can be shaped into various forms, including curves and angles that traditional solid wood might not easily achieve. This versatility allows architects to design unique and iconic wedding chapels that stand out for their architectural beauty.
Sustainability: Glulam is a sustainable building material, often sourced from sustainably managed forests. Its use in wedding chapels aligns with the growing trend toward environmentally conscious construction practices.
Other
In 2019, the world's tallest glulam structure was Mjøstårnet, an 18-story mixed-use building in Brumunddal, Norway. In 2022, the Ascent MKE building in Milwaukee, Wisconsin, surpassed it with 26 stories, measuring over 86 meters tall.
The roof of the Centre Pompidou-Metz museum in France is composed of sixteen kilometers of glued laminated timber intersecting to form hexagonal units. With a surface area of 8,000 m2, the irregular geometry of the roof, featuring various curves and counter-curves, resembles a Chinese hat.
Failures
In 2005, researchers at Lund University, Sweden, found a number of failures of glulam structures in Scandinavian countries. They concluded that construction faults or design errors were responsible.
In January 2002 the roof of the Siemens velodrome arena in Copenhagen collapsed when a joint between glulam trusses failed at the point of its dowel fastenings.
In February 2003 the roof of a newly built exhibition hall in Jyväskylä, Finland, collapsed. It was found that during construction the specified number of dowels at joints between glulam timbers were missing or had been wrongly placed.
The collapse of the Perkolo bridge in Sjoa, Norway, in 2016 was caused by a design miscalculation of stresses at joints. Following this incident thirteen road bridges of glulam construction were checked, with only minor faults found.
On 15 August 2022 Tretten Bridge in Gudbrandsdalen, Norway, collapsed as two vehicles were crossing. It was of glulam and steel construction and had been erected in 2012, with a design life of "at least 100 years". The cause of the failure was not immediately apparent, although during a 2016 inspection one joint had been found to have dowels that were too short.
See also
Fiberboard
I joist
Masonite
Oriented strand board
Parallam
Particle board
References
External links
Glulam Beam Repair/Reinforcement – An article (Printed in STRUCTURE magazine, Sep. 2006) by Gary W. Gray P.E. and Paul C. Gilham P.E.
Timber Engineering Europe Glulam
Canadian Wood Council Glulam
Composite materials
Engineered wood
Timber framing | Glued laminated timber | [
"Physics",
"Technology"
] | 3,521 | [
"Timber framing",
"Composite materials",
"Structural system",
"Materials",
"Matter"
] |
1,048,680 | https://en.wikipedia.org/wiki/Algebraic%20normal%20form | In Boolean algebra, the algebraic normal form (ANF), ring sum normal form (RSNF or RNF), Zhegalkin normal form, or Reed–Muller expansion is a way of writing propositional logic formulas in one of three subforms:
The entire formula is purely true or false: 1 (true) or 0 (false)
One or more variables are combined into a term by AND (∧), then one or more terms are combined by XOR (⊕) together into ANF. Negations are not permitted: a ⊕ b ⊕ ab
The previous subform with a purely true term: 1 ⊕ a ⊕ b ⊕ ab
Formulas written in ANF are also known as Zhegalkin polynomials and Positive Polarity (or Parity) Reed–Muller expressions (PPRM).
Common uses
ANF is a canonical form, which means that two logically equivalent formulas will convert to the same ANF, easily showing whether two formulas are equivalent for automated theorem proving. Unlike other normal forms, it can be represented as a simple list of lists of variable names—conjunctive and disjunctive normal forms also require recording whether each variable is negated or not. Negation normal form is unsuitable for determining equivalence, since on negation normal forms, equivalence does not imply equality: a ∨ ¬a is not reduced to the same thing as 1, even though they are logically equivalent.
Putting a formula into ANF also makes it easy to identify linear functions (used, for example, in linear-feedback shift registers): a linear function is one that is a sum of single literals. Properties of nonlinear-feedback shift registers can also be deduced from certain properties of the feedback function in ANF.
Performing operations within algebraic normal form
There are straightforward ways to perform the standard Boolean operations on ANF inputs in order to get ANF results.
XOR (logical exclusive disjunction) is performed directly:
(1 ⊕ x) ⊕ (1 ⊕ x ⊕ y)
1 ⊕ x ⊕ 1 ⊕ x ⊕ y
1 ⊕ 1 ⊕ x ⊕ x ⊕ y
y
NOT (logical negation) is XORing 1:
¬(1 ⊕ x ⊕ y)
1 ⊕ (1 ⊕ x ⊕ y)
1 ⊕ 1 ⊕ x ⊕ y
x ⊕ y
AND (logical conjunction) is distributed algebraically
(1 ⊕ x)(1 ⊕ x ⊕ y)
1(1 ⊕ x ⊕ y) ⊕ x(1 ⊕ x ⊕ y)
(1 ⊕ x ⊕ y) ⊕ (x ⊕ x ⊕ xy)
1 ⊕ x ⊕ x ⊕ x ⊕ y ⊕ xy
1 ⊕ x ⊕ y ⊕ xy
OR (logical disjunction) uses either 1 ⊕ (1 ⊕ a)(1 ⊕ b) (easier when both operands have purely true terms) or a ⊕ b ⊕ ab (easier otherwise):
(1 ⊕ x) + (1 ⊕ x ⊕ y)
1 ⊕ (1 ⊕ (1 ⊕ x))(1 ⊕ (1 ⊕ x ⊕ y))
1 ⊕ x(x ⊕ y)
1 ⊕ x ⊕ xy
Converting to algebraic normal form
Each variable in a formula is already in pure ANF, so one only needs to perform the formula's Boolean operations as shown above to get the entire formula into ANF. For example:
x + (y ⋅ ¬z)
x + (y(1 ⊕ z))
x + (y ⊕ yz)
x ⊕ (y ⊕ yz) ⊕ x(y ⊕ yz)
x ⊕ y ⊕ xy ⊕ yz ⊕ xyz
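These manipulations are mechanical enough to automate. Below is a minimal Python sketch (not part of the original article): an ANF formula is represented as a set of terms, each term a frozenset of variable names, with the empty frozenset standing for the purely true term 1. The helper names (xor_anf, not_anf, and_anf, or_anf, show) are illustrative, not an established API.

```python
# Illustrative sketch: ANF formulas as sets of terms; each term is a
# frozenset of variable names, and the empty frozenset is the constant 1.
from itertools import product

ONE = frozenset()  # the purely true term

def xor_anf(f, g):
    # XOR is symmetric difference: duplicate terms cancel, since t ⊕ t = 0.
    return f ^ g

def not_anf(f):
    # Negation is XORing with 1.
    return xor_anf(f, {ONE})

def and_anf(f, g):
    # Distribute every term over every term; x·x = x, and duplicates cancel.
    result = set()
    for s, t in product(f, g):
        result ^= {s | t}
    return result

def or_anf(f, g):
    # a + b = a ⊕ b ⊕ ab
    return xor_anf(xor_anf(f, g), and_anf(f, g))

def show(f):
    names = ["".join(sorted(t)) or "1" for t in f]
    return " ⊕ ".join(sorted(names, key=lambda s: (s != "1", len(s), s))) or "0"

# The worked examples above, with operands (1 ⊕ x) and (1 ⊕ x ⊕ y):
a = {ONE, frozenset("x")}
b = {ONE, frozenset("x"), frozenset("y")}
print(show(xor_anf(a, b)))  # y
print(show(not_anf(b)))     # x ⊕ y
print(show(and_anf(a, b)))  # 1 ⊕ x ⊕ y ⊕ xy
print(show(or_anf(a, b)))   # 1 ⊕ x ⊕ xy

# The conversion example: x + (y ⋅ ¬z)
x, y, z = {frozenset("x")}, {frozenset("y")}, {frozenset("z")}
print(show(or_anf(x, and_anf(y, not_anf(z)))))  # x ⊕ y ⊕ xy ⊕ yz ⊕ xyz
```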
Formal representation
ANF is sometimes described in an equivalent way:
f(x₁, x₂, …, xₙ) = a₀ ⊕ a₁x₁ ⊕ a₂x₂ ⊕ ⋯ ⊕ aₙxₙ ⊕ a₁,₂x₁x₂ ⊕ ⋯ ⊕ aₙ₋₁,ₙxₙ₋₁xₙ ⊕ ⋯ ⊕ a₁,₂,…,ₙx₁x₂⋯xₙ
where the set of coefficients a₀, a₁, …, a₁,₂,…,ₙ ∈ {0, 1} fully describes f.
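As a hedged illustration (not from the original article), these coefficients can be computed from a function's full truth table with the binary Möbius (Reed–Muller) transform; in the sketch below, coef[m] is the coefficient of the monomial whose variables are the set bits of the index m.

```python
# Illustrative sketch: ANF coefficients from a truth table of length 2**n
# via the binary Möbius (Reed–Muller) transform.
def anf_coefficients(truth):
    coef = list(truth)
    n = len(coef).bit_length() - 1      # assumes len(truth) is a power of two
    for i in range(n):                  # one butterfly pass per variable
        for m in range(len(coef)):
            if m & (1 << i):
                coef[m] ^= coef[m ^ (1 << i)]
    return coef

# Truth table of x ∨ y, with bit 0 of the index = x and bit 1 = y:
print(anf_coefficients([0, 1, 1, 1]))   # [0, 1, 1, 1]: 0 ⊕ x ⊕ y ⊕ xy
```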
Recursively deriving multiargument Boolean functions
There are only four functions with one argument:
f(x) = 0
f(x) = 1
f(x) = x
f(x) = 1 ⊕ x
To represent a function with multiple arguments one can use the following equality:
f(x₁, x₂, …, xₙ) = g(x₂, …, xₙ) ⊕ x₁h(x₂, …, xₙ), where
g(x₂, …, xₙ) = f(0, x₂, …, xₙ) and h(x₂, …, xₙ) = f(0, x₂, …, xₙ) ⊕ f(1, x₂, …, xₙ)
Indeed,
if x₁ = 0 then x₁h = 0 and so f(0, x₂, …, xₙ) = g(x₂, …, xₙ)
if x₁ = 1 then x₁h = h and so f(1, x₂, …, xₙ) = g ⊕ h = f(0, x₂, …, xₙ) ⊕ f(0, x₂, …, xₙ) ⊕ f(1, x₂, …, xₙ) = f(1, x₂, …, xₙ)
Since both g and h have fewer arguments than f, it follows that applying this process recursively will end with functions of one variable. For example, let us construct the ANF of f(x, y) = x ∨ y (logical or):
f(x, y) = f(0, y) ⊕ x(f(0, y) ⊕ f(1, y));
since f(0, y) = 0 ∨ y = y and f(1, y) = 1 ∨ y = 1,
it follows that f(x, y) = y ⊕ x(y ⊕ 1);
by distribution, we get the final ANF: f(x, y) = y ⊕ xy ⊕ x = x ⊕ y ⊕ xy
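The recursion translates directly into code. Below is a minimal Python sketch (illustrative only; the function name anf and the index-set term representation are assumptions, not an established API):

```python
# Illustrative sketch of the recursion f = g ⊕ x1·(g ⊕ h): terms are
# frozensets of 0-based argument indices; the empty frozenset is the term 1.
def anf(f, n, fixed=()):
    """ANF of a Python function f of n Boolean arguments, as a set of terms."""
    i = len(fixed)                 # index of the next variable to expand
    if i == n:
        return {frozenset()} if f(*fixed) else set()
    g = anf(f, n, fixed + (0,))    # f with x_i fixed to 0
    h = anf(f, n, fixed + (1,))    # f with x_i fixed to 1
    # g ⊕ x_i·(g ⊕ h); prefixing x_i keeps the two operands disjoint
    return g ^ {t | {i} for t in (g ^ h)}

terms = anf(lambda x, y: x or y, 2)
print(sorted("".join("xy"[v] for v in sorted(t)) or "1" for t in terms))
# ['x', 'xy', 'y'], i.e. x ⊕ y ⊕ xy
```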
See also
Reed–Muller expansion
Zhegalkin normal form
Boolean function
Logical graph
Zhegalkin polynomial
Negation normal form
Conjunctive normal form
Disjunctive normal form
Karnaugh map
Boolean ring
References
Further reading
Boolean algebra
Normal forms (logic)
| Algebraic normal form | [
"Mathematics"
] | 857 | [
"Boolean algebra",
"Fields of abstract algebra",
"Mathematical logic"
] |
1,048,685 | https://en.wikipedia.org/wiki/Die%20preparation | Die preparation is a step of semiconductor device fabrication during which a wafer is prepared for IC packaging and IC testing. The process of die preparation typically consists of two steps: wafer mounting and wafer dicing.
Wafer mounting
Wafer mounting is a step that is performed during the die preparation of a wafer as part of the process of semiconductor fabrication. During this step, the wafer is mounted on a plastic tape that is attached to a ring. Wafer mounting is performed right before the wafer is cut into separate dies. The adhesive film upon which the wafer is mounted ensures that the individual dies remain firmly in place during 'dicing', as the process of cutting the wafer is called.
The picture on the right shows a 300 mm wafer after it was mounted and diced. The blue plastic is the adhesive tape. The wafer is the round disc in the middle. In this case, a large number of dies were already removed.
Semiconductor-die cutting
In the manufacturing of micro-electronic devices, die cutting, dicing or singulation is a process of reducing a wafer containing multiple identical integrated circuits to individual dies each containing one of those circuits.
During this process, a wafer with up to thousands of circuits is cut into rectangular pieces, each called a die. Between the functional parts of the circuits, a thin non-functional spacing is provided where a saw can safely cut the wafer without damaging the circuits. This spacing is called the scribe line or saw street. The width of the scribe line is very small, typically around 100 μm. A very thin and accurate saw is therefore needed to cut the wafer into pieces. Usually the dicing is performed with a water-cooled circular saw with diamond-tipped teeth.
Types of blades
Blades are most commonly made of a metal or resin bond containing abrasive grit of natural or, more commonly, synthetic diamond or borazon in various forms. Alternatively, the bond and grit may be applied as a coating to a metal former. See diamond tools.
Further reading
, section 11.4.
Semiconductor device fabrication | Die preparation | [
"Materials_science"
] | 435 | [
"Semiconductor device fabrication",
"Microtechnology"
] |
1,048,909 | https://en.wikipedia.org/wiki/ROOT | ROOT is an object-oriented computer program and library developed by CERN. It was originally designed for particle physics data analysis and contains several features specific to the field, but it is also used in other applications such as astronomy and data mining. The latest minor release is 6.32, as of 2024-05-26.
Description
CERN maintained the CERN Program Library written in FORTRAN for many years. Its development and maintenance were discontinued in 2003 in favour of ROOT, which is written in the C++ programming language.
ROOT development was initiated by René Brun and Fons Rademakers in 1994. Some parts are published under the GNU Lesser General Public License (LGPL) and others are based on GNU General Public License (GPL) software, and are thus also published under the terms of the GPL. It provides platform independent access to a computer's graphics subsystem and operating system using abstract layers. Parts of the abstract platform are: a graphical user interface and a GUI builder, container classes, reflection, a C++ script and command line interpreter (CINT in version 5, cling in version 6), object serialization and persistence.
The packages provided by ROOT include those for
Histogramming and graphing to view and analyze distributions and functions,
curve fitting (regression analysis) and minimization of functionals,
statistics tools used for data analysis,
matrix algebra,
four-vector computations, as used in high energy physics,
standard mathematical functions,
multivariate data analysis, e.g. using neural networks,
image manipulation, used, for instance, to analyze astronomical pictures,
access to distributed data (in the context of the Grid),
distributed computing, to parallelize data analyses,
persistence and serialization of objects, which can cope with changes in class definitions of persistent data,
access to databases,
3D visualizations (geometry),
creating files in various graphics formats, like PDF, PostScript, PNG, SVG, LaTeX, etc.
interfacing Python code in both directions,
interfacing Monte Carlo event generators.
A key feature of ROOT is a data container called a tree, with its substructures called branches and leaves. A tree can be seen as a sliding window onto the raw data, as stored in a file. Data from the next entry in the file can be retrieved by advancing the index in the tree. This avoids the memory allocation problems associated with object creation, and allows the tree to act as a lightweight container while handling buffering invisibly.
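To make the tree/branch/leaf model concrete, here is a minimal PyROOT sketch (not from the original article; it assumes a working ROOT installation with its Python bindings, and the file name, tree name, and branch name are illustrative):

```python
# Illustrative sketch: write a TTree with one branch, then read it back
# by advancing the entry index; ROOT handles buffering internally.
from array import array
import ROOT

out = ROOT.TFile("demo.root", "RECREATE")
tree = ROOT.TTree("events", "toy event data")

energy = array("d", [0.0])                 # storage for one double leaf
tree.Branch("energy", energy, "energy/D")  # leaf list: one double ("/D")

for i in range(1000):
    energy[0] = 0.5 * i
    tree.Fill()                            # append one entry

tree.Write()
out.Close()

# Reading: each GetEntry(i) loads the i-th entry into the branch buffers.
infile = ROOT.TFile("demo.root")
tree = infile.Get("events")
total = 0.0
for i in range(tree.GetEntries()):
    tree.GetEntry(i)
    total += tree.energy                   # branch exposed as an attribute
infile.Close()
print(total)
```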
ROOT is designed for high computing efficiency, as it is required to process data from the Large Hadron Collider's experiments estimated at several petabytes per year. ROOT is mainly used in data analysis and data acquisition in particle physics (high energy physics) experiments, and most experimental plots and results in those subfields are obtained using ROOT.
The inclusion of a C++ interpreter (CINT until version 5.34, Cling from version 6.00) makes this package very versatile as it can be used in interactive, scripted and compiled modes in a manner similar to commercial products like MATLAB.
On July 4, 2012 the ATLAS and CMS LHC's experiments presented the status of the Standard Model Higgs search. All data plotting presented that day used ROOT.
Applications
Several particle physics collaborations have written software based on ROOT, often in preference to more generic solutions (e.g. using ROOT containers instead of the STL).
Some of the running particle physics experiments using software based on ROOT
ALICE
ATLAS
BaBar experiment
Belle Experiment (an electron positron collider at KEK (Japan))
Belle II experiment (successor of the Belle experiment)
BES III
CB-ELSA/TAPS
CMS
COMPASS experiment (Common Muon and Proton Apparatus for Structure and Spectroscopy)
CUORE (Cryogenic Underground Observatory for Rare Events)
D0 experiment
GlueX Experiment
GRAPES-3 (Gamma Ray Astronomy PeV EnergieS)
H1 (particle detector) at HERA collider at DESY, Hamburg
LHCb
MINERνA (Main Injector Experiment for ν-A)
MINOS (Main injector neutrino oscillation search)
NA61 experiment (SPS Heavy Ion and Neutrino Experiment)
NOνA
OPERA experiment
PHENIX detector
PHOBOS experiment at Relativistic Heavy Ion Collider
SNO+
STAR detector (Solenoidal Tracker at RHIC)
T2K experiment
Future particle physics experiments currently developing software based on ROOT
Mu2e
Compressed Baryonic Matter experiment (CBM)
PANDA experiment (antiProton Annihilation at Darmstadt (PANDA))
Deep Underground Neutrino Experiment (DUNE)
Hyper-Kamiokande (HK (Japan))
Astrophysics (X-ray and gamma-ray astronomy, astroparticle physics) projects using ROOT
AGILE
Alpha Magnetic Spectrometer (AMS)
Antarctic Impulse Transient Antenna (ANITA)
ANTARES neutrino detector
CRESST (Dark Matter Search)
DMTPC
DEAP-3600/Cryogenic Low-Energy Astrophysics with Neon(CLEAN)
Fermi Gamma-ray Space Telescope
ICECUBE
HAWC
High Energy Stereoscopic System (H.E.S.S.)
Hitomi (ASTRO-H)
MAGIC
Milagro
Pierre Auger Observatory
VERITAS
PAMELA
POLAR
PoGOLite
Criticisms
Criticisms of ROOT include its difficulty for beginners, as well as various aspects of its design and implementation. Frequent causes of frustration include extreme code bloat, heavy use of global variables, and an overcomplicated class hierarchy. From time to time these issues are discussed on the ROOT users mailing list. While scientists dissatisfied with ROOT have in the past managed to work around its flaws, some of the shortcomings are regularly addressed by the ROOT team. The CINT interpreter, for example, has been replaced by the Cling interpreter, and numerous bugs are fixed with every release.
See also
Matplotlib – a plotting and analysis system for Python
SciPy – a scientific data analysis system for Python, based on the NumPy classes
Perl Data Language – a set of array programming extensions to the Perl programming language
HippoDraw – an alternative C++-based data analysis system
Java Analysis Studio – a Java-based AIDA-compliant data analysis system
R programming language
AIDA (computing) – open interfaces and formats for particle physics data processing
Geant4 – a platform for the simulation of the passage of particles through matter using Monte Carlo methods
PAW
IGOR Pro
Scientific Linux
Scientific computing
OpenDX
OpenScientist
CERN Program Library – legacy program library written in Fortran77, still available but not updated
References
External links
The ROOT System Home Page
Image galleries
ROOT User's Guide
ROOT Reference Guide
ROOT Forum
The RooFit Toolkit for Data Modeling, an extension to ROOT to facilitate maximum likelihood fits
The Toolkit for Multivariate Data Analysis with ROOT (TMVA) is a ROOT-integrated project providing a machine learning environment for the processing and evaluation of multivariate classification, both binary and multi-class, and regression techniques targeting applications in high-energy physics.
C++ libraries
Data analysis software
Data management software
Experimental particle physics
Free physics software
Free plotting software
Free science software
Free software programmed in C++
Numerical software
Physics software
Plotting software
CERN software | ROOT | [
"Physics",
"Mathematics"
] | 1,499 | [
"Mathematical software",
"Computational physics",
"Experimental physics",
"Particle physics",
"Numerical software",
"Experimental particle physics",
"Physics software"
] |
1,048,987 | https://en.wikipedia.org/wiki/Self-incompatibility | Self-incompatibility (SI) is a general name for several genetic mechanisms that prevent self-fertilization in sexually reproducing organisms, and thus encourage outcrossing and allogamy. It is contrasted with separation of sexes among individuals (dioecy), and their various modes of spatial (herkogamy) and temporal (dichogamy) separation.
SI is best-studied and particularly common in flowering plants, although it is present in other groups, including sea squirts and fungi. In plants with SI, when a pollen grain produced in a plant reaches a stigma of the same plant or another plant with a matching allele or genotype, the process of pollen germination, pollen-tube growth, ovule fertilization, or embryo development is inhibited, and consequently no seeds are produced. SI is one of the most important means of preventing inbreeding and promoting the generation of new genotypes in plants and it is considered one of the causes of the spread and success of angiosperms on Earth.
Mechanisms of single-locus self-incompatibility
The best studied mechanisms of SI act by inhibiting the germination of pollen on stigmas, or the elongation of the pollen tube in the styles. These mechanisms are based on protein-protein interactions, and the best-understood mechanisms are controlled by a single locus termed S, which has many different alleles in the species population. Despite their similar morphological and genetic manifestations, these mechanisms have evolved independently, and are based on different cellular components; therefore, each mechanism has its own, unique S-genes.
The S-locus contains two basic protein coding regions – one expressed in the pistil, and the other in the anther and/or pollen (referred to as the female and male determinants, respectively). Due to their physical proximity, these are genetically linked, and are inherited as a unit. The units are called S-haplotypes. The translation products of the two regions of the S-locus are two proteins which, by interacting with one another, lead to the arrest of pollen germination and/or pollen tube elongation, and thereby generate an SI response, preventing fertilization. However, when a female determinant interacts with a male determinant of a different haplotype, no SI is created, and fertilization ensues. This is a simplistic description of the general mechanism of SI, which is more complicated, and in some species the S-haplotype contains more than two protein coding regions.
Following is a detailed description of the different known mechanisms of SI in plants.
Gametophytic self-incompatibility (GSI)
In gametophytic self-incompatibility (GSI), the SI phenotype of the pollen is determined by its own gametophytic haploid genotype. This is the most common type of SI. Two different mechanisms of GSI have been described in detail at the molecular level, and their description follows.
The RNase mechanism
In this mechanism, pollen tube elongation is halted when it has proceeded approximately one third of the way through the style. The female component, a ribonuclease protein termed S-RNase, probably causes degradation of the ribosomal RNA (rRNA) inside the pollen tube when the male and female S alleles are identical; consequently pollen tube elongation is arrested, and the pollen grain dies.
Within a decade of the initial confirmation of their role in GSI, proteins belonging to the same RNase gene family were also found to cause pollen rejection in species of Rosaceae and Plantaginaceae. Despite initial uncertainty about the common ancestry of RNase-based SI in these distantly related plant families, phylogenetic studies and the finding of shared male determinants (F-box proteins) strongly supported homology across eudicots. Therefore, this mechanism likely arose approximately 90 million years ago, and is the inferred ancestral state for approximately 50% of all plant species.
In the past decade, the predictions about the wide distribution of this mechanism of SI have been confirmed, placing additional support on its single ancient origin. Specifically, a style-expressed T2/S-RNase gene and pollen-expressed F-box genes are now implicated in causing SI among members of Rubiaceae, Rutaceae, and Cactaceae. Therefore, other mechanisms of SI in eudicot plants are thought to be derived, in some cases relatively recently. One particularly interesting case is the Prunus SI system, which functions through self-recognition: the cytotoxic activity of the S-RNases is inhibited by default and selectively activated by the pollen-partner SFB (S-haplotype-specific F-box protein) upon self-pollination. SI in the other S-RNase species functions through non-self recognition: the S-RNases are selectively detoxified upon cross-pollination.
The S-glycoprotein mechanism
In this mechanism, pollen growth is inhibited within minutes of its placement on the stigma. The mechanism is described in detail for Papaver rhoeas and so far appears restricted to the plant family Papaveraceae.
The female determinant is a small, extracellular molecule, expressed in the stigma; the identity of the male determinant remains elusive, but it is probably some cell membrane receptor. The interaction between male and female determinants transmits a cellular signal into the pollen tube, resulting in strong influx of calcium cations; this interferes with the intracellular concentration gradient of calcium ions which exists inside the pollen tube, essential for its elongation. The influx of calcium ions arrests tube elongation within 1–2 minutes. At this stage, pollen inhibition is still reversible, and elongation can be resumed by applying certain manipulations, resulting in ovule fertilization.
Subsequently, the cytosolic protein p26, a pyrophosphatase, is inhibited by phosphorylation, possibly resulting in arrest of synthesis of molecular building blocks, required for tube elongation. There is depolymerization and reorganization of actin filaments, within the pollen cytoskeleton. Within 10 minutes from the placement on the stigma, the pollen is committed to a process which ends in its death. At 3–4 hours past pollination, fragmentation of pollen DNA begins, and finally (at 10–14 hours), the cell dies apoptotically.
Sporophytic self-incompatibility (SSI)
In sporophytic self-incompatibility (SSI), the SI phenotype of the pollen is determined by the diploid genotype of the anther (the sporophyte) in which it was created. This form of SI was identified in the families: Brassicaceae, Asteraceae, Convolvulaceae, Betulaceae, Caryophyllaceae, Sterculiaceae and Polemoniaceae. Up to this day, only one mechanism of SSI has been described in detail at the molecular level, in Brassica (Brassicaceae).
Since SSI is determined by a diploid genotype, the pollen and pistil each express the translation products of two different alleles, i.e. two male and two female determinants. Dominance relationships often exist between pairs of alleles, resulting in complicated patterns of compatibility/self-incompatibility. These dominance relationships also allow the generation of individuals homozygous for a recessive S allele.
Compared to a population in which all S alleles are co-dominant, the presence of dominance relationships in the population raises the chances of compatible mating between individuals. The frequency ratio between recessive and dominant S alleles reflects a dynamic balance between reproductive assurance (favoured by recessive alleles) and avoidance of selfing (favoured by dominant alleles).
The SI mechanism in Brassica
As previously mentioned, the SI phenotype of the pollen is determined by the diploid genotype of the anther. In Brassica, the pollen coat, derived from the anther's tapetum tissue, carries the translation products of the two S alleles. These are small, cysteine-rich proteins. The male determinant is termed SCR or SP11, and is expressed in the anther tapetum as well as in the microspore and pollen (i.e. sporophytically). There are possibly up to 100 polymorphs of the S-haplotype in Brassica, and within these there is a dominance hierarchy.
The female determinant of the SI response in Brassica, is a transmembrane protein termed SRK, which has an intracellular kinase domain, and a variable extracellular domain. SRK is expressed in the stigma, and probably functions as a receptor for the SCR/SP11 protein in the pollen coat. Another stigmatic protein, termed SLG, is highly similar in sequence to the SRK protein, and seems to function as a co-receptor for the male determinant, amplifying the SI response.
The interaction between the SRK and SCR/SP11 proteins results in autophosphorylation of the intracellular kinase domain of SRK, and a signal is transmitted into the papilla cell of the stigma. Another protein essential for the SI response is MLPK, a serine-threonine kinase, which is anchored to the plasma membrane from its intracellular side. A downstream signaling cascade leads to proteasomal degradation that produces an SI response.
Other mechanisms of self-incompatibility
These mechanisms have received only limited attention in scientific research. Therefore, they are still poorly understood.
2-locus gametophytic self-incompatibility
The grass subfamily Pooideae, and perhaps all of the family Poaceae, have a gametophytic self-incompatibility system that involves two unlinked loci referred to as S and Z. If the alleles expressed at these two loci in the pollen grain both match the corresponding alleles in the pistil, the pollen grain will be recognized as incompatible. At both loci, S and Z, two male and one female determinant can be found. All four male determinants encode proteins belonging to the same family (DUF247) and are predicted to be membrane-bound. The two female determinants are predicted to be secreted proteins with no protein family membership.
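For illustration only (not from the original article), the two-locus recognition rule just described can be stated as a simple predicate; the allele names below are hypothetical:

```python
# Toy sketch of the grass S–Z rule: haploid pollen carries one S and one Z
# allele; the diploid pistil carries two of each. Pollen is recognized as
# incompatible only when BOTH of its alleles match alleles in the pistil.
def pollen_compatible(pollen_s, pollen_z, pistil_s, pistil_z):
    return not (pollen_s in pistil_s and pollen_z in pistil_z)

print(pollen_compatible("S1", "Z1", {"S1", "S2"}, {"Z1", "Z2"}))  # False
print(pollen_compatible("S1", "Z3", {"S1", "S2"}, {"Z1", "Z2"}))  # True
```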
Heteromorphic self-incompatibility
A distinct SI mechanism exists in heterostylous flowers, termed heteromorphic self-incompatibility. This mechanism is probably not evolutionarily related to the more familiar mechanisms, which are differentially defined as homomorphic self-incompatibility.
Almost all heterostylous taxa feature SI to some extent. The loci responsible for SI in heterostylous flowers, are strongly linked to the loci responsible for flower polymorphism, and these traits are inherited together. Distyly is determined by a single locus, which has two alleles; tristyly is determined by two loci, each with two alleles. Heteromorphic SI is sporophytic, i.e. both alleles in the male plant, determine the SI response in the pollen. SI loci always contain only two alleles in the population, one of which is dominant over the other, in both pollen and pistil. Variance in SI alleles parallels the variance in flower morphs, thus pollen from one morph can fertilize only pistils from the other morph. In tristylous flowers, each flower contains two types of stamens; each stamen produces pollen capable of fertilizing only one flower morph, out of the three existing morphs.
A population of a distylous plant contains only two SI genotypes: ss and Ss. Fertilization is possible only between the two different genotypes; neither genotype can fertilize itself. This restriction maintains a 1:1 ratio between the two genotypes in the population; genotypes are usually randomly scattered in space. Tristylous plants contain, in addition to the S locus, the M locus, also with two alleles. The number of possible genotypes is greater here, but a 1:1 ratio exists between individuals of each SI type.
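As a toy check (not from the original article) of why the 1:1 ratio is self-maintaining: every compatible mating in a distylous population is Ss × ss, and enumerating the allele combinations of that cross yields the two genotypes in equal proportion.

```python
# Toy sketch: offspring genotypes of the only compatible distylous cross.
from itertools import product
from collections import Counter

def offspring(parent1, parent2):
    # Each parent contributes one S-locus allele; sort to normalize "sS" -> "Ss".
    return Counter("".join(sorted(a + b)) for a, b in product(parent1, parent2))

print(offspring("Ss", "ss"))  # Counter({'Ss': 2, 'ss': 2}) -> a 1:1 ratio
```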
Cryptic self-incompatibility (CSI)
Cryptic self-incompatibility (CSI) exists in a limited number of taxa (for example, there is evidence for CSI in Silene vulgaris, Caryophyllaceae). In this mechanism, the simultaneous presence of cross and self pollen on the same stigma results in higher seed set from cross pollen, relative to self pollen. However, as opposed to 'complete' or 'absolute' SI, in CSI, self-pollination without the presence of competing cross pollen results in successful fertilization and seed set; in this way, reproduction is assured, even in the absence of cross-pollination. CSI acts, at least in some species, at the stage of pollen tube elongation, and leads to faster elongation of cross pollen tubes, relative to self pollen tubes. The cellular and molecular mechanisms of CSI have not been described.
The strength of a CSI response can be defined, as the ratio of crossed to selfed ovules, formed when equal amounts of cross and self pollen, are placed upon the stigma; in the taxa described up to this day, this ratio ranges between 3.2 and 11.5.
Late-acting self-incompatibility (LSI)
Late-acting self-incompatibility (LSI) is also termed ovarian self-incompatibility (OSI). In this mechanism, self pollen germinates and reaches the ovules, but no fruit is set. LSI can be pre-zygotic (e.g. deterioration of the embryo sac prior to pollen tube entry, as in Narcissus triandrus) or post-zygotic (malformation of the zygote or embryo, as in certain species of Asclepias and in Spathodea campanulata).
The existence of the LSI mechanism among different taxa, and in general, is a subject of scientific debate. Critics claim that the absence of fruit set is due to genetic defects (homozygosity for lethal recessive alleles), which are the direct result of self-fertilization (inbreeding depression). Supporters, on the other hand, argue for the existence of several basic criteria which differentiate certain cases of LSI from the inbreeding depression phenomenon.
Self-compatibility (SC)
Self-compatibility (SC) is the absence of genetic mechanisms which prevent self-fertilization resulting in plants that can reproduce successfully via both self-pollen and pollen from other individuals. Approximately one half of angiosperm species are SI, the remainder being SC. Mutations that disable SI (resulting in SC) may become common or entirely dominate in natural populations. Pollinator decline, variability in pollinator service, the so-called "automatic advantage" of self-fertilisation, among other factors, may favor the loss of SI.
Many cultivated plants are SC, although there are notable exceptions, such as apples and Brassica oleracea. Human-mediated artificial selection through selective breeding is often responsible for SC among these agricultural crops. SC enables more efficient breeding techniques to be employed for crop improvement. However, when genetically similar SI cultivars are bred, inbreeding depression can cause a cross-incompatible form of SC to arise, such as in apricots and almonds. In this rare, intraspecific, cross-incompatible mechanism, individuals have more reproductive success when self-pollinated rather than when cross-pollinated with other individuals of the same species. In wild populations, intraspecific cross-incompatibility has been observed in Nothoscordum bivalve.
See also
References
Further reading
External links
Pollination
Plant reproduction
Population genetics
Plant sexuality | Self-incompatibility | [
"Biology"
] | 3,389 | [
"Behavior",
"Plant reproduction",
"Plants",
"Reproduction",
"Plant sexuality",
"Sexuality"
] |
1,049,114 | https://en.wikipedia.org/wiki/Push-pull%20configuration | An aircraft constructed with a push-pull configuration has a combination of forward-mounted tractor (pull) propellers, and backward-mounted (pusher) propellers.
Historical
The earliest known example of a "push-pull" engine-layout aircraft was the Short Tandem Twin.
An early pre-World War I example of a "push-pull" aircraft was the Caproni Ca.1 of 1914 which had two wing-mounted tractor propellers and one centre-mounted pusher propeller. Around 450 of these and their successor, the Ca.3 were built. One of the first to employ two engines on a common axis (tandem push-pull) was the one-off, ill-fated Siemens-Schuckert DDr.I fighter of 1917.
German World War I designs included the only Fokker twin-engined design of the period, the Fokker K.I from 1915; followed by the unusual Siemens-Schuckert DDr.I triplane fighter design of late 1917, and concluding with the laterally-offset "push-pull" Gotha G.VI bomber prototype of 1918.
Claudius Dornier embraced the concept, many of his flying boats using variations of the tandem "push-pull" engine layout, including the 1922 Dornier Wal, the 1938 Dornier Do 26, and the massive 1929 Dornier Do X, which had twelve engines driving six tractors and six pushers. A number of Farmans and Fokkers also had push-pull engine installations, such as the Farman F.121 Jabiru and Fokker F.32.
Configuration
Push-pull designs have the engines mounted above the wing, as on Dornier flying boats, or more commonly on a fuselage shorter than a conventional one, as on the Rutan Defiant or Voyager canard designs. Twin-boom designs such as the Cessna Skymaster and Adam A500 have the aircraft's tail suspended on twin booms behind the pusher propeller. In contrast, both the World War II-era Dornier Do 335 and the early 1960s-designed French Moynet M 360 Jupiter experimental private plane had their pusher propeller behind the tail.
Design benefits
While pure pushers decreased in popularity during the First World War, the push-pull configuration has continued to be used. The advantage it provides is the ability to mount two propellers on the aircraft's centreline, thereby avoiding the increased drag that comes with twin wing-mounted engines. It is also easier to fly if one of the two engines fails, as the thrust provided by the remaining engine stays on the centreline. In contrast, a conventional twin-engine aircraft will yaw in the direction of the failed engine and become uncontrollable below a certain airspeed, known as VMC (minimum control speed).
Design problems
The rear engine operates in the disturbed air from the forward engine, which may reduce its efficiency to 85% of the forward engine's. In addition, the rear engine can interfere with the aircraft's rotation during takeoff if installed in the tail, or additional compromises must be made to ensure clearance. This is why push-pull designs are more common on seaplanes, where this is not a concern.
Piloting
Pilots in the United States who obtain a multi-engine rating in an aircraft with this push-pull, or "centerline thrust," configuration are restricted to flying centerline-thrust aircraft; pilots who obtain a multi-engine rating in conventional twin-engine aircraft do not have a similar limitation with regard to centerline-thrust aircraft. The limitation can be removed by further testing in a conventional multi-engine aircraft.
Military application
Despite its advantages, the push-pull configuration is rare in military aircraft. In addition to the problems noted for civil aircraft, the increased risk to the pilot in the case of a crash or the need to parachute from the aircraft also poses problems. During a crash the rear engine may crush the pilot, and when bailing out, the pilot is in danger of hitting the propeller. Examples of past military applications include the aforementioned Siemens-Schuckert DDr.I twin-engined triplane and the Gotha G.VI, with its engines mounted on the front and rear ends of two separate fuselages. More successful was the Italian Caproni Ca.3 trimotor, with two tractor engines and one pusher. Between the wars, most push-pull aircraft were flying boats, of which the Dornier Wal was probably the most numerous, while a number of heavy bombers, such as the Farman F.220, used engines mounted in push-pull pairs under the wings. Near the end of World War II, the German Dornier Do 335 push-pull twin-engined, Zerstörer-candidate heavy fighter featured explosive charges to jettison the rear propeller and dorsal tailfin, a manually jettisonable main canopy, as well as an ejection seat. One of the last military aircraft to use the configuration was the American Cessna O-2, which was used for forward air control during the Vietnam War.
Images
See also
Tractor configuration
Pusher configuration
References
External links
Star Kraft SK-700 - 2 x 350hp 1000aircraftphotos.com
Aircraft configurations | Push-pull configuration | [
"Engineering"
] | 1,041 | [
"Aircraft configurations",
"Aerospace engineering"
] |
1,049,151 | https://en.wikipedia.org/wiki/Wangar%C4%A9%20Maathai | Wangarĩ Maathai (; 1 April 1940 – 25 September 2011) was a Kenyan social, environmental, and political activist who founded the Green Belt Movement, an environmental non-governmental organization focused on the planting of trees, environmental conservation, and women's rights. In 2004 she became the first African woman to win the Nobel Peace Prize.
As a beneficiary of the Kennedy Airlift, she studied in the United States, earning a bachelor's degree from Mount St. Scholastica and a master's degree from the University of Pittsburgh. She went on to become the first woman in East and Central Africa to earn a Doctor of Philosophy, receiving her Ph.D. from the University of Nairobi in Kenya. In 1984, she received the Right Livelihood Award for "converting the Kenyan ecological debate into mass action for reforestation." Wangari Maathai was an elected member of the Parliament of Kenya and, between January 2003 and November 2005, served as Assistant Minister for Environment and Natural Resources in the government of President Mwai Kibaki. She was an Honorary Councillor of the World Future Council. As an academic and the author of several books, Maathai was not only an activist but also an intellectual who made significant contributions to thinking about ecology, development, gender, and African cultures and religions.
Maathai died of complications from ovarian cancer on 25 September 2011.
Early life and education
Maathai was born on 1 April 1940 in the village of Ihithe, Nyeri District, in the central highlands of the colony of Kenya. Her family was Kikuyu, the most populous ethnic group in Kenya, and had lived in the area for several generations. Around 1943, Maathai's family relocated to a white-owned farm in the Rift Valley, near Nakuru, where her father had found work. Late in 1947, she returned to Ihithe with her mother, as two of her brothers were attending primary school in the village, and there was no schooling available on the farm where her father worked. Her father remained at the farm. Shortly afterward, at the age of eight years, she joined her brothers at Ihithe Primary School.
At eleven years, Maathai moved to St. Cecilia's Intermediate Primary School, a boarding school at the Mathari Catholic Mission in Nyeri. Maathai studied at St. Cecilia's for four years. During this time, she became fluent in English and converted to Catholicism. She was involved with the Legion of Mary, whose members attempted "to serve God by serving fellow human beings." Studying at St. Cecilia's, she was sheltered from the ongoing Mau Mau uprising, which forced her mother to move from their homestead to an emergency village in Ihithe. When she completed her studies there in 1956, she was rated first in her class, and was granted admission to the only Catholic high school for girls in Kenya, Loreto High School in Limuru.
As the end of East African colonialism approached, Kenyan politicians, such as Tom Mboya, were proposing ways to make education in Western nations available to promising students. John F. Kennedy, then a United States senator, agreed to fund such a program through the Joseph P. Kennedy Jr. Foundation, initiating what became known as the Kennedy Airlift or Airlift Africa. Maathai became one of some 300 Kenyans selected to study in the United States in September 1960.
She received a scholarship to study at Mount St. Scholastica College (now Benedictine College), in Atchison, Kansas, where she majored in biology, with minors in chemistry and German. After receiving her Bachelor of Science degree in 1964, she studied at the University of Pittsburgh for a master's degree in biology. Her graduate studies there were funded by the Africa-America Institute, and during her time in Pittsburgh, she first experienced environmental restoration, when local environmentalists pushed to rid the city of air pollution. In January 1966, Maathai received her MSc in biological sciences, and was appointed to a position as research assistant to a professor of zoology at University College of Nairobi.
Upon returning to Kenya, Maathai dropped her forename, preferring to be known by her birth name, Wangarĩ Muta. When she arrived at the university to start her new job, she was informed that it had been given to someone else. Maathai believed this was because of gender and tribal bias. After a two-month job search, Professor Reinhold Hofmann, from the University of Giessen in Germany, offered her a job as a research assistant in the microanatomy section of the newly established Department of Veterinary Anatomy in the School of Veterinary Medicine at the University College of Nairobi. In April 1966, she met Mwangi Mathai, another Kenyan who had studied in America, who would later become her husband. She also rented a small shop in the city and established a general store, at which her sisters worked. In 1967, at the urging of Professor Hofmann, she travelled to the University of Giessen in Germany in pursuit of a doctorate. She studied both at Giessen and the University of Munich.
In the spring of 1969, she returned to Nairobi to continue her studies at the University College of Nairobi as an assistant lecturer. In May, she and Mwangi Mathai married. Later that year, she became pregnant with her first child, and her husband campaigned for a seat in Parliament, narrowly losing. During the election, Tom Mboya, who had been instrumental in founding the program which sent her overseas, was assassinated. This led to President Kenyatta effectually ending multi-party democracy in Kenya. Shortly after, her first son, Waweru, was born. In 1971, she became the first Eastern African woman to receive a Ph.D., her doctorate in veterinary anatomy, from the University College of Nairobi, which became the University of Nairobi the following year. She completed her dissertation on the development and differentiation of gonads in bovines. Her daughter, Wanjira, was born in December 1971.
Activism and political life
1972–1977: Start of activism
Maathai continued to teach at Nairobi, becoming a senior lecturer in anatomy in 1975, chair of the Department of Veterinary Anatomy in 1976, and associate professor in 1977. She was the first woman in Nairobi appointed to any of these positions. During this time, she campaigned for equal benefits for the women working on the staff of the university, going so far as trying to turn the academic staff association of the university into a union, to negotiate for benefits. The courts denied this bid, but many of her demands for equal benefits were later met. In addition to her work at the University of Nairobi, Maathai became involved in several civic organisations in the early 1970s. She was a member of the Nairobi branch of the Kenya Red Cross Society, becoming its director in 1973. She was a member of the Kenya Association of University Women. Following the establishment of the Environment Liaison Centre in 1974, Maathai was asked to be a member of the local board, eventually becoming board chair. The Environment Liaison Centre worked to promote the participation of non-governmental organisations in the work of the United Nations Environment Programme (UNEP), whose headquarters was established in Nairobi following the United Nations Conference on the Human Environment held in Stockholm in 1972. Maathai also joined the National Council of Women of Kenya (NCWK). Through her work at these various volunteer associations, it became evident to Maathai that the root of most of Kenya's problems was environmental degradation.
In 1974, Maathai's family expanded to include her third child, son Muta. Her husband campaigned again for a seat in Parliament, hoping to represent the Lang'ata constituency, and won. During his campaign, he had promised to find jobs to limit the rising unemployment in Kenya. These promises led Maathai to connect her ideas of environmental restoration to providing jobs for the unemployed and led to the founding of Envirocare Ltd., a business that involved the planting of trees to conserve the environment, involving ordinary people in the process. This led to the planting of her first tree nursery, collocated with a government tree nursery in Karura Forest. Envirocare ran into multiple problems, primarily dealing with funding, and ultimately failed. However, through conversations concerning Envirocare and her work at the Environment Liaison Centre, UNEP made it possible to send Maathai to the first UN conference on human settlements, known as Habitat I, in June 1976.
In 1977, Maathai spoke to the NCWK concerning her attendance at Habitat I. She proposed further tree planting, which the council supported. On 5 June 1977, marking World Environment Day, the NCWK marched in a procession from Kenyatta International Conference Centre in downtown Nairobi to Kamukunji Park on the outskirts of the city, where they planted seven trees in honour of historical community leaders. This was the first event of the Green Belt Movement. Maathai encouraged the women of Kenya to plant tree nurseries throughout the country, searching nearby forests for seeds to grow trees native to the area. She agreed to pay the women a small stipend for each seedling which was later planted elsewhere.
In her 2010 book, Replenishing the Earth: Spiritual Values for Healing Ourselves and the World, she discussed the impact of the Green Belt Movement, explaining that the group's civic and environmental seminars stressed "the importance of communities taking responsibility for their actions and mobilizing to address their local needs," and adding, "We all need to work hard to make a difference in our neighborhoods, regions, and countries, and in the world as a whole. That means making sure we work hard, collaborate, and make ourselves better agents to change." In this book, she explicitly engages with religious traditions, including the indigenous Kikuyu religion and Christianity, mobilizing them as resources for environmental thinking and activism.
1977–1979: Personal problems
Maathai and her husband, Mwangi Mathai, separated in 1977. After a lengthy separation, Mwangi filed for divorce in 1979. He was said to have believed that Wangari was "too strong-minded for a woman" and that he was "unable to control her". In addition to naming her as "cruel" in court filings, he publicly accused her of adultery with another Member of Parliament, which in turn was thought to cause his high blood pressure and the judge ruled in Mwangi's favour. Shortly after the trial, in an interview with Viva magazine, Maathai referred to the judge as either incompetent or corrupt. The interview later led the judge to charge Maathai with contempt of court. She was found guilty and sentenced to six months in jail. After three days in Lang'ata Women's Prison in Nairobi, her lawyer formulated a statement that the court found sufficient for her release. Shortly after the divorce, her former husband sent a letter via his lawyer demanding that Maathai drop his surname. She chose to add an extra "a" instead of changing her name.
The divorce had been costly, and with lawyers' fees and the loss of her husband's income, Maathai found it difficult to provide for herself and their children on her university wages. An opportunity arose to work for the Economic Commission for Africa through the United Nations Development Programme. As this job required extended travel throughout Africa and was based primarily in Lusaka, Zambia, she was unable to bring her children with her. Maathai chose to send them to her ex-husband and take the job. While she visited them regularly, they lived with their father until 1985.
1979–1982: Political problems
In 1979, shortly after the divorce, Maathai ran for the position of chairperson of the National Council of Women of Kenya (NCWK), an umbrella organisation consisting of many women's organisations in the country. The newly-elected President of Kenya, Daniel arap Moi, tried to limit the amount of influence those of the Kikuyu ethnicity held in the country, including in volunteer civic organisations such as the NCWK. She lost this election by three votes, but was overwhelmingly chosen to be the vice-chairman of the organisation. The following year, Maathai again ran for chairman of the NCWK. Again she was opposed, she believes, by the government. When it became apparent that Maathai was going to win the election, Maendeleo Ya Wanawake, a member organisation which represented a majority of Kenya's rural women and whose leader was close to Arap Moi, withdrew from the NCWK. Maathai was then elected chairman of the NCWK unopposed. However, Maendeleo Ya Wanawake came to receive a majority of the financial support for women's programs in the country, and NCWK was left virtually bankrupt. Future funding was much more difficult to come by, but the NCWK survived by increasing its focus on the environment and making its presence and work known. Maathai continued to be reelected to serve as chairman of the organization every year until she retired from the position in 1987.
In 1982, the Parliamentary seat representing her home region of Nyeri was open, and Maathai decided to campaign for the seat. As required by law, she resigned from her position with the University of Nairobi to campaign for office. The courts decided that she was ineligible to run for office because she had not re-registered to vote in the last presidential election in 1979. Maathai believed this to be false and illegal, and brought the matter to court. The court was to meet at nine in the morning, and if she received a favorable ruling, was required to present her candidacy papers in Nyeri by three in the afternoon that day. The judge disqualified her from running on a technicality: as before, they claimed she should have re-registered to vote. When she requested her job back, she was denied. As she lived in university housing and was no longer a staff member, she was evicted.
Green Belt Movement
Maathai founded the Green Belt Movement in 1977 in response to the environmental concerns raised by rural Kenyan women. She moved into a small home she had purchased years before, and focused on the NCWK before becoming employed again. In the course of her work through the NCWK, she had the opportunity to partner with the executive director of the Norwegian Forestry Society, Wilhelm Elsrud, and Maathai became the movement's coordinator. Along with the partnership with the Norwegian Forestry Society, the movement received "seed money" from the United Nations Voluntary Fund for Women. These funds allowed for the expansion of the movement, for hiring additional employees to oversee its operations, and for continuing to pay a small stipend to the women who planted seedlings throughout the country. The funding also allowed her to refine the movement's operations, paying a small stipend as well to the women's husbands and sons who were literate and could keep accurate records of seedlings planted.
In 1985, the UN held the third global women's conference in Nairobi. During the conference, Maathai arranged seminars and presentations to describe the work the Green Belt Movement was doing in Kenya, and escorted delegates to see nurseries and plant trees. She met Peggy Snyder, the head of UNIFEM, and Helvi Sipilä, the first woman appointed a UN assistant secretary general. The conference helped to expand funding for the Green Belt Movement and led to the movement's establishing itself outside Kenya. In 1986, with funding from UNEP, the movement expanded throughout Africa and led to the foundation of the Pan-African Green Belt Network. Forty-five representatives from fifteen African countries travelled to Kenya over the next three years to learn how to set up similar programs in their own countries to combat desertification, deforestation, water crises, and rural hunger. The attention the movement received in the media led to Maathai's being honored with numerous awards. The government of Kenya, however, demanded that the Green Belt Movement separate from the NCWK, believing the latter should focus solely on women's issues, not the environment. Therefore, in 1987, Maathai stepped down as chairperson of the NCWK and focused on the newly separate non-governmental organisation.
Government intervention
In the latter half of the 1980s, the Kenyan government cracked down on Maathai and the Green Belt Movement. The single-party regime opposed many of the movement's positions on democratic rights, and the government invoked a colonial-era law prohibiting groups of more than nine people from meeting without a government license. In 1988, the Green Belt Movement carried out pro-democracy activities such as registering voters for the election and pressing for constitutional reform and freedom of expression. According to Maathai, the government carried out electoral fraud in those elections to maintain power.
In October 1989, Maathai learned of a plan to construct the 60-storey Kenya Times Media Trust Complex in Uhuru Park. The complex was intended to house the headquarters of KANU, the Kenya Times newspaper, a trading center, offices, an auditorium, galleries, shopping malls, and parking space for 2,000 cars. The plan also included a large statue of President Daniel arap Moi. Maathai wrote many letters of protest to, among others, the Kenya Times, the Office of the President, the Nairobi city commission, the provincial commissioner, the minister for environment and natural resources, the executive directors of UNEP and the Environment Liaison Centre International, the executive director of the United Nations Educational, Scientific and Cultural Organization (UNESCO), the ministry of public works, and the permanent secretary in the department of international security and administration. She also wrote to Sir John Johnson, the British high commissioner in Nairobi, urging him to intervene with Robert Maxwell, a major shareholder in the project, equating the construction of a tower in Uhuru Park to such construction in Hyde Park or Central Park and maintaining that it could not be tolerated.
The government refused to respond to her inquiries and protests, instead responding through the media that Maathai was "a crazy woman", denying that the project would take more than a small portion of the park's public land, and proclaiming the project a "fine and magnificent work of architecture" opposed by only the "ignorant few". On 8 November 1989, Parliament expressed outrage at Maathai's actions, complaining of her letters to foreign organisations and calling the Green Belt Movement a bogus organisation and its members "a bunch of divorcees". Members suggested that if Maathai was so comfortable writing to Europeans, perhaps she should go live in Europe.
Despite Maathai's protests, as well as popular protest growing throughout the city, ground was broken at Uhuru Park for construction of the complex on 15 November 1989. Maathai sought an injunction in the Kenya High Court to halt construction, but the case was thrown out on 11 December. In his first public comments on the project, President Daniel arap Moi stated that those who opposed it had "insects in their heads". On 12 December, in Uhuru Park, during a speech celebrating independence from the British, President Moi suggested that Maathai be a proper woman in the African tradition, respect men, and be quiet. She was forced by the government to vacate her office, and the Green Belt Movement moved into her home. The government then audited the Green Belt Movement in an apparent attempt to shut it down. Despite these efforts, her protests and the media coverage the government's response attracted led foreign investors to cancel the project in January 1990.
In January 1992, it came to the attention of Maathai and other pro-democracy activists that a list of people had been targeted for assassination and that a government-sponsored coup was possible. Maathai's name was on the list. The pro-democracy group, known as the Forum for the Restoration of Democracy (FORD), presented its information to the media, calling for a general election. Later that day, Maathai received a warning that one of their members had been arrested, and she decided to barricade herself in her home. Shortly thereafter, police arrived and surrounded the house. She was besieged for three days before police cut through the bars she had installed on her windows, came in, and arrested her. She and the other pro-democracy activists who had been arrested were charged with spreading malicious rumors, sedition, and treason. After a day and a half in jail, they were brought to a hearing and released on bail. A variety of international organisations and eight US senators (including Al Gore and Edward M. Kennedy) put pressure on the Kenyan government to substantiate the charges against the pro-democracy activists or risk damaging relations with the United States. In November 1992, the Kenyan government dropped the charges.
On 28 February 1992, while released on bail, Maathai and others took part in a hunger strike in a corner of Uhuru Park, which they labeled Freedom Corner, to pressure the government to release political prisoners. After four days of hunger strike, on 3 March 1992, the police forcibly removed the protesters. Maathai and three others were knocked unconscious by police and hospitalized. President Daniel arap Moi called her "a mad woman" and "a threat to the order and security of the country". The attack drew international criticism. The US State Department said it was "deeply concerned" by the violence and by the forcible removal of the hunger strikers. When the prisoners were not released, the protesters – mostly mothers of those in prison – moved their protest to All Saints Cathedral, the seat of the Anglican Archbishop in Kenya, across from Uhuru Park. The protest there continued, with Maathai contributing frequently, until early 1993 when the prisoners were finally released.
During this time, Maathai was recognized with various awards internationally, but the Kenyan government did not appreciate her work. In 1991 she received the Goldman Environmental Prize in San Francisco and the Hunger Project's Africa Prize for Leadership in London. CNN aired a three-minute segment about the Goldman prize, but when it aired in Kenya, that segment was cut out. In June 1992, during the long protest at Uhuru Park, both Maathai and President arap Moi travelled to Rio de Janeiro for the UN Conference on Environment and Development (Earth Summit). The Kenyan government accused Maathai of inciting women and encouraging them to strip at Freedom Corner, urging that she not be allowed to speak at the summit. Despite this, Maathai was chosen to be a chief spokesperson at the summit.
Push for democracy
During the first multi-party election in Kenya, in 1992, Maathai strove to unite the opposition and to secure fair elections. The Forum for the Restoration of Democracy (FORD) had fractured into FORD-Kenya (led by Oginga Odinga) and FORD-Asili (led by Kenneth Matiba); former vice president Mwai Kibaki had left the ruling Kenya African National Union (KANU) party and formed the Democratic Party. Maathai and many others believed such a fractured opposition would lead to KANU's retaining control of the country, so they formed the Middle Ground Group in an effort to unite the opposition, and Maathai was chosen to serve as its chairperson. Also during the election, Maathai and like-minded opposition members formed the Movement for Free and Fair Elections. Despite their efforts, the opposition did not unite, and the ruling KANU party used intimidation and state-held media to win the election, retaining control of parliament.
The following year, ethnic clashes occurred throughout Kenya. Maathai believed they were incited by the government, which had warned that multi-party democracy would bring just such consequences. Maathai travelled with friends and the press to areas of violence to encourage an end to the fighting. With the Green Belt Movement she planted "trees of peace", but before long her actions were opposed by the government. The conflict areas were labeled "no go zones", and in February 1993 the president claimed that Maathai had masterminded the distribution of leaflets inciting Kikuyus to attack Kalenjins. After her friend and supporter Dr. Makanga was kidnapped, Maathai went into hiding. While in hiding, Maathai was invited to a meeting in Tokyo of Green Cross International, an environmental organisation recently founded by former Soviet leader Mikhail Gorbachev. When Maathai responded that she could not attend because she was in hiding and did not believe the government would allow her to leave the country, Gorbachev pressured the government of Kenya to allow her to travel freely. President arap Moi denied limiting her travel, and she was allowed to leave the country, although too late for the meeting in Tokyo. Maathai was again recognized internationally, and she flew to Scotland to receive the Edinburgh Medal in April 1993. In May she went to Chicago to receive the Jane Addams International Women's Leadership Award, and in June she attended the UN's World Conference on Human Rights in Vienna.
During the elections of 1997, Maathai again wished to unite the opposition in order to defeat the ruling party. In November, less than two months before the election, she decided to run for parliament and for president as a candidate of the Liberal Party. Her intentions were widely questioned in the press; many believed she should simply stick to running the Green Belt Movement and stay out of politics. On the day of the election, a rumour that Maathai had withdrawn from the election and endorsed another candidate was printed in the media. Maathai garnered few votes and lost the election.
In the summer of 1998, Maathai learned of a government plan to privatize large areas of public land in the Karura Forest, just outside Nairobi, and give it to political supporters. Maathai protested this through letters to the government and the press. She went with the Green Belt Movement to Karura Forest, planting trees and protesting the destruction of the forest. On 8 January 1999, a group of protesters including Maathai, six opposition MPs, journalists, international observers, and Green Belt members and supporters returned to the forest to plant a tree in protest. The entry to the forest was guarded by a large group of men. When she tried to plant a tree in an area that had been designated to be cleared for a golf course, the group was attacked. Many of the protesters were injured, including Maathai, four MPs, some of the journalists, and German environmentalists. When she reported the attack to the police, they refused to return with her to the forest to arrest her attackers. However, the attack had been filmed by Maathai's supporters, and the event provoked international outrage. Student protests broke out throughout Nairobi, and some of these groups were violently broken up by the police. Protests continued until 16 August 1999, when the president announced that he was banning all allocation of public land.
In 2001, the government again planned to take public forest land and give it to its supporters. While protesting this and collecting petition signatures on 7 March 2001, in Wang'uru village near Mount Kenya, Maathai was again arrested. The following day, following international and popular protest at her arrest, she was released without being charged. On 7 July 2001, shortly after planting trees at Freedom Corner in Uhuru Park in Nairobi to commemorate Saba Saba Day, Maathai was again arrested. Later that evening, she was again released without being charged. In January 2002, Maathai returned to teaching as the Dorothy McCluskey Visiting Fellow for Conservation at the Yale University's School of Forestry and Environmental Studies. She remained there until June 2002, teaching a course on sustainable development focused on the work of the Green Belt Movement.
Election to parliament
Upon her return to Kenya, Maathai again campaigned for parliament in the 2002 elections, this time as a candidate of the National Rainbow Coalition, the umbrella organisation which finally united the opposition. On 27 December 2002, the Rainbow Coalition defeated the ruling party Kenya African National Union, and in Tetu Constituency Maathai won with an overwhelming 98% of the vote. In January 2003, she was appointed Assistant Minister in the Ministry for Environment and Natural Resources and served in that capacity until November 2005. She founded the Mazingira Green Party of Kenya in 2003 to allow candidates to run on a platform of conservation as embodied by the Green Belt Movement. It is a member of the Federation of Green Parties of Africa and the Global Greens.
2004 Nobel Peace Prize
Wangarĩ Maathai was awarded the 2004 Nobel Peace Prize for her "contribution to sustainable development, democracy and peace", making her the first African woman, and the first environmentalist, to win the prize. According to Nobel's will, the Peace Prize is to be awarded to the person who in the preceding year "shall have done the most or the best work for fraternity between nations, for the abolition or reduction of standing armies and for the holding and promotion of peace congresses". Between 1901 and 2018, only 52 Nobel Prizes were awarded to women, compared with 852 awarded to men.
AIDS conspiracy theory
Controversy arose when it was reported by Kenyan newspaper The Standard that Maathai had claimed HIV/AIDS was "deliberately created by Western scientists to decimate the African population." Maathai denied making the allegations, but The Standard has stood by its reports.
In a 2004 interview with Time magazine, in response to questions concerning that report, Maathai replied: "I have no idea who created AIDS and whether it is a biological agent or not. But I do know things like that don't come from the moon. I have always thought that it is important to tell people the truth, but I guess there is some truth that must not be too exposed," and when asked what she meant, she continued, "I'm referring to AIDS. I am sure people know where it came from. And I'm quite sure it did not come from the monkeys." In response to the ensuing controversy, she issued a further statement.
2005–2011: Later life
Following a trip to Japan in 2005, Maathai became an enthusiastic proponent of the waste-reduction philosophy of mottainai, a Japanese term of Buddhist origin.
On 28 March 2005, Maathai was elected the first president of the African Union's Economic, Social and Cultural Council and was appointed a goodwill ambassador for an initiative aimed at protecting the Congo Basin Forest Ecosystem. She was one of the eight flag-bearers at the Opening Ceremony of the 2006 Winter Olympics. On 21 May 2006, she was awarded an honorary doctorate by Connecticut College, where she also gave the commencement address. She supported the International Year of Deserts and Desertification program. In November 2006, she spearheaded the United Nations Billion Tree Campaign. Maathai was one of the founders of the Nobel Women's Initiative, along with sister Nobel Peace laureates Jody Williams, Shirin Ebadi, Rigoberta Menchú Tum, Betty Williams and Mairead Corrigan Maguire. These six women, representing North and South America, Europe, the Middle East and Africa, decided to bring together their experiences in a united effort for peace with justice and equality. The goal of the Nobel Women's Initiative is to help strengthen work being done in support of women's rights around the world.
In August 2006, then United States Senator Barack Obama traveled to Kenya. His father had been educated in America through the same program as Maathai. She and the Senator met and planted a tree together in Uhuru Park in Nairobi. Obama called for freedom of the press to be respected, saying, "Press freedom is like tending a garden; it continually has to be nurtured and cultivated. The citizenry has to value it because it's one of those things that can slip away if we're not vigilant." He deplored global ecological losses, singling out President George W. Bush's refusal to join the Kyoto Protocol, the emissions agreement negotiated under the United Nations Framework Convention on Climate Change (UNFCCC).
Maathai was defeated in the Party of National Unity's primary elections for its parliamentary candidates in November 2007 and chose to instead run as the candidate of a smaller party. She was defeated in the December 2007 parliamentary election. She called for a recount of votes in the presidential election (officially won by Mwai Kibaki, but disputed by the opposition) in her constituency, saying that both sides should feel the outcome was fair and that there were indications of fraud.
In 2009, she published "The Challenge for Africa" with her insights into the strengths and weaknesses of governance in Africa, her own experiences, and the centrality of environmental protection to Africa's future.
In June 2009, Maathai was named as one of PeaceByPeace.com's first peace heroes. Until her death in 2011, Maathai served on the Eminent Advisory Board of the Association of European Parliamentarians with Africa (AWEPA).
Wangarĩ Maathai died on 25 September 2011 of complications arising from ovarian cancer while receiving treatment at a Nairobi hospital.
Her remains were cremated and buried at the Wangari Maathai Institute for Peace and Environmental Studies in Nairobi.
Wangarĩ Maathai Forest Champion Award
In 2012, the Collaborative Partnership on Forests (CPF), an international consortium of 14 organisations, secretariats and institutions working on international forest issues, launched the inaugural Wangarĩ Maathai Forest Champion Award.
Winners have included:
2012 – Narayan Kaji Shrestha, with an honourable mention to Kurshida Begum
2014 – Martha Isabel "Pati" Ruiz Corzo, with an honourable mention to Chut Wutty
2015 – Gertrude Kabusimbi Kenyangi
2017 – Maria Margarida Ribeiro da Silva, a Brazilian forestry activist
2019 – Léonidas Nzigiyimpa, a Burundian forestry activist
2022 – Cécile Ndjebet, a Cameroonian activist
Posthumous recognition
In 2012, Wangarĩ Gardens opened in Washington, DC. Wangarĩ Gardens is a 2.7-acre community garden project for local residents, consisting of over 55 garden allotments. It honours the legacy of Wangarĩ Maathai and her mission of community engagement and environmental protection. The gardens include a community garden, youth garden, outdoor classroom, pollinator hive, public fruit tree orchard, vegetable garden, herb garden, berry garden and strawberry patch. Within the complex there are personal garden plots and public gardens. The personal plots are available to residents living within 1.5 miles of the community garden, and plot holders are required to contribute one hour monthly to the maintenance of the public gardens. The public gardens and orchard are maintained by plot holders and volunteers, and are open to everyone to enjoy and harvest. Wangarĩ Gardens has no direct affiliation with the Green Belt Movement or the Wangarĩ Maathai Foundation but was inspired by Wangarĩ Maathai and her work and passion for the environment.
On 25 September 2013, the Wangarĩ Maathai Trees and Garden was dedicated on the lawn of the University of Pittsburgh's Cathedral of Learning. The memorial includes two red maples symbolizing Maathai's "commitment to the environment, her founding of the Green Belt Movement, and her roots in Kenya and in Pittsburgh", and a flower garden planted in a circular shape representing her "global vision and dedication to the women and children of the world", with an ornamental maple tree in the middle signifying "how one small seed can change the world".
On 1 April 2013, Google celebrated Wangari Maathai's 73rd birthday with a doodle.
In 2014, at what would have been her 50-year reunion, her Mount St. Scholastica classmates and Benedictine College unveiled a statue of the Nobel laureate at her alma mater's Atchison, Kansas campus. In 2019, with the renovation of the Westerman Hall of Science and Engineering, the college added a mural of Maathai and other scientists to the front entryway of the building.
In 2015, UNESCO published the graphic novel Wangari Maathai and the Green Belt Movement as part of their UNESCO Series on Women in African History. As an artistic and visual interpretation intended for private or public use in classrooms, it tells the story of Maathai and the movement she started.
In October 2016, Forest Road in Nairobi was renamed to Wangarĩ Maathai Road for her efforts to oppose several attempts to degrade forests and public parks through the Green Belt Movement.
In September 2022, the Washington, DC–based educational publisher Science Naturally included Dr. Maathai in its Women in Botany book in the Science Wide Open series for children. Brief excerpt: "Dr. Wangari started the Green Belt Movement to change things. She taught women in Kenya how to grow trees from seeds, and the women were paid to plant trees all around the country."
Selected publications
The bottom is heavy too: even with the Green Belt Movement: the Fifth Edinburgh Medal Address (1994)
Bottle-necks of development in Africa (1995)
The Canopy of Hope: My Life Campaigning for Africa, Women, and the Environment (2002)
Unbowed: A Memoir (2006)
Reclaiming rights and resources women, poverty and environment (2007)
Rainwater Harvesting (2008)
State of the world's minorities 2008: events of 2007 (2008)
Moral Ground: Ethical Action for a Planet in Peril (2010), chapter in Nelson, Michael P. and Kathleen Dean Moore (eds.), Trinity University Press
Replenishing the Earth (2010)
Honours
1984: Right Livelihood Award
1986: Better World Society
1987: Global 500 Roll of Honour
1991: Goldman Environmental Prize
1991: The Hunger Project's Africa Prize for Leadership
1993: Edinburgh Medal (for "Outstanding contribution to Humanity through Science")
1993: Jane Addams Leadership Award
1993: Benedictine College Offeramus Medal
1994: The Golden Ark Award
2001: The Juliet Hollister Award
2003: Global Environment Award, World Association of Non-Governmental Organizations
2004: Conservation Scientist Award from Columbia University
2004: J. Sterling Morton Award
2004: Petra Kelly Prize
2004: Sophie Prize
2004: Nobel Peace Prize
2006: Légion d'honneur
2006: Doctor of Public Service (honorary degree), University of Pittsburgh
2007: World Citizenship Award
2007: Livingstone Medal from Royal Scottish Geographical Society
2007: Indira Gandhi Prize
2007: Cross of the Order of St. Benedict
2008: The Elizabeth Blackwell Award from Hobart and William Smith Colleges
2009: NAACP Image Award - Chairman's Award (with Al Gore)
2009: Grand Cordon of the Order of the Rising Sun of Japan
2011: The Nichols-Chancellor's Medal awarded by Vanderbilt University
2013: Doctor of Science (honorary degree), Syracuse University, New York
2020: The Perfect World Award by The Perfect World Foundation
See also
Black Nobel Prize laureates
List of female Nobel laureates
List of peace activists
Mottainai
Tokyo International Conference on African Development (TICAD-IV), 2008.
Women's Environment & Development Organization
References
Works cited
Further reading
Namulundah Florence, Wangari Maathai: Visionary, Environmental Leader, Political Activist, Lantern, 2015.
Wangari Maathai, The Greenbelt Movement: Sharing the Approach and the Experience, Lantern Books, 2003.
Wangari Maathai, The Canopy of Hope: My Life Campaigning for Africa, Women, and the Environment, Lantern Books, 2002.
Wangari Maathai, Bottom is Heavy Too: Edinburgh Medal Lecture, Edinburgh UP, 1994.
Picture book (in French): Franck Prévot (text) & Aurélia Fronty (illustrations), Wangari Maathai, la femme qui plante des millions d'arbres, 2011
External links
Taking Root: The Vision of Wangari Maathai documentary film
Official Site: The Wangari Maathai Foundation
The Green Belt Movement and Wangari Maathai
Wangari Maathai and the Billion Tree Campaign
Feature on Wangari Maathai by the International Museum of Women
Seeds of change planting a path to peace
Nobel Women's Initiative
1940 births
2011 deaths
20th-century Kenyan women scientists
20th-century Kenyan scientists
Benedictine College alumni
Deaths from cancer in Kenya
Converts to Roman Catholicism
Kenyan Roman Catholics
Deaths from ovarian cancer
Economic, Social and Cultural Council officials
Nobel Peace Prize laureates
Kenyan Nobel laureates
Kenyan environmentalists
Kenyan women environmentalists
Kenyan democracy activists
Kenyan expatriates in the United States
Kenyan feminists
Kenyan veterinarians
21st-century Kenyan women politicians
21st-century Kenyan politicians
Kenyan women's rights activists
Kikuyu people
Mazingira Green Party of Kenya politicians
Members of the National Assembly (Kenya)
Grand Cordons of the Order of the Rising Sun
People from Nyeri County
University of Pittsburgh alumni
Women Nobel laureates
University of Nairobi alumni
Yale University faculty
Forestry in Kenya
Women in forestry
Candidates for President of Kenya
Japan–Kenya relations
Nonviolence advocates
Goldman Environmental Prize awardees
Environmental justice scholars
21st-century Kenyan scientists | Wangarĩ Maathai | [
"Technology"
] | 8,398 | [
"Women Nobel laureates",
"Women in science and technology"
] |
1,049,169 | https://en.wikipedia.org/wiki/Service-level%20objective | A service-level objective (SLO), as per the O'Reilly Site Reliability Engineering book, is a "target value or range of values for a service level that is measured by an SLI." An SLO is a key element of a service-level agreement (SLA) between a service provider and a customer. SLOs are agreed upon as a means of measuring the performance of the service provider and are outlined as a way of avoiding disputes between the two parties based on misunderstanding.
Overview
There is often confusion in the use of SLAs and SLOs. The SLA is the entire agreement that specifies what service is to be provided, how it is supported, times, locations, costs, performance, and responsibilities of the parties involved. SLOs are specific measurable characteristics of the SLA such as availability, throughput, frequency, response time, or quality. These SLOs together are meant to define the expected service between the provider and the customer and vary depending on the service's urgency, resources, and budget. SLOs provide a quantitative means to define the level of service a customer can expect from a provider.
SLOs are formed by setting goals for metrics (commonly called service level indicators, or SLIs). For example, an availability SLO may be defined as the expected measured value of an availability SLI over a prescribed duration (e.g. four weeks). The availability SLI used will vary with the nature and architecture of the service: a simple web service might use the ratio of successful responses served to the total number of valid requests received. (total_success / total_valid)
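To make the calculation concrete, here is a minimal Python sketch (not from the cited book; the function names and the 99.9% target are illustrative assumptions) of an availability SLI measured over a window and compared against an SLO target:

```python
def availability_sli(total_success: int, total_valid: int) -> float:
    """Availability SLI for a simple web service: the ratio of
    successful responses served to valid requests received."""
    if total_valid == 0:
        return 1.0  # no valid traffic in the window; treat as fully available
    return total_success / total_valid


def meets_slo(sli: float, target: float) -> bool:
    """The SLO is met when the measured SLI reaches the goal set for it."""
    return sli >= target


# Example over a four-week window, with an assumed 99.9% availability target.
sli = availability_sli(total_success=9_987_000, total_valid=10_000_000)
print(f"SLI = {sli:.4%}, meets SLO: {meets_slo(sli, target=0.999)}")
# SLI = 99.8700%, meets SLO: False
```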
Examples
Sturm and Morris argue that SLOs must be:
Attainable
Repeatable
Measurable
Understandable
Meaningful
Controllable
Affordable
Mutually acceptable
While Andrieux et al. define the SLO as "the quality of service aspect of the agreement. Syntactically, it is an assertion over the terms of the agreement as well as such qualities as date and time", Keller and Ludwig more concisely define an SLO as a "commitment to maintain a particular state of the service in a given period" with respect to the state of the SLA parameters. Keller and Ludwig go on to state that while service providers will most often be the lead entity in taking on SLOs, there is no firm definition as such, and any entity can be responsible for an SLO. An SLO can also be broken down into a number of different components, modeled in the sketch after the list below.
Obliged - The entity that is required to deliver the SLO.
Validity Period - The time in which the SLO will be delivered.
Expression - This is the actual language that defines what the SLO will be.
Optionally, an EvaluationEvent may be assigned to the SLO; an EvaluationEvent is defined as the measure by which the SLO is checked to see whether it is meeting the Expression.
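As a loose illustration of these components, the following Python sketch models them as a data structure; this is a hypothetical rendering for readability, not Keller and Ludwig's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Callable, Optional


@dataclass
class ServiceLevelObjective:
    obliged: str              # the entity required to deliver the SLO
    validity_start: datetime  # start of the period in which it is delivered
    validity_end: datetime    # end of that validity period
    expression: str           # the language defining what the SLO will be
    # Optional EvaluationEvent: the measure by which the SLO is checked
    # against its Expression.
    evaluation_event: Optional[Callable[[], bool]] = None


slo = ServiceLevelObjective(
    obliged="service provider",
    validity_start=datetime(2024, 1, 1),
    validity_end=datetime(2024, 2, 1),
    expression="90% of calls answered in less than 20 seconds",
)
```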
SLOs should generally be specified in terms of an achievement value or service level, a target measurement, a measurement period, and where and how they are measured. As an example, "90% of calls to the helpdesk should be answered in less than 20 seconds measured over a one-month period as reported by the ACD system". Results can be reported as a percent of time that the target answer time was achieved and then compared to the desired service level (90%).
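A rough sketch of how the helpdesk example could be evaluated (the answer times below are invented sample data, not real ACD output):

```python
# Answer times in seconds for calls received during a one-month
# measurement period, as might be exported from an ACD system.
answer_times = [5, 12, 18, 25, 8, 19, 31, 14, 9, 22]

target_seconds = 20   # target measurement: answered in less than 20 seconds
desired_level = 90.0  # desired service level, in percent

achieved = 100.0 * sum(t < target_seconds for t in answer_times) / len(answer_times)
print(f"{achieved:.0f}% of calls met the target (desired: {desired_level:.0f}%)")
print("SLO met" if achieved >= desired_level else "SLO missed")
# 70% of calls met the target (desired: 90%) -> SLO missed
```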
Term usage
The SLO term is found in various scientific papers, for instance in the reference architecture of the SLA@SOI project, and it is used in the Open Grid Forum document on WS-Agreement.
References
External links
Service Level Objectives
What are SLOs? How service-level objectives work with SLIs to deliver on SLAs
SLA vs. SLO vs. SLI: What’s the difference?
IT service management
Outsourcing | Service-level objective | [
"Technology"
] | 783 | [
"Computer industry",
"IT service management"
] |
1,049,187 | https://en.wikipedia.org/wiki/Toilets%20in%20Japan | Toilets in Japan are sometimes designed more elaborately than toilets commonly seen in other developed nations. European toilets occasionally have a separate bidet whilst Japan combines an electronic bidet with the toilet. The current state of the art for Western-style toilets in Japan is the bidet toilet, which is installed in 81% of Japanese households. In Japan, these bidets are commonly called washlets, a brand name of Toto Ltd., and they may include many advanced features rarely seen outside of Asia. The basic feature set commonly found on washlets consists of anal hygiene, bidet washing, seat warming, and deodorization.
Terminology
The Japanese word toire is an abbreviated form of the English word "toilet", and is used both for the toilet itself and for the room where it is located.
A common euphemism is o-tearai (literally "hand-washing"). This is similar to the usage in US English of "washroom", which literally means a room where something is washed, and "toilet", which literally refers to the act of self-cleaning. It is also common to see another loan translation, keshōshitsu ("powder room"), on signs in department stores and supermarkets, as well as accompanying the public toilet pictogram.
The plain word for toilet is benjo, from ben meaning "convenience" or "excrement", and this word is fairly common. It is often used in elementary schools, public swimming baths, and other such public places, and is not especially impolite, although some may prefer a more refined word. In many children's games, a child who is tagged "out" is sent to a special place, such as the middle of a circle, called the benjo. Japanese has many other words for places reserved for excretory functions, including kawaya (厠) and habakari (憚り), but most are rare or archaic.
The toilet itself—that is, the bowl or in-floor receptacle, the water tank, et cetera—is called benki (便器). The toilet seat is benza (便座). A potty, either for small children or for the elderly or infirm, is called omaru (sometimes written 御虎子).
The Japan Toilet Association celebrates an unofficial Toilet Day on November 10, because in Japan the numbers 11/10 (for the month and the day) can be read as ii-to(ire), which also means "Good Toilet".
Types of toilets
There are two styles of toilets commonly found in Japan; the oldest type is a simple squat toilet, which is still common in public conveniences. After World War II, modern Western-type flush toilets and urinals became common.
Squat toilet
The traditional toilet is the squat toilet. A squat toilet differs from a sitting toilet in both construction and method of employment. A squat toilet essentially looks like a miniature urinal set horizontally into the floor. Most squat toilets in Japan are made of porcelain, although in some cases (as on trains) stainless steel is used instead. The user squats over the toilet, facing the hemispherical hood, i.e., the wall in the back of the toilet in the picture seen on the right. A shallow trough collects the waste, instead of a large water-filled bowl as in a Western toilet. All other fixtures, such as the water tank, piping, and flushing mechanism, may be identical to those of a Western toilet.
Flushing causes water to push the waste matter from the trough into a collecting reservoir which is then emptied, with the waste carried off into the sewer system. The flush is often operated in the same manner as a Western toilet, though some have pull handles or pedals instead. Many Japanese toilets have two kinds of flush: "small" (小) and "large" (大). The difference is in the amount of water used. The former is for urine (in Japanese, literally "small excretion") and the latter for feces ("large excretion"). Often, the lever is pushed to the "small" setting to provide a continuous masking noise for privacy, as discussed below.
A combination squat/sitting toilet also exists, where a seat can be flipped down over a squat toilet, and the toilet can be used essentially the same way as the Western style. This hybrid seems to be common only in rural areas for the benefit of resident foreigners. Adapters that sit on top of the Japanese toilet to convert it to a functional sit-down toilet are much more common. There are also permanently installed extensions available to convert a squat toilet into a sitting-style washlet.
"Western-style"
A flush toilet with a pedestal for sitting is known in Japan as a yōshiki (Western-style) toilet, more commonly known as the sitting toilet. Western-style toilets, including high-tech toilets, are now more common in Japanese homes than the traditional squat toilets, though some older apartments retain stickers on the toilet or in its room illustrating the proper way to use it for urination and defecation. Many public toilets at schools, temples, and train stations are still equipped with only squat toilets. In their own homes, however, Japanese people prefer being able to sit, especially older or physically disabled individuals for whom prolonged squatting is physically demanding or uncomfortable. Like squat toilets, many Western-style toilets have two kinds of flush, "small" (小) and "large" (大); the difference is in the amount of water used.
Japanese bidets (washlet)
The modern toilet in Japan, in English sometimes called the Super Toilet, is commonly known in Japanese as a washlet (after Toto's brand name) or as a warm-water cleaning toilet seat (onsui senjō benza), and has many features. The Toto product Washlet Zoe is listed in Guinness World Records as the world's most sophisticated toilet, with seven functions; however, as the model was introduced in 1997, it has likely been surpassed by Toto's latest model, the Neorest. The idea for the washlet came from abroad, and the first toilet seat with an integrated bidet was produced in Switzerland by Closomat in 1957. The age of the high-tech toilet in Japan started in 1980 with the introduction of the Washlet G Series by Toto, and since then the product name washlet has been used to refer to all types of Japanese high-tech toilets. Within a few decades of its introduction, almost half of all private homes in Japan had such a toilet, exceeding the number of households with a personal computer. While the toilet looks like a Western-style toilet at first glance, it has numerous additional features—such as a blow dryer, seat heating, massage options, water jet adjustments, automatic lid opening, automatic flushing, a wireless control panel, and heating and air conditioning for the room—included either as part of the toilet or in the seat. These features can be accessed by an (often wireless) control panel attached to the seat or mounted on a nearby wall.
Basic features
The most basic feature is the integrated bidet, a nozzle the size of a pencil that comes out from underneath the toilet seat and squirts water. It has two settings: one for washing the anus and one for the bidet function. The former is called posterior wash, general use, or family cleaning, and the latter is known as feminine cleaning, feminine wash or simply bidet. At no point does the nozzle actually touch the body of the user. The nozzle is also self-cleaning and cleans itself before and after operation.
The user can select to wash the anus or vulva by pressing the corresponding button on the control panel. Usually the same nozzle is used for both operations, but at a different position of the nozzle head, and using different openings in the nozzle to squirt water at a different angle to aim for the correct spot. Occasionally, two nozzles are used, each dedicated for one area. The control logic is also attached to a pressure switch or a proximity sensor in the toilet seat, and operates only when the seat is occupied. The very first models did not include this automatic switch-off.
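As a purely hypothetical sketch of the interlock described above (washlet firmware is proprietary, so the structure below is an assumption for illustration), the control logic gates nozzle operation on the seat sensor:

```python
class WashletControl:
    """Hypothetical sketch: the spray nozzle operates only while
    the seat sensor reports that the seat is occupied."""

    def __init__(self) -> None:
        self.seat_occupied = False  # driven by a pressure or proximity sensor

    def on_seat_sensor(self, occupied: bool) -> None:
        self.seat_occupied = occupied
        if not occupied:
            self.stop_wash()  # the automatic switch-off missing in early models

    def request_wash(self, mode: str) -> bool:
        """mode is "posterior" or "bidet"; each aims the nozzle differently."""
        if not self.seat_occupied:
            return False  # ignore button presses when no one is seated
        print(f"nozzle extended for {mode} wash")
        return True

    def stop_wash(self) -> None:
        print("wash stopped; nozzle retracted and self-cleaned")


ctrl = WashletControl()
print(ctrl.request_wash("bidet"))      # False: seat empty, request ignored
ctrl.on_seat_sensor(True)
print(ctrl.request_wash("posterior"))  # True: wash runs while seated
```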
The seat-heating feature is very common, found even on toilets that lack the bidet features. As most Japanese homes lack central heating—instead using space heating—the bathroom may be only a few degrees above freezing in the winter.
Customization
Most high-tech toilets allow water temperature and water pressure to be adjusted to match the preferences of the user. By default, the vulva receives less pressure than the anus. Researchers in Japan have found that most users prefer a water temperature slightly above body temperature. The nozzle position can also often be adjusted manually forward or aft. High-end washlets allow selection of vibrating and pulsating jets of water, claimed by manufacturers to be beneficial for constipation and hemorrhoids. The most advanced washlets can mix the water jet with soap for an improved cleaning process.
The washlet can replace toilet paper completely, but many users opt to use both wash and paper in combination—although use of paper may be omitted for cleaning of the vulva. Some wipe before washing, some wash before wiping, some wash only, and some wipe only—each according to their preference. Another frequent feature is a blow dryer, often adjustable between 40 °C (104 °F) and 60 °C (140 °F), used to dry the washed areas.
Advanced features
Additional features may include a heated seat, which may be adjustable from 30 °C (86 °F) to 40 °C (104 °F), an automatic lid equipped with a proximity sensor, which opens and closes based on the location of the user; and an air dryer and deodorizer. Some play music to relax the user's sphincter (some INAX toilets, for example, play the first few phrases of Op. 62 Nr. 6 Frühlingslied by Felix Mendelssohn). Other features are automatic flushing, automatic air deodorizing, and a germ-resistant surface. Some models specially designed for the elderly may include armrests and devices that help the user to stand back up after use. A soft close feature slows the toilet lid down while closing so the lid does not slam onto the seat, or in some models, the toilet lid will close automatically a certain time after flushing.
The most recent introduction is an ozone deodorant system that can quickly eliminate smells. Also, the latest models store the times of day when the toilet is used and have a power-saving mode that warms the toilet seat only during times when the toilet is likely to be used based on historic usage patterns. Some toilets also glow in the dark or may even have air conditioning below the rim for hot summer days. Another recent innovation is intelligent sensors that detect someone standing in front of the toilet and initiate an automatic raising of the lid (if the person is facing away from the toilet) or the lid and seat together (if someone is facing the toilet).
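One way such a pattern-based power-saving mode could work (manufacturers do not publish their algorithms, so this Python sketch is an assumed approach for illustration only) is to count uses per hour of day and heat the seat only in historically busy hours:

```python
from collections import Counter


class SeatHeaterScheduler:
    """Hypothetical sketch: warm the seat only during hours of the day
    in which the toilet has historically been used."""

    def __init__(self, min_uses: int = 3) -> None:
        self.usage_by_hour = Counter()  # hour of day -> historic use count
        self.min_uses = min_uses        # uses required before an hour is "busy"

    def record_use(self, hour: int) -> None:
        self.usage_by_hour[hour] += 1

    def heater_on(self, hour: int) -> bool:
        return self.usage_by_hour[hour] >= self.min_uses


scheduler = SeatHeaterScheduler()
for h in (7, 7, 7, 22, 22, 22, 22):  # a few days of morning and evening use
    scheduler.record_use(h)
print(scheduler.heater_on(7), scheduler.heater_on(13))  # True False
```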
Self-cleaning
Japanese toilets with washlets increasingly have features intended to reduce the frequency with which manual cleaning is required.
Many models will spray a film of water prior to use to prevent waste from bonding to the bowl prior to flushing. Still others will spray a small amount of mild detergent, this has the added benefit of breaking the surface tension of the water, preventing urine or solid waste from splashing during use. Some models will spray electrolyzed water after use to disinfect the bowl.
Air ionizers are sometimes included with claims of microbe reduction when the lid is closed. Recently, there has been development in using photocatalytic glazes and ultraviolet light to clean the bowl.
Controls
Text explaining the controls of these toilets tends to be in Japanese only. Although many of the buttons often have pictograms, the flush button is often written only in Kanji, meaning that non-Japanese users may initially find it difficult to locate the correct button.
In January 2017, the Japan Sanitary Equipment Industry Association, a consortium of companies producing plumbing products including Toto Ltd., Panasonic, and Toshiba, agreed to unify the iconography used on the often baffling control panels of Japanese toilets. The manufacturers planned to implement the eight new pictograms on models released from 2017 onward, with a view to the system becoming an international standard.
Future developments
Recently, researchers have added medical sensors to these toilets, which can measure the blood sugar in the urine as well as the pulse, blood pressure, and body fat content of the user. Talking toilets that greet the user have also begun to appear. Other measurements are currently being researched. The data may automatically be sent to a doctor through a built-in internet-capable cellular telephone. However, these devices are still very rare in Japan, and their future commercial success is difficult to predict. A voice-operated toilet that understands verbal commands is under development. Toto, NAiS (a division of Panasonic), and other companies also produce portable, battery-operated travel washlets, which must be filled with warm water before use.
Washlet Syndrome
The repetitive use of a washlet-type water jet on a high-pressure setting as an enema can weaken the user's capability for self-evacuation, which can lead to more serious constipation. Repeated high-pressure washing of the anus may also cause excessive cleanliness, allowing other bacteria to adhere around the anus and causing skin inflammation there. Some proctologists in Japan refer to this condition as "Washlet syndrome".
There have been claims of benefit in preventing urinary tract infections and also concerns that washlet use can cause increased risk of urinary tract infection, aggravate vaginal flora when the bidet feature is used, and cause cross-contamination from the wand or water tank, but the effects appear to be minimal and neither a substantial risk nor of measurable benefit for healthy adults.
Urinals
Urinals in Japan are very similar to urinals in the rest of the world, and are mainly found in public male toilets or male toilets with a large number of users.
Female urinals never caught on in Japan, although the Japanese toilet manufacturer Toto attempted to popularize the American Sanistand female urinal between 1951 and 1968. This device was shaped like a cone and placed on the floor. It was never very popular, and only a few examples remain, including those installed underneath the now-demolished National Olympic Stadium for the 1964 Summer Olympics in Tokyo to accommodate visitors from a wide range of cultures.
Japan-specific accessories
Toilets in Japan have very similar accessories as most toilets worldwide, including toilet paper, a toilet brush, a sink, etc. However, there are some Japan-specific accessories that are rarely found outside Japan.
The Sound Princess
Many Japanese women are embarrassed at the thought of being heard by others during urination (see paruresis). To cover the sound of bodily functions, many women used to flush public toilets continuously while using them, wasting a large amount of water in the process. As persuasion campaigns did not stop this practice, a device was introduced in the 1980s that, after activation, produces the sound of flushing water without the need for actual flushing. A Toto brand name commonly found is the Otohime ("Sound Princess"). This device is now routinely placed in most new public women's rooms, and many older public women's rooms have been upgraded.
The Otohime may be either a separate battery-operated device attached to the wall of the toilet, or included in an existing washlet. The device is activated by pressing a button or by the wave of a hand in front of a motion sensor. When activated, the device creates a loud flushing sound similar to a toilet being flushed. This sound either stops after a preset time or can be halted by a second press on the button. It is estimated that this saves a substantial volume of water per use. However, some women believe that the Otohime sounds artificial and prefer a continuous flushing of the toilet to the recorded flush of the Otohime.
Toilet slippers
In Japanese culture, there is a tendency to separate areas into "clean" and "unclean", and the contact between these areas is minimized. For example, the inside of the house is considered a clean area, whereas the outside of the house is considered unclean. To keep the two areas separated, shoes are taken off before entering the house so that the unclean shoes do not touch the "clean" area inside of the house. Historically, toilets were located outside of the house, and shoes were worn for a trip to the toilet. Nowadays, the toilet is almost always inside the home and hygienic conditions have improved significantly, but the toilet is still considered an "unclean" area.
To further minimize contact between the "unclean" toilet floor and the "clean" floor in the rest of the house, many private homes and also some public toilets have toilet slippers in front of the toilet door, to be worn while in the toilet and removed immediately after leaving it. Their position also indicates whether the toilet is in use. They can be as simple as a pair of rubber slippers, decorated slippers with prints of anime characters for small children, or even animal-fur slippers. A frequent faux pas of foreigners is to forget to take off the toilet slippers after a visit to the restroom and then wear them in the non-toilet areas, thereby mixing the "clean" and "unclean" areas.
Public toilets
Public toilets are usually readily available all over Japan, and can be found in department stores, supermarkets, book stores, CD shops, parks, most convenience stores, and in all but the most rural train stations. Some older public toilet buildings lack doors, meaning that men using the urinals are in full view of people walking past. Beginning in the 1990s, there has been a movement to make public toilets cleaner and more hospitable than they had been in the past.
The number of public restrooms that have both Western and squat types of toilets is increasing. Many train stations in the Tokyo area and public schools throughout Japan, for example, only have squat toilets. In addition, parks, temples, traditional Japanese restaurants, and older buildings typically only have squat toilets. Western-style toilets are usually indicated by the kanji characters 洋式 (yōshiki), the English words "Western-style", a symbol for the type of toilet, or any combination of the three. Handicapped bathrooms are always Western style.
Many public toilets do not have soap for washing hands, or towels for drying hands. Many people carry a handkerchief with them for such occasions, and some even carry soap. Some public toilets are fitted with powerful hand dryers to reduce the volume of waste generated from paper towels. Hand dryers and taps are sometimes installed with motion-sensors as an additional resource-saving measure.
In a project launched by the Nippon Foundation, 16 well-known architects were asked to renovate 17 public toilets located in the public parks of Shibuya, Tokyo. Shigeru Ban designed restrooms that are surrounded by transparent tinted glass, which allows a person to evaluate the interior before entering. In August 2020, these restrooms were installed in Haru-no-Ogawa Community Park and the Yoyogi Fukamachi Mini Park.
Cultural aspects
In the often-crowded living conditions of Japanese cities, and with the lack of rooms that can be locked from inside in a traditional Japanese house, the toilet is one of the few rooms in the home that allows for a degree of privacy. Some toilet rooms are equipped with a bookshelf; others are entered with a newspaper in hand, and some are even decorated with character goods and posters. Even so, these toilets are, whenever possible, in rooms separate from those for bathing. This reflects a concern to separate the clean from the unclean, and the separation is a selling point in properties for rent.
Both the traditional squat toilet and the high-tech toilet are a source of confusion for foreigners unaccustomed to these devices. There are humorous reports of individuals using a toilet, and randomly pressing buttons on the control panel either out of curiosity or in search for the flushing control, and suddenly to their horror receiving an unexpected jet of water directed at the genitals or anus. As the water jet continues for a few seconds after the novice jumps up, he also gets himself or the bathroom wet. Many Japanese toilets now feature pressure-sensitive seats that automatically shut off the bidet when the user stands up. Many have the buttons labeled in English to reduce culture shock.
In January 2017, the Japan Sanitary Equipment Industry Association agreed to standardize the iconography used on control panels of Japanese toilets, in an attempt to reduce confusion for foreign visitors.
Environmental aspects
The environmental impact of modern-style washlets differs from that of regular flush toilets. Modern toilets use less water than older toilets, and the self-cleaning options also reduce the amount of detergent used. Some toilets even change the amount of water per flush depending on whether the seat was flipped up (indicating male urination). They also reduce the amount of toilet paper used. On the other hand, these toilets consume electricity, estimated at 5% of the energy used by the average Japanese household. In rural areas, toilets that use very little or no water have also been designed; these are also considered emergency toilets in case of earthquakes.
Toilet sinks
Many toilets in Japan with a water tank include a built-in sink. This is a simple water-saving grey water system: clean municipal water is used to wash the hands, then the waste water from hand washing is used to fill the tank for flushing. It also is a space saving feature in small, older bathrooms.
Market acceptance
Washlets in Japan cost from US$200, with the majority priced around US$500 for washlet upgrades for existing Western-style toilets. Top-of-the-range washlets, including the ceramic bowl, can cost up to US$5,000.
Toto Ltd. is the largest producer of toilets, including washlets, worldwide. Washlets and other toilet related products are also produced by Inax, and Panasonic.
The total market worldwide for high-tech toilets was about US$800 million in 1997. The largest producer in this category is Toto, with 65% of the market share, while the second largest is Inax at 25%. The main market for washlets is still in Japan, and Toto reports that overseas sales account for just 5% of its revenue. The primary foreign market is China, where Toto sells over one million washlets each year. In the US for example, sales are well below Japanese levels, even though sales improved from 600 units per month in 2001 to 1,000 units per month in 2003. In Europe, Toto sells only 5,000 washlets annually.
While most Europeans would probably regard Japanese washlets as quite a curiosity, the number of such toilets being installed in Europe is increasing. This is mainly for toilets for the handicapped. Depending on the type of disability, handicapped persons may have difficulties reaching the anus region to clean themselves after toilet use. Hence, the introduction of toilets with a water jet cleaner and blow dryer allows such persons to clean themselves without assistance.
There are several reasons for low sales outside of Japan. One main reason is that it takes time for customers to get used to the idea of a washlet. Sales in Japan were slow when the device was introduced in 1980. After some acclimatization, sales improved significantly starting in 1985. Around 1990, 10% of Japanese households had a washlet; this number increased to over 50% in 2002. Toto expects a corresponding improvement in foreign sales within a few years.
Another factor is the lack of a power supply near the toilet. While virtually all Japanese washrooms have an electric outlet behind the toilet, many foreign bathrooms lack a nearby outlet. In Australia, New Zealand, Ireland, the UK, and many other countries, high current electrical outlets installed in close proximity to water, or where persons may be wet, are prohibited by codes due to possible health and safety concerns.
Lastly, the waste outlet of an S-type European toilet sits closer to the back wall than Japanese toilets require, so an S-type European toilet cannot easily be replaced with a Japanese one. Japanese high-tech toilets are also much more expensive than traditional Western toilets. In Europe, there is competition with the traditional Western bidet, while North Americans are unaccustomed to bidets.
History
During the Jōmon period (14,000 BC to 300 BC), settlements were built in a horseshoe shape, with a central plaza in the middle and garbage heaps around the settlement. In these garbage heaps, calcified fecal remains of humans and dogs, so-called coprolites, were found, indicating that the garbage dumps were also used as toilets.
The earliest sewer systems are from the Yayoi period (300 BC to 250 AD). These systems were used in larger settlements, probably in combination with toilets.
A possible ritual site, that may also have been a toilet using flowing water, dating back to the early 3rd century was found in Sakurai, Nara. Another cesspit analyzed by archaeologists in detail was found at the site of the Fujiwara Palace in Kashihara, Nara, the first location of the imperial city from 694 to 710. This toilet was constructed over an open pit similar to an outhouse.
During the Nara period (710 to 784), a drainage system was created in the capital at Nara, consisting of 10–15 cm wide streams that the user could squat over with one foot on each side. Wooden sticks called chūgi were used as a sort of toilet paper. In earlier days seaweed was used for cleaning, but by the Edo period it had been replaced by toilet paper made of washi (traditional Japanese paper). In mountainous regions, wooden scrapers and large leaves were used as well.
Often, toilets were constructed over a running stream; one of the first known flushing toilets was found at Akita castle, dating back to the 8th century, with the toilet constructed over a diverted stream.
However, historically, pit toilets were more common, as they were easier to build and allowed the reuse of the feces as fertilizer—very important in a country where Buddhism and its associated mostly vegetarian, pescetarian lifestyle acted to reduce dependence on livestock for food. The waste products of rich people were sold at higher prices because their diet was better.
Various historic documents dating from the 9th century describe laws regarding the construction of fresh and waste water channels, and detail the disposal procedures for toilet waste.
Prisoners shall be directed to clean up sewage at the Palace and government offices as well as toilets of the east and west on the morning after a rainy night
— Collected Interpretations of the Administrative Laws Ryo-no-shuge
Selling human waste products as fertilizer became much less common after World War II, both for sanitary reasons and because of the proliferation of chemical fertilizers, and less than 1% is now used for fertilization. This history gave Japan a comparatively high standard of hygiene: the orderly disposal of human waste was a common component of the culture, and the first Westerner to visit Edo expressed his shock at seeing such a clean city.
In Okinawa, the toilet was often attached to the pig pen, and the pigs were fed with the human waste product. This practice was banned as unhygienic after World War II by the American authorities.
During the Azuchi–Momoyama period (1568 to 1600), the "Taiko Sewerage" was built around Osaka Castle, and it still exists and functions today. The use of modern sewage systems began in 1884, with the installation of the first brick and ceramic sewer in Kanda, Tokyo. More plumbing and sewage systems were installed after the Great Kantō earthquake to avoid diseases after future earthquakes. However, the construction of sewers increased only after World War II to cope with the waste products of the growing population centers. By the year 2000, 60% of the population was connected to a sewer system. The national Sewage Day is September 10.
Western-style toilets and urinals started to appear in Japan at the beginning of the 20th century, but only after World War II did their use become more widespread, due to the influence of the American occupation. The Occupation government eschewed the use of human excreta as fertilizer, which led to a sense of shame over this practice, and in rural areas where the practice had persisted, human waste quickly went from being recycled to being disposed of. Specific places where night soil continued to be recycled required conscious political leadership, such as the Shinkyō Commune in Nara Prefecture.
In 1977, the sale of Western-style toilets exceeded the sale of traditional squat toilets in Japan. Based on toilets with a built-in bidet from Switzerland and the US, the world's largest sanitary equipment company, Toto, introduced the Washlet in 1980. Japanese companies currently produce some of the most advanced, highest tech toilets in the world.
See also
Electronic bidet
Science and technology in Japan
Mariko Aoki phenomenon, the urge to defecate while visiting a bookstore
TOTO Neorest 600
TOTO Drake II
Toilet meal
References
External links
Chozick, Matthew R. "Views from the loo queues"—21 July 2007 article from The Japan Times: Tokyo residents, foreigners on vacation, professors, and celebrities are interviewed about Japanese toilet use.
Tokyo Toilet Map with pictures of public toilets in Japan.
ToiletZone: Pictures of private toilets in Japan.
Toilets in Tokyo
Ito, Masami, "Toilets: Japan power behind throne", The Japan Times, 2 November 2010, p. 3.
"Toilet paper portal site: History of toilet paper" (トイレットペーパーのポータルサイト|トイレットペーパーの歴史) at Toiletpaper.co.jp. Retrieved March 28, 2011. Japanese-language site.
Toilet MP3 Akihabara News. Retrieved March 28, 2011.
Japanese manners : Toilet Hokkaido Japanese Language School.
Toilets
Japanese home
Japanese architectural features | Toilets in Japan | [
"Biology"
] | 6,180 | [
"Excretion",
"Toilets"
] |
1,049,191 | https://en.wikipedia.org/wiki/Equivalent%20circuit | In electrical engineering, an equivalent circuit refers to a theoretical circuit that retains all of the electrical characteristics of a given circuit. Often, an equivalent circuit is sought that simplifies calculation, and more broadly, that is the simplest form of a more complex circuit, in order to aid analysis. In its most common form, an equivalent circuit is made up of linear, passive elements. However, more complex equivalent circuits are used that approximate the nonlinear behavior of the original circuit as well. These more complex circuits often are called macromodels of the original circuit. An example of a macromodel is the Boyle circuit for the 741 operational amplifier.
Examples
Thévenin and Norton equivalents
One of linear circuit theory's most surprising properties is that any two-terminal circuit, no matter how complex, behaves as nothing more than a source and an impedance, in either of two simple equivalent circuit forms:
Thévenin equivalent – Any linear two-terminal circuit can be replaced by a single voltage source and a series impedance.
Norton equivalent – Any linear two-terminal circuit can be replaced by a current source and a parallel impedance.
However, the single impedance can be of arbitrary complexity (as a function of frequency) and may be irreducible to a simpler form.
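As an illustrative sketch (a hypothetical resistive divider; the function names and component values below are invented for the example, not taken from any source), the following Python code computes the Thévenin equivalent of a voltage divider and converts it to its Norton form:

def thevenin_of_divider(v_src, r1, r2):
    # Open-circuit voltage at the divider output (no load attached).
    v_th = v_src * r2 / (r1 + r2)
    # Impedance looking back into the port with the source zeroed:
    # the ideal voltage source becomes a short, leaving r1 parallel r2.
    r_th = r1 * r2 / (r1 + r2)
    return v_th, r_th

def norton_from_thevenin(v_th, r_th):
    # The impedance is unchanged; the source becomes a current source.
    return v_th / r_th, r_th

v_th, r_th = thevenin_of_divider(10.0, 1000.0, 2000.0)
i_n, r_n = norton_from_thevenin(v_th, r_th)
print(v_th, r_th, i_n)  # about 6.67 V, 667 ohm, 10 mA

For this divider the two descriptions are interchangeable: a 6.67 V source behind 667 Ω presents the same terminal behaviour as a 10 mA source in parallel with 667 Ω.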
DC and AC equivalent circuits
In linear circuits, due to the superposition principle, the output of a circuit is equal to the sum of the output due to its DC sources alone and the output from its AC sources alone. Therefore, the DC and AC responses of a circuit are often analyzed independently, using separate DC and AC equivalent circuits which have the same response as the original circuit to DC and AC currents respectively. The composite response is calculated by adding the DC and AC responses (a numerical sketch follows the list below):
A DC equivalent of a circuit can be constructed by replacing all capacitances with open circuits, inductances with short circuits, and reducing AC sources to zero (replacing AC voltage sources by short circuits and AC current sources by open circuits.)
An AC equivalent circuit can be constructed by reducing all DC sources to zero (replacing DC voltage sources with short circuits and DC current sources with open circuits)
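The following Python sketch applies this procedure to a hypothetical three-resistor node (one resistor to a DC source, one to an AC source, one to ground); all names and values are invented for the illustration:

def parallel(*resistances):
    # Equivalent resistance of resistors in parallel.
    return 1.0 / sum(1.0 / r for r in resistances)

def node_response(v_dc, v_ac, r1, r2, r3):
    # DC equivalent: the AC source is replaced by a short circuit,
    # so r2 appears in parallel with r3 as the load on the DC divider.
    from_dc = v_dc * parallel(r2, r3) / (r1 + parallel(r2, r3))
    # AC equivalent: the DC source is replaced by a short circuit,
    # so r1 appears in parallel with r3 as the load on the AC divider.
    from_ac = v_ac * parallel(r1, r3) / (r2 + parallel(r1, r3))
    # By superposition, the composite response is the sum of the two.
    return from_dc, from_ac

print(node_response(5.0, 1.0, 1000.0, 1000.0, 2000.0))
# (2.0, 0.4): a 2.0 V DC offset plus 0.4 V of AC amplitude at the node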
This technique is often extended to small-signal nonlinear circuits like tube and transistor circuits by linearizing the circuit about the DC bias point (Q-point), using an AC equivalent circuit made by calculating the equivalent small-signal AC resistance of the nonlinear components at the bias point.
Two-port networks
Linear four-terminal circuits in which a signal is applied to one pair of terminals and an output is taken from another, are often modeled as two-port networks. These can be represented by simple equivalent circuits of impedances and dependent sources. To be analyzed as a two port network the currents applied to the circuit must satisfy the port condition: the current entering one terminal of a port must be equal to the current leaving the other terminal of the port. By linearizing a nonlinear circuit about its operating point, such a two-port representation can be made for transistors: see hybrid pi and h-parameter circuits.
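As a sketch of one common two-port bookkeeping device (the ABCD or transmission-matrix form, one of several equivalent parameter sets; the helper names below are invented for the example), cascaded two-ports combine by multiplying their matrices:

def abcd_series_impedance(z):
    # Two-port consisting of a single series impedance z.
    return ((1.0, z), (0.0, 1.0))

def abcd_shunt_admittance(y):
    # Two-port consisting of a single shunt admittance y.
    return ((1.0, 0.0), (y, 1.0))

def cascade(m1, m2):
    # Cascaded two-ports multiply their ABCD matrices (2x2 product).
    (a1, b1), (c1, d1) = m1
    (a2, b2), (c2, d2) = m2
    return ((a1 * a2 + b1 * c2, a1 * b2 + b1 * d2),
            (c1 * a2 + d1 * c2, c1 * b2 + d1 * d2))

# An L-section: series 100 ohm followed by a shunt 1 mS to ground.
print(cascade(abcd_series_impedance(100.0), abcd_shunt_admittance(0.001)))

For this L-section the result is ((1.1, 100.0), (0.001, 1.0)), i.e. A = 1 + zy, B = z, C = y, D = 1, matching the hand calculation.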
Delta and Wye circuits
In three phase power circuits, three phase sources and loads can be connected in two different ways, called a "delta" connection and a "wye" connection. In analyzing circuits, sometimes it simplifies the analysis to convert between equivalent wye and delta circuits. This can be done with the wye-delta transform.
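A minimal sketch of the delta-to-wye direction of the transform (the function name and argument convention are invented for the example) is:

def delta_to_wye(r_ab, r_bc, r_ca):
    # Each wye resistance is the product of the two delta resistances
    # meeting at that terminal, divided by the sum of all three.
    total = r_ab + r_bc + r_ca
    r_a = r_ab * r_ca / total
    r_b = r_ab * r_bc / total
    r_c = r_bc * r_ca / total
    return r_a, r_b, r_c

print(delta_to_wye(3.0, 3.0, 3.0))  # (1.0, 1.0, 1.0)

As a sanity check, a balanced delta of three equal resistances R maps to a balanced wye of resistances R/3, the familiar rule for balanced three-phase loads.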
Li-ion batteries
The electrical behavior of a Lithium-ion battery cell is often approximated by an equivalent circuit model. Such a model consists of a voltage generator driven by the state of charge, representing the open-circuit voltage of the cell, a resistor representing the internal resistance of the cell, and some RC parallels to simulate the dynamic voltage transients.
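A minimal simulation sketch of such a model follows, assuming a single RC pair, a constant discharge current, and an open-circuit voltage held fixed for simplicity (real models make it a function of the state of charge); every parameter name and value below is illustrative:

def simulate_cell(ocv, r0, r1, c1, current, dt, steps):
    v_rc = 0.0          # voltage across the RC pair, initially relaxed
    terminal = []
    for _ in range(steps):
        # The RC pair carries the cell current: dv/dt = i/c1 - v/(r1*c1).
        v_rc += dt * (current / c1 - v_rc / (r1 * c1))
        # Terminal voltage under discharge (positive current discharges).
        terminal.append(ocv - current * r0 - v_rc)
    return terminal

v = simulate_cell(ocv=3.7, r0=0.05, r1=0.02, c1=500.0,
                  current=2.0, dt=0.1, steps=600)

The simulated voltage shows an immediate ohmic drop of current × r0 at the start of the discharge, followed by an exponential relaxation with time constant r1 × c1, which is the characteristic transient these models are built to reproduce.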
In biology
Equivalent circuits can be used to electrically describe and model either a) continuous materials or biological systems in which current does not actually flow in defined circuits or b) distributed reactances, such as found in electrical lines or windings, that do not represent actual discrete components. For example, a cell membrane can be modelled as a capacitance (i.e. the lipid bilayer) in parallel with resistance-DC voltage source combinations (i.e. ion channels powered by an ion gradient across the membrane).
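A rough sketch of that membrane model, integrating the passive membrane equation with one forward-Euler step per call (the parameter values are only representative orders of magnitude, not measurements):

def membrane_step(v, i_inj, dt, c_m=1.0, r_m=10.0, e_rest=-70.0):
    # Passive membrane: c_m * dv/dt = (e_rest - v) / r_m + i_inj,
    # with c_m in uF/cm^2, r_m in kOhm*cm^2, voltages in mV,
    # injected current in uA/cm^2 and time in ms.
    dvdt = ((e_rest - v) / r_m + i_inj) / c_m
    return v + dt * dvdt

With these units the membrane time constant is r_m × c_m = 10 ms, so a step of injected current charges the membrane toward its new steady state over a few tens of milliseconds.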
See also
Equivalent impedance transforms
Miller theorem
Lumped element model
Steinmetz equivalent circuit
References
Circuit theorems | Equivalent circuit | [
"Physics"
] | 896 | [
"Equations of physics",
"Circuit theorems",
"Physics theorems"
] |
1,049,227 | https://en.wikipedia.org/wiki/Washlet | Washlet is a Japanese line of cleansing toilet seats manufactured and sold by the company Toto. The electronic bidet features a water spray element for genital and anal cleansing and commonly appears on toilets all over Japan. The device was released in June 1980 and, as of January 2022, Toto has sold more than 60 million units.
History
In the 1960s, the Japanese plumbing company Toto imported American "wash air seats" for domestic sale, mainly to hospitals and nursing homes. Toto began domestic production in 1969, but wash air seats were expensive and sometimes caused scalding injuries due to poor regulation of water temperature.
In 1980, Toto began to sell its improved Washlets in Japan after surveying employees to determine appropriate spray positions, since there were no biometric statistics available.
The term "Washlet" was coined by Toto in the 1980s. In 2012, in recognition of its pioneering role, the original Washlet G model was certified as item 55 of the Mechanical Engineering Heritage.
Design
In 1996, Toto also released Washlets designed for Japanese-style squat toilets, but they proved difficult to use due to accuracy issues. As Japanese-style toilets were increasingly replaced with their Western-style counterparts, the model was discontinued around 2003.
In October 2005, Toto released other improvements, incorporating sleep mode for energy conservation, a remote control, and a Washlet that could play MP3 audio files. Upon her visit to Japan in 2005, pop singer Madonna commented that she had "missed Japan’s warm toilet seats."
Functions
The cleansing features include buttons labeled Oshiri ("Rear") and Bidet ("Front"), with translations in English-speaking regions. Most current models have a sensor that prevents water from spraying while a person is not sitting on the toilet.

For antibacterial and disinfectant purposes, the nozzle is designed at such an angle that the water does not splash back on the inside of the toilet (43° for anuses, 53° for vulvas), and the nozzle itself is washed with warm water when stowed away and before use. Anal and genital cleansing functions operate on different nozzles. Some models feature deodorizers and dryers for the user's convenience.

The control panel usually features settings to change the intensity and warmth of the water spray.
See also
Electronic bidet
Toilets in Japan
References
External links
Official link
Drake II specifications on TOTO official website
Washlet at TotoUSA.com
Products introduced in 1980
Toilets
Japanese inventions | Washlet | [
"Biology"
] | 517 | [
"Excretion",
"Toilets"
] |
1,049,228 | https://en.wikipedia.org/wiki/Place%20cell | A place cell is a kind of pyramidal neuron in the hippocampus that becomes active when an animal enters a particular place in its environment, which is known as the place field. Place cells are thought to act collectively as a cognitive representation of a specific location in space, known as a cognitive map. Place cells work with other types of neurons in the hippocampus and surrounding regions to perform this kind of spatial processing. They have been found in a variety of animals, including rodents, bats, monkeys and humans.
Place-cell firing patterns are often determined by stimuli in the environment such as visual landmarks, and olfactory and vestibular stimuli. Place cells have the ability to suddenly change their firing pattern from one pattern to another, a phenomenon known as remapping. This remapping may occur in either some of the place cells or in all place cells at once. It may be caused by a number of changes, such as in the odor of the environment.
Place cells are thought to play an important role in episodic memory. They contain information about the spatial context in which a memory took place, and they seem to support consolidation by exhibiting replay – the reactivation of the place cells involved in a certain experience at a much faster timescale. Place cells show alterations with age and disease, such as Alzheimer's disease, which may be involved in a decrease of memory function.
The 2014 Nobel Prize in Physiology or Medicine was awarded to John O'Keefe for the discovery of place cells, and to Edvard and May-Britt Moser for the discovery of grid cells.
Background
Place cells were first discovered by John O'Keefe and Jonathan Dostrovsky in 1971 in the hippocampus of rats. They noticed that rats with impairments in their hippocampus performed poorly in spatial tasks, and thus hypothesised that this area must hold some kind of spatial representation of the environment. To test this hypothesis, they developed chronic electrode implants, with which they could record the activity of individual cells extracellularly in the hippocampus. They noted that some of the cells showed activity when a rat was "situated in a particular part of the testing platform facing in a particular direction". These cells would later be called place cells.
In 1976, O'Keefe performed a follow-up study, demonstrating the presence of what he called place units. These units were cells that fired in a particular place in the environment, the place field. They are described as having a low resting firing rate (<1 Hz) when a rat is not in its place field, but a particularly high firing rate, which can be over 100 Hz in some cases, within the place field. Additionally, O'Keefe described six special cells, which he called misplace units, which also fire only in a particular place, but only when the rat performed an additional behaviour, such as sniffing, which was often correlated with the presence of a novel stimulus, or the absence of an expected stimulus. The findings ultimately supported the cognitive map theory, the idea that the hippocampus holds a spatial representation, a cognitive map, of the environment.
There has been much debate as to whether hippocampal place cells function depends on landmarks in the environment, on environmental boundaries, or on an interaction between the two. Additionally, not all place cells rely on the same external cues. One important distinction in cues is local and distal, where local cues appear in the immediate vicinity of a subject, whereas distal cues are far away, and act more like landmarks. Individual place cells have been shown to follow either or rely on both. Additionally, the cues on which the place cells rely may depend on previous experience of the subject and the saliency of the cue.
There has also been much debate as to whether hippocampal pyramidal cells truly encode non-spatial information as well as spatial information. According to the cognitive map theory, the hippocampus's primary role is to store spatial information through place cells and the hippocampus was biologically designed to provide a subject with spatial information. Recent findings, such as a study showing that place cells respond to non-spatial dimensions, such as sound frequency, disagree with the cognitive map theory. Instead, they support a new theory saying that the hippocampus has a more general function encoding continuous variables, and location just happens to be one of those variables. This fits in with the idea that the hippocampus has a predictive function.
Relationship to grid cells
It has been proposed that place cells are derivatives of grid cells, pyramidal cells in the entorhinal cortex. This theory suggests that the place fields of the place cells are a combination of several grid cells, which have hexagonal grid-like patterns of activity. The theory has been supported by computational models. The relation may arise through Hebbian learning. However, grid cells may instead play a supporting role in the formation of place fields, for example by providing path-integration input.
Another non-spatial explanation of hippocampal function suggests that the hippocampus performs clustering of inputs to produce representations of the current context – spatial or non-spatial.
Properties
Place fields
Place cells fire in a specific region of an environment, known as a place field. Place fields are roughly analogous to the receptive fields of sensory neurons, in that the firing region corresponds to a region of sensory information in the environment. However, unlike receptive fields, place cells show no topography, meaning that two neighboring cells do not necessarily have neighboring place fields. Place cells fire spikes in bursts at a high frequency inside the place field, but outside of the place field they remain relatively inactive. Place fields are allocentric, meaning that they are defined with respect to the outside world rather than the body. By orienting based on the environment rather than the individual, place fields can work effectively as neural maps of the environment. A typical place cell will have only one or a few place fields in a small laboratory environment. However, in larger environments, place cells have been shown to contain multiple place fields which are usually irregular. Place cells may also show directionality, meaning they will only fire in a certain location when travelling in a particular direction.
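Computational models commonly idealize a place field as a smooth bump of firing rate around the field centre. The sketch below uses a one-dimensional Gaussian tuning curve; this is a modelling convention rather than a description of any particular experiment, and all parameter names and values are illustrative:

import math

def place_field_rate(position, centre, width=0.1, peak_hz=20.0, rest_hz=0.1):
    # Low resting rate outside the field, Gaussian bump of activity inside.
    gauss = math.exp(-(position - centre) ** 2 / (2.0 * width ** 2))
    return rest_hz + peak_hz * gauss

print(place_field_rate(0.5, 0.5))   # peak rate at the field centre
print(place_field_rate(0.0, 0.5))   # close to the resting rate far away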
Remapping
Remapping refers to the change in the place field characteristics that occurs when a subject experiences a new environment, or the same environment in a new context. This phenomenon was first reported in 1987, and is thought to play a role in the memory function of the hippocampus. There are broadly two types of remapping: global remapping and partial remapping. When global remapping occurs, most or all of the place cells remap, meaning they lose or gain a place field, or their place field changes its location. Partial remapping means that most place fields are unchanged and only a small portion of the place cells remap. Some of the changes to the environment that have been shown to induce remapping include changing the shape or size of the environment, the color of the walls, the smell in the environment, or the relevance of a location to the task at hand.
Phase precession
The firing of place cells is timed in relation to local theta waves, a process termed phase precession. Upon entering a place field, place cells will fire in bursts at a particular point in the phase of the underlying theta waves. However, as an animal progresses through the place field, the firing will happen progressively earlier in the phase. It is thought that this phenomenon increases the accuracy of the place coding, and aids in plasticity, which is required for learning.
Directionality
In some cases place cells show directionality, meaning they will only fire in a location when the subject is travelling in a particular direction. However, they may also be omnidirectional, meaning they fire regardless of the direction in which the subject is travelling. The lack of directionality in some place cells might occur particularly in impoverished environments, whereas in more complicated environments directionality is enhanced. The radial arm maze is one such environment where directionality does occur. In this environment, cells may even have multiple place fields, of which one is strongly directional, while the others are not. In virtual reality corridors, the degree of directionality in the population of place cells is particularly high. The directionality of place cells has been shown to emerge as a result of the animal's behaviour. For example, the receptive fields become skewed when rats travel a linear track in a single direction. Recent theoretical studies suggest that place cells encode a successor representation which maps the current state to the predicted successor states, and that directionality emerges from this formalism. This computational framework also provides an account for the distortion of place fields around obstacles.
Sensory input
Place cells were initially believed to fire in direct relation to simple sensory inputs, but studies have suggested that this may not be the case. Place fields are usually unaffected by large sensory changes, like removing a landmark from an environment, but respond to subtle changes, like a change in color or shape of an object. This suggests that place cells respond to complex stimuli rather than simple individual sensory cues. According to the functional differentiation model, sensory information is processed in various cortical structures upstream of the hippocampus before actually reaching the structure, so that the information received by place cells is a compilation, a functional derivative, of different stimuli.
Sensory information received by place cells can be categorized as either metric or contextual information, where metric information corresponds to where place cells should fire and contextual input corresponds to whether or not a place field should fire in a certain environment. Metric sensory information is any kind of spatial input that might indicate a distance between two points. For example, the edges of an environment might signal the size of the overall place field or the distance between two points within a place field. Metric signals can be either linear or directional. Directional inputs provide information about the orientation of a place field, whereas linear inputs essentially form a representational grid. Contextual cues allow established place fields to adapt to minor changes in the environment, such as a change in object color or shape. Metric and contextual inputs are processed together in the entorhinal cortex before reaching the hippocampal place cells. Visuospatial and olfactory inputs are examples of sensory inputs that are utilized by place cells. These types of sensory cues can include both metric and contextual information.
Visuospatial inputs
Spatial cues such as geometric boundaries or orienting landmarks are important examples of metric input. An example is the walls of an environment, which provides information about relative distance and location. Place cells generally rely on set distal cues rather than cues in the immediate proximal environment, though local cues can have a profound impact on local place fields. Visual sensory inputs can also supply important contextual information. A change in color of a specific object or the walls of the environment can affect whether or not a place cell fires in a particular field. Thus, visuospatial sensory information is critical to the formation and recollection of place field.
Olfactory inputs
Although place cells primarily rely on visuospatial input, some studies suggest that olfactory input may also affect the formation and stability of place fields. Olfaction may compensate for a loss of visual information, or even be responsible for the formation of stable place fields in the same way visuospatial cues are. This has been confirmed by a study in a virtual environment that was composed of odor gradients. Change in the olfactory stimulus in an environment may also cause the remapping of place cells.
Vestibular inputs
Stimuli from the vestibular system, such as rotations, can cause changes in place cell firing. After receiving vestibular input some place cells may remap to align with this input, though not all cells will remap; some are more reliant on visual cues. Bilateral lesions of the vestibular system may cause abnormal firing of hippocampal place cells, as evidenced in part by difficulties with spatial tasks such as the radial arm maze and the Morris water navigation task.
Movement inputs
Movement can also be an important spatial cue. Mice use their self-motion information to determine how far and in which direction they have travelled, a process called path integration. This is especially the case in the absence of continuous sensory inputs. For example, in an environment with a lack of visuospatial inputs, an animal might search for the environment edge using touch, and discern location based on the distance of its movement from that edge. Path integration is largely aided by grid cells, which are a type of neuron in the entorhinal cortex that relay information to place cells in the hippocampus. Grid cells establish a grid representation of a location, so that during movement place cells can fire according to their new location while orienting according to the reference grid of their external environment.
Episodic memory
Place cells play an important role in episodic memory. One important aspect of episodic memory is the spatial context in which the event occurred. Hippocampal place cells have stable firing patterns even when cues from a location are removed and specific place fields begin firing when exposed to signals or a subset of signals from a previous location. This suggests that place cells provide the spatial context for a memory by recalling the neural representation of the environment in which the memory occurred. By establishing spatial context, place cells play a role in completing memory patterns. Furthermore, place cells are able to maintain a spatial representation of one location while recalling the neural map of a separate location, effectively differentiating between present experience and past memory. Place cells are therefore considered to demonstrate both pattern completion and pattern separation qualities.
Pattern completion
Pattern completion is the ability to recall an entire memory from a partial or degraded sensory cue. Place cells are able to maintain a stable firing field even after significant signals are removed from a location, suggesting that they can recall a pattern based on only part of the original input. Furthermore, the pattern completion exhibited by place cells is symmetric, because an entire memory can be retrieved from any part of it. For example, in an object-place association memory, spatial context can be used to recall an object and the object can be used to recall the spatial context.
Pattern separation
Pattern separation is the ability to differentiate one memory from other stored memories. Pattern separation begins in the dentate gyrus, a section of the hippocampus involved in memory formation and retrieval. Granule cells in the dentate gyrus process sensory information using competitive learning, and relay a preliminary representation to form place fields. Place fields are extremely specific, as they are capable of remapping and adjusting firing rates in response to subtle sensory signal changes. This specificity is critical for pattern separation, as it distinguishes memories from one another.
Reactivation, replay, and preplay
Place cells often exhibit reactivation outside their place fields. This reactivation has a much faster time scale than the actual experience, and it occurs mostly in the same order in which it was originally experienced, or, more rarely, in reverse. Replay is believed to have a functional role in memory retrieval and memory consolidation. However, when replay is disturbed, it does not necessarily affect place coding, which means it is not essential for consolidation in all circumstances. The same sequence of activity may occur before the actual experience. This phenomenon, termed preplay, may have a role in prediction and learning.
Model animals
Place cells were first discovered in rats, but place cells and place-like cells have since been found in a number of different animals, including rodents, bats and primates. Additionally, evidence for place cells in humans was found in 2003.
Rodents
Both rats and mice are often used as model animals for place cell research. Rats became especially popular after the development of multiarray electrodes, which allow for the simultaneous recording of a large number of cells. However, mice have the advantage that a larger range of genetic variants is available. Additionally, mice can be head-fixed, allowing for the use of microscopy techniques to look directly into the brain. Though rats and mice have similar place cell dynamics, mice have smaller place cells and, on a track of the same size, a greater number of place fields per cell. Additionally, their replay is weaker compared to the replay in rats.
In addition to rats and mice, place cells have also been found in chinchillas.
Rats furthermore have social place cells, cells which encode the position of other rats. This finding was published in Science at the same time as the report of social place cells in bats.
Bats
Place cells were reported in Egyptian fruit bats for the first time in 2007 by Nachum Ulanovsky and his lab. The place cells in bats have a place field in 3D, which is probably due to the bat flying in three dimensions. The place cells in bats can be based on either vision or echolocation, with remapping taking place when bats switch between the two. Bats also have social place cells; this finding was published in Science at the same time as the report of social place cells in rats.
Primates
Place-related responses have been found in cells of the Japanese macaque and common marmoset; however, whether these are true place cells or spatial view cells is still debated. Spatial view cells respond to locations that are visually explored by eye movement, or the "view of a space", rather than the location of the monkey's body. In the macaque, cells were recorded while the monkey was driving a motorised cab around the experimental room. Additionally, place-related responses have been found in macaques while they navigated in a virtual reality. More recently, place cells may have been identified in the hippocampus of freely moving macaques and marmosets.
Disturbances to place cell function
Effects of alcohol
Place cell firing rate decreases dramatically after ethanol exposure, causing reduced spatial sensitivity, which has been hypothesised to be the cause of impairments in spatial processing after alcohol exposure.
Alzheimer's disease
Problems with spatial memory and navigation are thought to be one of the early indications of Alzheimer's disease. Place cells have been shown to degenerate in Alzheimer's mouse models, which causes such problems with spatial memory in these mice. Furthermore, the place cells in these models have unstable representations of space, and cannot learn stable representations for new environments as well as place cells in healthy mice. The hippocampal theta waves, as well as the gamma waves, that influence place cell firing, for example through phase precession, are also affected.
Aging
Place field properties, including the rate of firing and spike characteristics such as width and amplitude of the spikes, are largely similar between young and aged rats in the CA1 hippocampal region. However, while the size of place fields in the hippocampal CA3 region remains the same between young and aged rats, the average firing rate in this region is higher in aged rats. Young rats exhibit place field plasticity: when they are moving along a straight path, place fields are activated one after another. When young rats repeatedly traverse the same straight path, connections between place fields are strengthened due to plasticity, causing subsequent place fields to fire more quickly and causing place field expansion, possibly aiding young rats in spatial memory and learning. However, this observed place field expansion and plasticity are decreased in aged rat subjects, possibly reducing their capacity for spatial learning and memory.
This plasticity can be rescued in aged rats by giving them memantine, an NMDA-receptor antagonist known to improve spatial memory, which was therefore used in an attempt to restore place field plasticity in aged subjects. NMDA receptors, which are glutamate receptors, exhibit decreased activity in aged subjects, and the application of memantine leads to an increase in place field plasticity in aged rat subjects. Although memantine aids the encoding of spatial information in aged rat subjects, it does not help with the retrieval of this information later in time.
Aged rats further show high instability in their place cells in the CA1 region. When introduced to the same environment several times, the hippocampal map of the environment changed about 30% of the time, suggesting that the place cells remap in response to the exact same environment. In contrast, the CA3 place cells show increased rigidity in aged subjects: the same place fields in the CA3 region tend to activate in similar environments, whereas in young rats different place fields would fire, because young rats pick up on subtle differences between these environments. One possible cause of these changes in plasticity may be an increased reliance on self-motion cues.
See also
Spatial view cells, primate hippocampal counterpart for visual field.
Grid cells
Head direction cells
List of distinct cell types in the adult human body
References
External links
Articles containing video clips
Hippocampus (brain)
Neurons
Spatial cognition | Place cell | [
"Physics"
] | 4,266 | [
"Spacetime",
"Space",
"Spatial cognition"
] |
1,049,256 | https://en.wikipedia.org/wiki/Apollonian%20gasket | In mathematics, an Apollonian gasket or Apollonian net is a fractal generated by starting with a triple of circles, each tangent to the other two, and successively filling in more circles, each tangent to another three. It is named after Greek mathematician Apollonius of Perga.
Construction
The construction of the Apollonian gasket starts with three circles , , and (black in the figure), that are each tangent to the other two, but that do not have a single point of triple tangency. These circles may be of different sizes to each other, and it is allowed for two to be inside the third, or for all three to be outside each other. As Apollonius discovered, there exist two more circles and (red) that are tangent to all three of the original circles – these are called Apollonian circles. These five circles are separated from each other by six curved triangular regions, each bounded by the arcs from three pairwise-tangent circles. The construction continues by adding six more circles, one in each of these six curved triangles, tangent to its three sides. These in turn create 18 more curved triangles, and the construction continues by again filling these with tangent circles, ad infinitum.
Continued stage by stage in this way, the construction adds new circles at stage , giving a total of circles after stages. In the limit, this set of circles is an Apollonian gasket. In it, each pair of tangent circles has an infinite Pappus chain of circles tangent to both circles in the pair.
The size of each new circle is determined by Descartes' theorem, which states that, for any four mutually tangent circles, the radii of the circles obey the equation

\left(\frac{1}{r_1}+\frac{1}{r_2}+\frac{1}{r_3}+\frac{1}{r_4}\right)^{2} = 2\left(\frac{1}{r_1^{2}}+\frac{1}{r_2^{2}}+\frac{1}{r_3^{2}}+\frac{1}{r_4^{2}}\right).

This equation may have a solution with a negative radius; this means that one of the circles (the one with negative radius) surrounds the other three.
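Because the equation is quadratic in any one of the reciprocals, fixing three mutually tangent circles leaves exactly two choices for the fourth, which is easy to compute directly in terms of curvatures (the reciprocals of the radii); the function name below is invented for the illustration:

import math

def fourth_curvatures(k1, k2, k3):
    # Descartes' equation solved for the fourth curvature:
    # k4 = k1 + k2 + k3 +/- 2*sqrt(k1*k2 + k2*k3 + k3*k1).
    root = 2.0 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
    return k1 + k2 + k3 + root, k1 + k2 + k3 - root

print(fourth_curvatures(2, 2, 3))  # (15.0, -1.0)

The two roots are the inner Apollonian circle and, when negative, the enclosing circle; here they recover the (−1, 2, 2, 3) quadruple discussed later in this article.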
One or two of the initial circles of this construction, or the circles resulting from this construction, can degenerate to a straight line, which can be thought of as a circle with infinite radius. When there are two lines, they must be parallel, and are considered to be tangent at a point at infinity. When the gasket includes two lines on the -axis and one unit above it, and a circle of unit diameter tangent to both lines centered on the -axis, then the circles that are tangent to the -axis are the Ford circles, important in number theory.
The Apollonian gasket has a Hausdorff dimension of about 1.3057. Because it has a well-defined fractional dimension, even though it is not precisely self-similar, it can be thought of as a fractal.
Symmetries
The Möbius transformations of the plane preserve the shapes and tangencies of circles, and therefore preserve the structure of an Apollonian gasket.
Any two triples of mutually tangent circles in an Apollonian gasket may be mapped into each other by a Möbius transformation, and any two Apollonian gaskets may be mapped into each other by a Möbius transformation. In particular, for any two tangent circles in any Apollonian gasket, an inversion in a circle centered at the point of tangency (a special case of a Möbius transformation) will transform these two circles into two parallel lines, and transform the rest of the gasket into the special form of a gasket between two parallel lines. Compositions of these inversions can be used to transform any two points of tangency into each other. Möbius transformations are also isometries of the hyperbolic plane, so in hyperbolic geometry all Apollonian gaskets are congruent. In a sense, there is therefore only one Apollonian gasket, up to (hyperbolic) isometry.
The Apollonian gasket is the limit set of a group of Möbius transformations known as a Kleinian group.
For Euclidean symmetry transformations rather than Möbius transformations, in general, the Apollonian gasket will inherit the symmetries of its generating set of three circles. However, some triples of circles can generate Apollonian gaskets with higher symmetry than the initial triple; this happens when the same gasket has a different and more-symmetric set of generating circles. Particularly symmetric cases include the Apollonian gasket between two parallel lines (with infinite dihedral symmetry), the Apollonian gasket generated by three congruent circles in an equilateral triangle (with the symmetry of the triangle), and the Apollonian gasket generated by two circles of radius 1 surrounded by a circle of radius 2 (with two lines of reflective symmetry).
Integral Apollonian circle packings
If any four mutually tangent circles in an Apollonian gasket all have integer curvature (the inverse of their radius) then all circles in the gasket will have integer curvature.
Since the equation relating curvatures in an Apollonian gasket, integral or not, is

(a + b + c + d)^2 = 2\,(a^2 + b^2 + c^2 + d^2),

which is quadratic in each of the curvatures, it follows that one may move from one quadruple of curvatures to another by Vieta jumping, just as when finding a new Markov number.
The first few of these integral Apollonian gaskets are listed in the following table. The table lists the curvatures of the largest circles in the gasket. Only the first three curvatures (of the five displayed in the table) are needed to completely describe each gasket – all other curvatures can be derived from these three.
Enumerating integral Apollonian circle packings
The curvatures (a, b, c, d) are a root quadruple (the smallest in some integral circle packing) if a ≤ 0 ≤ b ≤ c ≤ d and d ≤ a + b + c. They are primitive when gcd(a, b, c, d) = 1. Defining a new set of variables (n, d1, d2, m) by the matrix equation

\begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix} = \begin{pmatrix} -1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 1 & -2 \end{pmatrix} \begin{pmatrix} n \\ d_1 \\ d_2 \\ m \end{pmatrix}

gives a system where (a, b, c, d) satisfies the Descartes equation precisely when n² + m² = d1·d2. Furthermore, (a, b, c, d) is primitive precisely when gcd(n, d1, d2) = 1, and is a root quadruple precisely when 0 ≤ 2m ≤ d1 ≤ d2.

This relationship can be used to find all the primitive root quadruples with a given negative bend −n. It follows from n² + m² = d1·d2 and 2m ≤ d1 ≤ d2 that 4m² ≤ d1·d2 = n² + m², and hence that m ≤ n/√3. Therefore, any root quadruple will satisfy 0 ≤ m ≤ n/√3. By iterating over all the possible values of m, d1, and d2 one can find all the primitive root quadruples. The following Python code demonstrates this algorithm, producing the primitive root quadruples listed above.
import math
def get_primitive_bends(n: int):
    """Generate the primitive root quadruples whose bounding circle has curvature -n."""
    if n == 0:
        # Degenerate gasket between two parallel lines.
        yield 0, 0, 1, 1
        return
    # Any root quadruple satisfies m <= n / sqrt(3) (see above).
    for m in range(math.ceil(n / math.sqrt(3))):
        s = m**2 + n**2
        # d1 * d2 = s with 2m <= d1 <= d2, so d1 runs up to sqrt(s).
        for d1 in range(max(2 * m, 1), math.floor(math.sqrt(s)) + 1):
            d2, remainder = divmod(s, d1)
            if remainder == 0 and math.gcd(n, d1, d2) == 1:
                yield -n, d1 + n, d2 + n, d1 + d2 + n - 2 * m

for n in range(15):
    for bends in get_primitive_bends(n):
        print(bends)

The curvatures appearing in a primitive integral Apollonian circle packing must belong to a set of six or eight possible residue classes modulo 24, and numerical evidence supported that any sufficiently large integer from these residue classes would also be present as a curvature within the packing. This conjecture, known as the local-global conjecture, was proved to be false in 2023.
Symmetry of integral Apollonian circle packings
There are multiple types of dihedral symmetry that can occur with a gasket depending on the curvature of the circles.
No symmetry
If none of the curvatures are repeated within the first five, the gasket contains no symmetry, which is represented by symmetry group C1; the gasket described by curvatures (−10, 18, 23, 27) is an example.
D1 symmetry
Whenever two of the largest five circles in the gasket have the same curvature, that gasket will have D1 symmetry, which corresponds to a reflection along a diameter of the bounding circle, with no rotational symmetry.
D2 symmetry
If two different curvatures are repeated within the first five, the gasket will have D2 symmetry; such a symmetry consists of two reflections (perpendicular to each other) along diameters of the bounding circle, with a two-fold rotational symmetry of 180°. The gasket described by curvatures (−1, 2, 2, 3) is the only Apollonian gasket (up to a scaling factor) to possess D2 symmetry.
D3 symmetry
There are no integer gaskets with D3 symmetry.
If the three circles with smallest positive curvature have the same curvature, the gasket will have D3 symmetry, which corresponds to three reflections along diameters of the bounding circle (spaced 120° apart), along with three-fold rotational symmetry of 120°. In this case the ratio of the curvature of the bounding circle to that of the three inner circles is 2√3 − 3. As this ratio is not rational, no integral Apollonian circle packings possess this D3 symmetry, although many packings come close.
Almost-D3 symmetry
The figure at left is an integral Apollonian gasket that appears to have D3 symmetry. The same figure is displayed at right, with labels indicating the curvatures of the interior circles, illustrating that the gasket actually possesses only the D1 symmetry common to many other integral Apollonian gaskets.
The following table lists more of these almost-D3 integral Apollonian gaskets. The sequence has some interesting properties, and the table lists a factorization of the curvatures, along with the multiplier needed to go from the previous set to the current one. The absolute values of the curvatures of the "a" disks obey the recurrence relation a(n) = 4a(n − 1) − a(n − 2), from which it follows that the multiplier converges to √3 + 2 ≈ 3.732050807.
Sequential curvatures
For any integer n > 0, there exists an Apollonian gasket defined by the following curvatures: (−n, n + 1, n(n + 1), n(n + 1) + 1). For example, the gaskets defined by (−2, 3, 6, 7), (−3, 4, 12, 13), (−8, 9, 72, 73), and (−9, 10, 90, 91) all follow this pattern. Because every interior circle that is defined by n + 1 can become the bounding circle (defined by −n) in another gasket, these gaskets can be nested. This is demonstrated in the figure at right, which contains these sequential gaskets with n running from 2 through 20.
History
Although the Apollonian gasket is named for Apollonius of Perga, because its construction depends on the solution to the problem of Apollonius, the earliest known description of the gasket is from 1706, by Leibniz in a letter to Des Bosses.
The first modern definition of the Apollonian gasket was given by Kasner and Supnick.
See also
Apollonian network, a graph derived from finite subsets of the Apollonian gasket
Apollonian sphere packing, a three-dimensional generalization of the Apollonian gasket
Sierpiński triangle, a self-similar fractal with a similar combinatorial structure
Notes
References
Benoit B. Mandelbrot: The Fractal Geometry of Nature, W H Freeman, 1982,
Paul D. Bourke: "An Introduction to the Apollony Fractal". Computers and Graphics, Vol 30, Issue 1, January 2006, pages 134–136.
A.A. Kirillov: A Tale of Two Fractals, Birkhauser, 2013.
David Mumford, Caroline Series, David Wright: Indra's Pearls: The Vision of Felix Klein, Cambridge University Press, 2002,
Jeffrey C. Lagarias, Colin L. Mallows, Allan R. Wilks: Beyond the Descartes Circle Theorem, The American Mathematical Monthly, Vol. 109, No. 4 (Apr., 2002), pp. 338–361, (arXiv:math.MG/0101066 v1 9 Jan 2001)
External links
Alexander Bogomolny, Apollonian Gasket, cut-the-knot
A Matlab script to plot 2D Apollonian gasket with n identical circles using circle inversion
Online experiments with JSXGraph
Apollonian Gasket by Michael Screiber, The Wolfram Demonstrations Project.
Interactive Apollonian Gasket Demonstration of an Apollonian gasket running on Java
Dana Mackenzie. Computing Science: A Tisket, a Tasket, an Apollonian Gasket. American Scientist, January/February 2010.
. Newspaper story about an artwork in the form of a partial Apollonian gasket, with an outer circumference of nine miles.
Dynamic apollonian gaskets, Tartapelago by Giorgio Pietrocola, 2014.
Fractals
Hyperbolic geometry
Circle packing | Apollonian gasket | [
"Mathematics"
] | 2,700 | [
"Geometry problems",
"Functions and mappings",
"Mathematical analysis",
"Packing problems",
"Mathematical objects",
"Fractals",
"Circle packing",
"Mathematical relations",
"Mathematical problems"
] |
1,049,453 | https://en.wikipedia.org/wiki/Semiclassical%20gravity | Semiclassical gravity is an approximation to the theory of quantum gravity in which one treats matter and energy fields as being quantum and the gravitational field as being classical.
In semiclassical gravity, matter is represented by quantum matter fields that propagate according to the theory of quantum fields in curved spacetime. The spacetime in which the fields propagate is classical but dynamical. The dynamics of the theory is described by the semiclassical Einstein equations, which relate the curvature of the spacetime, encoded by the Einstein tensor G_{\mu\nu}, to the expectation value of the energy–momentum tensor \hat{T}_{\mu\nu} (a quantum field theory operator) of the matter fields, i.e.

G_{\mu\nu} = 8\pi G \left\langle \hat{T}_{\mu\nu} \right\rangle_{\psi},

where G is the gravitational constant and \psi indicates the quantum state of the matter fields.
Energy–momentum tensor
There is some ambiguity in regulating the energy–momentum tensor, and this depends upon the curvature. This ambiguity can be absorbed into the cosmological constant, the gravitational constant, and the quadratic couplings

\int d^d x \sqrt{-g}\, R^2 \qquad \text{and} \qquad \int d^d x \sqrt{-g}\, R^{\mu\nu} R_{\mu\nu}.

There is another quadratic term of the form

\int d^d x \sqrt{-g}\, R^{\mu\nu\rho\sigma} R_{\mu\nu\rho\sigma},

but in four dimensions this term is a linear combination of the other two terms and a surface term. See Gauss–Bonnet gravity for more details.
Since the theory of quantum gravity is not yet known, it is difficult to precisely determine the regime of validity of semiclassical gravity. However, one can formally show that semiclassical gravity could be deduced from quantum gravity by considering N copies of the quantum matter fields and taking the limit of N going to infinity while keeping the product GN constant. At a diagrammatic level, semiclassical gravity corresponds to summing all Feynman diagrams that do not have loops of gravitons (but have an arbitrary number of matter loops). Semiclassical gravity can also be deduced from an axiomatic approach.
Experimental status
There are cases where semiclassical gravity breaks down. For instance, if M is a huge mass, then the superposition

\frac{1}{\sqrt{2}} \left( \left| M \text{ at } A \right\rangle + \left| M \text{ at } B \right\rangle \right),

where the locations A and B are spatially separated, results in an expectation value of the energy–momentum tensor that is M/2 at A and M/2 at B, but one would never observe the metric sourced by such a distribution. Instead, one would observe the decoherence into a state with the metric sourced at A and another sourced at B, with a 50% chance each. Extensions of semiclassical gravity that incorporate decoherence have also been studied.
Applications
The most important applications of semiclassical gravity are to understand the Hawking radiation of black holes and the generation of random Gaussian-distributed perturbations in the theory of cosmic inflation, which is thought to occur at the very beginning of the Big Bang.
Notes
References
Birrell, N. D. and Davies, P. C. W., Quantum fields in curved space, (Cambridge University Press, Cambridge, UK, 1982).
Robert M. Wald, Quantum Field Theory in Curved Spacetime and Black Hole Thermodynamics. University of Chicago Press, 1994.
See also
Quantum field theory in curved spacetime
Quantum gravity | Semiclassical gravity | [
"Physics"
] | 618 | [
"Quantum gravity",
"Unsolved problems in physics",
"Physics beyond the Standard Model"
] |
1,049,561 | https://en.wikipedia.org/wiki/Hollow%20structural%20section | A hollow structural section (HSS) is a type of metal profile with a hollow cross section. The term is used predominantly in the United States, or other countries which follow US construction or engineering terminology.
HSS members can be circular, square, or rectangular sections, although other shapes such as elliptical are also available. By code, HSS is composed only of structural steel.
HSS is sometimes mistakenly referenced as hollow structural steel. Rectangular and square HSS are also commonly called tube steel or box section. Circular HSS are sometimes mistakenly called steel pipe, although true steel pipe is actually dimensioned and classed differently from HSS. (HSS dimensions are based on exterior dimensions of the profile; pipes are also manufactured to an exterior tolerance, albeit to a different standard.) The corners of HSS are heavily rounded, having a radius which is approximately twice the wall thickness. The wall thickness is uniform around the section.
In the UK, or other countries which follow British construction or engineering terminology, the term HSS is not used. Rather, the three basic shapes are referenced as CHS, SHS, and RHS, being circular, square, and rectangular hollow sections. Typically, these designations will also relate to metric sizes, thus the dimensions and tolerances differ slightly from HSS.
Use in structures
HSS, especially rectangular sections, are commonly used in welded steel frames where members experience loading in multiple directions. Square and circular HSS have very efficient shapes for this multiple-axis loading as they have uniform geometry along two or more cross-sectional axes, and thus uniform strength characteristics. This makes them good choices for columns. They also have excellent resistance to torsion.
HSS can also be used as beams, although wide flange or I-beam shapes are in many cases a more efficient structural shape for this application. However, the HSS has superior resistance to lateral torsional buckling.
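These efficiency comparisons reduce to cross-sectional properties. As a rough sketch that neglects the rounded corners (which slightly reduce both quantities; the function name and the choice of units below are the caller's, invented for the example), the area and strong-axis second moment of a rectangular HSS follow from subtracting the inner rectangle from the outer one:

def rect_hss_properties(b, h, t):
    # Inside dimensions of the hollow rectangle.
    bi, hi = b - 2.0 * t, h - 2.0 * t
    area = b * h - bi * hi
    # Strong-axis second moment of area: outer rectangle minus inner.
    i_x = (b * h ** 3 - bi * hi ** 3) / 12.0
    return area, i_x

print(rect_hss_properties(4.0, 6.0, 0.25))  # about (4.75, 23.5)

For a 4 × 6 section with a 1/4 wall this gives roughly 4.75 units² of area and 23.5 units⁴ of second moment; the rounded corners of a real HSS reduce both values slightly.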
The flat square surfaces of rectangular HSS can ease construction, and they are sometimes preferred for architectural aesthetics in exposed structures, although elliptical HSS are becoming more popular in exposed structures for the same aesthetic reasons.
In the recent past, HSS was commonly available in mild steel, such as A500 grade B. Today, HSS is commonly available in mild steel, A500 grade C. Other steel grades available for HSS are A847 (weathering steel), A1065 (large sections up to 50 inch sq made with SAW process), and recently approved A1085 (higher strength, tighter tolerances than A500).
Manufacture
Square HSS is made the same way as pipe. During the manufacturing process flat steel plate is gradually changed in shape to become round where the edges are presented ready to weld. The edges are then welded together to form the mother tube. During the manufacturing process the mother tube goes through a series of shaping stands which form the round HSS (mother tube) into the final square or rectangular shape. Most American manufacturers adhere to the ASTM A500 or newly adopted ASTM A1085 standards, while Canadian manufacturers follow both ASTM A500 and CSA G40.21. European hollow sections are generally in accordance with the EN 10210 standard.
Filled HSS
HSS is often filled with concrete to improve fire rating, as well as robustness. When this is done, the product is referred to as a Lally column after its inventor John Lally of Waltham, Massachusetts. (The pronunciation is often corrupted to lolly column.) For example, barriers around parking areas (bollards) made of HSS are often filled, to at least bumper height, with concrete. This is an inexpensive (when replacement costs are factored in) way of adding compressive strength to the bollard, which can help prevent unsightly local denting, though it does not generally significantly increase the overall structural properties of the bollard.
See also
Hollow-core slab
Honeycomb structure
Lightening holes
Profile (engineering)
Structural pipe fitting
Structural steel
External links
Architectural AISthetics (Opening Keynote Address for digifest 2004)
Steel Tube Institute
Steel Tube Institute, tube dimensions
Structural engineering
Structural steel | Hollow structural section | [
"Engineering"
] | 839 | [
"Structural engineering",
"Structural steel",
"Civil engineering",
"Construction"
] |
1,049,599 | https://en.wikipedia.org/wiki/Conventional%20landing%20gear | Conventional landing gear, or tailwheel-type landing gear, is an aircraft undercarriage consisting of two main wheels forward of the center of gravity and a small wheel or skid to support the tail. The term taildragger is also used.
The term "conventional" persists for historical reasons, but all modern jet aircraft and most modern propeller aircraft use tricycle gear.
History
In early aircraft, a tailskid made of metal or wood was used to support the tail on the ground. In most modern aircraft with conventional landing gear, a small articulated wheel assembly is attached to the rearmost part of the airframe in place of the skid. This wheel may be steered by the pilot through a connection to the rudder pedals, allowing the rudder and tailwheel to move together.
Before aircraft commonly used tailwheels, many aircraft (like a number of First World War Sopwith aircraft, such as the Camel fighter) were equipped with steerable tailskids, which operate similar to a tailwheel. When the pilot pressed the right rudder pedal—or the right footrest of a "rudder bar" in World War I—the skid pivoted to the right, creating more drag on that side of the plane and causing it to turn to the right. While less effective than a steerable wheel, it gave the pilot some control of the direction the craft was moving while taxiing or beginning the takeoff run, before there was enough airflow over the rudder for it to become effective.
Another form of control, which is less common now than it once was, is to steer using "differential braking", in which the tailwheel is a simple, freely castering mechanism, and the aircraft is steered by applying brakes to one of the mainwheels in order to turn in that direction. This is also used on some tricycle gear aircraft, with the nosewheel being the freely castering wheel instead. Like the steerable tailwheel/skid, it is usually integrated with the rudder pedals on the craft to allow an easy transition between wheeled and aerodynamic control.
Advantages
The tailwheel configuration offers several advantages over the tricycle landing gear arrangement, which make tailwheel aircraft less expensive to manufacture and maintain.
Due to its position much further from the center of gravity, a tailwheel supports a smaller part of the aircraft's weight allowing it to be made much smaller and lighter than a nosewheel. As a result, the smaller wheel weighs less and causes less parasitic drag.
Because of the way airframe loads are distributed while operating on rough ground, tailwheel aircraft are better able to sustain this type of use over a long period of time, without cumulative airframe damage occurring.
If a tailwheel fails on landing, the damage to the aircraft will be minimal. This is not the case in the event of a nosewheel failure, which usually results in a prop strike.
Due to the increased propeller clearance on tailwheel aircraft, less stone chip damage will result from operating a conventionally geared aircraft on rough or gravel airstrips, making them well suited to bush flying.
Tailwheel aircraft are more suitable for operation on skis.
Tailwheel aircraft are easier to fit into and maneuver inside some hangars.
Disadvantages
The conventional landing gear arrangement has disadvantages compared to nosewheel aircraft.
Tailwheel aircraft are more subject to "nose-over" accidents due to incorrect application of brakes by the pilot.
Conventional geared aircraft are much more susceptible to ground looping. A ground loop occurs when directional control is lost on the ground and the tail of the aircraft passes the nose, swapping ends, in some cases completing a full circle. This event can result in damage to the aircraft's undercarriage, tires, wingtips, propeller and engine. Ground-looping occurs because whereas a nosewheel aircraft is steered from ahead of the center of gravity, a taildragger is steered from behind (much like driving a car backwards at high speed), so that on the ground a taildragger is inherently unstable, whereas a nosewheel aircraft will self-center if it swerves on landing. In addition, some tailwheel aircraft must transition from using the rudder to steer to using the tailwheel while passing through a speed range when neither is wholly effective due to the nose high angle of the aircraft and lack of airflow over the rudder. Avoiding ground loops requires more pilot training and skill.
Tailwheel aircraft generally suffer from poorer forward visibility on the ground, compared to nose wheel aircraft. Often this requires continuous "S" turns on the ground to allow the pilot to see where they are taxiing.
Tailwheel aircraft are more difficult to taxi during high wind conditions, due to the higher angle of attack on the wings which can then develop more lift on one side, making control difficult or impossible. They also suffer from lower crosswind capability and in some wind conditions may be unable to use crosswind runways or single-runway airports.
Due to the nose-high attitude on the ground, propeller-powered taildraggers are more adversely affected by P-factor – asymmetrical thrust caused by the propeller's disk being angled to the direction of travel, which makes the descending blade produce more lift than the ascending one because of the difference in the angle at which each blade meets the air. The aircraft will then pull toward the side of the upward-moving blade. Some aircraft lack sufficient rudder authority in some flight regimes (particularly at higher power settings on takeoff), and the pilot must compensate before the aircraft starts to yaw. Some aircraft, particularly older, higher-powered aircraft such as the P-51 Mustang, cannot use full power on takeoff and still safely control their direction of travel. On landing this is less of a factor; however, opening the throttle to abort a landing can induce severe uncontrollable yaw unless the pilot is prepared for it.
Jet-powered tailwheel aircraft
Jet aircraft generally cannot use conventional landing gear, as this orients the engines at a high angle, causing their jet blast to bounce off the ground and back into the air, preventing the elevators from functioning properly. This problem occurred with the third, or "V3" prototype of the German Messerschmitt Me 262 jet fighter. After the first four prototype Me 262 V-series airframes were built with retracting tailwheel gear, the fifth prototype was fitted with fixed tricycle landing gear for trials, with the sixth prototype onwards getting fully retracting tricycle gear. A number of other experimental and prototype jet aircraft had conventional landing gear, including the first successful jet, the Heinkel He 178, the Ball-Bartoe Jetwing research aircraft, and a single Vickers VC.1 Viking, which was modified with Rolls-Royce Nene engines to become the world's first jet airliner.
Rare examples of jet-powered tailwheel aircraft that went into production and saw service include the British Supermarine Attacker naval fighter and the Soviet Yakovlev Yak-15. Both first flew in 1946 and owed their configurations to being developments of earlier propeller powered aircraft. The Attacker's tailwheel configuration was a result of it using the Supermarine Spiteful's wing, avoiding expensive design modification or retooling. The engine exhaust was behind the elevator and tailwheel, reducing problems. The Yak-15 was based on the Yakovlev Yak-3 propeller fighter. Its engine was mounted under the forward fuselage. Despite its unusual configuration, the Yak-15 was easy to fly. Although a fighter, it was mainly used as a trainer aircraft to prepare Soviet pilots for flying more advanced jet fighters.
Monowheel undercarriage
A variation of the taildragger layout is the monowheel landing gear.
To minimize drag, many modern gliders have a single wheel, retractable or fixed, centered under the fuselage, which is referred to as monowheel gear or monowheel landing gear. Monowheel gear is also used on some powered aircraft where drag reduction is a priority, such as the Europa XS. Monowheel powered aircraft use retractable wingtip legs (with small castor wheels attached) to prevent the wingtips from striking the ground. A monowheel aircraft may have a tailwheel (like the Europa) or a nosewheel (like the Schleicher ASK 23 glider).
Training
Taildragger aircraft require more training time for student pilots to master. This was a large factor in the 1950s switch by most manufacturers to nosewheel-equipped trainers, and for many years nosewheel aircraft have been more popular than taildraggers. As a result, most Private Pilot Licence (PPL) pilots now learn to fly in tricycle gear aircraft (e.g. Cessna 172 or Piper Cherokee) and only later transition to taildraggers.
Techniques
Landing a conventional geared aircraft can be accomplished in two ways.
Normal landings are done by touching all three wheels down at the same time in a three-point landing. This method allows the shortest landing distance but can be difficult to carry out in crosswinds, as rudder control may be severely reduced before the tailwheel becomes effective.
The alternative is the wheel landing. This requires the pilot to land the aircraft on the mainwheels while maintaining the tailwheel in the air with elevator to keep the angle of attack low. Once the aircraft has slowed to a speed that can ensure control will not be lost, but above the speed at which rudder effectiveness is lost, then the tailwheel is lowered to the ground.
Examples
Examples of tailwheel aircraft include:
Airplanes
de Havilland Canada DHC-2 Beaver
Douglas DC-3
Maule M-7
Messerschmitt Bf 109
Piper J-3 Cub
Supermarine Spitfire
Helicopters
Boeing AH-64 Apache - Attack helicopter
Sikorsky SH-3 Sea King - Anti-submarine helicopter
Modifications of tricycle gear aircraft
Several aftermarket modification companies offer kits to convert many popular nose-wheel equipped aircraft to conventional landing gear. Aircraft for which kits are available include:
Cessna 150
Cessna 152
Cessna 172
Cessna 175
Cessna 182
Piper PA-22 Tri-Pacer
References
Citations
Bibliography
Boyne, Walter J. "Goering's Big Bungle". Air Force Magazine, Vol. 91, No. 11, November 2008.
Aviation Publishers Co. Limited, From the Ground Up, page 11 (27th revised edition)
Aircraft undercarriage
Aircraft configurations | Conventional landing gear | [
"Engineering"
] | 2,100 | [
"Aircraft configurations",
"Aerospace engineering"
] |
1,049,610 | https://en.wikipedia.org/wiki/Cerebro | Cerebro (; Spanish for "brain", from Latin ) is a fictional device appearing in American comic books published by Marvel Comics. The device is used by the X-Men (in particular, their leader, Professor Charles Xavier) to detect humans, specifically mutants. It was created by Professor X and Magneto, and was later enhanced by Dr. Hank McCoy.
Concept and creation
Cerebro first appeared in X-Men #7 (September 1964). Professor Jeffrey J. Kripal, in his 2011 book Mutants and Mystics: Science Fiction, Superhero Comics, and the Paranormal, calls Cerebro "a piece of psychotronics" and describes it as "a spiderlike, Kirby-esque system of machines and wires that transmitted extrasensory data into Professor Xavier's private desk in another room". Kripal notes that Cerebro made multiple subsequent central appearances, including Giant-Size X-Men #1 (1975), where Cerebro senses and locates a supermutant across the globe, resulting in the recreation of the X-Men team.
Use and function of the device
Cerebro amplifies the brainwaves of the user. In the case of telepaths, it enables the user to detect traces of others worldwide and to distinguish between humans and mutants. Depictions of its inherent strength have been inconsistent: at times in the storylines it could detect mutated aliens beyond the planet, while at others it could only scan for mutants' signatures in the United States. It is not clear whether it finds mutants by the power signature they send out when they use their powers or by the presence of the X-gene in their body; both methods have been used throughout the comics.
Using Cerebro can be extremely dangerous, and telepaths without well-trained, disciplined minds put themselves at great risk when attempting to use it. This is due to the psychic feedback that users experience when operating Cerebro. As the device greatly enhances natural psychic ability, users who are unprepared for the sheer enormity of this increased psychic input can be quickly and easily overwhelmed, resulting in insanity, coma, permanent brain damage or even death. The one exception has been Magneto, who has been said to have minor or latent telepathic abilities as well as experience amplifying his mental powers with mechanical devices of his own design.
The only characters to use Cerebro on a frequent basis are Professor X, Jean Grey, Emma Frost and the Stepford Cuckoos. However, Cable, Rachel Summers, Kitty Pryde, Danielle Moonstar, Psylocke and Ruth Aldine have also used it. After the device was upgraded to Cerebra, Cassandra Nova used it in order to exchange minds with Xavier. The Stepford Cuckoos once utilized the machine to amplify their combined ability, with only one of them directly connected to the machine, but all of them experiencing its interaction due to their psychic rapport.
Some mutants have learned to shield themselves from Cerebro, usually via their own telepathic ability. Magneto can shield himself from the device through use of minimal telepathic powers; in the film series, he does so with a specially constructed helmet.
Cerebro's full purpose would later become apparent: beyond tracking and locating mutants across the globe, its primary function was to act as a soul jar, cataloguing the thought patterns of every mutant it had ever pinpointed. Xavier used this function to resurrect the mutant strike team lost while battling Orchis, imprinting their stored minds onto home-grown clone bodies; this effectively allows him to resurrect any mutant who has ever died, or will ever die, by imprinting a shell with their respective neuropsychic imprint.
History of the device
Originally, Cerebro was a device similar to a computer that was built into a desk in Xavier's office. This early version of Cerebro operated on punched cards, and did not require a user (telepathic or otherwise) to interface with it. A prototype version of Cerebro named Cyberno was used by Xavier to track down Cyclops in the "Origins of the X-Men" back-up story in X-Men #40. In the first published appearance of Cerebro, X-Men #7, Professor X left the X-Men on a secret mission (to find Lucifer) and left Cerebro to the new team leader, Cyclops, who used it to keep track of known evil mutants and to find new evil mutants. The device also warned the X-Men of the impending threat posed by the non-mutant Juggernaut prior to that character's first appearance. Later, the device was upgraded to the larger and more familiar telepathy-based technology with its interface helmet.
When the human-Sentinel gestalt Bastion stole Cerebro from the X-Mansion, Cerebro was hybridized with Bastion's programming via nanotechnology. The resulting entity, a self-aware form of Cerebro, created two minions, Cerebrites Alpha and Beta, through which it would act without exposing itself. It also used its Danger Room-derived records of the powers of the X-Men and the Brotherhood of Evil Mutants to create its own team of imposter "X-Men" whose members possessed the combined powers of specific members of each of the two teams. Cerebro's goal was to put human beings in stasis so that mutants could inherit the Earth, and to this end it hunted down a group of synthetic children called the Mannites who possessed vast psychic powers. It was destroyed by the X-Men, with the help of Professor X and the Mannite named Nina.
More recently, following the example set by the X-Men films, Cerebro has been replaced by Cerebra (referred to as Cerebro's big sister), a machine the size of a small room in the basement of Xavier's School For Higher Learning. Though designed to resemble the movie version of Cerebro, Cerebra is much smaller than the films' version. It resembles a pod filled with a sparkling fog that condenses into representations of mental images.
After it is discovered that Terrigen is toxic to mutants and Storm's X-Men move to Limbo, Forge programs Cerebra into the body of a Sentinel and uploads her with the capability to showcase human emotion. Cerebra accompanies the X-Men on many of their missions to help find mutants and bring them to X-Haven where they'll be safe from the Terrigen. Along with being able to detect mutants Cerebra can also fly and teleport, serving as a bridge between Earth and Limbo.
When the X-Men and Inhumans went to war to decide the fate of the remaining Terrigen cloud, Cerebra was destroyed after getting caught in the crossfire when Emma Frost unleashed an army of Sentinels programmed to kill Inhumans instead of mutants. While Storm's team of X-Men began returning refugees to their homes from X-Haven after Medusa destroyed the Terrigen cloud, Cerebra was found severely damaged in an abandoned barn surrounded by wild Sentinels. Once she was discovered and the X-Men saw that her current sentinel body was far beyond repair they uploaded her into a new body.
Later, when a mutant nation was created on the living island Krakoa, Xavier reveals that he had approached Forge and asked him to expand Cerebro's abilities; Forge built a version of Cerebro capable not merely of detecting mutant minds but also of creating a copy of each mutant's mind. Forge made this seventh version of Cerebro a portable unit, worn by Xavier as a helmet to focus his psionic talent at all times. Xavier first donned this Cerebro when he announced the existence of Krakoa to the world and invited all mutants there. He then used its true function of storing and transplanting minds, in conjunction with the technomorphically modified island of Krakoa and the combined teamwork of the Five, a mutant conclave consisting of Joshua Foley, Hope Summers, Eva Bell, Kevin MacTaggert and Fabio Medina, who gestate and accelerate the regrowth of fallen mutants' bodies by combining their powers. Xavier also had five working Cerebro Cradles: one main unit, three backup units, and one additional backup unit for unforeseen complications. These Cerebro Cradles are strategically located at multiple sites.
Not soon after, XENO mercenaries were able to infiltrate Krakoa's defenses and successfully assassinate Professor X, destroying Cerebro in the process. Before Professor X was resurrected, Magneto reshaped the broken shards of Cerebro into the Cerebro Sword to represent Xavier's dream, once broken, but now forged anew and refined. The sword retained the information stored in the other Cerebro Cradles, however, it is encrypted.
Later, one of the backup Cerebro units becomes sentient and rebrands itself Cerebrax. Hungering for intelligence and power, the sentient machine begins killing mutants across the island and eventually takes control of Krakoa, unleashing a full-on attack. Answering the call to fight are Kid Omega, Omega Red, Wolverine, Domino and Phoebe Cuckoo. Kid Omega flies into Cerebrax and, with some help from Sage, unleashes a powerful explosion that ultimately destroys the Cerebro unit and himself. Given that Krakoa has the power to resurrect dead mutants, Wolverine tells Sage that they will have to do so for Kid Omega; however, Sage reveals a problem: there is no trace of Kid Omega anywhere, as he has been wiped from all the Cerebro cradles.
Other versions
In Chris Claremont's X-Men: The End storyline, which takes place some 20 years ahead of standard X-Men continuity, Cerebro has been replaced in turn by the disembodied brain of Martha Johansson, a human psychic who was introduced during Grant Morrison's run on the X-Men.
In the video game X-Men Legends, Cerebro is identical to its appearance and usage in the X-Men film. Jean Grey and Emma Frost use the device at one point to attempt to return Professor X's mind to his body. In X-Men Legends II: Rise of Apocalypse, it was destroyed along with the rest of the mansion, but Forge mentioned plans on building Cerebra to replace it. He described Cerebra as Cerebro's big sister.
In the video game Marvel: Ultimate Alliance, while the team is staying in the Sanctum Sanctorum, Professor X used a device created by Beast allowing him to use Cerebro from long distance in order to find Nightcrawler, who had been kidnapped by Dr. Doom.
In the universe of Marvel Zombies, zombified versions of Beast and Mr. Fantastic reprogram Cerebro to help them and the other zombies track down the last remaining humans on Earth. Cerebro locates many in the European nation of Latveria, but all escape. In Marvel Zombies Return, the surviving zombies escape to another world where many of them restart the original infection, this time permanently fusing Professor X's partly zombiefied body with Cerebro so that he can find humans for them.
In the MC2 universe, the X-People carry "mini-cerebros", that can detect mutants just as well as the full-size version.
In other media
Films
Generation X
In the 1996 Generation X telefilm on Fox, Cerebro was depicted as a desktop personal computer with a few custom peripherals.
X-Men
Professor Jeffrey J. Kripal, in his 2011 book Mutants and Mystics: Science Fiction, Superhero Comics, and the Paranormal, describes the Cerebro of the X-Men films as "a futuristic superroom into which Professor Xavier wheels over a bridge in order to don the helmet that would magnify his already extraordinary telepathic powers and project the results onto the skull-like internal walls of the room." In the films X-Men and X2: X-Men United, Cerebro is a device that fills a massive spherical room in the basement of Xavier's School. The helmet interface is similar to the version seen in the comics, although the bulk of Cerebro's machinery is contained in the surrounding walls. While in use, three-dimensional images of the humans whose minds are being scanned by the device appear around the interface bridge. Unlike the comics' version of Cerebro, the film version can detect both human and mutant minds with ease. The unique signature of mutant brainwaves is shown in the first film by the mental images of humans depicted in black and white, while those of mutants show up in red. When Xavier illustrates his connection with every human and mutant mind on Earth in the sequel, X2, mutants appear in red, and humans in white.
In the first film, Professor X mentions to Wolverine that Magneto helped him build it, and therefore knows how to construct helmets with circuitry to block its detection abilities. Cerebro is sabotaged by Mystique so that it injures Professor X, putting him into a coma. The only person seen using Cerebro effectively in the films is Xavier; Jean Grey successfully used the device to locate Magneto in the original film, but the input overwhelmed her nascent telepathic power and left her stunned. This has not been mentioned in the comics, although the Magneto of the comics can use Cerebro, and has designed similar devices.
X2: X-Men United
In X2: X-Men United, the device was copied and modified by William Stryker in his plot to have a brainwashed Xavier use his Cerebro-amplified powers to kill the world's mutants, although this plan was later 'hi-jacked' by Magneto—immune to the telepathic assault via his helmet—so that Xavier would be used to kill humans. According to X2, it is difficult to pinpoint the location of mutants who have the ability to teleport and are constantly in transit, such as Nightcrawler.
In both films, Magneto's helmet is capable of blocking the telepathic signals from Cerebro, as well as any telepathic mutants.
X-Men: First Class
In X-Men: First Class, an early version of Cerebro exists in an unnamed CIA science facility, built by the young Hank McCoy to amplify brainwaves. In a slight departure from the source material, its creation and design is attributed to Hank, instead of Charles Xavier. It is used by Xavier to find and recruit mutants for training in order to oppose Sebastian Shaw. It is later destroyed by Riptide as Shaw searches the facility for the young mutants.
In the film, Emma Frost comments on her perception of Xavier's increased telepathic range when using Cerebro, which she feels despite being some thousands of miles away.
X-Men: Days of Future Past
In X-Men: Days of Future Past, Cerebro appears in the future X-Jet as a built in extension to Xavier's hover-chair and is made up of three sensor-pads and a 3D holographic projector. In the past, it appears as it did in X-Men and X2, albeit dusty from long years of neglect due to the past Xavier's current inability to use his powers. As his abilities begin to return, the young Xavier initially attempts to use it to find Mystique after she escapes from their first confrontation, but has trouble concentrating enough to use it properly due to his current emotional turmoil. However, a conversation with his future self—using the time-displaced Wolverine as a 'bridge' to make contact with his other self in the future, who is close to Wolverine's currently-comatose body—helps him regain his old focus, allowing him to temporarily control others to speak to Raven before sending a psychic projection directly to her.
In the Rogue Cut version of the film, Cerebro is being used in the future as a prison for Rogue, who is being experimented on by the Sentinels' human agents in the hope of finding a way to duplicate her ability to take powers from others, with Cerebro being used as the room's interior is shielded from external telepathic probes. Also, in 1973, Mystique returns to the mansion to get treatment for her wound as a cover for her real agenda to smash Cerebro, preventing Xavier from finding her again.
X-Men: Apocalypse
Cerebro appears in X-Men: Apocalypse where Xavier uses Cerebro and sees Moira searching for Erik. Xavier tells Alex to destroy Cerebro after Apocalypse is able to use Xavier's search for him to take control of Xavier's powers through Cerebro, although Apocalypse still manages to use Xavier to make humanity sacrifice most of its nuclear weapons before Cerebro is lost.
Logan
In this alternate timeline of Logan, Cerebro has become a covering at Logan and Charles Xavier's home at an abandoned smelting mill in Mexico.
Deadpool 2
Cerebro appears in Deadpool 2 when a depressed Wade Wilson/Deadpool tries to use the Cerebro helmet at the X-Mansion to "look into the future."
Dark Phoenix
Cerebro appears in Dark Phoenix when Xavier uses it to navigate Jean's mind and later to locate Jean, Magneto and Hank.
Television
X-Men: The Animated Series
In X-Men: The Animated Series Cerebro was heavily featured throughout the series' duration. It was primarily used by Professor Xavier, who was shown to use it in various ways, such as detecting mutants, increasing his powers, and even understanding Shi'ar technology. There was no dedicated room where Cerebro was kept, as in the other animated series; instead it descended from the ceiling, most notably in the War Room where the X-Men held their team meetings. Jean Grey was also noted to use Cerebro frequently, and it would amplify her telepathic powers as it did for Professor X. Jean Grey in this animated series did not always join the X-Men on their field missions but rather monitored them telepathically with Cerebro's help. Even the White Queen of the Hellfire Club, Emma Frost, used Cerebro when she telepathically hacked into it to secretly "spy" on Xavier and the X-Men, and to learn more about Jean Grey and her transformation into the Phoenix. The X-Men's Blackbird jet was also equipped with its own Cerebro.
X-Men: Evolution
In the animated series X-Men: Evolution, Cerebro was featured numerous times. It was shown being used mainly by the Professor and eventually Jean Grey. In the beginning of the series Cerebro was a primitive version of what it would later become as the show progressed eventually taking an appearance identical to the Cerebro in the X-Men films. Cerebro originally appeared as a computer console with custom peripherals that came out of a hidden wall component in the mansion. Eventually, this Cerebro was destroyed by Professor X's evil step-brother the Juggernaut. When it was rebuilt the Cerebro was given its own room, instead of the hidden wall component as before, and looked identical to the designs of Cerebro in the films. Cerebro even came in a portable helmet form for travel and field missions. Jean Grey used this Cerebro to amplify her telepathic powers as she did in the comics and previous series. It even helped boost Jean's telepathic powers in order to battle a possessed Professor X in the series finale. During the fight, Cerebro was shown to unleash the Phoenix within Jean for a split second, eventually gaining the power to defeat the evil Xavier, and return him to normal. In the episode "Fun and Games", Arcade, a student version of one of the X-Men greatest villains, hacked into Cerebro and used it to control the mansion's security system to attack the X-Men believing the program to be a game. However, he made no use of its telepathy-enhancement technology, instead merely rewiring it to allow him access to the security systems.
Wolverine and the X-Men
In the 2008 series, Wolverine and the X-Men, Cerebro is extremely important to the overall series as it serves as a link to the past, present, and future. Originally Cerebro was damaged in an unexplained attack on Professor X in the present where he ends up in a coma only to awake twenty years into the future. In the future twenty years from now the X-Men have all been killed and the world is being controlled by the mutant-hunting robots named Sentinels. Xavier, with the surviving Cerebro components he finds, telepathically contacts the X-Men twenty years in the past (the present) and instructs them to stop those who would create the bleak future he awakes in twenty years later (his present). During the majority of the X-Men's present, as well as its first appearances in the future, it is similar to the version seen in the X-Men films, however, for the majority of the scenes in the future, Xavier uses a Portable version of Cerebro. With Warren Worthington's money and Forge's technical expertise, the X-Men were able to get the destroyed Cerebro at the mansion repaired. As Xavier is comatose in the present and Jean Grey missing, Emma Frost serves as the team's resident telepath and she primarily uses Cerebro.
Black Panther
In the 2010 series Black Panther, Storm uses Cerebro to locate Juggernaut in Wakanda.
Legion
An early version of Cerebro is seen in Legion used by Professor X in the third-season episode "Chapter 22."
M.O.D.O.K.
The Cerebro helmet appears in the second episode of M.O.D.O.K., where it was found by M.O.D.O.K. in a S.H.I.E.L.D. storage facility.
X-Men '97
In X-Men '97 Cerebro was heavily featured throughout the series' duration. It was used by Jean Grey to probe Henry Peter Gyrich's mind to find out about the whereabouts of Bolivar Trask. She also sees a horrifying vision of Master Mold destroying Genosha in the episode "To Me, My X-Men". In the episode "Tolerance is Extinction - Part 1", Beast recalibrated Cerebro to scan for cyborg brain frequencies, to find out how many people have been turned into Prime Sentinels by Bastion.
References
1964 in comics
X-Men
Fictional elements introduced in 1964 | Cerebro | [
"Technology"
] | 4,804 | [
"Fictional computers",
"Computers"
] |
1,049,625 | https://en.wikipedia.org/wiki/Intercontinental%20Exchange%20Futures | The International Petroleum Exchange (IPE), known since 7 April 2005 as ICE Futures and based in London, was one of the world's largest energy futures and options exchanges. Its flagship commodity, Brent Crude, was a world benchmark for oil prices, but the exchange also handled futures contracts and options on fuel oil, natural gas, electricity (baseload and peakload), coal contracts and, as of 22 April 2005, carbon emission allowances with the European Climate Exchange (ECX).
The IPE was acquired by the Intercontinental Exchange in 2001. The IPE was an open outcry exchange until 7 April 2005, when its name was changed to ICE Futures and all trading was shifted onto an electronic trading platform.
History
Until the 1970s, the price of oil was relatively stable with production largely controlled by the biggest oil companies. During that decade two oil price shocks led to continued price volatility in the market; short-term physical markets evolved, and the need to hedge emerged.
A group of energy and futures companies founded the IPE in 1980, and the first contract, for gas oil futures, was launched the following year. In June 1988, the IPE launched Brent Crude futures.
Since its inception, oil futures and latterly options have been traded in pits on the trading floor using the open outcry system. As business volumes have grown, the IPE has moved location several times to accommodate new pits and more traders.
The Exchange has experienced incremental growth, year-on-year for most of its history. Complexity, but also efficiency have increased as new trading instruments such as swaps, futures, and options have been developed.
Contracts
Since 1997, the ICE Futures has expanded its offerings from Brent Crude and Gas Oil to include Natural Gas (1997), Electricity (2004), and ECX carbon financial instruments (2005). These expansions have allowed ICE Futures to offer a wider range of energy products. More advanced transactions are also now possible, due to cross- and multi-product transactions, which eliminate the need to use multiple markets or an adviser.
References
External links
ICE
See also
List of futures exchanges
Commodity exchanges in the United Kingdom
Energy exchanges
Economy of London
Petroleum economics
Petroleum organizations
Intercontinental Exchange
International organisations based in London | Intercontinental Exchange Futures | [
"Chemistry",
"Engineering"
] | 454 | [
"Petroleum",
"Petroleum organizations",
"Energy organizations"
] |
1,049,651 | https://en.wikipedia.org/wiki/YellowTAB | yellowTAB was a German software firm that produced an operating system called "yellowTAB ZETA". While the operating system was based on BeOS 5.1.0, the company never publicly confirmed that it had the BeOS source code or what its licensing agreement with BeOS's owner PalmSource was. The company went insolvent and ceased trading in 2006. Later, David Schlesinger, director of Open Source technologies at ACCESS, Inc., which had meanwhile become the owner of the BeOS source code, stated that there had never been a license agreement covering yellowTAB's use of the source code and that ZETA was therefore an infringing copy.
The company's offices were in Mannheim, and its corporate motto was Assume The Power. Following their closure, the OS was taken over by magnussoft, who started selling it as "magnussoft ZETA".
yellowTAB has come under some criticism from the BeOS userbase, who claim that the company did not give back what it took from the Haiku project and other open source BeOS projects. In many cases, open source programmers have recreated yellowTAB's extensions to BeOS, most notably their SVG graphics extensions to OpenTracker. However, yellowTAB's actions to date have not violated the BSD/MIT licences under which most open source BeOS projects exist.
In March 2006, yellowTAB donated their "Intel Extreme" driver to one of the Haiku developers for integration into the Haiku source tree where further development was to take place. Both yellowTAB and Haiku developers were to collaborate on Intel Extreme Graphics driver development, but to date this code has not yet been committed to the repository.
In April 2006, insolvency protection proceedings were filed for the company, although employees denied that the filing had been made by the company itself, suggesting potential malicious intent. However, the firm has since transferred development and support of ZETA to a third party, magnussoft.
References
External links
yellowTAB - official site
Software companies of Germany
BeOS
Companies based in Mannheim | YellowTAB | [
"Technology"
] | 430 | [
"BeOS",
"Computing platforms"
] |
1,049,666 | https://en.wikipedia.org/wiki/ZETA%20%28operating%20system%29 | ZETA, earlier yellowTAB ZETA, was an operating system formerly developed by yellowTAB of Germany, based on the Be Operating System (BeOS) developed by Be Inc. After yellowTAB's insolvency, ZETA was developed by an independent team about which little was known, and distributed by magnussoft. As of February 28, 2007, the current and last version of ZETA was 1.5.
On March 28, 2007, magnussoft announced that it had discontinued funding the development of ZETA as of March 16, because sales had fallen far short of the company's expectations and the project was no longer economically viable. A few days later, the company also stopped distributing ZETA in reaction to allegations that ZETA constituted an illegal, unlicensed derivative of the BeOS source code and binaries.
Development
ZETA was an effort to bring BeOS up to date, adding support for newer hardware, and features that had been introduced in other operating systems in the years since Be Incorporated ceased development in 2001. Among the new features were USB 2.0 support, SATA support, samba support, a new media player, and enhanced localization of system components. Unlike Haiku and other open source efforts to recreate some or all of BeOS's functionality from scratch, ZETA was based on the actual BeOS code base, and it is closed source.
ZETA contributed to an increase in activity in the BeOS commercial software market, with a number of new products for both ZETA and the earlier BeOS being released.
However, some critics point to a list of goals for the first release that do not appear to have been met (including Java 1.4.2 and ODBC support). Other reviewers point to bugs that still exist from BeOS, and question whether yellowTAB has the complete access to the source code they would need to make significant updates.
Some changes that were made could break compilation of code, and in some cases (most notably Mozilla), break the actual application if any code optimizations are applied, resulting in much slower builds.
YellowTAB promoted ZETA mainly in the German market, where it used to be sold through infomercials and on RTL Shop, and in Japan still being a beta version. Prior to Magnussoft stopping the distribution of ZETA, it was mainly distributed directly by magnussoft.
Versions
Criticism
ZETA and yellowTAB have been surrounded by controversy. Critics of yellowTAB questioned for a long time the legality of ZETA, and whether yellowTAB had legal access to the sources of BeOS; it is now known that yellowTAB could not have developed ZETA to the extent that they did without access to the source code, but doubts remain as to whether yellowTAB actually had legal access to the code or not.
Furthermore, critics did not see ZETA as real advancement of BeOS, but rather as an unfinished and buggy operating system loaded with third party applications that were either obsolete, unsupported, or non-functional. This was particularly true in the initial releases of ZETA, and it was in clear conflict with the attention to detail that BeOS used to stand for, disappointing the BeOS community who at one point had high expectations for ZETA. While yellowTAB did clean up the selection of bundled applications in following versions, ZETA remains somewhat unstable when compared to other modern desktop operating systems.
But perhaps the most criticized practice by yellowTAB was its tendency to make claims that turned out to be either half-truths or too vague to be confirmed. Not only did yellowTAB announce certain developments that never materialized (such as Java and ODBC support, among others), but it would also advertise capabilities beyond what ZETA could actually deliver (e.g., compatibility with MS Office). According to sources close to yellowTAB, this is believed to have led to a high return rate from customers who bought ZETA from the German RTL TV shopping channel, and to be the reason RTL eventually stopped selling the product.
There was some criticism within the greater BeOS community regarding the lack of a "personal" edition of Zeta. This is a somewhat controversial standpoint, given the history of BeOS and Be Inc. Throughout the life of the Intel version of BeOS, Be Inc regularly created and distributed BeOS demo discs on CD. The discs were somewhat crippled and would not mount a BFS partition nor would they install to a physical hard drive. They served as a test for hardware support and a taster for the operating system. Zeta was offered in a similar way – demo discs with similar limitations were made available. Unfortunately, many in the BeOS community, especially those who came to BeOS post the demise of Be Inc, tended to have an issue with the "crippled" demo discs. The controversy is as follows: the final commercial release of BeOS, Revision 5, included a freely distributable "virtual" BeOS installation. The installer created a virtual BeOS image in a file on the host OS, and the computer could then boot into BeOS using a boot disk or via the installation of Bootman (the native BeOS boot manager.) Be Inc intended this release to be a taster and to draw users into buying the Professional edition, which was fully installable to a physical hard drive partition. Unfortunately, many users discovered that it was a trivial task to install the personal version to a real partition, and so Be Inc ultimately lost much of the sales potential for the product. Both YellowTab and Magnussoft learnt from this, and therefore did not offer a version of Zeta that could be installed without purchasing a license.
German language – the initial Zeta builds and much of the packaging were geared towards a German-speaking audience. This was reduced in later versions, but the first few beta releases and release candidates had many oddities where Zeta would fall back to German, no matter what locale was set.
Version 1.0 of Zeta included a badly thought-out activation component, which required a code to be entered and authenticated via a remote server before the nag screen would stop and full functionality was restored. The nag was fairly easily circumvented by replacing the executable called with a stub executable, but the activation was poorly implemented and often failed. It was removed by the 1.21 release.
Cease of distribution
A cease of distribution letter was posted by Magnussoft on 5 April 2007.
See also
Comparison of operating systems
References
External links
ZETA 1.5 Review – Reviewed by Thom Holwerda for OSNews
BeOS
Proprietary operating systems | ZETA (operating system) | [
"Technology"
] | 1,351 | [
"BeOS",
"Computing platforms"
] |
1,049,691 | https://en.wikipedia.org/wiki/Sequence%20logo | In bioinformatics, a sequence logo is a graphical representation of the sequence conservation of nucleotides (in a strand of DNA/RNA) or amino acids (in protein sequences).
A sequence logo is created from a collection of aligned sequences and depicts the consensus sequence and diversity of the sequences.
Sequence logos are frequently used to depict sequence characteristics such as protein-binding sites in DNA or functional units in proteins.
Overview
A sequence logo consists of a stack of letters at each position.
The relative sizes of the letters indicate their frequency in the sequences.
The total height of the letters depicts the information content of the position, in bits.
Logo creation
To create sequence logos, related DNA, RNA or protein sequences, or DNA sequences that have common conserved binding sites, are aligned so that the most conserved parts create good alignments. A sequence logo can then be created from the conserved multiple sequence alignment. The sequence logo shows how well residues are conserved at each position: the better the conservation at a position, the taller the stack of letters at that position. Different residues at the same position are scaled according to their frequency. The height of the entire stack of residues is the information content measured in bits. Sequence logos can be used to represent conserved DNA binding sites, where transcription factors bind.
The information content (y-axis) of position $i$ is given by:

$R_i = \log_2(20) - (H_i + e_n)$ for amino acids,

$R_i = \log_2(4) - (H_i + e_n)$ for nucleic acids,

where $H_i$ is the uncertainty (sometimes called the Shannon entropy) of position $i$:

$H_i = -\sum_{b} f(b, i) \cdot \log_2 f(b, i)$

Here, $f(b, i)$ is the relative frequency of base or amino acid $b$ at position $i$, and $e_n$ is the small-sample correction for an alignment of $n$ letters. The height of letter $b$ in column $i$ is given by

$\text{height}(b, i) = f(b, i) \cdot R_i$

The approximation for the small-sample correction, $e_n$, is given by:

$e_n = \frac{1}{\ln 2} \cdot \frac{s - 1}{2n}$

where $s$ is 4 for nucleotides, 20 for amino acids, and $n$ is the number of sequences in the alignment.
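As a concrete illustration, the letter heights above can be computed in a few lines of Python. This is a minimal sketch of the formulas, not code from any published logo tool; the function name column_heights is our own, and negative information values (which the small-sample correction can produce for tiny alignments) are clamped to zero here, as logo software commonly does.

```python
# Minimal sketch: per-column letter heights for a DNA logo (s = 4).
import math
from collections import Counter

def column_heights(columns, s=4):
    """columns: one string of observed letters per alignment column."""
    logos = []
    for col in columns:
        n = len(col)                                   # sequences in the alignment
        e_n = (s - 1) / (2 * math.log(2) * n)          # small-sample correction e_n
        freqs = {b: c / n for b, c in Counter(col).items()}  # f(b, i)
        H = -sum(f * math.log2(f) for f in freqs.values())   # Shannon entropy H_i
        R = max(math.log2(s) - (H + e_n), 0.0)         # information R_i, clamped at 0
        logos.append({b: f * R for b, f in freqs.items()})   # height = f(b, i) * R_i
    return logos

# A perfectly conserved column versus an evenly split one:
print(column_heights(["AAAA", "ACGT"]))
# A gets about 1.46 bits in the first column; all heights are 0 in the second.
```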
Consensus logo
A consensus logo is a simplified variation of a sequence logo that can be embedded in text format.
Like a sequence logo, a consensus logo is created from a collection of aligned protein or DNA/RNA sequences and conveys information about the conservation of each position of a sequence motif or sequence alignment. However, a consensus logo displays only conservation information, and not explicitly the frequency information of each nucleotide or amino acid at each position. Instead of a stack made of several characters, denoting the relative frequency of each character, the consensus logo depicts the degree of conservation of each position using the height of the consensus character at that position.
Advantages and drawbacks
The main, and obvious, advantage of consensus logos over sequence logos is their ability to be embedded as text in any Rich Text Format supporting editor/viewer and, therefore, in scientific manuscripts. As described above, the consensus logo is a cross between sequence logos and consensus sequences. As a result, compared to a sequence logo, the consensus logo omits information (the relative contribution of each character to the conservation of that position in the motif/alignment). Hence, a sequence logo should be used preferentially whenever possible. That being said, the need to include graphic figures in order to display sequence logos has perpetuated the use of consensus sequences in scientific manuscripts, even though they fail to convey information on both conservation and frequency. Consensus logos represent therefore an improvement over consensus sequences whenever motif/alignment information has to be constrained to text.
Extensions
Hidden Markov models (HMMs) not only consider the information content of aligned positions in an alignment, but also of insertions and deletions. In an HMM sequence logo used by Pfam, three rows are added to indicate the frequencies of occupancy (presence) and insertion, as well as the expected insertion length.
See also
Sequence motif
Position-specific scoring matrix
DNA binding site
References
External links
How to read sequence logos.
Recommendations for Making Sequence Logos.
Erill, I., "A gentle introduction to information content in transcription factor binding sites", Eprint
What is (in) a sequence logo?
Bioinformatics
Statistical charts and diagrams | Sequence logo | [
"Engineering",
"Biology"
] | 813 | [
"Bioinformatics",
"Biological engineering"
] |
1,049,925 | https://en.wikipedia.org/wiki/Cray%20XD1 | The Cray XD1 was an entry-level supercomputer range, made by Cray Inc.
The XD1 uses AMD Opteron 64-bit CPUs and employs the Direct Connect Architecture over HyperTransport to remove the PCI bottleneck and reduce memory contention. Its MPI latency is a quarter that of InfiniBand and one-thirtieth that of Gigabit Ethernet.
The XD1 was originally designed by OctigaBay Systems Corp. of Vancouver, British Columbia, Canada as the OctigaBay 12K system. The company was acquired by Cray Inc. in February 2004.
Announced on 4 October 2004, the Cray XD1 range incorporates Xilinx Virtex-II Pro FPGAs for application acceleration. With 12 CPUs in a chassis and up to 12 chassis installable in a rack, XD1 systems scale in multiples of 144 CPUs in multirack configurations. The operating system used on the XD1 is a customized version of Linux, and the machine's load balancing / resource management system is an enhanced version of Sun Microsystems' Sun Grid Engine.
External links
Cray Legacy Products
Xd1
X86 supercomputers | Cray XD1 | [
"Technology"
] | 256 | [
"Computing stubs",
"Computer hardware stubs"
] |
1,050,057 | https://en.wikipedia.org/wiki/Quasidihedral%20group | In mathematics, the quasi-dihedral groups, also called semi-dihedral groups, are certain non-abelian groups of order a power of 2. For every positive integer n greater than or equal to 4, there are exactly four isomorphism classes of non-abelian groups of order $2^n$ which have a cyclic subgroup of index 2. Two are well known, the generalized quaternion group and the dihedral group. One of the remaining two groups is often considered particularly important, since it is an example of a 2-group of maximal nilpotency class. In Bertram Huppert's text Endliche Gruppen, this group is called a "Quasidiedergruppe". In Daniel Gorenstein's text, Finite Groups, this group is called the "semidihedral group". Dummit and Foote refer to it as the "quasidihedral group"; we adopt that name in this article. All give the same presentation for this group:

$\langle r, s \mid r^{2^{n-1}} = s^2 = 1,\ srs = r^{2^{n-2}-1} \rangle .$
The other non-abelian 2-group with cyclic subgroup of index 2 is not given a special name in either text, but referred to as just G or $M_m(2)$. When this group has order 16, Dummit and Foote refer to this group as the "modular group of order 16", as its lattice of subgroups is modular. In this article this group will be called the modular maximal-cyclic group of order $2^n$. Its presentation is:

$\langle r, s \mid r^{2^{n-1}} = s^2 = 1,\ srs = r^{2^{n-2}+1} \rangle .$
Both these two groups and the dihedral group are semidirect products of a cyclic group $\langle r \rangle$ of order $2^{n-1}$ with a cyclic group $\langle s \rangle$ of order 2. Such a non-abelian semidirect product is uniquely determined by an element of order 2 in the group of units of the ring $\mathbb{Z}/2^{n-1}\mathbb{Z}$, and there are precisely three such elements, $-1$, $2^{n-2}-1$, and $2^{n-2}+1$, corresponding to the dihedral group, the quasidihedral group, and the modular maximal-cyclic group.
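To make the correspondence concrete, the following short Python check (our own illustration, not taken from the cited texts) verifies that each of the three units squares to 1 modulo $2^{n-1}$ and that the resulting semidirect product satisfies the relation $srs = r^k$, here for n = 4 (order 16):

```python
# Sketch: the three non-abelian semidirect products of C_{2^(n-1)} by C_2.
n = 4
m = 2 ** (n - 1)  # order of the cyclic normal subgroup <r>

# The three elements of order 2 in the unit group of Z/2^(n-1)Z:
actions = {
    (-1) % m: "dihedral group",
    (2 ** (n - 2) - 1) % m: "quasidihedral group",
    (2 ** (n - 2) + 1) % m: "modular maximal-cyclic group",
}

for k, name in actions.items():
    assert k * k % m == 1  # k really has order 2 in (Z/2^(n-1)Z)*
    # Represent r^i s^j as the pair (i, j); multiply using s r = r^k s:
    def mul(a, b, k=k):
        (i1, j1), (i2, j2) = a, b
        return ((i1 + pow(k, j1, m) * i2) % m, (j1 + j2) % 2)
    r, s = (1, 0), (0, 1)
    assert mul(mul(s, r), s) == (k, 0)  # the defining relation s r s = r^k
    print(f"k = {k}: {name} of order {2 * m}")
```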
The generalized quaternion group, the dihedral group, and the quasidihedral group of order $2^n$ all have nilpotency class $n - 1$, and are the only isomorphism classes of groups of order $2^n$ with nilpotency class $n - 1$. The groups of order $p^n$ and nilpotency class $n - 1$ were the beginning of the classification of all $p$-groups via coclass. The modular maximal-cyclic group of order $2^n$ always has nilpotency class 2. This makes the modular maximal-cyclic group less interesting, since most groups of order $p^n$ for large $n$ have nilpotency class 2 and have proven difficult to understand directly.
The generalized quaternion, the dihedral, and the quasidihedral group are the only 2-groups whose derived subgroup has index 4. The Alperin–Brauer–Gorenstein theorem classifies the simple groups, and to a degree the finite groups, with quasidihedral Sylow 2-subgroups.
Examples
The Sylow 2-subgroups of the following groups are quasidihedral:
$\mathrm{PSL}_3(\mathbb{F}_q)$ for $q \equiv 3 \pmod 4$,
$\mathrm{PSU}_3(\mathbb{F}_q)$ for $q \equiv 1 \pmod 4$,
the Mathieu group $M_{11}$,
$\mathrm{GL}_2(\mathbb{F}_q)$ for $q \equiv 3 \pmod 4$.
References
Finite groups | Quasidihedral group | [
"Mathematics"
] | 676 | [
"Mathematical structures",
"Algebraic structures",
"Finite groups"
] |
1,050,099 | https://en.wikipedia.org/wiki/Cosmopolis%20XXI | Cosmopolis XXI was a 2000s Russian concept launch vehicle billed as a space tourism vehicle, similar to Mojave Aerospace's Tier One program. Designed and built by the Myasishchev Design Bureau, it would use the M-55X launch aircraft (derived from the Myasishchev M-55) and the proposed C-21 spaceplane or its successor, the Explorer. It would be a TSTSO (Two-Stage to SubOrbit) launch platform.
The Explorer spaceplane is a suborbital tourist spaceplane based on the C-21 design. The plane was being developed by Space Adventures with the Russian Federal Space Agency and was intended to carry three passengers. It was to be air-launched by a carrier aircraft from a Space Adventures spaceport. Space Adventures abandoned the Explorer project in 2010 because "it got too expensive." It is unclear whether Russia continued its development.
References
Suborbital Rocketship Fleet to Carry Tourists Spaceward in Style. Space.com, 2006-02-22.
External links
Space Tourism Pioneers, Space Adventures and the Ansari X Prize Title Sponsors, to Provide First Suborbital Spaceflight Tourism Vehicles (SpaceAdventures)
New group to develop passenger spaceship (MSNBC). Retrieved 2024-04-17.
C-21 Spacecraft at Space Adventures
Myasishchev aircraft
Human spaceflight programs
Space Adventures
Space tourism | Cosmopolis XXI | [
"Astronomy",
"Engineering"
] | 285 | [
"Space programs",
"Outer space",
"Astronomy stubs",
"Human spaceflight programs",
"Outer space stubs"
] |
1,050,125 | https://en.wikipedia.org/wiki/Laver%20table | In mathematics, Laver tables (named after Richard Laver, who discovered them towards the end of the 1980s in connection with his works on set theory) are tables of numbers that have certain properties of algebraic and combinatorial interest. They occur in the study of racks and quandles.
Definition
For any nonnegative integer n, the n-th Laver table is the $2^n \times 2^n$ table whose entry in the cell at row p and column q ($1 \le p, q \le 2^n$) is defined as

$L_n(p, q) := p \star_n q$

where $\star_n$ is the unique binary operation that satisfies the following two equations for all p, q, r in $\{1, \ldots, 2^n\}$:

$p \star_n 1 = p + 1 \bmod 2^n \qquad (1)$

and

$p \star_n (q \star_n r) = (p \star_n q) \star_n (p \star_n r) \qquad (2)$

Note: Equation (1) uses the notation $x \bmod 2^n$ to mean the unique member of $\{1, \ldots, 2^n\}$ congruent to $x$ modulo $2^n$.

Equation (2) is known as the (left) self-distributive law, and a set endowed with any binary operation satisfying this law is called a shelf. Thus, the n-th Laver table is just the multiplication table for the unique shelf $(\{1, \ldots, 2^n\}, \star_n)$ that satisfies Equation (1).
Examples: the first five Laver tables are the multiplication tables for the shelves $(\{1, \ldots, 2^n\}, \star_n)$, n = 0, 1, 2, 3, 4; the n = 2 table is printed by the sketch below.
There is no known closed-form expression to calculate the entries of a Laver table directly, but Patrick Dehornoy provides a simple algorithm for filling out Laver tables.
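For illustration, here is a minimal Python sketch of such a filling procedure, derived directly from the two defining equations (the organization below is our own and is not claimed to be Dehornoy's exact formulation). It relies on the fact, noted in the properties below, that every entry in row p with p < 2^n is greater than p, so the table can be filled from the bottom row upward.

```python
# Fill the n-th Laver table from equations (1) and (2); setting r = 1 in (2)
# gives p * q = (p * (q-1)) * (p * 1) for q >= 2, since (q-1) * 1 = q by (1).
def laver_table(n):
    """Return the n-th Laver table as a dict T[(p, q)] = p * q (1-indexed)."""
    N = 2 ** n
    T = {(N, q): q for q in range(1, N + 1)}  # row 2^n is the identity row,
                                              # which follows from (1) and (2)
    for p in range(N - 1, 0, -1):             # fill rows bottom-up
        T[(p, 1)] = p + 1                     # equation (1), with p < 2^n
        for q in range(2, N + 1):
            T[(p, q)] = T[(T[(p, q - 1)], p + 1)]
    return T

T = laver_table(2)
print([[T[(p, q)] for q in range(1, 5)] for p in range(1, 5)])
# [[2, 4, 2, 4], [3, 4, 3, 4], [4, 4, 4, 4], [1, 2, 3, 4]]
```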
Properties
For all p, q in $\{1, \ldots, 2^n\}$: $p \star_n 2^n = 2^n$ and $2^n \star_n q = q$.
For all p in $\{1, \ldots, 2^n\}$: the row $q \mapsto p \star_n q$ is periodic with period $\pi_n(p)$ equal to a power of two.
For all p in $\{1, \ldots, 2^n\}$: over its first period, $q \mapsto p \star_n q$ is strictly increasing from $p \star_n 1$ to $2^n$.
For all p, q: $p \star_n q > p$ whenever $p \neq 2^n$.
Are the first-row periods unbounded?
Looking at just the first row in the n-th Laver table, for n = 0, 1, 2, ..., the entries in each first row are seen to be periodic with a period that's always a power of two, as mentioned in Property 2 above. The first few periods are 1, 1, 2, 4, 4, 8, 8, 8, 8, 16, 16, ... . This sequence is nondecreasing, and in 1995 Richard Laver proved, under the assumption that there exists a rank-into-rank (a large cardinal property), that it actually increases without bound. (It is not known whether this is also provable in ZFC without the additional large-cardinal axiom.) In any case, it grows extremely slowly; Randall Dougherty showed that 32 cannot appear in this sequence (if it ever does) until n > A(9, A(8, A(8, 254))), where A denotes the Ackermann–Péter function.
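Using the laver_table sketch above, the first few values of this sequence can be reproduced directly (first_row_period is our own helper name):

```python
# Period of the first row of the n-th Laver table (always a power of two).
def first_row_period(n):
    T, N = laver_table(n), 2 ** n
    row = [T[(1, q)] for q in range(1, N + 1)]
    for length in (2 ** k for k in range(n + 1)):
        if all(row[i] == row[i % length] for i in range(N)):
            return length

print([first_row_period(n) for n in range(6)])  # [1, 1, 2, 4, 4, 8]
```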
References
Further reading
Shelves and the infinite: https://johncarlosbaez.wordpress.com/2016/05/06/shelves-and-the-infinite/
Mathematical logic
Combinatorics | Laver table | [
"Mathematics"
] | 674 | [
"Mathematical logic",
"Discrete mathematics",
"Combinatorics"
] |
1,050,149 | https://en.wikipedia.org/wiki/Spillway | A spillway is a structure used to provide the controlled release of water downstream from a dam or levee, typically into the riverbed of the dammed river itself. In the United Kingdom, they may be known as overflow channels. Spillways ensure that water does not damage parts of the structure not designed to convey water.
Spillways can include floodgates and fuse plugs to regulate water flow and reservoir level. Such features enable a spillway to regulate downstream flow—by releasing water in a controlled manner before the reservoir is full, operators can prevent an unacceptably large release later.
Other uses of the term "spillway" include bypasses of dams and outlets of channels used during high water, and outlet channels carved through natural dams such as moraines.
Water normally flows over a spillway only during flood periods, when the reservoir has reached its capacity and water continues entering faster than it can be released. In contrast, an intake tower is a structure used to control water release on a routine basis for purposes such as water supply and hydroelectricity generation.
Types
A spillway is located at the top of the reservoir pool. Dams may also have bottom outlets with valves or gates which may be operated to release flood flow, and a few dams lack overflow spillways and rely entirely on bottom outlets.
The two main types of spillways are controlled and uncontrolled.
A controlled spillway has mechanical structures or gates to regulate the rate of flow. This design allows nearly the full height of the dam to be used for water storage year-round, and flood waters can be released as required by opening one or more gates.
An uncontrolled spillway, in contrast, does not have gates; when the water rises above the lip or crest of the spillway, it begins to be released from the reservoir. The rate of discharge is controlled only by the height of water above the reservoir's spillway. The fraction of storage volume in the reservoir above the spillway crest can only be used for the temporary storage of floodwater; it cannot be used as water supply storage because it sits higher than the dam can retain it.
In an intermediate type, normal level regulation of the reservoir is controlled by the mechanical gates. In this case, the dam is not designed to function with water flowing over the top of it, either due to the materials used for its construction or conditions directly downstream. If inflow to the reservoir exceeds the gates' capacity, an artificial channel called an auxiliary or emergency spillway will convey water. Often, that channel is intentionally blocked by a fuse plug. If present, the fuse plug is designed to wash out in case of a large flood greater than the discharge capacity of the spillway gates. Although many months may be needed for construction crews to restore the fuse plug and channel after such an operation, the total damage and cost to repair is less than if the main water-retaining structures had been overtopped. The fuse plug concept is used where building a spillway with the required capacity would be costly.
Open channel spillway
Chute spillway
A chute spillway is a common and basic design that transfers excess water from behind the dam down a smooth decline into the river below. These are usually designed following an ogee curve. Most often, they are lined on the bottom and sides with concrete to protect the dam and topography. They may have a controlling device and some are thinner and multiply-lined if space and funding are tight. In addition, they are not always intended to dissipate energy like stepped spillways. Chute spillways can be ingrained with a baffle of concrete blocks but usually have a "flip lip" and/or dissipator basin, which creates a hydraulic jump, protecting the toe of the dam from erosion.
Stepped spillway
Stepped channels and spillways have been used for over 3,000 years. Despite being superseded by more modern engineering techniques such as hydraulic jumps in the mid twentieth century, since around 1985 interest in stepped spillways and chutes has been renewed, partly due to the use of new construction materials (e.g. roller-compacted concrete, gabions) and design techniques (e.g. embankment overtopping protection). The steps produce considerable energy dissipation along the chute and reduce the size of the required downstream energy dissipation basin.
Research on the topic remains active, with newer developments in embankment dam overflow protection systems, converging spillways, and small-weir design.
Bell-mouth spillway
A bell-mouth spillway is designed like an inverted bell, where water can enter around the entire perimeter. These uncontrolled spillways are also called morning glory (after the flower), or glory hole spillways. In areas where the surface of the reservoir may freeze, this type of spillway is normally fitted with ice-breaking arrangements to prevent the spillway from becoming ice-bound.
Some bell-mouth spillways are gate-controlled. The highest morning glory spillway in the world is at Hungry Horse Dam in Montana, U.S., and is controlled by a ring gate. The bell-mouth spillway in Covão dos Conchos reservoir in Portugal is constructed to look like a natural formation. The largest bell-mouth spillway is in Geehi Dam, in New South Wales, Australia, measuring in diameter at the lake's surface.
Siphon spillway
A siphon spillway uses the difference in height between the intake and the outlet to create the pressure difference required to remove excess water. Siphons require priming to remove air in the bend for them to function, and most siphon spillways are designed to use water to automatically prime the siphon. One such design is the volute siphon, which employs volutes, or fins, on a funnel to form water into a vortex that draws air out of the system. The priming happens automatically when the water level rises above the inlets.
Other types
An ogee-crest spillway passes flow over the top of a dam; a side-channel spillway wraps around the topography beside the dam; and a labyrinth spillway uses a zig-zag plan to lengthen the sill, allowing a thinner structure and a greater discharge. A drop inlet resembles the intake of a hydroelectric power plant and transfers water from behind the dam through tunnels directly to the river downstream.
Design considerations
One parameter of spillway design is the largest flood it is designed to handle. The structure must safely pass the spillway design flood (SDF), sometimes called the inflow design flood (IDF), whose magnitude is set by dam safety guidelines based on the size of the structure and the potential loss of human life or property downstream. The magnitude of the flood is often expressed as a return period: a flood with a 100-year recurrence interval is the magnitude expected to be exceeded on average once in 100 years, equivalently an exceedance frequency of 1% in any given year. The volume of water expected during the design flood is obtained by hydrologic analysis of the upstream watershed.
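A return period translates directly into a risk over a structure's service life: the chance of at least one exceedance in n years is 1 − (1 − 1/T)^n. A minimal worked sketch, using an illustrative 50-year service life:

```python
# Risk of the design flood being exceeded at least once in n years, given
# a return period T: P = 1 - (1 - 1/T)**n. The 50-year service life is an
# illustrative assumption.

def exceedance_probability(return_period_years: float, n_years: int) -> float:
    return 1.0 - (1.0 - 1.0 / return_period_years) ** n_years

# A "100-year" flood has a 1% annual exceedance frequency, yet over a
# 50-year service life the chance of seeing one is large:
print(f"{exceedance_probability(100, 50):.1%}")    # 39.5%
print(f"{exceedance_probability(10_000, 50):.2%}") # 0.50%
```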
The United States Army Corps of Engineers bases their requirements on the probable maximum flood (PMF) and the probable maximum precipitation (PMP). The PMP is the largest precipitation thought to be physically possible in the upstream watershed. Dams of lower hazard may be allowed to have an IDF less than the PMF.
Energy dissipation
As water passes over a spillway and down the chute, its potential energy converts into kinetic energy. If this energy is not dissipated, it can scour and erode the dam's toe (base), damaging the spillway and undermining the dam's stability. To put this energy in perspective, the spillways at Tarbela Dam could, at full capacity, dissipate about 40,000 MW, roughly ten times the capacity of the dam's power plant.
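The power carried by the flow can be estimated as P = ρ·g·Q·H. A minimal sketch follows; the head and discharge figures are rough assumptions chosen only to show the order of magnitude of the quoted 40,000 MW, not measured Tarbela data.

```python
# Hydraulic power of spillway flow: P = rho * g * Q * H. The head and
# discharge below are rough assumed figures, used only to show the order
# of magnitude of the 40,000 MW quoted for Tarbela Dam.

RHO_WATER = 1000.0  # water density, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def hydraulic_power_mw(discharge_m3_s: float, head_m: float) -> float:
    return RHO_WATER * G * discharge_m3_s * head_m / 1e6

# e.g. ~30,000 m^3/s falling ~140 m:
print(f"{hydraulic_power_mw(30_000.0, 140.0):,.0f} MW")  # ~41,202 MW
```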
The energy can be dissipated by addressing one or more parts of a spillway's design.
Steps
First, energy can be dissipated on the spillway surface itself by a series of steps along the chute (see stepped spillway above).
Flip bucket
Second, at the base of a spillway, a flip bucket can create a hydraulic jump and deflect water upwards.
Ski jump
A ski jump can direct water horizontally and eventually down into a plunge pool, or two ski jumps can direct their water discharges to collide with one another.
Stilling basin
Third, a stilling basin at the terminus of a spillway serves to further dissipate energy and prevent erosion. Stilling basins are usually filled with a relatively shallow depth of water and are sometimes lined with concrete. A number of velocity-reducing components can be incorporated into their design, including chute blocks, baffle blocks, wing walls, surface boils, and end sills.
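The depth a basin must accommodate downstream of a hydraulic jump follows from the Bélanger equation for a rectangular channel, y2/y1 = ½(√(1 + 8·Fr₁²) − 1). A minimal sketch with assumed inflow conditions:

```python
# Sequent (conjugate) depth across a hydraulic jump in a rectangular
# basin, from the Belanger equation: y2/y1 = 0.5 * (sqrt(1 + 8*Fr1^2) - 1).
# The inflow depth and velocity are illustrative assumptions.

import math

def sequent_depth(y1_m: float, v1_m_s: float, g: float = 9.81) -> float:
    fr1 = v1_m_s / math.sqrt(g * y1_m)  # inflow Froude number
    return 0.5 * y1_m * (math.sqrt(1.0 + 8.0 * fr1**2) - 1.0)

# Fast, shallow flow leaving the chute: 0.8 m deep at 18 m/s.
print(f"downstream depth ~ {sequent_depth(0.8, 18.0):.1f} m")  # ~6.9 m
```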
Safety
Spillway gates may operate suddenly without warning, under remote control. Trespassers within the spillway are at high risk of drowning. Spillways are usually fenced and equipped with locked gates to prevent casual trespassing within the structure. Warning signs, sirens, and other measures may be in place to warn users of the downstream area of sudden release of water. Operating protocols may require "cracking" a gate to release a small amount of water to warn persons downstream.
The sudden closure of a spillway gate can result in the stranding of fish, and this is usually avoided.
See also
Dam safety system
Reservoir
Stepped spillway
Fish ladder
Tailrace fishing
Toddbrook Reservoir
Oroville Dam crisis
References
External links
Information, images, and construction details about the Lake Berryessa glory hole.
Hydraulic structures
Dams
Flood control | Spillway | [
"Chemistry",
"Engineering"
] | 1,992 | [
"Flood control",
"Environmental engineering"
] |
1,050,195 | https://en.wikipedia.org/wiki/Evolutionary%20robotics | Evolutionary robotics is an embodied approach to Artificial Intelligence (AI) in which robots are automatically designed using Darwinian principles of natural selection. The design of a robot, or a subsystem of a robot such as a neural controller, is optimized against a behavioral goal (e.g. run as fast as possible). Usually, designs are evaluated in simulations as fabricating thousands or millions of designs and testing them in the real world is prohibitively expensive in terms of time, money, and safety.
An evolutionary robotics experiment starts with a population of randomly generated robot designs. The worst performing designs are discarded and replaced with mutations and/or combinations of the better designs. This evolutionary algorithm continues until a prespecified amount of time elapses or some target performance metric is surpassed.
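A minimal sketch of this loop, assuming a design is encoded as a flat vector of parameters and using a placeholder fitness function in place of a robot simulator (both are illustrative assumptions, not any published system):

```python
# A minimal evolutionary loop: random initial designs, discard the worst,
# refill with mutated copies of the best. The flat-vector encoding and the
# placeholder fitness function are stand-ins for a real robot simulator.

import random

POP_SIZE, GENERATIONS, GENOME_LEN, MUT_STD = 20, 100, 8, 0.1

def evaluate(genome):
    # Placeholder fitness (maximized at the zero vector); in evolutionary
    # robotics this would score behavior, e.g. distance run in simulation.
    return -sum(x * x for x in genome)

population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=evaluate, reverse=True)      # best designs first
    survivors = population[:POP_SIZE // 2]           # discard the worst half
    offspring = [[x + random.gauss(0.0, MUT_STD) for x in random.choice(survivors)]
                 for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

print("best fitness:", evaluate(max(population, key=evaluate)))
```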
Evolutionary robotics methods are particularly useful for engineering machines that must operate in environments in which humans have limited intuition (nanoscale, space, etc.). Evolved simulated robots can also be used as scientific tools to generate new hypotheses in biology and cognitive science, and to test old hypotheses that require experiments which have proven difficult or impossible to carry out in reality.
History
In the early 1990s, two separate European groups demonstrated different approaches to the evolution of robot control systems. Dario Floreano and Francesco Mondada at EPFL evolved controllers for the Khepera robot. Adrian Thompson, Nick Jakobi, Dave Cliff, Inman Harvey, and Phil Husbands evolved controllers for a Gantry robot at the University of Sussex.
However the body of these robots was presupposed before evolution.
The first simulations of evolved robots were reported by Karl Sims and Jeffrey Ventrella of the MIT Media Lab, also in the early 1990s. However these so-called virtual creatures never left their simulated worlds. The first evolved robots to be built in reality were 3D-printed by Hod Lipson and Jordan Pollack at Brandeis University at the turn of the 21st century.
See also
Bio-inspired robotics
Evolutionary computation
References
Evolutionary computation
Robotics | Evolutionary robotics | [
"Engineering",
"Biology"
] | 409 | [
"Bioinformatics",
"Evolutionary computation",
"Robotics",
"Automation"
] |
1,050,341 | https://en.wikipedia.org/wiki/Paint%20thinner | A paint thinner is a solvent used to dilute oil-based paints or varnish. In this context, to dilute is also known as to "thin". Paint thinners are diluents. Solvents labeled "paint thinner" are usually white spirit (also called mineral spirits).
Uses
Principally, paints are either a colloidal suspension of solid pigment particles or an emulsion of dense, viscous dye gel or paste with a filler, dispersed through a lighter free-flowing liquid medium: the solvent. The solvent controls flow and application properties and, in some cases, can affect the stability of the paint in its liquid state. Its main function is to act as the carrier that ensures an even spread of the non-volatile components. During long storage, the dense pigment and filler settle out, and the paint can lose some of its solvent to evaporation, becoming so thick and viscous that it no longer flows properly. By adding more solvent, the paint can be diluted or re-dissolved to restore an appropriate consistency for use. The diluent reduces the viscosity, making the liquid more free-flowing; in this context, "thinning" is the act of dilution.
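As a rough guide to how much a given dose of solvent thins a liquid, the logarithmic (Arrhenius) mixing rule for the viscosity of miscible liquids is sometimes used. Real paints are non-Newtonian suspensions, so the sketch below is indicative at best, and the viscosity figures are made-up values.

```python
# Rough estimate of thinning via the logarithmic (Arrhenius) mixing rule
# for miscible liquids: ln(mu_mix) = sum_i x_i * ln(mu_i). Real paints are
# non-Newtonian suspensions, so this is indicative only; the viscosities
# below are made-up values.

import math

def blend_viscosity(fractions, viscosities_pa_s):
    return math.exp(sum(x * math.log(mu)
                        for x, mu in zip(fractions, viscosities_pa_s)))

# 90% paint at 5 Pa.s thinned with 10% solvent at 0.001 Pa.s:
print(f"{blend_viscosity([0.9, 0.1], [5.0, 0.001]):.2f} Pa.s")  # ~2.13 Pa.s
```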
These solvents can also be used as paint-brush cleaners to remove or to clean items that have become caked in dried-on paint.
Common paint thinners
Common solvents used historically as paint thinners are volatile organic compounds — forms of hydrocarbons — and include:
White spirit — also called mineral spirits
Acetone — a very simple ketone, often called nail varnish remover
Butanone / methyl ethyl ketone (MEK)
Dimethylformamide (DMF)
Glycol ethers — such as 2-Butoxyethanol
Alcohols — such as isopropyl alcohol / isopropanol and 1-propanol
Light naphtha distillates
Turpentine
Lacquer thinner — a combination of alcohols, alkyl esters, ethers, ketones, and aromatic hydrocarbons / arenes
Less common solvents used as paint thinners, such as the aromatic organic compounds below, are more hazardous, and so more heavily regulated and restricted in use, but are still used in the construction industry:
Aromatic hydrocarbons / arenes
Ethylbenzene
Toluene / toluol
Xylene / xylol
Alkyl esters
Amyl acetate
n-Butyl acetate
Butanol
Hazards and health concerns
Some paint thinners can ignite from just a small spark at relatively low temperatures. These solvents are volatile organic compounds (VOCs); white spirit (mineral spirits), for example, has a flash point of about 40°C (104°F), the same as some popular brands of charcoal starter. All such solvents with low flash points are hazardous and must be labelled as flammable.
Prolonged exposure to the VOCs emitted by paint containing these solvents, or released during clean-up with paint thinner, is hazardous to health. VOCs exhibit high lipid solubility, and for this reason they bioaccumulate in adipose / fatty tissues.
Extensive exposure to these vapours has been strongly related to organic solvent syndrome, although a definitive relation has yet to be fully established.
For safety reasons, substances containing these solvents should always be used in well-ventilated areas to limit the health consequences and minimise the risk of injury or death. In countries with weak environmental protection regulation, workers are commonly exposed to high levels of these chemicals, with consequent damage to their health.
The American Conference of Governmental Industrial Hygienists has established threshold limit values (TLVs) for most of these compounds. The TLV is defined as the maximum concentration in air that a normal person (i.e. excluding children, pregnant women, etc.) can breathe for a typical American work week of 40 hours, day after day through their working life, without long-term ill effects. Globally, the most widely accepted standard for acceptable VOC levels in paint is Green Seal's GS-11 standard from the US, which defines acceptable VOC levels for different types of paint based on use case and performance requirements.
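Compliance with a TLV is normally judged against an 8-hour time-weighted average (TWA) of measured concentrations, TWA = Σ(Cᵢ·tᵢ)/8. A minimal sketch; the concentrations and the TLV used here are illustrative, not published values for any real solvent.

```python
# 8-hour time-weighted average (TWA) exposure, the quantity compared
# against a TLV: TWA = sum(C_i * t_i) / 8. Concentrations and the TLV
# below are illustrative, not published values.

def twa_8h(samples):
    """samples: list of (concentration_ppm, duration_hours) pairs."""
    return sum(c * t for c, t in samples) / 8.0

exposure = [(120.0, 2.0), (60.0, 4.0), (0.0, 2.0)]  # one 8-hour shift
tlv_ppm = 100.0                                     # hypothetical TLV
twa = twa_8h(exposure)
print(f"TWA = {twa:.0f} ppm ({'over' if twa > tlv_ppm else 'within'} TLV)")
```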
These solvents also pose an environmental threat: persistent organic pollutants derived from aromatic organic compounds resist degradation and, where handling and disposal are poor, are often found in wastewater, from which they can seep into groundwater and contaminate public water supplies. In recent decades, legislation such as EU regulations from the European Parliament has therefore greatly reduced the use of these VOC solvents in favour of water-based paints, such as acrylic paints reformulated to use water as the primary solvent with only low levels of hydrocarbon solvents, if any. These perform much like oil paints but are far less polluting, giving them a much lower environmental impact.
Addiction
Paint thinners are often abused as inhalants, owing to their accessibility and legality. Many teenagers become addicted to thinner, and because parents and caregivers often do not recognize the signs, the problem frequently goes unnoticed. Inhaling paint thinner can cause hallucinations, unusually sensitive hearing, slurred speech, memory loss, and other effects.
See also
Environmental impact of paint
Substance-induced psychosis
References
Solvents
Paints | Paint thinner | [
"Chemistry"
] | 1,133 | [
"Paints",
"Coatings"
] |
1,050,386 | https://en.wikipedia.org/wiki/Audiology | Audiology (from Latin audire, "to hear", and Greek -logia, "branch of learning") is a branch of science that studies hearing, balance, and related disorders. Audiologists treat those with hearing loss and proactively prevent related damage. By employing various testing strategies (e.g. behavioral hearing tests, otoacoustic emission measurements, and electrophysiologic tests), audiologists aim to determine whether someone has normal sensitivity to sounds. If hearing loss is identified, audiologists determine which portions of hearing (high, middle, or low frequencies) are affected, to what degree (severity of loss), and where the lesion causing the hearing loss is found (outer ear, middle ear, inner ear, auditory nerve and/or central nervous system). If an audiologist determines that a hearing loss or vestibular abnormality is present, they will provide recommendations for interventions or rehabilitation (e.g. hearing aids, cochlear implants, appropriate medical referrals).
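Clinically, the degree of loss is often summarized by the pure-tone average (PTA) of audiogram thresholds. A minimal sketch follows; the three frequencies are the conventional choice, but the severity cut-offs follow one common grading scheme and vary between clinical sources, so treat them as assumptions.

```python
# Pure-tone average (PTA) over the conventional audiometric frequencies
# (500, 1000, 2000 Hz), mapped to one common severity grading. Cut-offs
# vary between clinical sources, so treat these as assumptions.

def pure_tone_average(thresholds_db_hl):
    return sum(thresholds_db_hl) / len(thresholds_db_hl)

def grade(pta_db_hl):
    for upper, label in [(25, "normal"), (40, "mild"), (55, "moderate"),
                         (70, "moderately severe"), (90, "severe")]:
        if pta_db_hl <= upper:
            return label
    return "profound"

pta = pure_tone_average([30, 45, 60])  # dB HL at 500, 1000, 2000 Hz
print(f"PTA = {pta:.0f} dB HL -> {grade(pta)}")  # PTA = 45 dB HL -> moderate
```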
In addition to diagnosing audiologic and vestibular pathologies, audiologists can also specialize in rehabilitation of tinnitus, hyperacusis, misophonia, auditory processing disorders, cochlear implant use and/or hearing aid use. Audiologists can provide hearing health care from birth to end-of-life.
Audiologist
An audiologist is a health care provider specializing in identifying, diagnosing, treating, and monitoring disorders of the auditory and vestibular systems. Audiologists are trained to diagnose, manage and/or treat hearing, tinnitus, or balance problems. They dispense, manage, and rehabilitate hearing aids and assess candidacy for and map hearing implants, such as cochlear implants, middle ear implants and bone conduction implants. They counsel families through a new diagnosis of hearing loss in infants, and help teach coping and compensation skills to late-deafened adults. They also help design and implement personal and industrial hearing safety programs, newborn hearing screening programs, school hearing screening programs, and provide special or custom fitted ear plugs and other hearing protection devices to help prevent hearing loss. Audiologists are trained to evaluate peripheral vestibular disorders originating from pathologies of the vestibular portion of the inner ear. They also provide treatment for certain vestibular and balance disorders, such as Benign Paroxysmal Positional Vertigo (BPPV). In addition, many audiologists work as auditory or acoustic scientists in a research capacity.
Audiologists are trained in anatomy and physiology, hearing aids, cochlear implants, electrophysiology, acoustics, psychophysics and psychoacoustics, neurology, vestibular function and assessment, balance disorders, counseling, and communication options such as sign language. Audiologists may also run a neonatal hearing screening program, which has been made compulsory in many hospitals in the US, UK, and India.
An audiologist usually graduates with one of the following qualifications: BSc, MSc (Audiology), AuD, STI, PhD, or ScD, depending on the program and country attended. In 2018, a report by CareerCast found the occupation of audiologist to be the third least stressful job surveyed.
History
The use of the terms audiology and audiologist in publications has been traced back only as far as 1946. The creator of the term remains unknown, but Berger identified possible originators as Mayer BA Schier, Willard B Hargrave, Stanley Nowak, Norman Canfield, or Raymond Carhart. In a biographical profile by Robert Galambos, Hallowell Davis is credited with coining the term in the 1940s, saying the then-prevalent term "auricular training" sounded like a method of teaching people how to wiggle their ears. The first US university course for audiologists was offered by Carhart at Northwestern University, in 1946.
Audiology was born of interdisciplinary collaboration. The substantial prevalence of hearing loss observed in the veteran population after World War II inspired the creation of the field as it is known today. The International Society of Audiology (ISA) was founded in 1952 to "...facilitate the knowledge, protection and rehabilitation of human hearing" and to "...serve as an advocate for the profession and for the hearing impaired throughout the world." It promotes interactions among national societies, associations and organizations that have similar missions, through the organization of a biannual world congress, through the publication of the scientific peer-reviewed International Journal of Audiology and by offering support to the World Health Organization's efforts towards addressing the needs of the hearing impaired and deaf community.
Requirements
The International Society of Audiology maintains Global Audiology, which is a portal in Wikiversity that provides information of audiology education and practice around the world. Summary information is provided below:
Australia
In Australia, Audiologists must hold a Master of Audiology, Master of Clinical Audiology, Master of Audiology Studies or alternatively a bachelor's degree from overseas certified by the private agency Vocational Education, Training and Assessment Services (VETASSESS). Although audiologists in Australia are not required to be a member of any professional body, audiology graduates can undergo a clinical training program or internship leading to accreditation with Audiology Australia (AudA) or the Australian College of Audiology (ACAud) which involves supervised practice and professional development, and typically lasts one year.
To provide rehabilitative services to eligible pensioners, war veterans, and children and young adults under the age of 26 as part of the Hearing Services Program, an audiologist must hold a qualified practitioner (QP) number which can be sought by first obtaining accreditation.
Brazil
In Brazil, audiology training is part of four-year undergraduate courses in speech pathology and audiology. The University of São Paulo was the first university to offer a bachelor's degree, starting in 1977. At the federal level, the educational programs and the profession of speech pathologist and audiologist were recognized on December 9, 1981, in a law signed by President João Figueiredo (law no. 6965). The terms audiology and audiologist can be traced in Brazilian publications back to 1946. The work of audiologists in Brazil was described in 2007.
Canada
In Canada, a Master of Science (MSc) is the minimum requirement to practice audiology in the country. The profession is regulated in New Brunswick, Quebec, Ontario, Manitoba, Saskatchewan, Alberta, and British Columbia, where it is illegal to practice without being registered as a full member of the appropriate provincial regulatory body.
Bangladesh
A BSc (Hons) in audiology and speech language pathology is required.
India
To practice audiology, professionals need to have either a bachelor's or a master's degree in audiology and be registered with Rehabilitation Council of India (RCI).
Malaysia
Three Malaysian educational institutions offer degrees in audiology.
United Kingdom
There are currently five routes to becoming a registered audiologist:
FdSc in hearing aid audiology
BSc in audiology
MSc in audiology
Fast-track conversion Diploma for those with a BSc in another relevant science subject, available at Southampton, Manchester, UCL, London, and Edinburgh
BSc(Hons) in clinical physiology (audiology) available at Glasgow Caledonian University (all applicants must be NHS employees)
United States
In the United States, audiologists are regulated by state licensure or registration in all 50 states and the District of Columbia. Starting in 2007, the doctor of audiology (AuD) became the entry-level degree for clinical practice for some states, with most states expected to follow this requirement very soon, as there are no longer any professional programs in audiology which offer the master's degree. Minimum requirements for the AuD degree include a minimum of 75 semester hours of post-baccalaureate study, meeting prescribed competencies, passing a national exam offered by Praxis Series of the Educational Testing Service, and practicum experience that is equivalent to a minimum of 12 months of full-time, supervised experience. Most states have continuing education renewal requirements that must be met to stay licensed. Audiologists can also earn certification from the American Speech-Language-Hearing Association or through the American Board of Audiology (ABA). Currently, there are over 70 AuD programs in the United States.
In the past, audiologists have typically held a master's degree and the appropriate healthcare license. However, in the 1990s the profession began to transition to a doctoral level as a minimum requirement. In the United States, starting in 2007, audiologists were required to receive a doctoral degree (AuD or PhD) in audiology from an accredited university graduate or professional program before practicing. All states require licensing, and audiologists may also carry national board certification from the American Board of Audiology or a certificate of clinical competence in audiology (CCC-A) from the American Speech-Language-Hearing Association.
Pakistan
In Pakistan, a master's or doctoral degree in audiology is required to practice the profession. The degree must come from a recognized institute, most of which are government-run; otherwise the person is not granted a license to practice audiology. The Pakistan Medical and Dental Council (PMDC) issues practicing licenses to medical graduates. In addition, suppliers of medical instruments to these practitioners must hold a certificate of accreditation issued by the Pakistan National Accreditation Council (PNAC).
Portugal
Practicing as an audiologist in Portugal requires a degree in audiology, or a legal equivalent, as defined in Article 4 of Decree-Law 320/99 of August 11.
South Africa
In South Africa, there are currently five institutions offering training in audiology. The institutions offer different qualifications that make one eligible to practice audiology in South Africa. The qualifications are as follows: (i) B. Audiology, (ii) BSc. Audiology, (iii) B. Communication Pathology (Audiology), and (iv) B. Speech Language Pathology and Audiology (BSLP&A). All practicing audiologists are required to be registered with the Health Professions Council of South Africa (HPCSA).
Turkey
Audiology in Turkey started in 1968 as a master's degree program at the Hacettepe University Faculty of Medicine, Department of Ear, Nose and Throat. The program ran under the name Audiology until 1989, when it was revised and has since continued as master's and doctoral education in "Audiology and Speech Disorders". The first undergraduate program opened in 2011, and in the same year audiologist became a profession defined and officially recognized by the state of the Republic of Turkey.
See also
Audiology and hearing health professionals in developed and developing countries
Auditory brainstem response (ABR)
Auditory agnosia
Auditory processing disorder
Auditory verbal agnosia
Audiometrist
Audiometry
Balance disorder
Bone anchored hearing aid (BAHA)
Cochlear implant
Computational audiology
Dichotic listening test
Earplug
Electronystagmography (ENG/VNG)
European Federation of Audiology Societies
Global Audiology
Hearing Aid
Hearing impairment
International Society of Audiology
Listening
Noise induced hearing loss
Otoacoustic emissions
Otolaryngology
Otology
Otoscope
Speech and language pathology
Speech banana
Spatial hearing loss
Tympanometry
World Hearing Day
References
Otology
Rehabilitation team
Auditory system
Acoustics
Hearing | Audiology | [
"Physics"
] | 2,318 | [
"Classical mechanics",
"Acoustics"
] |