Dataset columns: id (int64, 39–79M); url (string, 31–227 chars); text (string, 6–334k chars); source (string, 1–150 chars); categories (list, 1–6 items); token_count (int64, 3–71.8k); subcategories (list, 0–30 items).
605,591
https://en.wikipedia.org/wiki/Media%20filter
A media filter is a type of filter that uses a bed of sand, peat, shredded tires, foam, crushed glass, geo-textile fabric, anthracite, crushed granite or other material to filter water for drinking, swimming pools, aquaculture, irrigation, stormwater management, oil and gas operations, and other applications. Each layer of media is designed to filter out specific types and sizes of particles, allowing for more efficient and effective removal of contaminants. Design One design brings the water in at the top of a container through a "header", which distributes the water evenly. The filter "media" start with fine sand on the top and then become gradually coarser sand in a number of layers, followed by gravel on the bottom in gradually larger sizes. The top sand physically removes particles from the water. The job of the subsequent layers is to support the finer layer above and provide efficient drainage. As particles become trapped in the media, the differential pressure across the bed increases. Periodically, a backwash may be initiated to remove the solids trapped in the bed. During backwash, flow is directed in the opposite direction from normal flow. In multi-media filters, the layers in the media re-stratify due to density differences prior to resuming normal filtration. Multimedia filters can remove particles down to 10–25 microns. Advantages and disadvantages Advantages of multimedia filters Multimedia filters use multiple layers of different filter media to achieve more effective and efficient filtration than single-media filters like sand filters. They can remove a wider range of particle sizes and types than single-media filters, resulting in more efficient filtration and longer filter life. They are effective at removing suspended solids, turbidity, and other contaminants from water. They can be used for a wide range of flow rates and particle sizes. They can be easily backwashed to clean the filter media and restore filtration efficiency. They require little to no electricity to operate. Disadvantages of multimedia filters Multimedia filters have a higher capital cost compared to single-media filters like sand filters. They have a larger footprint and require more space than single-media filters. They may not be effective at removing some types of contaminants, such as dissolved organic compounds and bacteria. They may require pre-treatment to remove large particles or debris that could clog the filter media. They can create waste material (backwash water) that needs to be treated or disposed of properly. Uses Drinking water Media filters are used in drinking water treatment, where multimedia filters are used as a primary or secondary filtration step to remove a wider range of particle sizes and types than sand filters, including organic matter and smaller particles. Municipal drinking water systems often use a rapid sand filter and/or a slow sand filter for purification. Silica sand is the most widely used medium in such filters. Anthracite coal, garnet sand, ilmenite, granular activated carbon, manganese green sand and crushed recycled glass are among the alternative filter media used. Stormwater Media filters are used to protect water quality in streams, rivers, and lakes. They can be effective at removing pollutants in stormwater such as suspended solids and phosphorus. Sand is the most common filter material. In other filters, sometimes called "organic filters," wood chips or leaf mold may be used. 
Sewage and wastewater Media filters are also used for cleaning the effluent from septic tanks and primary settlement tanks. The materials commonly used are sand, peat and natural stone fibre. Oil and gas industry The oil and gas industry uses media filters for various purposes in both upstream and downstream operations. Nut shell filters are commonly used as a tertiary oil removal step for treatment of produced water. Sand filters are often used to remove fine solids following biological treatment and clarification of oil refinery wastewater. Multi-media filters are used for removing suspended solids from both produced water and refinery wastewater. The materials commonly used in multi-media filters are gravel, sand, garnet, and anthracite. See also Biofilter Bioretention References Nalco Water, an Ecolab Company. Nalco Water Handbook, Fourth Edition (McGraw-Hill Education: New York, 2018). https://www.accessengineeringlibrary.com/content/book/9781259860973 Environmental engineering Irrigation Water filters Stormwater management
Media filter
[ "Chemistry", "Engineering", "Environmental_science" ]
926
[ "Water filters", "Water treatment", "Stormwater management", "Chemical engineering", "Filters", "Water pollution", "Civil engineering", "Environmental engineering" ]
605,595
https://en.wikipedia.org/wiki/Off-side%20rule
The off-side rule describes the syntax of a computer programming language in which the bounds of a code block are defined by indentation. The term was coined by Peter Landin, possibly as a pun on the offside law in association football. An off-side rule language is contrasted with a free-form language, in which indentation has no syntactic meaning and is strictly a matter of style. An off-side rule language is also described as having significant indentation. Definition Peter Landin, in his 1966 article "The Next 700 Programming Languages", defined the off-side rule thus: "Any non-whitespace token to the left of the first such token on the previous line is taken to be the start of a new declaration." Example The following is an example of indentation blocks in Python, a popular off-side rule language. In Python, the rule is taken to define the boundaries of statements rather than declarations.

def is_even(a: int) -> bool:
    if a % 2 == 0:
        print('Even!')
        return True
    print('Odd!')
    return False

The body of the function starts on line 2 since it is indented one level (4 spaces) more than the previous line. The if clause body starts on line 3 since it is indented an additional level, and ends on line 4 since line 5 is indented a level less, i.e. outdented. The colon (:) at the end of a control statement line is Python syntax, not an aspect of the off-side rule; the rule can be realized without such colon syntax. Implementation The off-side rule can be implemented in the lexical analysis phase, as in Python, where an increase in indentation results in the lexer outputting an INDENT token, and a decrease in indentation results in the lexer outputting a DEDENT token. These tokens correspond to the opening brace { and closing brace } in languages that use braces for blocks, and mean that the phrase grammar does not depend on whether braces or indentation are used. This requires that the lexer hold state, namely the current indent level, so that it can detect changes in indentation; as a consequence the lexical grammar is not context-free: INDENT and DEDENT depend on the contextual information of the prior indent level.
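As a rough illustration of the INDENT/DEDENT mechanism described above, the following is a minimal sketch of an indentation-tracking tokenizer. It is not Python's actual tokenizer: the function name, the token representation, and the assumption of space-only indentation with no continuation lines, comments, or strings are simplifications made for illustration.

def offside_tokens(lines):
    # Minimal off-side-rule sketch: emit (kind, text) pairs, inserting INDENT
    # and DEDENT whenever the leading-space count changes between lines.
    indents = [0]                       # stack of currently open indentation levels
    for line in lines:
        if not line.strip():            # blank lines do not open or close blocks
            continue
        width = len(line) - len(line.lstrip(' '))
        if width > indents[-1]:         # deeper than before: open a new block
            indents.append(width)
            yield ('INDENT', '')
        while width < indents[-1]:      # shallower: close blocks until levels match
            indents.pop()
            yield ('DEDENT', '')
        yield ('LINE', line.strip())
    while len(indents) > 1:             # close any blocks still open at end of input
        indents.pop()
        yield ('DEDENT', '')

source = [
    "def is_even(a):",
    "    if a % 2 == 0:",
    "        return True",
    "    return False",
]
for token in offside_tokens(source):
    print(token)

Run on the four-line example, this emits an INDENT before the function body, another INDENT before the if body, a DEDENT when the indentation falls back before return False, and a final DEDENT at the end of input, mirroring how opening and closing braces would delimit the same blocks.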
Alternatives The primary alternative to delimiting blocks by indenting, popularized by the broad use and influence of the language C, is to ignore whitespace characters and mark blocks explicitly with curly brackets (i.e., { and }) or some other delimiter. While this allows for more formatting freedom – a developer might choose not to indent small pieces of code like the break and continue statements – sloppily indented code might lead the reader astray, such as in the goto fail bug. Lisp and other S-expression-based languages do not differentiate statements from expressions, and parentheses are enough to control the scoping of all statements within the language. As in curly bracket languages, whitespace is mostly ignored by the reader (i.e., the read function); whitespace is used to separate tokens. The explicit structure of Lisp code allows automatic indenting, forming a visual cue for human readers. Another alternative is for each block to begin and end with explicit keywords. For example, in ALGOL 60 and its descendant Pascal, blocks start with the keyword begin and end with the keyword end. In some languages (but not Pascal), this means that newlines are important (unlike in curly brace languages), but the indentation is not. In BASIC and Fortran, blocks begin with the block name (such as IF) and end with the block name prepended with END (e.g., END IF). In Fortran, each block can also have its own unique block name, which adds another level of explicitness to lengthy code. ALGOL 68 and the Bourne shell (sh and bash) are similar, but the end of the block is usually given by the name of the block written backward (e.g., case starts a switch statement and it spans until the matching esac; similarly, conditionals if...then...[elif...[else...]]fi or for loops for...do...od in ALGOL 68, or for...do...done in bash). An interesting variant of this occurs in Modula-2, a Pascal-like language which does away with the difference between single-line and multi-line blocks. This allows the block opener ({ or BEGIN) to be skipped for all but the function-level block, requiring only a block-terminating token (} or END). It also avoids the dangling else problem. The custom is for the end token to be placed on the same indent level as the rest of the block, giving a block structure that is very readable. One advantage of the Fortran approach is that it improves readability of long, nested, or otherwise complex code. A group of outdents or closing brackets alone provides no contextual cues as to which blocks are being closed, necessitating backtracking and closer scrutiny while debugging. Further, languages that allow a suffix on END-like keywords improve such cues, such as continue for x versus continue, an end-loop marker specifying the index variable (NEXT I versus NEXT), and uniquely named loops (CYCLE X1 versus CYCLE). However, modern source code editors often provide visual indicators, such as syntax highlighting, and features such as code folding to assist with these drawbacks. Productivity In the language Scala, early versions allowed curly braces only. Scala 3 added an option to use indenting to structure blocks. Designer Martin Odersky said that this was the single most important way Scala 3 improved his own productivity, that it makes programs over 10% shorter and keeps programmers "in the flow", and advises its use. Notable programming languages Notable programming languages with the off-side rule:
ABC
Agda
Boo
BuddyScript
Cobra
CoffeeScript
Converge
Curry
Elm
F#, in early versions, when #light is specified; in later versions, when #light "off" is not
GDScript (Godot engine)
Haskell, only for where, let, do, or case ... of clauses when braces are omitted
Inform 7
ISWIM, the abstract language that introduced the rule
LiveScript
Lobster
Miranda
MoonScript
Nemerle, optional mode
Nim
occam
PROMAL
Python
Scala, optional mode
Scheme, when using one of several Scheme Requests for Implementations, the latest of which is SRFI 119
Spin
Woma
XL
Other file formats Notable non-programming-language text file formats with significant indentation:
GCode, RepRapFirmware dialect
Haml
Make, where a tab-indented line signifies a command
Pug (formerly Jade), see Comparison of web template engines
reStructuredText
Sass
Stylus
YAML
See also Prettyprint References Programming language topics Articles with example Python (programming language) code
Off-side rule
[ "Engineering" ]
1,461
[ "Software engineering", "Programming language topics" ]
605,697
https://en.wikipedia.org/wiki/Photohydrogen
In photochemistry, photohydrogen is hydrogen produced with the help of artificial or natural light. This is how the leaf of a tree splits water molecules into protons (hydrogen ions), electrons (to make carbohydrates) and oxygen (released into the air as a waste product). Photohydrogen may also be produced by the photodissociation of water by ultraviolet light. Photohydrogen is sometimes discussed in the context of obtaining renewable energy from sunlight by using microscopic organisms such as bacteria or algae. These organisms create hydrogen with the help of hydrogenase enzymes, which convert protons derived from the water-splitting reaction into hydrogen gas that can then be collected and used as a biofuel. See also Solar hydrogen panel Photofermentation Biological hydrogen production (Algae) Photoelectrochemical cell Photosynthesis Hydrogen cycle Hydrogen economy References Biofuels technology Hydrogen production Photochemistry
Photohydrogen
[ "Chemistry", "Biology" ]
187
[ "Biofuels technology", "nan" ]
605,727
https://en.wikipedia.org/wiki/Kronecker%27s%20theorem
In mathematics, Kronecker's theorem is a theorem about diophantine approximation, introduced by Leopold Kronecker in 1884. Kronecker's approximation theorem was first proved by L. Kronecker at the end of the 19th century; since the latter half of the 20th century it has come to be understood in terms of the n-torus and the Mahler measure. In terms of physical systems, it has the consequence that planets in circular orbits moving uniformly around a star will, over time, assume all alignments, unless there is an exact dependency between their orbital periods. Statement Kronecker's theorem is a result in diophantine approximations applying to several real numbers x_i, for 1 ≤ i ≤ n, that generalises Dirichlet's approximation theorem to multiple variables. The classical Kronecker approximation theorem is formulated as follows. Given real n-tuples α = (α_1, ..., α_n) and β = (β_1, ..., β_n), the condition that for every ε > 0 there exist integers q and p_1, ..., p_n with |qα_i − p_i − β_i| < ε for all i, holds if and only if for any integers r_1, ..., r_n with r_1α_1 + ... + r_nα_n an integer, the number r_1β_1 + ... + r_nβ_n is also an integer. In plainer language, the first condition states that the tuple (β_1, ..., β_n) can be approximated arbitrarily well by integer multiples of (α_1, ..., α_n) combined with integer vectors. For the case of n = 1, Kronecker's approximation theorem can be stated as follows. For any real numbers α and β with α irrational and any ε > 0, there exist integers p and q with q > 0 such that |qα − p − β| < ε. Relation to tori In the case of N numbers, taken as a single N-tuple and point P of the torus T = R^N/Z^N, the closure of the subgroup <P> generated by P will be finite, or some torus T′ contained in T. The original Kronecker's theorem (Leopold Kronecker, 1884) stated that the necessary condition for T′ = T, which is that the numbers x_i together with 1 should be linearly independent over the rational numbers, is also sufficient. Here it is easy to see that if some linear combination of the x_i and 1 with non-zero rational number coefficients is zero, then the coefficients may be taken as integers, and a character χ of the group T other than the trivial character takes the value 1 on P. By Pontryagin duality we have T′ contained in the kernel of χ, and therefore not equal to T. In fact a thorough use of Pontryagin duality here shows that the whole Kronecker theorem describes the closure of <P> as the intersection of the kernels of the χ with χ(P) = 1. This gives an (antitone) Galois connection between monogenic closed subgroups of T (those with a single generator, in the topological sense), and sets of characters with kernel containing a given point. Not all closed subgroups occur as monogenic; for example a subgroup that has a torus of dimension ≥ 1 as connected component of the identity element, and that is not connected, cannot be such a subgroup. The theorem leaves open the question of how well (uniformly) the multiples mP of P fill up the closure. In the one-dimensional case, the distribution is uniform by the equidistribution theorem. See also Weyl's criterion Dirichlet's approximation theorem References Diophantine approximation Topological groups
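As a rough numerical illustration of the one-dimensional statement above (an illustrative sketch only; the particular values of α, β and ε are arbitrary choices, not taken from the article), the following Python snippet searches for integers q and p with |qα − p − β| < ε for the irrational α = √2:

import math

# Illustrate the n = 1 case of Kronecker's approximation theorem: for irrational
# alpha, the multiples q*alpha come arbitrarily close to any target beta modulo 1.
alpha = math.sqrt(2)      # an irrational number
beta = 0.123456           # arbitrary target
epsilon = 1e-4            # arbitrary tolerance

for q in range(1, 1_000_000):
    p = round(q * alpha - beta)              # nearest integer to q*alpha - beta
    if abs(q * alpha - p - beta) < epsilon:  # |q*alpha - p - beta| < epsilon
        print(f"q = {q}, p = {p}, error = {abs(q * alpha - p - beta):.2e}")
        break

For irrational α the theorem guarantees a solution for every ε > 0; the search bound of one million here is simply a generous cutoff for this particular choice of values, since the theorem itself says nothing about how large q must be.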
Kronecker's theorem
[ "Mathematics" ]
652
[ "Space (mathematics)", "Topological spaces", "Mathematical relations", "Topological groups", "Diophantine approximation", "Approximations", "Number theory" ]
605,802
https://en.wikipedia.org/wiki/Alkaline%20mucus
Alkaline mucus is a thick fluid produced by animals which confers tissue protection in an acidic environment, such as in the stomach. Properties Mucus that serves a protective function against acidic environments generally has a high viscosity, though the thickness and viscosity of the mucus layer can vary due to several factors. For example, alkaline mucus in the stomach increases in thickness when the stomach is distended. The pH level of the mucus also plays a role in its viscosity, as higher pH levels tend to alter the thickness of the mucus, making it less viscous. Because of this, invading agents such as Helicobacter pylori, a bacterium that causes stomach ulcers, can alter the pH of the mucus to make the mucus pliable enough to move through. Exposure to atmospheric air also tends to increase the pH level of alkaline mucus. In humans In humans, alkaline mucus is present in several organs and provides protection by way of its alkalinity and high viscosity. Alkaline mucus exists in the human eye, stomach, saliva, and cervix. In the stomach, alkaline mucus is secreted by gastric glands in the gastric mucosa of the stomach wall. Secretion of alkaline mucus is necessary to protect the mucous membrane of the stomach from acids released during digestion. Ulcers can develop as a result of damage caused to the gastric mucosal barrier. Duodenal ulcers have been shown to develop in sites that are in direct contact with pepsin and acids. To prevent damage and protect the mucus epithelium, alkaline mucus secretions increase in the digestive system when food is being eaten. In the cervix, alkaline mucus has been shown to possess bactericidal properties to protect the cervix, uterus, peritoneal cavity, and vagina from microbes. References Digestive system
Alkaline mucus
[ "Biology" ]
423
[ "Digestive system", "Organ systems" ]
605,869
https://en.wikipedia.org/wiki/Spell%20checker
In software, a spell checker (or spelling checker or spell check) is a software feature that checks for misspellings in a text. Spell-checking features are often embedded in software or services, such as a word processor, email client, electronic dictionary, or search engine. Design A basic spell checker carries out the following processes: It scans the text and extracts the words contained in it. It then compares each word with a known list of correctly spelled words (i.e. a dictionary). This might contain just a list of words, or it might also contain additional information, such as hyphenation points or lexical and grammatical attributes. An additional step is a language-dependent algorithm for handling morphology. Even for a lightly inflected language like English, the spell checker will need to consider different forms of the same word, such as plurals, verbal forms, contractions, and possessives. For many other languages, such as those featuring agglutination and more complex declension and conjugation, this part of the process is more complicated. It is unclear whether morphological analysis—allowing for many forms of a word depending on its grammatical role—provides a significant benefit for English, though its benefits for highly synthetic languages such as German, Hungarian, or Turkish are clear. As an adjunct to these components, the program's user interface allows users to approve or reject replacements and modify the program's operation. Spell checkers can use approximate string matching algorithms such as Levenshtein distance to find correct spellings of misspelled words. An alternative type of spell checker uses solely statistical information, such as n-grams, to recognize errors instead of correctly-spelled words. This approach usually requires a lot of effort to obtain sufficient statistical information. Key advantages include needing less runtime storage and the ability to correct errors in words that are not included in a dictionary. In some cases, spell checkers use a fixed list of misspellings and suggestions for those misspellings; this less flexible approach is often used in paper-based correction methods, such as the see also entries of encyclopedias. Clustering algorithms have also been used for spell checking combined with phonetic information.
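The basic design described above can be illustrated with a minimal sketch. This is not any particular historical or production spell checker; the tiny word list, the candidate-generation strategy (single-edit variants, in the spirit of edit distance 1), and all names are illustrative assumptions.

import re

# Toy dictionary; a real spell checker would load tens of thousands of words.
DICTIONARY = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"}
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def edits1(word):
    # All strings within one edit (deletion, transposition, substitution, insertion).
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in ALPHABET]
    inserts = [a + c + b for a, b in splits for c in ALPHABET]
    return set(deletes + transposes + replaces + inserts)

def check(text):
    # Extract words, flag those not in the dictionary, and suggest close matches.
    for word in re.findall(r"[a-z']+", text.lower()):
        if word not in DICTIONARY:
            suggestions = sorted(edits1(word) & DICTIONARY)
            print(f"{word!r} not found; suggestions: {suggestions}")

check("The quikc brown fox jumps ovr the lasy dog")

The single-edit candidate generation here follows the approach popularized in Peter Norvig's essay listed under External links; a practical checker would also rank candidates by word frequency and handle morphology, case, and punctuation.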
History Pre-PC In 1961, Les Earnest, who headed research on this budding technology, saw it necessary to include the first spell checker, which accessed a list of 10,000 acceptable words. Ralph Gorin, a graduate student under Earnest at the time, created the first true spelling checker program written as an applications program (rather than research) for general English text: SPELL for the DEC PDP-10 at Stanford University's Artificial Intelligence Laboratory, in February 1971. Gorin wrote SPELL in assembly language, for faster action; he made the first spelling corrector by searching the word list for plausible correct spellings that differ by a single letter or adjacent letter transpositions and presenting them to the user. Gorin made SPELL publicly accessible, as was done with most SAIL (Stanford Artificial Intelligence Laboratory) programs, and it soon spread around the world via the new ARPAnet, about ten years before personal computers came into general use. SPELL, with its algorithms and data structures, inspired the Unix ispell program. The first spell checkers were widely available on mainframe computers in the late 1970s. A group of six linguists from Georgetown University developed the first spell-check system for the IBM Corporation. Henry Kučera invented one for the VAX machines of Digital Equipment Corp in 1981. Unix The International Ispell program commonly used in Unix is based on R. E. Gorin's SPELL. It was converted to C by Pace Willisson at MIT. The GNU project has its spell checker GNU Aspell. Aspell's main improvement is that it can more accurately suggest correct alternatives for misspelled English words. Due to the inability of traditional spell checkers to check words in complex inflected languages, the Hungarian developer László Németh developed Hunspell, a spell checker that supports agglutinative languages and complex compound words. Hunspell also uses Unicode in its dictionaries. Hunspell replaced the previous MySpell in OpenOffice.org in version 2.0.2. Enchant is another general spell checker, derived from AbiWord. Its goal is to combine programs supporting different languages such as Aspell, Hunspell, Nuspell, Hspell (Hebrew), Voikko (Finnish), Zemberek (Turkish) and AppleSpell under one interface. PCs The first spell checkers for personal computers appeared in 1980, such as "WordCheck" for Commodore systems, which was released in late 1980 in time for advertisements to go to print in January 1981. Developers such as Maria Mariani and Random House rushed OEM packages or end-user products into the rapidly expanding software market. On the pre-Windows PCs, these spell checkers were standalone programs, many of which could be run in terminate-and-stay-resident mode from within word-processing packages on PCs with sufficient memory. However, the market for standalone packages was short-lived, as by the mid-1980s developers of popular word-processing packages like WordStar and WordPerfect had incorporated spell checkers in their packages, mostly licensed from the above companies, who quickly expanded support from just English to many European and eventually even Asian languages. However, this required increasing sophistication in the morphology routines of the software, particularly with regard to heavily-agglutinative languages like Hungarian and Finnish. Although the size of the word-processing market in a country like Iceland might not have justified the investment of implementing a spell checker, companies like WordPerfect nonetheless strove to localize their software for as many national markets as possible as part of their global marketing strategy. When Apple developed "a system-wide spelling checker" for Mac OS X so that "the operating system took over spelling fixes," it was a first: one "didn't have to maintain a separate spelling checker for each" program. Mac OS X's spellcheck coverage includes virtually all bundled and third party applications. Visual Tools VT Speller, introduced in 1994, was "designed for developers of applications that support Windows." It came with a dictionary but also had the ability to build and incorporate secondary dictionaries. Browsers Web browsers such as Firefox and Google Chrome offer spell checking support, using Hunspell. Prior to using Hunspell, Firefox and Chrome used MySpell and GNU Aspell, respectively. Specialties Some spell checkers have separate support for medical dictionaries to help prevent medical errors. Functionality The first spell checkers were "verifiers" instead of "correctors." They offered no suggestions for incorrectly spelled words. This was helpful for typos but it was not so helpful for logical or phonetic errors. 
The challenge the developers faced was the difficulty in offering useful suggestions for misspelled words. This requires reducing words to a skeletal form and applying pattern-matching algorithms. It might seem logical that where spell-checking dictionaries are concerned, "the bigger, the better," so that correct words are not marked as incorrect. In practice, however, an optimal size for English appears to be around 90,000 entries. If there are more than this, incorrectly spelled words may be skipped because they are mistaken for others. For example, a linguist might determine on the basis of corpus linguistics that the word baht is more frequently a misspelling of bath or bat than a reference to the Thai currency. Hence, it would typically be more useful if a few people who write about Thai currency were slightly inconvenienced than if the spelling errors of the many more people who discuss baths were overlooked. The first MS-DOS spell checkers were mostly used in proofing mode from within word processing packages. After preparing a document, a user scanned the text looking for misspellings. Later, however, batch processing was offered in such packages as Oracle's short-lived CoAuthor and allowed a user to view the results after a document was processed and correct only the words that were known to be wrong. When memory and processing power became abundant, spell checking was performed in the background in an interactive way, such as has been the case with the Spellbound program produced by Sector Software, released in 1987, and Microsoft Word since Word 95. Spell checkers became increasingly sophisticated; some are now capable of recognizing grammatical errors. However, even at their best, they rarely catch all the errors in a text (such as homophone errors) and will flag neologisms and foreign words as misspellings. Nonetheless, spell checkers can be considered a type of foreign language writing aid that non-native language learners can rely on to detect and correct their misspellings in the target language. Spell-checking for languages other than English English is unusual in that most words used in formal writing have a single spelling that can be found in a typical dictionary, with the exception of some jargon and modified words. In many languages, words are often concatenated into new combinations of words. In German, compound nouns are frequently coined from other existing nouns. Some scripts do not clearly separate one word from another, requiring word-splitting algorithms. Each of these presents unique challenges to non-English language spell checkers. Context-sensitive spell checkers There has been research on developing algorithms that are capable of recognizing a misspelled word, even if the word itself is in the vocabulary, based on the context of the surrounding words. Not only does this allow misused words such as the homophones in the example below to be caught, but it mitigates the detrimental effect of enlarging dictionaries, allowing more words to be recognized. For example, baht in the same paragraph as Thai or Thailand would not be recognized as a misspelling of bath. The most common examples of errors caught by such a system are homophone errors, such as the misused words in the following sentence: Their coming too sea if its reel. The most successful algorithm to date is Andrew Golding and Dan Roth's "Winnow-based spelling correction algorithm", published in 1999, which is able to recognize about 96% of context-sensitive spelling errors, in addition to ordinary non-word spelling errors. 
Context-sensitive spell checkers appeared in the now-defunct applications Microsoft Office 2007 and Google Wave. Grammar checkers attempt to fix problems with grammar beyond spelling errors, including incorrect choice of words. See also Cupertino effect Grammar checker Record linkage problem Spelling suggestion Words (Unix) Autocorrection LanguageTool References External links Norvig.com, "How to Write a Spelling Corrector", by Peter Norvig BBK.ac.uk, "Spellchecking by computer", by Roger Mitton CBSNews.com, Spell-Check Crutch Curtails Correctness, by Lloyd de Vries History and text of "Candidate for a Pullet Surprise" by Mark Eckman and Jerrold H. Zar Text editor features Checker Natural language processing
Spell checker
[ "Technology" ]
2,291
[ "Natural language processing", "Natural language and computing" ]
605,870
https://en.wikipedia.org/wiki/Inosculation
Inosculation is a natural phenomenon in which trunks, branches or roots of two trees grow together in a manner biologically similar to the artificial process of grafting. The term is derived from the Latin roots in + ōsculārī, "to kiss into/inward/against" or etymologically and more illustratively "to make a small mouth inward/into/against"; trees having undergone the process are referred to in forestry as gemels, from the Latin word meaning "a pair". It is most common for branches of two trees of the same species to grow together, though inosculation may be noted across related species. The branches first grow separately in proximity to each other until they touch. At this point, the bark on the touching surfaces is gradually abraded away as the trees move in the wind. Once the cambium of two trees touches, they sometimes self-graft and grow together as they expand in diameter. Inosculation customarily results when tree limbs are braided or pleached. The term inosculation is also used in the context of plastic surgery, as one of the three mechanisms by which skin grafts take at the host site. Blood vessels from the recipient site are believed to connect with those of the graft in order to restore vascularity. Species Inosculation is most common among the following taxa due to their thin bark: Apple Almond Ash Beech Crepe myrtle Chestnut Dogwood Elm Ficus Grape Hazelnut Hornbeam Laburnum Linden Maple Norway spruce Olive Peach Pear Privet River red gum Sycamore Willow Wisteria Conjoined trees Two trees may grow to their mature size adjacent to each other and seemingly grow together or conjoin, demonstrating inosculation. These may be of the same species or even of different genera or families, depending on whether the two trees have become truly grafted together (once the cambium of two trees touches, they self-graft and grow together). Usually grafting is only between two trees of the same or closely related species or genera, but the appearance of grafting can be given by two trees that are physically touching, rubbing, intertwined, or entangled. Both conifers and deciduous trees can become conjoined. Beech trees in particular are frequent conjoiners, as is blackthorn (Prunus spinosa). Such trees are often colloquially referred to as "husband and wife" trees, or "marriage trees". The straightforward application of the term comes from the obvious unification of two separate individual trees, although a more humorous use of the term relates to the sexually suggestive appearance of some natural examples. There may be a degree of religious intent, as some cults are organized around beliefs that trees contain a hidden or sacred power to cure or to enhance fertility, or that they contain the souls of ancestors or of the unborn. Examples On his Tour of Scotland, published in 1800, T. Garnett notes a tree near Inveraray that the locals called the Marriage tree, formed from a lime tree with two trunks that have been joined by a branch in the manner of a person putting an arm around another (see illustration) as would a married couple. On the way to the Heavenly Lake near Urumqi in China are a pair of trees that local people have called the Husband and Wife trees because they are connected by a living branch. The Tatajia Husband and Wife trees are in Taiwan, and in Yakushima, Kagoshima-ken, Japan, there is a pair of Husband and Wife trees formed from conjoined cedars. In Lambeg, Co. 
Down, slightly north of Wolfenden's Bridge, stand two beech trees (see 'Gallery') at the entrance to Chrome Hill, on the Lambeg to Ballyskeagh road. In the late 18th century, John Wesley was staying at Chrome Hill and decided to weave together two young beech trees to act as a symbol of unity between the Methodist Church and the Church of Ireland. At Doonholm near Ayr an ancient sycamore maple (Acer pseudoplatanus) was famous for the multiple fusion of its boughs that gave it a unique appearance and greatly strengthened it. Gallery See also Axel Erlandson Tree shaping References Notes Sources External links Video footage of Gemel trees in Ayrshire Tree Marriages in India Horticulture Plant physiology
Inosculation
[ "Biology" ]
900
[ "Plant physiology", "Plants" ]
606,000
https://en.wikipedia.org/wiki/Papillomaviridae
Papillomaviridae is a family of non-enveloped DNA viruses whose members are known as papillomaviruses. Several hundred species of papillomaviruses, traditionally referred to as "types", have been identified; they infect all carefully inspected mammals, as well as other vertebrates such as birds, snakes, turtles and fish. Infection by most papillomavirus types is, depending on the type, either asymptomatic (e.g. most Beta-PVs) or causes small benign tumors, known as papillomas or warts (e.g. human papillomavirus 1, HPV6 or HPV11). Papillomas caused by some types, however, such as human papillomaviruses 16 and 18, carry a risk of becoming cancerous. Papillomaviruses are usually considered highly host- and tissue-tropic, and are thought to rarely be transmitted between species. Papillomaviruses replicate exclusively in the basal layer of the body surface tissues. All known papillomavirus types infect a particular body surface, typically the skin or mucosal epithelium of the genitals, anus, mouth, or airways. For example, human papillomavirus (HPV) type 1 tends to infect the soles of the feet, and HPV type 2 the palms of the hands, where they may cause warts. Additionally, there are descriptions of the presence of papillomavirus DNA in the blood and in the peripheral blood mononuclear cells. Papillomaviruses were first identified in the early 20th century, when it was shown that skin warts, or papillomas, could be transmitted between individuals by a filterable infectious agent. In 1935 Francis Peyton Rous, who had previously demonstrated the existence of a cancer-causing sarcoma virus in chickens, went on to show that a papillomavirus could cause skin cancer in infected rabbits. This was the first demonstration that a virus could cause cancer in mammals. Taxonomy of papillomaviruses There are over 100 species of papillomavirus recognised, though the ICTV officially recognizes a smaller number, categorized into 53 genera, as of 2019. All papillomaviruses (PVs) have similar genomic organizations, and any pair of PVs contains at least five homologous genes, although the nucleotide sequence may diverge by more than 50%. Phylogenetic algorithms that permit the comparison of homologies led to phylogenetic trees that have a similar topology, independent of the gene analyzed. Phylogenetic studies strongly suggest that PVs normally evolve together with their mammalian and bird host species, but adaptive radiations, occasional zoonotic events and recombinations may also impact their diversification. Their basic genomic organization appears to have been maintained for a period exceeding 100 million years, and these sequence comparisons have laid the foundation for a PV taxonomy, which is now officially recognized by the International Committee on Taxonomy of Viruses. All PVs form the family Papillomaviridae, which is distinct from the Polyomaviridae, thus eliminating the term Papovaviridae. Major branches of the phylogenetic tree of PVs are considered genera, which are identified by Greek letters. Minor branches are considered species and unite PV types that are genomically distinct without exhibiting known biological differences. This new taxonomic system does not affect the traditional identification and characterization of PV "types" and their independent isolates with minor genomic differences, referred to as "subtypes" and "variants", all of which are taxa below the level of "species". Additionally, phylogenetic groupings at higher taxonomic levels have been proposed. 
This classification may need revision in the light of the existence of papilloma–polyoma virus recombinants. Additional species have also been described. Sparus aurata papillomavirus 1 has been isolated from fish. Human papillomaviruses Over 170 human papillomavirus types have been completely sequenced. They have been divided into 5 genera: Alphapapillomavirus, Betapapillomavirus, Gammapapillomavirus, Mupapillomavirus and Nupapillomavirus. At least 200 additional viruses have been identified that await sequencing and classification. Animal papillomaviruses Individual papillomavirus types tend to be highly adapted to replication in a single animal species. In one study, researchers swabbed the forehead skin of a variety of zoo animals and used PCR to amplify any papillomavirus DNA that might be present. Although a wide variety of papillomavirus sequences were identified in the study, the authors found little evidence for inter-species transmission. One zookeeper was found to be transiently positive for a chimpanzee-specific papillomavirus sequence. However, the authors note that the chimpanzee-specific papillomavirus sequence could have been the result of surface contamination of the zookeeper's skin, as opposed to productive infection. Cottontail rabbit papillomavirus (CRPV) can cause protuberant warts in its native host, the North American rabbit genus Sylvilagus. These horn-like warts may be the original basis for the urban legends of the jackalope, the American antlered rabbit, and the European Wolpertinger. European domestic rabbits (genus Oryctolagus) can be transiently infected with CRPV in a laboratory setting. However, since European domestic rabbits do not produce infectious progeny virus, they are considered an incidental or "dead-end" host for CRPV. Inter-species transmission has also been documented for bovine papillomavirus (BPV) type 1. In its natural host (cattle), BPV-1 induces large fibrous skin warts. BPV-1 infection of horses, which are an incidental host for the virus, can lead to the development of benign tumors known as sarcoids. The agricultural significance of BPV-1 spurred a successful effort to develop a vaccine against the virus. A few reports have identified papillomaviruses in smaller rodents, such as Syrian hamsters, the African multimammate rat and the Eurasian harvest mouse. However, there are no papillomaviruses known to be capable of infecting laboratory mice. The lack of a tractable mouse model for papillomavirus infection has been a major limitation for laboratory investigation of papillomaviruses. Four papillomaviruses are known to infect birds: Fringilla coelebs papillomavirus 1, Francolinus leucoscepus papillomavirus 1, Psittacus erithacus papillomavirus 1 and Pygoscelis adeliae papillomavirus 1. All these species have a gene (E9) of unknown function, suggesting a common origin. Evolution The evolution of papillomaviruses is thought to be slow compared to many other virus types, but there are no experimental measurements currently available. This is probably because the papillomavirus genome is composed of genetically stable double-stranded DNA that is replicated with high fidelity by the host cell's DNA replication machinery. It is believed that papillomaviruses generally co-evolve with a particular species of host animal over many years, although there is strong evidence against the hypothesis of coevolution. 
In a particularly speedy example, HPV-16 has evolved slightly as human populations have expanded across the globe and now varies in different geographic regions in a way that probably reflects the history of human migration. Cutaneotropic HPV types are occasionally exchanged between family members during the entire lifetime, but other donors should also be considered in viral transmission. Other HPV types, such as HPV-13, vary relatively little in different human populations. In fact, the sequence of HPV-13 closely resembles a papillomavirus of bonobos (also known as pygmy chimpanzees). It is not clear whether this similarity is due to recent transmission between species or because HPV-13 has simply changed very little in the six or so million years since humans and bonobos diverged. The most recent common ancestor of this group of viruses has been estimated to have existed . There are five main genera infecting humans (Alpha, Beta, Gamma, Mu and Nu). The most recent common ancestor of these genera evolved -. The most recent ancestor of the gamma genus was estimated to have evolved between and . Structure Papillomaviruses are non-enveloped, meaning that the outer shell or capsid of the virus is not covered by a lipid membrane. A single viral protein, known as L1, is necessary and sufficient for formation of a 55–60 nanometer capsid composed of 72 star-shaped capsomers. Like most non-enveloped viruses, the capsid is geometrically regular and presents icosahedral symmetry. Self-assembled virus-like particles composed of L1 are the basis of a successful group of prophylactic HPV vaccines designed to elicit virus-neutralizing antibodies that protect against initial HPV infection. Being non-enveloped, papillomaviruses are stable in the environment. The papillomavirus genome is a double-stranded circular DNA molecule ~8,000 base pairs in length. It is packaged within the L1 shell along with cellular histone proteins, which serve to wrap and condense DNA. The papillomavirus capsid also contains a viral protein known as L2, which is less abundant. Although it is not clear how L2 is arranged within the virion, it is known to perform several important functions, including facilitating the packaging of the viral genome into nascent virions as well as the infectious entry of the virus into new host cells. L2 is of interest as a possible target for more broadly protective HPV vaccines. The viral capsid consists of 72 capsomeres, of which 12 are five-coordinated and 60 are six-coordinated, arranged on a T = 7d icosahedral surface lattice. Tissue specificity Papillomaviruses replicate exclusively in keratinocytes. Keratinocytes form the outermost layers of the skin, as well as some mucosal surfaces, such as the inside of the cheek or the walls of the vagina. These surface tissues, which are known as stratified squamous epithelia, are composed of stacked layers of flattened cells. The cell layers are formed through a process known as cellular differentiation, in which keratinocytes gradually become specialized, eventually forming a hard, crosslinked surface that prevents moisture loss and acts as a barrier against pathogens. Less-differentiated keratinocyte stem cells, which replenish the surface layers, are thought to be the initial target of productive papillomavirus infections. Subsequent steps in the viral life cycle are strictly dependent on the process of keratinocyte differentiation. As a result, papillomaviruses can only replicate in body surface tissues. 
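A small worked check of the capsomere count given in the Structure section above, using the standard Caspar–Klug relation (which the article itself does not state) that an icosahedral lattice with triangulation number T has 10T + 2 capsomere positions:

\[
10T + 2 = 10 \times 7 + 2 = 72 \qquad (T = 7),
\]
\[
12\ \text{five-coordinated} + 60\ \text{six-coordinated} = 72\ \text{capsomeres},
\]
\[
72 \times 5 = 360\ \text{copies of L1 per capsid, since each capsomere is an L1 pentamer.}
\]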
Life cycle Infectious entry Papillomaviruses gain access to keratinocyte stem cells through small wounds, known as microtraumas, in the skin or mucosal surface. Interactions between L1 and sulfated sugars on the cell surface promote initial attachment of the virus. The virus is then able to enter the cell from the cell surface via interaction with a specific receptor, likely the alpha-6 beta-4 integrin, and is transported to membrane-enclosed vesicles called endosomes. The capsid protein L2 disrupts the membrane of the endosome through a cationic cell-penetrating peptide, allowing the viral genome to escape and traffic, along with L2, to the cell nucleus. Viral persistence and latency After successful infection of a keratinocyte, the virus expresses E1 and E2 proteins, which are responsible for replicating and maintaining the viral DNA as a circular episome. The viral oncogenes E6 and E7 promote cell growth by inactivating the tumor suppressor proteins p53 and pRb. Keratinocyte stem cells in the epithelial basement layer can maintain papillomavirus genomes for decades. Production of progeny virus The current understanding is that viral DNA replication likely occurs in the G2 phase of the cell cycle and relies on recombination-dependent replication supported by DNA damage response mechanisms (activated by the E7 protein) to produce progeny viral genomes. Papillomavirus genomes are sometimes integrated into the host genome, most noticeably with oncogenic HPVs, but integration is not a normal part of the virus life cycle and is a dead end that eliminates the potential for viral progeny production. The expression of the viral late genes, L1 and L2, is exclusively restricted to differentiating keratinocytes in the outermost layers of the skin or mucosal surface. The increased expression of L1 and L2 is typically correlated with a dramatic increase in the number of copies of the viral genome. Since the outer layers of stratified squamous epithelia are subject to relatively limited surveillance by cells of the immune system, it is thought that this restriction of viral late gene expression represents a form of immune evasion. New infectious progeny viruses are assembled in the cell nucleus. Papillomaviruses have evolved a mechanism for releasing virions into the environment. Other kinds of non-enveloped animal viruses utilize an active lytic process to kill the host cell, allowing release of progeny virus particles. Often this lytic process is associated with inflammation, which might trigger immune attack against the virus. Papillomaviruses exploit desquamation as a stealthy, non-inflammatory release mechanism. Association with cancer Although some papillomavirus types can cause cancer in the epithelial tissues they inhabit, cancer is not a typical outcome of infection. The development of papillomavirus-induced cancers typically occurs over the course of many years. Papillomaviruses have been associated with the development of cervical cancer, penile cancer and oral cancers. An association with vulval cancer and urothelial carcinoma with squamous differentiation in patients with neurogenic bladder has also been noted. The genomes of cancer-causing papillomaviruses encode two small proteins, E6 and E7, that act like oncogenes: they stimulate unnatural growth of cells, block their natural defenses, and act on many signaling proteins that control proliferation and apoptosis. 
Laboratory study The fact that the papillomavirus life cycle strictly requires keratinocyte differentiation has posed a substantial barrier to the study of papillomaviruses in the laboratory, since it has precluded the use of conventional cell lines to grow the viruses. Because infectious BPV-1 virions can be extracted from the large warts the virus induces on cattle, it has been a workhorse model papillomavirus type for many years. CRPV, rabbit oral papillomavirus (ROPV) and canine oral papillomavirus (COPV) have also been used extensively for laboratory studies. Once researchers discovered that these viruses can cause cancer, they worked to develop vaccines against them. Currently, the most effective approach is to mimic the virus with particles composed of the L1 protein but lacking the viral DNA. The immune system builds defenses against these particles, and since they cannot cause disease, they can be used as a vaccine. PDB entry 6bt3 shows how antibodies attack the surface of the virus to disable it. Some sexually transmitted HPV types have been propagated using a mouse "xenograft" system, in which HPV-infected human cells are implanted into immunodeficient mice. More recently, some groups have succeeded in isolating infectious HPV-16 from human cervical lesions. However, isolation of infectious virions using this technique is arduous and the yield of infectious virus is very low. The differentiation of keratinocytes can be mimicked in vitro by exposing cultured keratinocytes to an air/liquid interface. The adaptation of such "raft culture" systems to the study of papillomaviruses was a significant breakthrough for in vitro study of the viral life cycle. However, raft culture systems are relatively cumbersome and the yield of infectious HPVs can be low. The development of a yeast-based system that allows stable episomal HPV replication provides a convenient, rapid and inexpensive means to study several aspects of the HPV lifecycle (Angeletti 2002). For example, E2-dependent transcription, genome amplification and efficient encapsidation of full-length HPV DNAs can be easily recreated in yeast (Angeletti 2005). Recently, transient high-yield methods for producing HPV pseudoviruses carrying reporter genes have been developed. Although pseudoviruses are not suitable for studying certain aspects of the viral life cycle, initial studies suggest that their structure and initial infectious entry into cells are probably similar in many ways to authentic papillomaviruses. Human papillomavirus binds to heparin molecules on the surface of the cells that it infects. Crystallographic studies of isolated L1 capsomeres have shown that the heparin chains are recognized by lysines lining grooves on the surface of the virus, and studies with antibodies show that they can block this recognition. Genetic organization and gene expression The papillomavirus genome is divided into an early region (E), encoding six open reading frames (ORFs) (E1, E2, E4, E5, E6, and E7) that are expressed immediately after initial infection of a host cell, and a late region (L) encoding a major capsid protein L1 and a minor capsid protein L2. All viral ORFs are encoded on one DNA strand. This represents a dramatic difference between papillomaviruses and polyomaviruses, since the latter virus type expresses its early and late genes by bi-directional transcription of both DNA strands. 
This difference was a major factor in establishment of the consensus that papillomaviruses and polyomaviruses probably never shared a common ancestor, despite the striking similarities in the structures of their virions. After the host cell is infected, the HPV16 early promoter is activated and a polycistronic primary RNA containing all six early ORFs is transcribed. This polycistronic RNA contains three exons and two introns and undergoes active RNA splicing to generate multiple isoforms of mRNAs. One of the spliced isoform RNAs, E6*I, serves as an E7 mRNA to translate the E7 oncoprotein. In contrast, an intron in the E6 ORF that remains intact without splicing is necessary for translation of the E6 oncoprotein. However, viral early transcription is subject to viral E2 regulation, and high E2 levels repress the transcription. HPV genomes integrate into the host genome through disruption of the E2 ORF, preventing E2 repression of E6 and E7. Thus, viral genome integration into the host genome increases E6 and E7 expression to promote cellular proliferation and the chance of malignancy. A major viral late promoter in the viral early region becomes active only in differentiated cells and its activity can be highly enhanced by viral DNA replication. The late transcript is also a polycistronic RNA which contains two introns and three exons. Alternative RNA splicing of this late transcript is essential for L1 and L2 expression and can be regulated by RNA cis-elements and host splicing factors. Technical discussion of papillomavirus gene functions Genes within the papillomavirus genome are usually identified by similarity with other previously identified genes. However, some spurious open reading frames might have been mistaken for genes simply because of their position in the genome, and might not be true genes. This applies especially to certain E3, E4, E5 and E8 open reading frames. E1 Encodes a protein that binds to the viral origin of replication in the long control region of the viral genome. E1 uses ATP to exert a helicase activity that forces apart the DNA strands, thus preparing the viral genome for replication by cellular DNA replication factors. E2 The E2 protein serves as a master transcriptional regulator for viral promoters located primarily in the long control region. The protein has a transactivation domain linked by a relatively unstructured hinge region to a well-characterized DNA binding domain. E2 facilitates the binding of E1 to the viral origin of replication. E2 also utilizes a cellular protein known as Bromodomain-4 (Brd4) to tether the viral genome to cellular chromosomes. This tethering to the cell's nuclear matrix ensures faithful distribution of viral genomes to each daughter cell after cell division. It is thought that E2 serves as a negative regulator of expression for the oncogenes E6 and E7 in latently HPV-infected basal layer keratinocytes. Genetic changes, such as integration of the viral DNA into a host cell chromosome, that inactivate E2 expression tend to increase the expression of the E6 and E7 oncogenes, resulting in cellular transformation and possibly further genetic destabilization. E3 This small putative gene exists only in a few papillomavirus types. The gene is not known to be expressed as a protein and does not appear to serve any function. E4 Although E4 proteins are expressed at low levels during the early phase of viral infection, expression of E4 increases dramatically during the late phase of infection. In other words, its "E" appellation may be something of a misnomer. 
In the case of HPV-1, E4 can account for up to 30% of the total protein at the surface of a wart. The E4 protein of many papillomavirus types is thought to facilitate virion release into the environment by disrupting intermediate filaments of the keratinocyte cytoskeleton. Viral mutants incapable of expressing E4 do not support high-level replication of the viral DNA, but it is not yet clear how E4 facilitates DNA replication. E4 has also been shown to participate in arresting cells in the G2 phase of the cell cycle. E5 The E5 proteins are small, very hydrophobic proteins that destabilise the function of many membrane proteins in the infected cell. The E5 protein of some animal papillomavirus types (mainly bovine papillomavirus type 1) functions as an oncogene primarily by activating the cell growth-promoting signaling of platelet-derived growth factor receptors. The E5 proteins of human papillomaviruses associated with cancer, however, seem to activate the signal cascade initiated by epidermal growth factor upon ligand binding. HPV16 E5 and HPV2 E5 have also been shown to down-regulate the surface expression of major histocompatibility complex class I proteins, which may prevent the infected cell from being eliminated by killer T cells. E6 E6 is a 151 amino-acid peptide that incorporates a type 1 motif with a consensus sequence –(T/S)-(X)-(V/I)-COOH. It also has two zinc finger motifs. E6 is of particular interest because it appears to have multiple roles in the cell and to interact with many other proteins. Its major role, however, is to mediate the degradation of p53, a major tumor suppressor protein, reducing the cell's ability to respond to DNA damage. E6 has also been shown to target other cellular proteins, thereby altering several metabolic pathways. One such target is NFX1-91, which normally represses production of telomerase, a protein that allows cells to divide an unlimited number of times. When NFX1-91 is degraded by E6, telomerase levels increase, inactivating a major mechanism keeping cell growth in check. Additionally, E6 can act as a transcriptional cofactor—specifically, a transcription activator—when interacting with the cellular transcription factor, E2F1/DP1. E6 can also bind to PDZ domains, short sequences which are often found in signaling proteins. E6's structural motif allows for interaction with PDZ domains on DLG (discs large) and hDLG (Drosophila large) tumor suppressor genes. Binding at these locations causes transformation of the DLG protein and disruption of its suppressor function. E6 proteins also interact with the MAGUK (membrane-associated guanylate kinase family) proteins. These proteins, including MAGI-1, MAGI-2, and MAGI-3, are usually structural proteins, and can help with signaling. More significantly, they are believed to be involved with DLG's suppression activity. When E6 complexes with the PDZ domains on the MAGI proteins, it distorts their shape and thereby impedes their function. Overall, the E6 protein serves to impede normal protein activity in such a way as to allow a cell to grow and multiply at the increased rate characteristic of cancer. Since the expression of E6 is strictly required for maintenance of a malignant phenotype in HPV-induced cancers, it is an appealing target of therapeutic HPV vaccines designed to eradicate established cervical cancer tumors. E7 In most papillomavirus types, the primary function of the E7 protein is to inactivate members of the pRb family of tumor suppressor proteins. 
Together with E6, E7 serves to prevent cell death (apoptosis) and promote cell cycle progression, thus priming the cell for replication of the viral DNA. E7 also participates in immortalization of infected cells by activating cellular telomerase. Like E6, E7 is the subject of intense research interest and is believed to exert a wide variety of other effects on infected cells. As with E6, the ongoing expression of E7 is required for survival of cancer cell lines, such as HeLa, that are derived from HPV-induced tumors. E8 Only a few papillomavirus types encode a short protein from the E8 gene. In the case of BPV-4 (papillomavirus genus Xi), the E8 open reading frame may substitute for the E6 open reading frame, which is absent in this papillomavirus genus. These E8 genes are chemically and functionally similar to the E5 genes from some human papillomaviruses, and are also called E5/E8. L1 L1 spontaneously self-assembles into pentameric capsomers. Purified capsomers can go on to form capsids, which are stabilized by disulfide bonds between neighboring L1 molecules. L1 capsids assembled in vitro are the basis of prophylactic vaccines against several HPV types. Compared to other papillomavirus genes, the amino acid sequences of most portions of L1 are well-conserved between types. However, the surface loops of L1 can differ substantially, even for different members of a particular papillomavirus species. This probably reflects a mechanism for evasion of neutralizing antibody responses elicited by previous papillomavirus infections. L2 L2 exists in an oxidized state within the papillomavirus virion, with the two conserved cysteine residues forming an intramolecular disulfide bond. In addition to cooperating with L1 to package the viral DNA into the virion, L2 has been shown to interact with a number of cellular proteins during the infectious entry process. After the initial binding of the virion to the cell, L2 must be cleaved by the cellular protease furin. The virion is internalized, probably through a clathrin-mediated process, into an endosome, where acidic conditions are thought to lead to exposure of membrane-destabilizing portions of L2. The cellular proteins beta-actin and syntaxin-18 may also participate in L2-mediated entry events. After endosome escape, L2 and the viral genome are imported into the cell nucleus where they traffic to a sub-nuclear domain known as an ND-10 body that is rich in transcription factors. Small portions of L2 are well-conserved between different papillomavirus types, and experimental vaccines targeting these conserved domains may offer protection against a broad range of HPV types. See also Deer cutaneous fibroma References External links ICTV Report Papillomaviridae Viralzone: Papillomaviridae Los Alamos National Laboratory maintains a comprehensive (albeit somewhat dated) papillomavirus sequence database. This useful database provides detailed descriptions and references for various papillomavirus types. A short video which shows the effects of papillomavirus on the skin of an Indonesian man with epidermodysplasia verruciformis, the genetic inability to defend against some types of cutaneous HPV. de Villiers, E.M., Bernard, H.U., Broker, T., Delius, H. and zur Hausen, H. Index of Viruses – Papillomaviridae (2006). In: ICTVdB – The Universal Virus Database, version 4. Büchen-Osmond, C (Ed), Columbia University, New York, USA. 00.099. 
Papillomaviridae description In: ICTVdB – The Universal Virus Database, version 4. Büchen-Osmond, C. (Ed), Columbia University, New York, USA Human papillomavirus particle and genome visualization ICTV Virus families
Papillomaviridae
[ "Biology" ]
6,261
[ "Viruses", "Papillomavirus" ]
606,123
https://en.wikipedia.org/wiki/Photoevaporation
Photoevaporation is the process where energetic radiation ionises gas and causes it to disperse away from the ionising source. The term is typically used in an astrophysical context where ultraviolet radiation from hot stars acts on clouds of material such as molecular clouds, protoplanetary disks, or planetary atmospheres. Molecular clouds One of the most obvious manifestations of astrophysical photoevaporation is seen in the eroding structures of molecular clouds that luminous stars are born within. Evaporating gaseous globules (EGGs) Evaporating gaseous globules or EGGs were first discovered in the Eagle Nebula. These small cometary globules are being photoevaporated by the stars in the nearby cluster. EGGs are places of ongoing star-formation. Planetary atmospheres A planet can be stripped of its atmosphere (or parts of the atmosphere) due to high energy photons and other electromagnetic radiation. If a photon interacts with an atmospheric molecule, the molecule is accelerated and its temperature increased. If sufficient energy is provided, the molecule or atom may reach the escape velocity of the planet and "evaporate" into space. The lower the mass number of the gas, the higher the velocity obtained by interaction with a photon. Thus hydrogen is the gas which is most prone to photoevaporation. Photoevaporation is the likely cause of the small planet radius gap. Examples of exoplanets with an evaporating atmosphere are HD 209458 b, HD 189733 b and Gliese 3470 b. Material from a possible evaporating planet around WD J0914+1914 might be responsible for the gaseous disk around this white dwarf. Protoplanetary disks Protoplanetary disks can be dispersed by stellar wind and heating due to incident electromagnetic radiation. The radiation interacts with matter and thus accelerates it outwards. This effect is only noticeable when there is sufficient radiation strength, such as coming from nearby O and B type stars or when the central protostar commences nuclear fusion. The disk is composed of gas and dust. The gas, consisting mostly of light elements such as hydrogen and helium, is mainly affected by the effect, causing the ratio between dust and gas to increase. Radiation from the central star excites particles in the accretion disk. The irradiation of the disk gives rise to a stability length scale known as the gravitational radius (r_g). Outside of the gravitational radius, particles can become sufficiently excited to escape the gravity of the disk, and evaporate. After 10^6 – 10^7 years, the viscous accretion rates fall below the photoevaporation rates at r_g. A gap then opens around r_g, the inner disk drains onto the central star, or spreads to r_g and evaporates. An inner hole extending to r_g is produced. Once an inner hole forms, the outer disk is very rapidly cleared. The formula for the gravitational radius of the disk is r_g = \frac{\gamma - 1}{2}\,\frac{G M \mu}{k_B T}, where \gamma is the ratio of specific heats (= 5/3 for a monatomic gas), G the universal gravitational constant, M the mass of the central star, \mu the mean weight of the gas, k_B the Boltzmann constant and T the temperature of the gas; M is commonly given in units of the mass of the Sun M_\odot and r_g in astronomical units (AU). If we denote the coefficient in the above equation by the Greek letter \beta, then \beta = (\gamma - 1)/2 = 1/n, where n is the number of degrees of freedom and we have used the formula \gamma = (n + 2)/n. For an atom, such as a hydrogen atom, n = 3, because an atom can move in three different, orthogonal directions. Consequently, \beta = 1/3. 
If the hydrogen atom is ionized, i.e., it is a proton, and is in a strong magnetic field, then n = 2, because the proton can move along the magnetic field and rotate around the field lines. In this case, \beta = 1/2. A diatomic molecule, e.g., a hydrogen molecule, has n = 5 and \beta = 1/5. For a non-linear triatomic molecule, such as water, n = 6 and \beta = 1/6. If n becomes very large, then \beta approaches zero. This is summarised in Table 1, where we see that different gases may have different gravitational radii. Table 1: Gravitational radius coefficient \beta as a function of the degrees of freedom n — n = 2: \beta = 1/2; n = 3: \beta = 1/3; n = 5: \beta = 1/5; n = 6: \beta = 1/6; n → ∞: \beta → 0. Because of this effect, the presence of massive stars in a star-forming region is thought to have a great effect on planet formation from the disk around a young stellar object, though it is not yet clear if this effect decelerates or accelerates it. Regions containing protoplanetary disks with clear signs of external photoevaporation The most famous region containing photoevaporated protoplanetary disks is the Orion Nebula. The photoevaporating disks there were called bright proplyds, and the term has since been used for other regions to describe photoevaporation of protoplanetary disks. They were discovered with the Hubble Space Telescope. There might even be a planetary-mass object in the Orion Nebula that is being photoevaporated by θ¹ Ori C. Since then, HST has observed other young star clusters and found bright proplyds in the Lagoon Nebula, the Trifid Nebula, Pismis 24 and NGC 1977. After the launch of the Spitzer Space Telescope, additional observations revealed dusty cometary tails around young cluster members in NGC 2244, IC 1396 and NGC 2264. These dusty tails are also explained by photoevaporation of the proto-planetary disk. Later, similar cometary tails were found with Spitzer in W5. This study concluded that the tails have a likely lifetime of 5 Myr or less. Additional tails were found with Spitzer in NGC 1977, NGC 6193 and Collinder 69. Other bright proplyd candidates were found in the Carina Nebula with the CTIO 4m and near Sagittarius A* with the VLA. Follow-up observations of a proplyd candidate in the Carina Nebula with Hubble revealed that it is likely an evaporating gaseous globule. Objects in NGC 3603 and later in Cygnus OB2 were proposed as intermediate massive versions of the bright proplyds found in the Orion Nebula. References Concepts in stellar astronomy
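As a quick numerical illustration of the relations above, the Python sketch below evaluates β = (γ − 1)/2 = 1/n and the gravitational radius for an assumed monatomic hydrogen gas at 10,000 K around a one-solar-mass star. The physical constants are standard values; the chosen temperature, composition and stellar mass are illustrative assumptions only, and the formula is the reconstruction given above rather than a quoted result.

```python
# Illustrative sketch: evaluates r_g = ((gamma - 1)/2) * G * M * mu / (k_B * T)
# for a few choices of gas; not taken from the article's references.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23     # Boltzmann constant, J K^-1
m_H = 1.674e-27     # mass of a hydrogen atom, kg
M_sun = 1.989e30    # solar mass, kg
AU = 1.496e11       # astronomical unit, m


def beta(n):
    """Coefficient (gamma - 1)/2 = 1/n for gamma = (n + 2)/n."""
    return 1.0 / n


def gravitational_radius(M, T, mu, n):
    """r_g in metres for stellar mass M (kg), gas temperature T (K),
    mean particle mass mu (kg) and n degrees of freedom."""
    return beta(n) * G * M * mu / (k_B * T)


# Example (assumed values): monatomic hydrogen (n = 3) at 10,000 K
# around a 1 M_sun star.
r_g = gravitational_radius(M_sun, 1.0e4, m_H, n=3)
print(f"beta = {beta(3):.3f}, r_g = {r_g / AU:.1f} AU")
```

Swapping in n = 5 or n = 6 (and the corresponding mean particle mass) shows directly how molecular gases yield smaller coefficients, and hence smaller gravitational radii, than a monatomic gas at the same temperature.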
Photoevaporation
[ "Physics" ]
1,235
[ "Concepts in stellar astronomy", "Concepts in astrophysics" ]
606,149
https://en.wikipedia.org/wiki/Meso%20compound
A meso compound or meso isomer is an optically inactive isomer in a set of stereoisomers, at least two of which are optically active. This means that despite containing two or more stereocenters, the molecule is not chiral. A meso compound is superposable on its mirror image (not to be confused with superimposable, as any two objects can be superimposed over one another regardless of whether they are the same). Two objects can be superposed if all aspects of the objects coincide and it does not produce a "(+)" or "(-)" reading when analyzed with a polarimeter. The name is derived from the Greek mésos meaning “middle”. For example, tartaric acid can exist as any of three stereoisomers depicted below in a Fischer projection. Of the four colored pictures at the top of the diagram, the first two represent the meso compound (the 2R,3S and 2S,3R isomers are equivalent), followed by the optically active pair of dextrotartaric acid (L-(R,R)-(+)-tartaric acid) and levotartaric acid (D-(S,S)-(-)-tartaric acid). The meso compound is bisected by an internal plane of symmetry that is not present for the non-meso isomers (indicated by an X). That is, on reflecting the meso compound through a mirror plane perpendicular to the screen, the same stereochemistry is obtained; this is not the case for the non-meso tartaric acid, which generates the other enantiomer. The meso compound must not be confused with a 50:50 racemic mixture of the two optically-active compounds, although neither will rotate light in a polarimeter. It is a requirement for two of the stereocenters in a meso compound to have at least two substituents in common (although having this characteristic does not necessarily mean that the compound is meso). For example, in 2,4-pentanediol, both the second and fourth carbon atoms, which are stereocenters, have all four substituents in common. Since a meso isomer has a superposable mirror image, a compound with a total of n chiral centers cannot attain the theoretical maximum of 2^n stereoisomers if one of the stereoisomers is meso. A meso isomer need not have a mirror plane. It may have an inversion or a rotoreflexion symmetry such as S4. For example, there are two meso isomers of 1,4-difluoro-2,5-dichlorocyclohexane but neither has a mirror plane, and there are two meso isomers of 1,2,3,4-tetrafluorospiropentane (see figure). Cyclic meso compounds 1,2-substituted cyclopropane has a meso cis-isomer (molecule has a mirror plane) and two trans-enantiomers: The two cis stereoisomers of 1,2-substituted cyclohexanes behave like meso compounds at room temperature in most cases. At room temperature, most 1,2-disubstituted cyclohexanes undergo rapid ring flipping (exceptions being rings with bulky substituents), and as a result, the two cis stereoisomers behave chemically identically toward chiral reagents. At low temperatures, however, this is not the case, as the activation energy for the ring-flip cannot be overcome, and they therefore behave like enantiomers. Also noteworthy is the fact that when a cyclohexane undergoes a ring flip, the absolute configurations of the stereocenters do not change. References Stereochemistry
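The stereoisomer count discussed above (three isomers of tartaric acid rather than the naive 2^n = 4) can be checked computationally. The sketch below assumes the RDKit cheminformatics toolkit is available; it enumerates stereocenter assignments for tartaric acid and collapses duplicates by canonical SMILES, on the expectation that the two meso depictions (2R,3S and 2S,3R) canonicalize to the same structure.

```python
# Illustrative sketch (assumes RDKit is installed): tartaric acid has two
# stereocenters, so the naive maximum is 2^2 = 4 stereoisomers, but one pair
# of assignments is the same meso compound, leaving 3 distinct isomers.
from rdkit import Chem
from rdkit.Chem.EnumerateStereoisomers import EnumerateStereoisomers

# Tartaric acid skeleton with both stereocenters left unassigned.
tartaric = Chem.MolFromSmiles("OC(=O)C(O)C(O)C(=O)O")

# Enumerate every stereocenter assignment, then collapse duplicates by
# comparing canonical isomeric SMILES strings.
unique = {Chem.MolToSmiles(iso) for iso in EnumerateStereoisomers(tartaric)}

print(len(unique))  # expected: 3, not the naive 2^n = 4
for smi in sorted(unique):
    print(smi)
```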
Meso compound
[ "Physics", "Chemistry" ]
811
[ "Spacetime", "Stereochemistry", "Space", "nan" ]
606,295
https://en.wikipedia.org/wiki/XSL%20Formatting%20Objects
XSL-FO (XSL Formatting Objects) is a markup language for XML document formatting that is most often used to generate PDF files. XSL-FO is part of XSL (Extensible Stylesheet Language), a set of W3C technologies designed for the transformation and formatting of XML data. The other parts of XSL are XSLT and XPath. Version 1.1 of XSL-FO was published in 2006. XSL-FO is considered feature complete by W3C: the last update for the Working Draft was in January 2012, and its Working Group closed in November 2013. Basics Unlike the combination of HTML and CSS, XSL-FO is a unified presentational language. It has no semantic markup as this term is used in HTML. And, unlike CSS which modifies the default presentation of an external XML or HTML document, it stores all of the document's data within itself. The general idea behind XSL-FO's use is that the user writes a document, not in FO, but in an XML language. XHTML, DocBook, and TEI are all possible examples. Then, the user obtains an XSLT transform, either by writing one themselves or by finding one for the document type in question. This XSLT transform converts the XML into XSL-FO. Once the XSL-FO document is generated, it is then passed to an application called an FO processor. FO processors convert the XSL-FO document into something that is readable, printable or both. The most common output of XSL-FO is a PDF file or as PostScript, but some FO processors can output to other formats like RTF files or even just a window in the user's GUI displaying the sequence of pages and their contents. The XSLT language itself was originally conceived only for this purpose; it is now in widespread use for more general XML transformations. This transformation step is taken so much for granted in XSL-FO that it is not uncommon for people to call the XSLT that turns XML into XSL-FO the actual XSL-FO document itself. Even tutorials on XSL-FO tend to be written with XSLT commands around the FO processing instructions. The XSLT transformation step is exceptionally powerful. It allows for the automatic generation of a table of contents, linked references, an index, and various other possibilities. An XSL-FO document is not like a PDF or a PostScript document. It does not definitively describe the layout of the text on various pages. Instead, it describes what the pages look like and where the various contents go. From there, an FO processor determines how to position the text within the boundaries described by the FO document. The XSL-FO specification even allows different FO processors to have varying responses with regard to the resultant generated pages. For example, some FO processors can hyphenate words to minimize space when breaking a line, while others choose not to. Different processors may even use different hyphenation algorithms, ranging from very simple to more complex hyphenation algorithms that take into account whether the previous or next line also is hyphenated. These will change, in some borderline cases quite substantially, the layout of the various pages. There are other cases where the XSL-FO specification explicitly allows FO processors some degree of choice with regard to layout. This differentiation between FO processors, creating inconsistent results between processors is often not a concern. This is because the general purpose behind XSL-FO is to generate paged, printed media. XSL-FO documents themselves are usually used as intermediaries, mostly to generate either PDF files or a printed document as the final form to be distributed. 
This is as opposed to how HTML is generated and distributed as a final form directly to the user. Distributing the final PDF rather than the formatting language input (whether HTML/CSS or XSL-FO) means on the one hand that recipients aren't affected by the unpredictability resulting from differences among formatting language interpreters, while on the other hand means that the document cannot easily adapt to different recipient needs, such as different page size or preferred font size, or tailoring for on-screen versus on-paper versus audio presentation. Language concepts The XSL-FO language was designed for paged media; as such, the concept of pages is an integral part of XSL-FO's structure. FO works best for what could be called "content-driven" design. This is the standard method of layout for books, articles, legal documents, and so forth. It involves a single flowing span of fairly contiguous text, with various repeating information built into the margins of a page. This is as opposed to "layout-driven" design, which is used in newspapers or magazines. If content in those documents does not fit in the required space, some of it is trimmed away until it does fit. XSL-FO does not easily handle the tight restrictions of magazine layout; indeed, in many cases, it lacks the ability to express some forms of said layout. Despite the basic nature of the language's design, it is capable of a great deal of expressiveness. Tables, lists, side floats, and a variety of other features are available. These features are comparable to CSS's layout features, though some of those features are expected to be built by the XSLT. Document structure XSL-FO documents are XML documents, but they do not have to conform to any DTD or schema. Instead, they conform to a syntax defined in the XSL-FO specification. XSL-FO documents contain two required sections. The first section details a list of named page layouts. The second section is a list of document data, with markup, that uses the various page layouts to determine how the content fills the various pages. Page layouts define the properties of the page. They can define the directions for the flow of text, so as to match the conventions for the language in question. They define the size of a page as well as the margins of that page. More importantly, they can define sequences of pages that allow for effects where the odd and even pages look different. For example, one can define a page layout sequence that gives extra space to the inner margins for printing purposes; this allows more space to be given to the margin where the book will be bound. The document data portion is broken up into a sequence of flows, where each flow is attached to a page layout. The flows contain a list of blocks which, in turn, each contain a list of text data, inline markup elements, or a combination of the two. Content may also be added to the margins of the document, for page numbers, chapter headings and the like. Blocks and inline elements function in much the same way as for CSS, though some of the rules for padding and margins differ between FO and CSS. The direction, relative to the page orientation, for the progression of blocks and inlines can be fully specified, thus allowing FO documents to function under languages that are read different from English. The language of the FO specification, unlike that of CSS 2.1, uses direction-neutral terms like start and end rather than left and right when describing these directions. 
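To make the two required sections concrete, the following sketch (an illustration added here, not taken from the specification) builds a minimal XSL-FO document as a Python string: a layout-master-set defining a single named page layout, and a page-sequence whose flow supplies the blocks of content. The page geometry, master name and text are arbitrary example values.

```python
# Minimal sketch of an XSL-FO document: one page master plus one flow.
MINIMAL_FO = """\
<?xml version="1.0" encoding="UTF-8"?>
<fo:root xmlns:fo="http://www.w3.org/1999/XSL/Format">
  <fo:layout-master-set>
    <fo:simple-page-master master-name="page"
        page-height="29.7cm" page-width="21cm" margin="2cm">
      <fo:region-body/>
    </fo:simple-page-master>
  </fo:layout-master-set>
  <fo:page-sequence master-reference="page">
    <fo:flow flow-name="xsl-region-body">
      <fo:block font-size="12pt" space-after="6pt">Hello, XSL-FO.</fo:block>
      <fo:block>A second block, positioned by the FO processor.</fo:block>
    </fo:flow>
  </fo:page-sequence>
</fo:root>
"""

# Write the document so an FO processor can render it; with Apache FOP, for
# example, a command along the lines of `fop -fo minimal.fo -pdf minimal.pdf`
# would produce the PDF output.
with open("minimal.fo", "w", encoding="utf-8") as f:
    f.write(MINIMAL_FO)
```

In practice this string would normally be produced by an XSLT transform from a source XML document rather than written by hand, as described above.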
XSL-FO's basic content markup is derived from CSS and its cascading rules. As such, many attributes in XSL-FO propagate into the child elements unless explicitly overridden. Capabilities of XSL-FO v1.0 XSL-FO is capable of a great deal of textual layout functionality. In addition to the information as specified above, XSL-FO's language allows for the specification of the following. Multiple columns A page can be defined to have multiple columns. When this is the case, blocks flow from one column into the next by default. Individual blocks can be set to span all columns, creating a textual break in the page. The columns above this break will flow into each other, as will the columns below the break. But no text is allowed to flow from the above section to the below section. Because of the nature of XSL-FO's page specification, multiple pages may actually have different numbers and widths of columns. As such, text can flow from a 3 column page to a 5 column page to a 1 column page quite easily. All FO features work within the restrictions of a multi-column page. We can span multiple columns by specifying two attributes i.e.,. span, padding-after . Lists An XSL-FO list is, essentially, two sets of blocks stacked side by side. An entry consists of a block on the "left", or start inline direction, and a block sequence on the "right", or end inline direction. The block on the left is conceptually what would be the number or bullet in a list. However, it could just as easily be a string of text, as one might see in a glossary entry. The block on the right works as expected. Both of these blocks can be block containers, or have multiple blocks in a single list entry. Numbering of XSL-FO lists, when they are numbered, is expected to be done by the XSLT, or whatever other process, that generated the XSL-FO document. As such, number lists are to be explicitly numbered in XSL-FO. Pagination controls The user can specify Widow and Orphan for blocks or for the flow itself, and allow the attributes to cascade into child blocks. Additionally, blocks can be specified to be kept together on a single page. For example, an image block and the description of that image can be set to never be separated. The FO processor will do its best to adhere to these commands, even if it requires creating a great deal of empty space on a page. Footnotes The user can create footnotes that appear at the bottom of a page. The footnote is written, in the FO document, in the regular flow of text at the point where it is referenced. The reference is represented as an inline definition, though it is not required. The body is one or more blocks that are placed by the FO processor to the bottom of the page. The FO processor guarantees that wherever the reference is, the footnote cited by that reference will begin on the same page. This will be so even if it means creating extra empty space on a page. Tables An FO table functions much like an HTML/CSS table. The user specifies rows of data for each individual cell. The user can, also, specify some styling information for each column, such as background color. Additionally, the user can specify the first row as a table header row, with its own separate styling information. The FO processor can be told exactly how much space to give each column, or it can be told to auto-fit the text in the table. Text orientation controls FO has extensive controls for orienting blocks of text. One can, in the middle of a page, designate a block of text to be oriented in a different orientation. 
These oriented blocks can be used for languages in a different orientation from the rest of the document, or simply if one needs to orient the text for layout purposes. These blocks can contain virtually any kind of content, from tables to lists or even other blocks of reoriented text. Miscellaneous Page number citations. A page that contains a special tag can be cited in text, and the FO processor will fill in the actual page number where this tag appears. Block borders, in a number of styles. Background colors and images. Font controls and weighting, as in CSS. Side floats. Miscellaneous Inline Elements. Capabilities of XSL-FO v1.1 Version 1.1 of XSL-FO adds a number of new features to version 1.0. Multiple flows and flow mapping XSL-FO 1.0 was fairly restrictive about what text was allowed to go in what areas of a page. Version 1.1 loosens these restrictions significantly, allowing flowing text to be mapped into multiple explicit regions on a page. This allows for more newspaper-like typesetting. Bookmarks Many output formats for XSL-FO processors, specifically PDF, have bookmarking features. These allow the format to specify a string of text in a separate window that can be selected by the user. When selected, the document window scrolls immediately to a specific region of the document. XSL-FO v1.1 now provides the ability to create named bookmarks in XSL-FO, thus allowing the processor to pass this on to an output format that supports it. Indexing XSL-FO 1.1 has features that support the generation of an index that might be found at the back of a book. This is done through referencing of properly marked-up elements in the FO document. Last page citation The last page can be generated without providing an explicit in-document reference to a specific anchor in the FO document. The definition of "last page" can be restricted to within a specific set of pages or to cover the entire document. This allows the user to specify something like, "Page 2 out of 15", where page 15 is the page number of a last page definition. Table markers Table markers allow the user to create dynamic content within table headers and footers, such as running totals at the bottom of each page of a table or "table continued" indicators. Inside/outside floats XSL-FO 1.1 adds the keywords "inside" and "outside" for side floats, which makes it possible to achieve page layouts with marginalia positioned on the outside or inside edges of pages. Inside refers to the side of the page towards the book binding, and outside refers to the side of a page away from the book binding. Refined graphic sizing XSL-FO 1.1 refines the functionality for sizing of graphics to fit, with the ability to shrink to fit (but not grow to fit), as well as the ability to define specific scaling steps. In addition, the resulting scaling factor can be referenced for display (for example, to say in a figure caption, "image shown is 50% actual size"). Advantages XML language – Because it is an XML language, only an XSLT transform (and an XSLT processor) is required to generate XSL-FO code from any XML language. One can easily write a document in TEI or DocBook, and transform it into HTML for web viewing or PDF (through an FO processor) for printing. In fact, there are many pre-existing TEI and DocBook XSLTs for both of these purposes. Ease of use – Another advantage of XSL-FO is the relative ease of use. Much of the functionality of the language is based on work from CSS, so a CSS user will be familiar with the basics of the markup attributes. 
Understanding what a specific section of an FO document will look like is usually quite easy. Low cost – Compared with commercial typesetting and page layout products, XSL-FO can offer a much lower cost solution when it otherwise meets the typographic and layout requirements (see below). The initial cost of ownership is low (zero if the free implementations, such as Apache FOP and xmlroff, meet your requirements), especially compared to the cost of commercial composition tools. The skills required (primarily XSLT programming) are widely available. There are a number of good books on XSL-FO as well as online resources and an active user community. Multi-lingual – XSL-FO has been designed to work for all written human languages and the implementations have largely achieved that goal. This makes XSL-FO particularly well suited for composing documents localized into a large number of national languages where the requirement is to have a single tool set that can compose all the language versions of documents. This is especially valuable for technical documentation for things like consumer electronics, where Asian and Middle Eastern languages are important because those parts of world represent huge markets for things like mobile phones and computer peripherals. Mature standard – With the publication of XSL-FO 1.1, XSL-FO is proving to be a mature standard with a number of solid commercial and non-commercial implementations. There is no other comparable standard for page composition. Drawbacks Limited capabilities – XSL-FO was specifically designed to meet the requirements of "lightly designed" documents typified by technical manuals, business documents, invoices, and so on. While it can be and is used for more sophisticated designs, it is inherently limited in what it can do from a layout and typographic perspective. In particular, XSL-FO does not provide a direct way to get formatting effects that depend on knowing the page position relationship of two formatting objects. For example, there is no direct way to say "if this thing is on the same page as that thing, then do X, otherwise do Y". This is an explicit design decision reflecting the two-stage, transform-based abstract processing model used by XSL-FO. This limitation can be addressed by implementing a multi-pass process. Unfortunately, there is currently no standard for how the result of the first pass would be communicated back to the second pass. Most, if not all, implementations provide some form of processable intermediate result format that can be used for this, but any such process implemented today would, by necessity, be implementation specific. By the same token, there are important layout features that are simply not in XSL-FO, either because they were not of high enough priority or because designing them was too difficult to allow inclusion in version 1.1, or because there were insufficient implementations to allow their inclusion in the final specification per W3C rules. In addition to these architectural limitations, the current XSL-FO implementations, both commercial and open source, do not provide the same level of typographic sophistication provided by high-end layout tools like QuarkXPress or InDesign, or by programmable typesetting systems like LaTeX. For example, no current implementation provides features for ensuring that text lines on facing pages are lined up vertically. There is nothing in the XSL-FO specification that prevents it but nothing that requires it either. 
For most documents for which a completely automated composition solution is sufficient, that level of typographic sophistication is not needed. However, for high-end publications and mass-market books, it usually is; in some cases this can be met by using XSLT to generate a LaTeX document instead. Extension dependency – When considering the applicability of XSL-FO to a particular document or document design, one must consider proprietary extensions provided by the different XSL-FO implementations. These extensions add features that are not part of the core specification. For example, one product adds support for Japanese typographic conventions that the XSL-FO specification does not address. However, use of these features makes such an XSL-FO system a little more bound to a specific implementation (but not completely bound as it would be when using a totally proprietary composition system.) Impractical manual editing – It is generally impractical to edit XSL-FO instances by hand (XSL-FO was designed for clarity and completeness, not ease of editing.). Visual editing tools such as XFDesigner can alleviate the task, although not all XSL-FO tags are accessible (most notably markers and footnotes). XF Designer is no longer a supported product from Ecrion Software. When trying to decide whether or not XSL-FO will work for a given document, the following typographic and layout requirements usually indicate that XSL-FO will not work (although some of these may be satisfied by proprietary extensions): Need to restart footnote numbers or symbol sequence on each new page (however, some implementations provide extensions to support automatic footnote numbering.) Need to run text around both sides of a floated object (XSL-FO can run text around one side and the top and/or bottom, but not both sides; however, some implementations provide support for such complex layouts via proprietary extensions.) Need to have variable numbers of columns on a single page (however, at least two commercial implementations provide extensions for creating multi-column blocks within a page.) Need to have column-wide footnotes (several implementations provide column footnote extensions.) Need to have marginalia that are dynamically placed relative to other marginalia (for example, marginal notes that are evenly spaced vertically on the page). XSL-FO only provides features for placing marginalia so that it is vertically aligned with its anchor. Need to create content that spreads across two pages as a float or "out of line" object in an otherwise homogeneous sequence of repeating page masters (this can be done in XSL-FO 1.1 using multiple body regions and flow maps, but it requires being able to control the page masters used for those pages.) Need both bottom-floated content and footnotes on the same page. Need to be able to run text against an arbitrary curve (though some implementation support SVG, which can be used to get around this limitation). Need to be able to constrain lines to specific baseline grids (for example, to achieve exact registration of lines on facing pages.) Anything that requires page-aware layout, such as ensuring that a figure always occurs on the page facing its anchor point. Replacement XML and HTML standards, with the CSS standard, since CSS2 (paged media module) starts to supply basic features to printed media. With the CSS Paged Media Module Level 3, W3C is completing the formulation of an integrated standard for document formatting and to generate PDFs. 
So, since 2013, CSS3-paged is a W3C proposal for an XSL-FO replacement. Design notes for a Version 2.0 of XSL Formatting Objects were first published in 2009 and last updated in 2012. See also XHTML Apache FOP - Open source and royalty free implementation of XSL-FO XEP - Commercial and proprietary rendering engine Antenna House Formatter - XSL-FO and CSS formatting software - Commercial and proprietary rendering engine References External links XSL-FO 1.1 Specification on W3C XSL-FO 1.0 Specification on W3C What is XSL-FO? on XML.com FO examples and techniques - Reference site set up by Dave Pawson XSL-FO Tutorial and Samples XSL Formatting Objects Tutorial aXSL - Open-source API for processing XSL-FO documents FOray - Open source and royalty-free implementation of XSL-FO, using the aXSL interfaces XSL-FO introduction and examples FO.NET - XSL-FO to PDF renderer for .NET Markup languages Page description languages Typesetting programming languages World Wide Web Consortium standards XML-based standards
XSL Formatting Objects
[ "Technology" ]
4,845
[ "Computer standards", "XML-based standards" ]
606,392
https://en.wikipedia.org/wiki/Lathe%20%28graphics%29
In 3D computer graphics, a lathed object is a 3D model whose vertex geometry is produced by rotating the points of a spline or other point set around a fixed axis. The lathing may be partial; the amount of rotation is not necessarily a full 360 degrees. The point set providing the initial source data can be thought of as a cross section through the object along a plane containing its axis of radial symmetry. The lathe is so named because it creates symmetrical objects around a rotational axis, just as a real lathe would. Lathes are very similar to surfaces of revolution. However, lathes are constructed by rotating a curve defined by a set of points instead of a function. Note that this means that lathes can be constructed by rotating closed curves or curves that double back on themselves (producing, for example, a torus), whereas a surface of revolution could not, because such curves cannot be described by functions. See also Surface of revolution Solid of revolution Loft (3D) Computer-aided design
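As a rough illustration of how a lathed object's vertices can be generated, the following Python sketch sweeps a 2D profile (a point set) around the Y axis. The profile, segment count and sweep angle are arbitrary example values, and a real modelling package would additionally build faces between successive rings.

```python
# Minimal lathing sketch: rotate (x, y) profile points about the Y axis.
import math


def lathe(profile, segments=16, sweep_degrees=360.0):
    """Return a list of rings; each ring holds one (x, y, z) vertex per
    profile point. A partial sweep (< 360 degrees) is allowed, mirroring
    the partial lathing described above."""
    sweep = math.radians(sweep_degrees)
    rings = []
    for i in range(segments + 1):
        angle = sweep * i / segments
        c, s = math.cos(angle), math.sin(angle)
        # The profile x coordinate is treated as the radius from the axis.
        rings.append([(x * c, y, x * s) for x, y in profile])
    return rings


# Example: a simple vase-like profile given as (radius, height) pairs.
profile = [(0.0, 0.0), (1.0, 0.0), (0.8, 1.0), (1.2, 2.0), (0.0, 2.5)]
vertices = lathe(profile, segments=24)
print(len(vertices), "rings of", len(profile), "vertices each")
```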
Lathe (graphics)
[ "Engineering" ]
205
[ "Computer-aided design", "Design engineering" ]
606,406
https://en.wikipedia.org/wiki/Underwater%20Demolition%20Team
The Underwater Demolition Team (UDT), or frogmen, were amphibious units created by the United States Navy during World War II with specialized missions. They were predecessors of the Navy's current SEAL teams. Their primary WWII function began with reconnaissance and underwater demolition of natural or man-made obstacles obstructing amphibious landings. Postwar they transitioned to scuba gear changing their capabilities. With that they came to be considered more elite and tactical during the Korean and Vietnam Wars. UDTs were pioneers in underwater demolition, closed-circuit diving, combat swimming, riverine warfare and midget submarine (dry and wet submersible) operations. They later were tasked with ensuring recovery of space capsules and astronauts after splash down in the Mercury, Gemini and Apollo space flight programs. Commando training was added making them the forerunner to the United States Navy SEAL program that exists today. By 1983, the UDTs were re-designated as SEAL Teams or Swimmer Delivery Vehicle Teams (SDVTs); however, some UDTs, had already been re-designated into UCTs and special boat units prior. SDVTs have since been re-designated SEAL Delivery Vehicle Teams. Early history The United States Navy studied the problems encountered by the disastrous Allied amphibious landings during the Gallipoli Campaign of World War I. This contributed to the development and experimentation of new landing techniques in the mid-1930s. In August 1941, landing trials were performed and one hazardous operation led to Army Second Lieutenant Lloyd E. Peddicord being assigned the task of analyzing the need for a human intelligence (HUMINT) capability. When the U.S. entered World War II, the Navy realized that in order to strike at the Axis powers the U.S. forces would need to perform a large number of amphibious attacks. The Navy decided that men would have to go in to reconnoiter the landing beaches, locate obstacles and defenses, as well as guide the landing forces ashore. In August 1942, Peddicord set up a recon school for his new unit, Navy Scouts and Raiders, at the amphibious training base at Little Creek, Virginia. In 1942, the Army and Navy jointly established the Amphibious Scout and Raider School at Fort Pierce, Florida. Here Lieutenant Commander Phil H. Bucklew, the "Father of Naval Special Warfare", helped organize and train what became the Navy's 'first group' to specialize in amphibious raids and tactics. The need for intelligence gathering prior to landings became paramount following the amphibious assault at the Battle of Tarawa in November 1943. Although Navy and Marine Corps planners had identified coral as an issue, they incorrectly assumed landing craft would be able to crawl over the coral. Marines were forced to exit their craft in chest deep water a thousand yards from shore, with many men drowning due to the irregularities of the reefs and Japanese gunners inflicting heavy U.S. casualties. After that experience, Rear Admiral Kelley Turner, Commander of the V Amphibious Corps (VAC), directed Seabee Lt. Crist (CEC) to come up with a means to deal with the coral and the men to do it. Lt. Crist staged 30 officers and 150 enlisted men from the 7th Naval Construction Regiment at Waipio Amphibious Operating Base on Oahu to form the nucleus of a reconnaissance and demolition training program. It is here that the UDTs of the Pacific were born. Later in war, the Army Engineers passed down demolition jobs to the U.S. Navy. 
It then became the Navy's responsibility to clear any obstacles and defenses in the near shore area. A memorial to the founding of the UDT has been built at Bellows Air Force Station near the original Amphibious Training Base (ATB) in Oahu. Naval Combat Demolition Units In early May 1943, a two-phase "Naval Demolition Project" was ordered by the Chief of Naval Operations (CNO) "to meet a present and urgent requirement". The first phase began at Amphibious Training Base (ATB) Solomons, Maryland with the establishment of Operational Naval Demolition Unit No. 1. Six Officers and eighteen enlisted men reported from the Seabees dynamiting and demolition school at Camp Peary for a four-week course. Those Seabees were immediately sent to participate in the invasion of Sicily where they were divided in three groups that landed on the beaches near Licata, Gela and Scoglitti. Also in May, the Navy created Naval Combat Demolition Units (NCDUs) tasked with eliminating beach obstructions in advance of amphibious assaults, going ashore in an LCRS inflatable boat. Each NCDU consisted of five enlisted men led by a single, junior (CEC) officer. In early May, Chief of Naval Operations Admiral Ernest J. King, picked Lieutenant Commander Draper L. Kauffman to lead the training. The first six classes graduated from "Area E" at the Seabee's Camp Peary between May and mid-July. Training was moved to Fort Pierce, Florida, where the first class began mid-July 1943. Despite the move and having the Scouts Raiders base close by, Camp Peary was Kauffman's primary source of recruits. "He would go up to Camp Peary's Dynamite School and assemble the Seabees in the auditorium saying: "I need volunteers for hazardous, prolonged and distant duty." Kauffman's other volunteers came from the U.S. Marines, and U.S. Army combat engineers. Training commenced with one grueling week designed to "separate the men from the boys". Some said that "the men had sense enough to quit, leaving Kauffman with the boys." It was and is still considered the first "Hell Week". Normandy In early November 1943 NCDU-11 was assigned as the advance NCDU party for Operation Overlord. They would be joined in England by 33 more NCDUs. They trained with the 146th, 277th and 299th Combat Engineers to prepare for the landing. Each Unit had five Combat engineers attached to it. The first 10 NCDUs divided into three groups. The senior officer, by rank, was the commanding officer of Group III, Lieutenant Smith (CEC). He assumed command in an unofficial capacity. His Group III worked on experimental demolitions and developed the Hagensen Pack.(an innovation that used of tetryl placed into rubber tubes that could be twisted around obstacles) As more teams arrived a NCDU Command was created for NCDUs: 11, 22–30, 41–46, 127–8, 130–42 The Germans had constructed elaborate defenses on the French coast. These included steel posts driven into the beach and topped with explosive charges. Large 3-ton steel barricades called Belgian Gates and hedgehogs were placed throughout the tidal zone. Behind which was a network of reinforced: coastal artillery, mortar and machine gun positions. The Scouts and Raiders spent weeks gathering information during nightly surveillance missions up and down the French coast. Replicas of the Belgian Gates were constructed on the south coast of England for the NCDUs to practice demolitions on. It was possible to blow a gate to pieces, but that only created a mass of tangled iron creating more of an obstacle. 
The NCDUs found that the best method was to blast the structural joints of a gate so that it fell down flat. The NCDU teams (designated Demolitions Gap-Assault teams) would come in at low tide to clear the obstacles. Their mission was to open sixteen wide corridors for the landing at each of the U.S. landing zones (Omaha Beach and Utah Beach). Unfortunately, the plans were not executed as laid out. The preparatory air and naval bombardment was ineffective, leaving many German guns to fire on the assault. Also, tidal conditions caused difficulties for the NCDUs. Despite heavy German fire and casualties, the NCDUs charges opened gaps in the defenses. As the infantry came ashore, some used obstacles for cover that had demolition charges on them. The greatest difficulty was on Omaha Beach. By nightfall thirteen of the planned sixteen gaps were open. Of the 175 NCDU men that landed, 31 were killed and 60 were wounded. The attack on Utah Beach was better, four dead and eleven wounded. Overall, NCDUs suffered a 53 percent casualty rate. NCDUs were also assigned to Operation Dragoon, the invasion of southern France, with a few units from Normandy participating there too. With Europe invaded Admiral Turner requisitioned all available NCDUs from Fort Pierce for integration into the UDTs for the Pacific. However, the first NCDUs, 1–10, had been staged at Turner City, Florida Island in the Solomon Islands during January 1944. A few were temporarily attached to UDTs. Later NCDUs 1–10 were combined to form Underwater Demolition Team Able. This team was disbanded with NCDUs 2 and 3, plus three others assigned to MacArthur's 7th Amphibious force, and were the only NCDUs remaining at war's end. The other men from Team Able were assigned to numbered UDTs. Underwater Demolition Teams During WWII The first units designated as Underwater Demolition Teams were formed in the Pacific Theater. Rear Admiral Turner, the Navy's amphibious expert, ordered the formation of Underwater Demolition Teams in response to the assault debacle experienced at Tarawa. Turner recognized that amphibious operations required intelligence of underwater obstacles. The personnel in teams 1-15 were primarily Seabees that had started out in the NCDUs. UDT training was at the Waipio Amphibious Operating Base, under V Amphibious Corps operational and administrative control. Most of the instructors and trainees were graduates of the Fort Pierce NCDU or Scouts and Raiders schools, Seabees, Marines, and Army soldiers. When Teams 1 and 2 were formed they were "provisional" and trained by a Marine Corps Amphibious Reconnaissance Battalion that had nothing to do with the Fort Pierce program. After a successful mission at Kwajalein, where 2 UDT men stripped down to swim trunks and effectively gathered the intelligence Admiral Turner desired, the UDT mission model evolved to daylight reconnaissance, wearing swim trunks, fins, and masks. The immediate success of the UDTs made them an indispensable part of all future amphibious landings. A UDT was organized with approximately sixteen officers and eighty enlisted. One Marine and one Army officer were liaisons within each team. They were deployed in every major amphibious landing after Tarawa with 34 teams eventually being commissioned. Teams 1–21 were the teams that had deployed operationally, with slightly over half of the officers and enlisted coming from the Seabees in those teams. The remaining teams were not deployed due to the war ending. 
Tarawa and the formation of UDTs Prior to Tarawa, both Naval and Marine Corps planners had identified coral as an issue for amphibious operations. At Tarawa the neap tide created draft issues for the Higgins boats (LCVPs) clearing the reef. The Amtracs carrying the first wave crossed the reef successfully. The LCVPs carrying the second wave ran aground, disembarking their Marines several hundred yards to shore in full combat gear, under heavy fire. Many drowned or were killed before making the beach, forced to wade across treacherously uneven coral. The first wave was left fighting without reinforcements and took heavy casualties on the beach. This disaster made it clear to Admiral Turner that pre-assault intelligence was needed to avoid similar difficulties in future operations. To that end, Turner ordered the formation of underwater demolition teams to do reconnaissance of beach conditions and do removal of submerged obstructions for Amphibious operations. After a thorough review, V Amphibious Corps found that the only people having any applicable experience with the coral were men in the Naval Construction Battalions. The Admiral tasked Lt. Thomas C. Crist (CEC) of CB 10 to develop a method for blasting coral under combat conditions and putting together a team for that purpose. Lt. Crist started by recruiting others he had blasted coral with in CB 10 and by the end November 1943 he had assembled close to 30 officers and 150 enlisted men from the 7th Naval Construction Regiment, at Waipio Amphibious Operating Base on Maui. Kwajalein and the evolution of the UDT mission model The first operation after Tarawa was Operation Flintlock in the Marshall Islands. It began with the island of Kwajalein in January 1944. Admiral Turner wanted the intelligence and to get it, the men that Lt. Crist had staged were used to form Underwater Demolition Teams: UDT 1 and UDT 2. Initially, the team commanders were Cmdr. E. D. Brewster (CEC) and Lt. Crist (CEC). However, Lt. Crist was made Ops officer of Team 2 and Lt. John T. Koehler was made the team Commander. As with all Seabee military training, the Marines provided it. A Marine Corps Amphibious Reconnaissance Battalion oversaw five weeks further training of the Seabees in UDTs 1 and 2 to prepare for the mission. UDT 1 was tasked with two daylight recons. The men were to follow Marine Corps Recon procedure with each two-man team getting close to the beach in an inflatable boat to make their observations wearing fatigues, boots, d helmets, and life-lined to their boats. Team 1 found that the reef kept them from ascertaining conditions both in the water and on the beach as had been anticipated. In keeping with the Seabee traditions of: (1) doing whatever it takes to accomplish the job and (2) not always following military rules to get it done, UDT 1 did both: the fatigues and boots came off. Ensign Lewis F. Luehrs and Seabee Chief Bill Acheson had anticipated that they would not be able to get the intel Admiral Turner wanted following USMC Recon protocol and had worn swim trunks beneath their fatigues. Stripping down, they swam 45 minutes undetected across the reef returning with sketches of gun emplacements and other intelligence. Still in their trunks, they were taken directly to Rear Admiral Turner's flagship to report. Afterwards, Turner concluded that the only way to get this kind of information was to do what these men had done as individual swimmers, which is what he relayed to Admiral Nimitz. 
The planning and decisions of Rear Admiral Turner, Ensign Luehrs, and Chief Acheson made Kwajalein a developmental day in UDT history, changing both the mission model and training regimen. Luehrs would make rank and be in UDT 3 until he was made XO of UDT 18. Acheson and three other UDT officers were posted to the 301st CB as blasting officers. The 301st specialized in Harbor dredging. It saved UDT teams from blasting channels and harbor clearance, but it required its own blasters. Admiral Turner ordered the formation of nine teams, six for V AC and three for III Amphibious Corps. Seabees made up the majority of the men in teams 1–9, 13 and 15. The officers of those teams were primarily CEC (Seabees). UDT 2 was sent to Roi-Namur where Lt. Crist earned a Silver Star. UDTs 1 and 2 were decommissioned upon return to Hawaii with most of the men transferred to UDTs 3, 4, 5, and 6. Admiral Turner ordered the formation of nine teams, three for III Amphibious Corps and six for V Amphibious Corps (in all Teams 3–11). As more NCDUs arrived in the Pacific they were used to form even more teams. UDT 15 was an all-NCDU team. To implement these changes and grow the UDTs, Koehler was made the commanding officer of the Naval Combat Demolition Training and Experimental Base on Maui. Admiral Turner also brought on LCDR Draper Kaufmann as a combat officer. It became obvious more men were needed than the NCDUs would supply and Cmdr. Kauffman was no longer recruiting Seabees, so Admiral Nimitz put out a call to the Pacific Fleet for volunteers. They would form three teams; UDT 14 would be the first of them. Recruiting was such an issue that three Lt. Cmdrs who had no background in demolition were transferred from USN Beach Battalions to command UDTs 11, 12, and 13. Admiral Turner requested the establishment of the Naval Combat Demolition Training and Experimental Base at Kihei independent of Fort Pierce, expanding upon what had been learned from UDT 1 at Kwajalein. Operations began in February 1944 with Lt. Crist the first head of training. Most of the procedures from Fort Pierce were changed, replaced with an emphasis on developing swimmers, daylight reconnaissance, and no lifelines. The uniform of the day changed to diving masks, swim trunks, and a Ka-Bar, creating the UDT image as "Naked Warriors" (swim-fins were added after UDT 10 introduced them). Roi-Namur, Saipan, Tinian, and Guam At Saipan and Tinian UDTs 5, 6, and 7 were given the missions: day time for Saipan and night for Tinian. At Saipan UDT 7 developed a method to recover swimmers on the move without making the recovery vessel a stationary target. For Guam UDTs 3, 4, and 6 were the teams assigned. When it was over the Seabee-dominated teams had made naval history. For the Marianas operations Admiral Turner recommended over sixty Silver Stars and over three hundred Bronze Stars with Vs for UDTs 3–7 That was unprecedented in U.S. Naval/Marine Corps history. For UDTs 5 and 7, all officers received silver stars and all the enlisted received bronze stars with Vs for Operation Forager (Tinian). For UDTs 3 and 4 all officers received a silver stars and all the enlisted received bronze stars with Vs for Operation Forager (Guam). Admiral Conolly felt the commanders of teams 3 and 4 (Lt. Crist and Lt. W.G. Carberry) should have received Navy Crosses. Teams 4 & 7 also received Naval Unit Commendations. Peleliu, Philippines, and Iwo Jima UDTs 6, 7, and 10 drew the Peleliu assignment while UDT 8 went to Angaur. 
The officers were almost all CEC and the enlisted were Seabees. At formation UDT 10 was assigned 5 officers and 24 enlisted that had trained as OSS Operational Swimmers (Maritime Unit: Operational Swimmer Group II). They were led by a Lt. A.O. Chote Jr., who became UDT 10's commanding officer. The men were multi-service: Army, Coast Guard, Marine Corps and Navy but the OSS was not allowed to operate in the Pacific Theater. Admiral Nimitz needed swimmers and did approve their transfer from the OSS to his operational and administrative control. Most of their OSS gear was stored as it was not applicable to UDT work however, their swimfins came with them. The other UDTs quickly adopted them. UDT 14 was the first all-Navy team (one of three from the Pacific fleet) even though its CO and XO were CEC and some of Team Able was incorporated. In the Philippines Leyte Gulf UDTs 10 & 15 reconnoitered beaches at Luzon, teams 3, 4, 5, & 8 were sent to Dulag and teams 6, 9, & 10 went to Tacloban. When UDT 3 returned to Maui the team was made the instructors of the school. Lt Crist was again made Training Officer. Under his direction training was broken into four 2-week blocks with an emphasis on swimming and reconnaissance. There were classes in night operations, unit control, coral and lava blasting in addition to bivouacking, small unit tactics and small arms. Lt Crist would be promoted to Lt Cmdr and the team would remain in Hawaii until April 1945. At that time the Seabees of UDT 3 were transferred to Fort Pierce to be the instructors there. In all they would train teams 12 to 22. Lt. Cmdr. Crist would be sent back to Hawaii. D-minus 2 at Iwo Jima UDTs 12, 13, 14, and 15 reconnoitered the beaches from twelve LCI(G) with just one man wounded. They did come under intense heavy fire that sank three of their LCI(G) with the others seriously damaged or disabled. The LCI(G) crews suffered more than the UDTs with the skipper of one boat earning a Medal of Honor. The next day a Japanese bomb hit UDT 15's APD, killing fifteen and wounding 23. It was the largest loss suffered by the UDTs during the war. On D-plus 2 the beachmaster requested help. There were so many broached or damaged landing craft and the beach was so clogged with war debris that there was no place for landing craft to get ashore. Lt Cmdr. E. Hochuli of UDT 12 volunteered his team to go deal with the problem and teams 13 and 14 were ordered to go with. Lt Cmdr. Vincent Moranz of UDT 13 was "reluctant, and radioed that his men ... were not salvage-men. It is reported that Capt. (Bull) Hanlon, Underwater Demolition Operations Commanding Officer radioed back that he did not want anything salvaged, he wanted that beach cleared." The difference in attitude between Hochuli and Moranz would be remembered in the unit awards. The three teams worked for five days clearing the waters edge. While the teams all did the same job under the same conditions the Navy gave them different unit awards: UDT 12 a PUC, UDT 14 a NUC and UDT 13 nothing. The USMC ground commanders felt that every man that set foot on the island during the assault had an award coming. The Navy did not share this point of view, besides UDT 13 not a single USN beach party received a unit award either. On D-plus 2, when the UDTs set foot on beaches that were under a USMC assault, any unit award they received should have come under the USMC award protocol. The USMC Iwo Jima PUC/NUC was a mass award with the PUC going to assault units and the NUC going to support units. 
UDTs also served at Eniwetok, Ulithi, Leyte, Lingayen Gulf, Zambales, Labuan, and Brunei Bay. At Lingayen, the ship carrying UDT 9 was hit by a kamikaze. It cost the team one officer, 7 enlisted, 3 missing in action, and 13 wounded.
Okinawa to the end of the war
The largest UDT operation of WWII was the invasion of Okinawa, involving teams 7, 11, 12, 13, 14, 16, 17, and 18 (nearly 1,000 men). All prior missions had been in warm tropical waters, but the waters around Okinawa were cool enough that long immersion could cause hypothermia and severe cramps. Since thermal protection for swimmers was not available, UDTs were at risk of these hazards while working around Okinawa. Operations included both real reconnaissance and demolition at the landing beaches, and feints to create the illusion of landings in other locations. Pointed poles set into the coral reef protected the beaches on Okinawa. Teams 11 and 16 were sent in to blast the poles. The charges took out all of UDT 11's targets and half of UDT 16's. UDT 16 aborted the operation due to the death of one of their men; hence, their mission was considered a failure. UDT 11 went back the next day and took out the remaining poles, after which the team remained to guide landing craft to the beach. By war's end 34 teams had been formed, with teams 1–21 having actually been deployed. The Seabees provided half of the men in the teams that saw service. The U.S. Navy did not publicize the existence of the UDTs until after the war, and when it did, it gave credit to Lt. Commander Kauffman and the Seabees. During WWII the Navy had neither a rating nor an insignia for the UDTs. Those men with the CB rating on their uniforms considered themselves Seabees who were doing underwater demolition. They did not call themselves "UDTs" or "Frogmen" but rather "Demolitioneers", a name that had carried over from the NCDUs and LtCdr Kauffman's recruiting of them from the Seabee dynamiting and demolition school. UDTs had to meet the military's standard age guidelines, so older Seabees could not volunteer. In preparation for the invasion of Japan the UDTs created a cold-water training center, and in mid-1945 UDTs had to meet a "new physical standard". UDT 9 lost 70% of the team to this change. The last UDT demolition operation of the war was on 4 July 1945 at Balikpapan, Borneo. The UDTs continued to prepare for the invasion of Japan until VJ Day, when the need for their services ceased. With the draw-down from the war two half-strength UDTs were retained, one on each coast: UDT Baker and UDT Easy. However, the UDTs were the only special troops that avoided complete disbandment after the war, unlike the OSS Maritime Unit, the VAC Recon Battalion, and several Marine recon units. In 1942 the Seabees became a completely new branch of the United States Navy. The Marine Corps provided both training and an organizational model. Something that either was not shared, or that the Seabees chose to ignore or considered unimportant, was the keeping of logs, journals and records. The Seabees brought this record-keeping approach with them to the NCDUs and UDTs.
After World War II
Japan occupation
On 20 August 1945 UDT 21 embarked at Guam as a component of the U.S. occupation force heading for Japan. Nine days later UDT 21 became the first U.S. military unit to set foot on Japanese home soil when it reconned the beaches at Futtsu-misaki Point in Tokyo Bay. Their assessment was that the area was well suited for landing U.S. amphibious forces. UDT 21 made a large sign to greet the Marines on the beach.
Team 21 was all fleet, and the sign said greetings from "USN" UDT 21. The next day Begor took UDT 21 to Yokosuka Naval Base. There the team cleared the docks for the first U.S. warship to dock in Japan. The team remained in Tokyo Bay until 8 September, when it was tasked with locating remaining kamikaze and two-man submarines at Katsura Wan, Uchiura Wan at Suruga Bay, Sendai, the Onohama Shipyards, and Choshi. Orders arrived for Begor to return the team to San Diego on 27 September. From 21 to 26 September UDT 11 was at Nagasaki and reported men getting sick from the stench.
China
With the war over, thousands of Japanese troops remained in China. The issue was given to the Marines' III Amphibious Corps. UDT 9 was assigned to Operation Beleaguer to recon the landings of the 1st Marine Division at Taku and Qingdao during the first two weeks of October 1945. On their way to China the Navy had UDT 8 carry out a mission at Jinaen, Korea, from 8 to 27 September 1945. When UDT 9 arrived back in the States it was made one of the two post-war teams and redesignated UDT Baker. UDT 8 was also sent to China and was at Taku, Yantai, and Qingdao.
Operation Crossroads
Bikini Atoll was chosen as the site of the nuclear tests of Operation Crossroads. "In March 1946, Project Y scientists from Los Alamos decided that the analysis of a sample of water from the immediate vicinity of the nuclear detonation was essential if the tests were to be properly evaluated. After consideration of several proposals to accomplish this, it was finally decided to employ drone boats of the type used by Naval Combat Demolition Units in France during the war". UDT Easy, later named UDT 3, was given the designation TU 1.1.3 for the operation and was assigned the control and maintenance of the drone boats. On 27 April, 7 officers and 51 enlisted men embarked at the Seabee base at Port Hueneme, CA, for transit to Bikini. At Bikini the drones were controlled from the Begor. Once a water sample was taken, the drone would return to the Begor to be hosed down for decontamination. After a Radiation Safety Officer had taken a Geiger counter reading and given the OK, the UDTs would board with a radiation chemist to retrieve the sample. Begor came to have the reputation of being the most contaminated boat in the fleet. A major issue afterwards was the treatment of the dislocated natives. In November 1948 the Bikinians were relocated to the uninhabited island of Kili; however, that island was located inside a coral reef that had no channel for access to the sea. In the spring of 1949, the governor of the Trust Territories (Marshall group) requested that the U.S. Navy blast a channel to change this. That task was given to the Seabees on Kwajalein, whose CO quickly determined that this was actually a UDT project. He sent a request to CINCPACFLT, who forwarded it to COMPHIBPAC. This ultimately resulted in the sending of UDT 3 on a civic action program that turned out better than politicians could have hoped. The King of the Bikinians held a send-off feast for the UDTs the night before they departed.
Submersible Operations
Post-WWII, the UDTs continued to research new techniques for underwater and shallow-water operations. One area was the use of SCUBA equipment. Dr. Chris Lambertsen had developed the Lambertsen Amphibious Respiratory Unit (LARU), an oxygen rebreather, which was used by the Maritime Unit of the OSS. In October 1943, he demonstrated it to LtCmdr. Kauffman, but was told the device was not applicable to current UDT operations.
Dr. Lambertsen and the OSS continued to work on closed-circuit oxygen diving and combat swimming. When the OSS was dissolved in 1945, Lambertsen retained the LARU inventory. He later demonstrated the LARU to Army Engineers, the Coast Guard, and the UDTs. In 1947, he demonstrated the LARU to LtCmdr. Francis "Doug" Fane, then a senior UDT commander. LtCmdr. Fane was enthusiastic about new diving techniques. He pushed for the adoption of rebreathers and SCUBA gear for future operations, but the Navy Experimental Diving Unit and the Navy Dive School, which used the old "hard-hat" diving apparatus, declared the new equipment too dangerous. Nonetheless, LtCmdr. Fane invited Dr. Lambertsen to NAB Little Creek, Virginia in January 1948 to demonstrate and train UDT personnel in SCUBA operations. This was the first-ever SCUBA training for USN divers. Following this training, LtCmdr. Fane and Dr. Lambertsen demonstrated new UDT capabilities with a successful lock-out and re-entry from an underway submarine, to show the Navy's need for this capability. LtCmdr. Fane then started the classified "Submersible Operations" or SUBOPS platoon with men drawn from UDTs 2 and 4, under the direction of Lieutenant (junior grade) Bruce Dunning. LtCmdr. Fane also brought the conventional "Aqua-lung" open-circuit SCUBA system into use by the UDTs. Open-circuit SCUBA is less useful to combat divers, as the exhausted air produces a tell-tale trail of bubbles. However, in the early 1950s, the UDTs decided they preferred open-circuit SCUBA, and converted entirely to it. The remaining stock of LARUs was supposedly destroyed in a beach-party bonfire. Later on, the UDT reverted to closed-circuit SCUBA, using improved rebreathers developed by Dr. Lambertsen. It was at this time that the UDTs, led by LtCmdr. Fane, established training facilities at Saint Thomas in the Virgin Islands. The UDTs also began developing weapons skills and procedures for commando operations on land in coastal regions. The UDTs started experiments with insertion and extraction by helicopter, jumping from a moving helicopter into the water or rappelling like mountain climbers to the ground. Experimentation developed a system for emergency extraction by plane called "Skyhook". Skyhook utilized a large helium balloon and a cable rig with harness. A special grabbing device on the nose of a C-130 enabled a pilot to snatch the cable tethered to the balloon and lift a person off the ground. Once airborne, the crew would winch the cable in and retrieve the personnel through the back of the aircraft. Training in this technique was discontinued following the death of a SEAL at NAB Coronado during a training exercise. Teams still utilize the Skyhook for equipment extraction and retain the combat capability for personnel if needed.
Korean War
During the Korean War, the UDTs operated on the coasts of North Korea, with their efforts initially focused on demolitions and mine disposal. Additionally, the UDT accompanied South Korean commandos on raids in the North to demolish railroad tunnels and bridges. The higher-ranking officers of the UDT frowned upon this activity because it was a non-traditional use of naval forces, which took them too far from the water line. Due to the nature of the war, the UDT maintained a low operational profile. Some of the better-known missions include the transport of spies into North Korea and the destruction of North Korean fishing nets. A more traditional role for the UDT was in support of Operation CHROMITE, the amphibious landing at Inchon.
UDT 1 and UDT 3 divers went in ahead of the landing craft, scouting mud flats, marking low points in the channel, clearing fouled propellers, and searching for mines. Four UDT personnel acted as wave-guides for the Marine landing. The UDT assisted in clearing mines in Wonsan harbor, under fire from enemy shore batteries. Two minesweepers were sunk in these operations. A UDT diver dove on the wreck of one of the sunken minesweepers, the first U.S. combat operation using SCUBA gear. The Korean War was a period of transition for the men of the UDT. They tested their previous limits and defined new parameters for their special style of warfare. These new techniques and expanded horizons positioned the UDT well to assume an even broader role as war began brewing to the south in Vietnam.
NASA
Initially, the splashdowns of U.S. crewed space capsules were unassisted. That changed quickly after the second crewed flight; when Liberty Bell 7 hit the water following reentry, the hatch blew and she sank, nearly drowning Gus Grissom. All Mercury, Gemini, and Apollo space capsules were subsequently met by UDT 11 or UDT 12 upon splashdown. Before the hatch was opened, the UDTs would attach a flotation collar to the capsule and a life raft for the astronauts to safely exit the craft.
Vietnam War
The Navy entered the Vietnam War in 1958, when the UDTs delivered a small watercraft far up the Mekong River into Laos. In 1961, naval advisers started training South Vietnamese personnel in South Vietnam. The men were called the Liên Đoàn Người Nhái (LDNN) or Vietnamese Frogmen, which translates as "Frogmen Team". UDT teams carried out hydrographic surveys in South Vietnam's coastal waters and reconnaissance missions of harbors, beaches and rivers, often under hazardous conditions and enemy fire. Later, the UDTs supported the Amphibious Ready Groups operating on South Vietnam's rivers. UDTs manned riverine patrol craft and went ashore to demolish obstacles and enemy bunkers. They operated throughout South Vietnam, from the Mekong Delta (Sea Float), the Parrot's Beak and the French canal AOs, through I Corps and the Song Cui Dai estuary south of Da Nang.
Birth of Navy SEALs
In the mid-1950s, the Navy saw how the UDT's mission had expanded to a broad range of "unconventional warfare", but also that this clashed with the UDT's traditional focus on maritime swimming, boat, and diving operations. It was therefore decided to create a new type of unit that would build on the UDT's elite qualities and water-borne expertise, but would add land combat skills, including parachute training and guerrilla/counterinsurgency operations. These new teams would come to be known as the US Navy SEALs, an acronym for Sea, Air, and Land. Initially there was a lag in the unit's creation until President John F. Kennedy took office. Kennedy recognized the need for unconventional warfare and supported the use of special operations forces against guerrilla activity. The Navy moved forward to establish its new special operations force and in January 1962 commissioned SEAL Team ONE at NAB Coronado and SEAL Team TWO at NAB Little Creek. In 1964, Boat Support Unit ONE was established, designed to directly support NSW operations, and was initially outfitted primarily by UDTs and newly established SEALs. UDTs 11 and 12 were still active on the west coast and UDTs 21 and 22 on the east coast. The SEALs quickly earned a reputation for valor and stealth in Vietnam, where they conducted clandestine raids in perilous territory.
Reorganization
From 1974 to 1975, UDT 13 was redesignated; some personnel established Underwater Construction Teams, while others joined the special boat detachment. In May 1983, the remaining UDT teams were reorganized as SEAL teams. UDT 11 became SEAL Team Five and UDT 12 became SEAL Delivery Vehicle Team One. UDT 21 became SEAL Team Four and UDT 22 became SEAL Delivery Vehicle Team Two. A new team, SEAL Team Three, was established in October 1983. Since then, teams of SEALs have taken on clandestine missions in war-torn regions around the world, tracking high-profile targets such as Panama's Manuel Noriega and Colombian drug lord Pablo Escobar, and playing integral roles in the wars in Iraq and Afghanistan.
Badge
For those who served in an Underwater Demolition Team, the U.S. Navy authorized the Underwater Demolition operator badge in 1970. However, the UDT badge was phased out in 1971, a few months after it appeared, as was the silver badge for enlisted UDT/SEAL frogmen. After that, SEAL and UDT operators, both officer and enlisted, all wore the same gold Trident, as well as gold Navy jump wings.
Unit awards
The UDTs have received several unit citations and commendations. Members who participated in actions that merited the award are authorized to wear the medal or ribbon associated with the award on their uniform. Awards and decorations of the United States Armed Forces have different categories (i.e., Service, Campaign, Unit, and Personal). Unit citations are distinct from the other decorations.
Naval Combat Demolition Force O (Omaha Beach): Normandy
  Presidential Unit Citation: Normandy
Naval Combat Demolition Force U (Utah Beach): Normandy
  Navy Unit Commendation: Normandy
UDT 1
  Navy Unit Commendation: Korea
UDT 4
  Navy Unit Commendation: Guam
  Navy Unit Commendation: Leyte
  Navy Unit Commendation: Okinawa
UDT 7
  Navy Unit Commendation: Marianas
  Navy Unit Commendation: Western Carolinas
UDT 11
  Presidential Unit Citation: Okinawa
  Presidential Unit Citation: Brunei Bay, Borneo
  Presidential Unit Citation: Balikpapan, Borneo
  Navy Unit Commendation 1966
  Navy Meritorious Unit Commendation 1968
  Navy Unit Commendation 1969
  Presidential Unit Citation 1969
  Navy Meritorious Unit Commendation 1969
  Navy Meritorious Unit Commendation 1969
  Navy Meritorious Unit Commendation 1970
  Republic of Vietnam Civil Actions Medal Unit Citation
  Republic of Vietnam Gallantry Cross with Palm Unit Award
  Coast Guard Meritorious Unit Commendation
UDT 12
  Presidential Unit Citation: Iwo Jima
  Presidential Unit Citation: Okinawa
  Navy Unit Commendation 1966
  Navy Meritorious Unit Commendation: Vietnam 1967
  Navy Meritorious Unit Commendation: Vietnam 1967
  Navy Meritorious Unit Commendation: Vietnam 1968
  Navy Meritorious Unit Commendation: Vietnam 1968
  Navy Meritorious Unit Commendation: Vietnam 1969
  Republic of Vietnam Gallantry Cross with Palm Unit Award
  Operation Eagle Pull
  Operation Frequent Wind
  Humanitarian Service Medal 1979 (Boat People)
UDT 13
  Navy Meritorious Unit Commendation: Vietnam 1969
  Republic of Vietnam Gallantry Cross with Palm Unit Award 1970
UDT 14
  Navy Unit Commendation: Luzon
  Navy Unit Commendation: Iwo Jima
  Navy Unit Commendation: Okinawa
UDT 21
  Navy Expeditionary Medal
  Navy Meritorious Unit Commendation: Vietnam
  Navy Meritorious Unit Commendation: Vietnam
  Navy Meritorious Unit Commendation: Vietnam
  Navy Meritorious Unit Commendation: Vietnam
UDT 22
  Navy Meritorious Unit Commendation: Vietnam 1969
OPNAV NOTICE 1650, MASTER LIST OF UNIT AWARDS AND CAMPAIGN MEDALS
Fiction
The Frogmen (1951), starring Dana Andrews and Richard Widmark. World War II film based on the Underwater Demolition Teams. Contemporary UDT members appear in several sequences.
Underwater Warrior (1958), directed by Andrew Marton, is based on the memoirs of Lieutenant-Commander Francis Douglas Fane, Naked Warriors.
See also
References
Further reading
Best, Herbert. The Webfoot Warriors; The Story of UDT, the U.S. Navy's Underwater Demolition Team. New York: John Day Co, 1962.
Fane, Francis Douglas, and Don Moore. The Naked Warriors: The Story of the U.S. Navy's Frogmen. Annapolis, MD: Naval Institute Press, 1995.
Navy, U.S. Handbook of Naval Combat Underwater Demolition Team Training (1944). Paradise, CA: Loose Cannon Enterprises, 2019.
O'Dell, James Douglas. The Water Is Never Cold: The Origins of the U.S. Navy's Combat Demolition Units, UDTs, and SEALs. Washington, DC: Brassey's, 2000.
Young, Darryl. SEALs, UDT, Frogmen: Men Under Pressure. New York: Ivy Books, 1994.
Milligan, Benjamin H. By Water Beneath The Walls. New York: Bantam Books, 2021. ISBN 978-0-553-39219-7.
External links
Navy UDT-SEAL Museum
NavyFrogMen.com
U.S. Naval Special Warfare Archives
Pritzker Military Museum & Library
"TNT Divers", Popular Mechanics, November 1945, pp. 72–73, one of the earliest articles on WW2 UDT units.
Military engineering Armed forces diving Special operations units and formations of the United States Navy Frogman operations
Underwater Demolition Team
[ "Engineering" ]
8,646
[ "Construction", "Military engineering" ]
606,411
https://en.wikipedia.org/wiki/Walter%20Bradford%20Cannon
Walter Bradford Cannon (October 19, 1871 – October 1, 1945) was an American physiologist, professor and chairman of the Department of Physiology at Harvard Medical School. He coined the term "fight or flight response" and developed the theory of homeostasis. He popularized his theories in his book The Wisdom of the Body, first published in 1932.
Life and career
Cannon was born on October 19, 1871, in Prairie du Chien, Wisconsin, the son of Colbert Hanchett Cannon and his wife Wilma Denio. His sister Ida Maud Cannon (1877–1960) became a noted hospital social worker at Massachusetts General Hospital. In his autobiography The Way of an Investigator, Cannon counts himself among the descendants of Jacques de Noyon, a French Canadian explorer and coureur des bois. His Calvinist family was intellectually active, including readings from James Martineau, John Fiske (philosopher), and James Freeman Clarke. Cannon's curiosity also led him to Thomas Henry Huxley, John Tyndall, George Henry Lewes, and William Kingdon Clifford. A high school teacher, Mary Jeannette Newson, became his mentor. "Miss May" Newson motivated him and helped him take his academic skills into Harvard University in 1892. Upon finishing his undergraduate studies in 1896, he entered Harvard Medical School. He started using X-rays to study the physiology of digestion while working with Henry P. Bowditch. In 1900 he received his medical degree. After graduation, Cannon was hired by William Townsend Porter at Harvard as an instructor in the Department of Physiology while continuing his digestion study. Cannon was promoted to assistant professor of physiology in 1902. He was a close friend of the physicist G. W. Pierce, and together they founded the Wicht Club with other young instructors for social and professional purposes. In 1906, Cannon succeeded Bowditch as the Higginson Professor and chairman of the Department of Physiology at Harvard Medical School, a post he held until 1942. From 1914 to 1916, Cannon was also President of the American Physiological Society. He was married to Cornelia James Cannon, a best-selling author and feminist reformer. On July 19, 1901, during their honeymoon in Montana, they were the first people to reach the summit of the unclimbed southwest peak (2657 m or 8716 ft) of Goat Mountain, between Lake McDonald and Logan Pass. That area is now Glacier National Park. The peak was subsequently named Mount Cannon by the United States Geological Survey. The couple had five children. A son, Dr. Bradford Cannon, was a military plastic surgeon and radiation researcher. The daughters were Wilma Cannon Fairbank (who was married to John K. Fairbank), Linda Cannon Burgess, Helen Cannon Bond, and Marian Cannon Schlesinger, a painter and author living in Cambridge, Massachusetts. His actions and his statements suggest his philosophy of life. Born into a Calvinistic family, he broke away from religious authoritarianism and became independent of his prior dogma. Later in life, he stated that naturally occurring events are what make for a useful end. He took on the role of a naturalist, believing that the body and mind are inseparable as an organismic unit. In his view, the explanations of his work should enable man to live more wisely, happily, and intelligently without the interjection of supernatural interference. E. Digby Baltzell said that Dr. Cannon was once offered a job at the Mayo Clinic for twice his Harvard salary. Cannon declined, saying "I don't need twice as much money.
All I need is fifty cents for a haircut once a month, and fifty cents a day to get lunch." Cannon was elected to the American Academy of Arts and Sciences in 1906, the American Philosophical Society in 1908, and the United States National Academy of Sciences in 1914. Cannon supported animal experimentation and opposed the arguments of anti-vivisectionists. In 1911, he authored a booklet for the American Medical Association criticizing the arguments of anti-vivisectionists. Walter Cannon died on October 1, 1945, in Franklin, New Hampshire.
Work
Walter Cannon began his career in science as a Harvard undergraduate in 1892. Henry Pickering Bowditch, who had worked with Claude Bernard, directed the laboratory in physiology at Harvard. Here Cannon began his research: he used the newly discovered X-rays to study the mechanism of swallowing and the motility of the stomach. With his first experiments, he was able to watch the course of a button down a dog's esophagus. He said in his autobiography, The Way of an Investigator, "The whole purpose of my effort was to see the peristaltic waves to learn their effects. Only after some time did I note that the absence of activity was accompanied by signs of perturbation, and when serenity was restored the waves promptly reappeared." He demonstrated deglutition in a goose at the APS meeting in December 1896 and published his first paper on this research in the first issue of the American Journal of Physiology in January 1898. In 1945 Cannon summarized his career in physiology by describing his focus at different ages:
Age 26 – 40: digestion and the bismuth meal
Age 40 – 46: bodily effects of emotional excitement
Age 46 – 51: wound shock investigations
Age 51 – 59: stable states of the organism
Age 59 – 68: chemical mediation of nerve impulses (collaboration with Arturo Rosenblueth)
Age 68 + : chemical sensitivity of nerve-isolated organs
Scientific contributions
Use of salts of heavy metals in X-rays
He was one of the first researchers to mix salts of heavy metals (including bismuth subnitrate, bismuth oxychloride, and barium sulfate) into foodstuffs to improve the contrast of X-ray images of the digestive tract. The barium meal is a modern derivative of this research.
Fight or flight
In 1915, he coined the term fight or flight to describe an animal's response to threats in Bodily Changes in Pain, Hunger, Fear and Rage: An Account of Recent Researches into the Function of Emotional Excitement. He asserted that not only physical emergencies, such as blood loss from trauma, but also psychological emergencies, such as antagonistic encounters between members of the same species, evoke the release of adrenaline into the bloodstream. As per Cannon, adrenaline exerts several important effects on different body organs, all of which maintain homeostasis in fight-or-flight situations. For example, in the skeletal muscle of the limbs, adrenaline relaxes blood vessels, which increases local blood flow. Adrenaline constricts blood vessels in the skin and minimizes blood loss from physical trauma. Adrenaline also releases the key metabolic fuel, glucose, from the liver into the bloodstream. However, the fact that aggressive attack and fearful escape both involve adrenaline release into the bloodstream does not imply an equivalence of "fight" with "flight" from a physiological or biochemical point of view.
Wound shock
As a military physician in World War I he discovered that the blood of shocked men was acidic.
As a member of the British Medical Research Council's Special Committee on Shock and Allied Conditions, he advocated treating shocked wounded by infusing sodium bicarbonate to neutralize the acid. He and William Bayliss infused acid into an anesthetized cat, which died. However, a second trial done with Bayliss and Henry Dale failed to produce shock. The shock was successfully treated by infusing saline containing some larger molecules.
Homeostasis
He developed the concept of homeostasis from the earlier idea of Claude Bernard of the milieu interieur, and popularized it in his book The Wisdom of the Body. Cannon presented four tentative propositions to describe the general features of homeostasis:
Constancy in an open system requires mechanisms that act to maintain this system, just like our bodies. Cannon based this proposition on insights into steady states such as glucose concentrations, body temperature, and acid-base balance.
Steady-state conditions require that any tendency toward change automatically meets with factors that resist change. An increase in blood sugar results in thirst as the body attempts to dilute the concentration of sugar in the extracellular fluid.
The regulating system that determines the homeostatic state consists of many cooperating mechanisms acting simultaneously or successively. Blood sugar is regulated by insulin, glucagon, and other hormones that control its release from the liver or its uptake by the tissues.
Homeostasis does not occur by chance, but is the result of organized self-government.
The Sympathoadrenal System
Cannon proposed the existence and functional unity of the sympathoadrenal (or "sympathoadrenomedullary" or "sympathico-adrenal") system. He theorized that the sympathetic nervous system and the adrenal gland work together as a unit to maintain homeostasis in emergencies. To identify and quantify adrenaline release during stress, beginning in about 1919 Cannon exploited an ingenious experimental setup. He would surgically excise the nerves supplying the heart of a laboratory animal such as a dog or cat. Then he would subject the animal to a stressor and record the heart rate response. With the nerves to the heart removed, he could deduce that if the heart rate increased in response to the perturbation, then the increase in heart rate must have resulted from the actions of a hormone. Finally, he would compare the results of an animal with intact adrenal glands with those in an animal from which he had removed the adrenal glands. From the difference in the heart rate between the two animals, he could further infer that the hormone responsible for the increase in heart rate came from the adrenal glands. Moreover, the amount of increase in the heart rate provided a measure of the amount of hormone released. Cannon became so convinced that the sympathetic nervous system and adrenal gland functioned as a unit that in the 1930s he formally proposed that the sympathetic nervous system uses the same chemical messenger, adrenaline, as does the adrenal gland. Cannon's notion of a unitary sympathoadrenal system persists to this day. Researchers in the area have come to question the validity of the notion of a unitary sympathoadrenal system, although clinicians often continue to lump together the two components.
Cannon-Bard theory
Cannon developed the Cannon-Bard theory with physiologist Philip Bard to try to explain why people feel emotions first and then act upon them.
Dry mouth He put forward the Dry Mouth Hypothesis, stating that people get thirsty because their mouths get dry. He experimented on two dogs. He made incisions in their throats and inserted small tubes. Any water swallowed would go through their mouths and out by the tubes, never reaching their stomachs. He found out that these dogs would lap up the same amount of water as control dogs. Publication Cannon wrote several books and articles. 1910, A Laboratory Course in Physiology, Harvard University Press 6th ed. 1927. 1910, 'Medical Control of Vivisection' 1911, Some Characteristics of Antivivisection Literature 1911, The Mechanical Factors of Digestion 1915, Bodily Changes in Pain, Hunger, Fear and Rage 1920, Bodily Changes in Pain, Hunger, Fear and Rage (2 ed.) 1923, Traumatic Shock 1926, 'Physiological Regulation of Normal States' 1932, The Wisdom of the Body 1933, Some modern extensions of Beaumont's studies on Alexis St. Martin 1937, Digestion and Health 1937, Autonomic Neuro-effector Systems, with Arturo Rosenblueth 1942, '"Voodoo" Death' 1945, The Way of an Investigator: a scientist's experiences in medical research See also Cannon-Washburn Hunger Experiment (1912) References Further reading Benison, Saul A., Clifford Barger, Elin L. Wolfe (1987) Walter B. Cannon: The Life and Times of a Young Scientist. Cannon, Bradford. "Walter Bradford Cannon: Reflections on the Man and His Contributions". International Journal of Stress Management, vol. 1, no. 2, 1994. Kuznick, Peter. "The Birth of Scientific Activism". Bulletin of the Atomic Scientists, December 1988 Schlesinger, Marian Cannon. Snatched from Oblivion: A Cambridge Memoir. Boston: Little, Brown and Company, 1979. Wolfe, Elin L., A. Clifford Barger, Saul Benison (2000) Walter B. Cannon, Science and Society. External links 6th APS President at the American Physiological Society Walter Bradford Cannon: Experimental Physiologist: 1871-1945 - biography at Harvard Square Library Chapter 9 of Explorers of the Body, by Steven Lehrer (contains information about X-ray experiments) The Walter Bradford Cannon papers can be found at The Center for the History of Medicine at the Countway Library, Harvard Medical School. Walter Bradford Cannon, Homeostasis (1932) W. B. Cannon (1915), Bodily changes in pain, hunger, fear, and rage, New York: D. Appleton and Company 1871 births 1945 deaths American physiologists Cyberneticists Foreign members of the Royal Society Harvard College alumni Harvard Medical School alumni Harvard Medical School faculty Honorary members of the USSR Academy of Sciences People from Franklin, New Hampshire People from Prairie du Chien, Wisconsin Vivisection activists Writers from Massachusetts Writers from Wisconsin Members of the American Philosophical Society
Walter Bradford Cannon
[ "Chemistry" ]
2,727
[ "Vivisection activists", "Vivisection" ]
606,563
https://en.wikipedia.org/wiki/Bit%20bucket
In computing jargon, the bit bucket (or byte bucket) is where lost computerized data has gone, by any means; any data which does not end up where it is supposed to, whether lost in transmission, in a computer crash, or the like, is said to have gone to the bit bucket – that mysterious place on a computer where lost data goes.
History
Originally, the bit bucket was the container on teletype machines or IBM key punch machines into which chad from the paper tape punch or card punch was deposited; the formal name is "chad box" or (at IBM) "chip box". The term was then generalized into any place where useless bits go, a useful computing concept known as the null device. The term bit bucket is also used in discussions of bit shift operations. The bit bucket is related to the first-in-never-out buffer and write-only memory, described in a joke datasheet issued by Signetics in 1972. In a 1988 April Fool's article in Compute! magazine, Atari BASIC author Bill Wilkinson presented a POKE that implemented what he called a "WORN" (Write Once, Read Never) device, "a close relative of the WORM". In programming languages the term is used to denote a bitstream which does not consume any computer resources, such as CPU or memory, because it discards any data "written" to it. In .NET Framework-based languages, it is System.IO.Stream.Null.
See also
Black hole (networking)
Waste container metaphors
References
External links
Bit Bucket entry from The Jargon File (version 4.4.7)
Computer jargon Punched card
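As a concrete illustration of the null-device behaviour described in the article above, the short Python sketch below discards data by writing it to the operating system's null device (os.devnull, i.e. /dev/null on Unix or NUL on Windows). This is an illustrative analogue of .NET's System.IO.Stream.Null, not an excerpt from any particular program.

```python
import os

# Open the OS null device: everything written to it is discarded,
# the software equivalent of tossing chad into the bit bucket.
with open(os.devnull, "w") as bit_bucket:
    count = bit_bucket.write("these bytes are never stored anywhere\n")

# write() still reports how many characters were "written",
# even though the data is gone for good.
print(f"{count} characters sent to the bit bucket")
```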
Bit bucket
[ "Technology" ]
337
[ "Computing terminology", "Computer jargon", "Natural language and computing" ]
1,018,614
https://en.wikipedia.org/wiki/Tool-assisted%20speedrun
A tool-assisted speedrun or tool-assisted superplay (TAS) is generally defined as a speedrun or playthrough composed of precise inputs recorded with tools such as video game emulators. Tool-assisted speedruns are generally created with the goal of producing theoretically perfect playthroughs. This may include the fastest possible route to complete a game or showcasing new optimizations to existing world records. TAS requires research into the theoretical limits of the games and their respective competitive categories. The fastest categories have no restrictions and often involve a level of gameplay impractical or impossible for a human player, and those made according to real-time attack rules serve to research the limits of human players. The TAS developer has full control over the game's movement, per video frame, to record a sequence of fully precise inputs. Other tools include save states and branches, rewriting recorded inputs, splicing together best sequences, macros, and scripts to automate gameplay actions. These tools grant TAS creators precision and accuracy beyond a human player.
History
The term was coined during early Doom speedrunning. When Andy "Aurican" Kempling released a modified version of the Doom source code that made it possible to record demos in slow motion and in several sessions, players could for the first time start recording tool-assisted demos. A few months later, in June 1999, Finnish Esko Koskimaa, Swedish Peo Sjöblom, and Israeli Yonatan Donner opened the first site to share these demos, "Tools-Assisted Speedruns". In 2003, a video of a Japanese player named Morimoto completing the NES game Super Mario Bros. 3 in 11 minutes while performing stunts started floating around the Internet. The video was controversial, because not many people knew about tool-assisted speedruns, especially for the Nintendo Entertainment System. The video was not clearly labeled as such, so many people considered the use of an emulator to be cheating. It inspired Joel "Bisqwit" Yliluoma to start the NESvideos website for NES TASes; it was later renamed TASVideos. Tool-assisted speedruns have been made for some ROM hacks as well as for published games. In 2014, the speedrunning application TASBot was developed, capable of direct controller input.
Method
Creating a tool-assisted speedrun is the process of finding the optimal set of inputs to fulfill a given criterion, usually completing a game as fast as possible. No limits are imposed on the tools used for this search, but the result has to be a set of timed key-presses that, when played back on the actual console, achieves the target criterion. The basic method used to construct such a set of inputs is to record one's input while playing the game on an emulator, all the while saving and loading the emulator's state repeatedly to test out various possibilities and keep only the best result. To make this more precise, the game is slowed down. Initially, it was common to slow down to some low fraction of normal speed. However, due to advances in the field, it is now expected that the game is paused during recording, with emulation advanced one frame at a time, to eliminate mistakes made due to urgency. The use of savestates facilitates luck manipulation, which uses player input as entropy to produce favorable outcomes. Examples include making the ideal piece drop in Tetris, or getting a rare item drop from a defeated enemy.
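To make the savestate-and-frame-advance workflow described above concrete, here is a minimal Python sketch. ToyGame, savestate, and tas_search are stand-ins invented for illustration only; real TAS work is done in re-recording emulators (discussed next), whose actual state, input, and frame-advance interfaces differ from this toy model.

```python
import copy

class ToyGame:
    """Deterministic stand-in for an emulated game: one call to step()
    advances exactly one frame given a single button input."""
    def __init__(self):
        self.x = 0        # player position
        self.frame = 0

    def step(self, button):
        # Toy "physics": pressing "R" moves right, anything else stalls.
        self.x += 2 if button == "R" else 0
        self.frame += 1

    def finished(self):
        return self.x >= 20   # the "goal" for this miniature example


def savestate(game):
    # A savestate is simply a snapshot of the full machine state.
    return copy.deepcopy(game)


def tas_search(buttons=("R", "N")):
    """Frame-by-frame greedy search: from a savestate, try every input for
    the next frame and keep only the branch that makes the most progress."""
    game = ToyGame()
    inputs = []
    while not game.finished():
        snapshot = savestate(game)
        best = None
        for b in buttons:
            trial = copy.deepcopy(snapshot)   # load state, then frame-advance
            trial.step(b)
            if best is None or trial.x > best[1].x:
                best = (b, trial)
        inputs.append(best[0])
        game = best[1]
    return inputs, game.frame


if __name__ == "__main__":
    movie, frames = tas_search()
    print(f"completed in {frames} frames with inputs {movie}")
```

The same pattern (snapshot, try alternatives, keep the best, continue) is what makes frame-perfect input sequences and luck manipulation feasible in practice.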
Re-recording emulators
Tool-assisted speedrunning relies on the same series of inputs being played back at different times always giving the same results. The emulation must be deterministic with regard to the saved inputs, and random seeds must not change. Otherwise, a speedrun that was optimal on one playback might not even complete the game on a second playback. This desynchronization occurs when the state of the emulated machine at a particular time index no longer corresponds with that which existed at the same point in the movie's production. Desyncs can also be caused by incomplete savestates, which cause the emulated machine to be restored in a state different from that which existed when it was saved. Desyncs can also occur when a user attempts to match inputs from an input file downloaded from TASVideos and fails to match the correct enemy reactions due to bad AI or RNG.
Verification
Some players have fraudulently recorded speedruns, either by creating montages of other speedruns or by altering the playing time, and posting them as a TAS or RTA. Because tool-assisted speedruns can account for all aspects of the game code, including its inner workings, and press buttons precisely and accurately, they can be used to help verify whether an unassisted speedrun record is legitimate. One of the best-known cases is Billy Mitchell, whose Donkey Kong and Pac-Man Guinness records were revoked in 2018 because he used the emulator MAME. In 2018, the world record for Dragster by Todd Rogers was removed from Twin Galaxies and Guinness records after an experiment showed that his 5.51-second time was impossible to achieve even with a TAS.
Examples
In Super Mario Bros., the current Famicom and NES human-theory world record, created by Maru, stands at 4:57.54 (4:54.265 in RTA timing). In Super Mario Bros. 3, arbitrary code execution along with a credits warp allows injecting a hack that simulates a Unix-like console, providing extra features to Mario. The current TAS, standing at 216 milliseconds (13 frames), was performed by exploiting a small bug in the Famicom and NES hardware in which the CPU makes many extra "read" requests from one of the controller inputs, registering many more button presses than have occurred; the A button is mashed at a rate of 8 kilohertz (8,000 times per second), performing the credits warp glitch. In Super Mario World, arbitrary code execution allows injection of playable versions of Flappy Bird, Pong, Snake, and Super Mario Bros.
See also
Time attack – a mode which allows the player to finish a game (or a part of it) as fast as possible, saving record times.
Score attack – the attempt to reach a record point value in a game.
Electronic sports – video games that are played as competitive sports.
Piano roll
Meta Runner – a web series inspired by tool-assisted speedruns.
References
External links
TASVideos tool-assisted speedruns and resources
Speedrunning Video game terminology Cheating in video games
Tool-assisted speedrun
[ "Technology" ]
1,362
[ "Computing terminology", "Video game terminology" ]
1,018,637
https://en.wikipedia.org/wiki/Beach%20nourishment
Beach nourishment (also referred to as beach renourishment, beach replenishment, or sand replenishment) describes a process by which sediment, usually sand, lost through longshore drift or erosion is replaced from other sources. A wider beach can reduce storm damage to coastal structures by dissipating energy across the surf zone, protecting upland structures and infrastructure from storm surges, tsunamis and unusually high tides. Beach nourishment is typically part of a larger integrated coastal zone management program aimed at coastal defense. Nourishment is typically a repetitive process because it does not remove the physical forces that cause erosion; it simply mitigates their effects. The first nourishment project in the United States was at Coney Island, New York, in 1922 and 1923. It is now a common shore protection measure used by public and private entities.
History
The first nourishment project in the U.S. was constructed at Coney Island, New York in 1922–1923. Before the 1970s, nourishment involved directly placing sand on the beach and dunes. Since then more shoreface nourishments have been carried out, which rely on the forces of the wind, waves and tides to further distribute the sand along the shore and onto the beaches and dunes. The number and size of nourishment projects have increased significantly due to population growth and projected relative sea-level rise.
Erosion
Beach erosion is a specific subset of coastal erosion, which in turn is a form of erosion that alters coastal geography through beach morphodynamics. There are numerous incidences of the modern recession of beaches, mainly due to a gradient in longshore drift and coastal development hazards.
Causes of erosion
Beaches can erode naturally or due to human impact (beach theft/sand mining). Erosion is a natural response to storm activity. During storms, sand from the visible beach submerges to form sand bars that protect the beach. Submersion is only part of the cycle. During calm weather, smaller waves return sand from bars to the visible beach surface in a process called accretion. Some beaches do not have enough sand available for coastal processes to respond naturally to storms. When not enough sand is available, the beach cannot recover after storms. Many areas of high erosion are due to human activities. Reasons can include: seawalls locking up sand dunes, coastal structures like ports and harbors that prevent longshore transport, and dams and other river management structures. Continuous, long-term renourishment efforts, especially in cuspate-cape coastlines, can play a role in longshore transport inhibition and downdrift erosion. These activities interfere with natural sediment flows either through dam construction (thereby reducing riverine sediment sources), construction of littoral barriers such as jetties, or deepening of inlets, thus preventing longshore transport of sediment.
Types of shoreline protection approaches
Coastal engineering approaches to shoreline protection include:
Soft engineering: Beach nourishment is a type of soft approach which preserves beach resources and avoids the negative effects of hard structures. Instead, nourishment creates a "soft" (i.e., non-permanent) structure by creating a larger sand reservoir, pushing the shoreline seaward.
Hard engineering: Beach evolution and beach accretion can be facilitated by the four main types of hard engineering structures in coastal engineering, namely seawalls, revetments, groynes, and breakwaters.
The most commonly used hard structures are the seawall and series of "headland breakwaters" (breakwaters connected to the shore with groynes).
Managed retreat: the shoreline is left to erode, while buildings and infrastructure are relocated further inland.
Approach assessment
Advantages
Widens the beach.
Protects structures behind the beach.
Protects from storms.
Increases land value of nearby properties.
Grows the economy through tourism and recreation.
Expands habitat.
Practical, environmentally friendly approach to address erosional pressure.
Encourages vegetation growth to help stabilize tidal flats.
Disadvantages
Added sand may erode because of storms or lack of up-drift sand sources.
Expensive and requires repeated application.
Restricted access during nourishment.
Destroys or buries marine life.
Difficulty finding appropriate materials.
Considerations
Costs
Nourishment is typically a repetitive process, as nourishment mitigates the effects of erosion but does not remove the causes. A benign environment increases the interval between nourishment projects, reducing costs. Conversely, high erosion rates may render nourishment financially impractical. In many coastal areas, the economic impacts of a wide beach can be substantial. Since 1923, the U.S. has spent $9 billion to rebuild beaches. One of the most notable examples is the shoreline fronting Miami Beach, Florida, which was replenished over the period 1976–1981. The project cost approximately US$86 million and revitalized the area's economy. Prior to nourishment, in many places the beach was too narrow to walk along, especially during high tide. In 1998 an overview was made of all known beach nourishment projects in the USA (418 projects). The total volume of all these nourishments was 648 million cubic yards (about 495 million m3) with a total cost of US$3,387 million (adjusted to the 1996 price level). This works out to US$6.84 per m3. Between 2000 and 2020 the price per m3 went up considerably in the USA, while in Europe the price has gone down. Around the North Sea prices are much lower. In 2000 an inventory was made by the North Sea Coastal Management Group. More detailed data are available from the Netherlands; see the Netherlands section below. The price for nourishments in areas without an available dredging fleet is often on the order of €20–€30 per cubic meter.
Storm damage reduction
A wide beach is a good energy absorber, which is significant in low-lying areas where severe storms can impact upland structures. The effectiveness of wide beaches in reducing structural damage has been proven by field studies conducted after storms and through the application of accepted coastal engineering principles.
Environmental impact
Beach nourishment has significant impacts on local ecosystems. Nourishment may cause direct mortality to sessile organisms in the target area by burying them under the new sand. The seafloor habitat in both source and target areas is disrupted, e.g. when sand is deposited on coral reefs or when deposited sand hardens. Imported sand may differ in character (chemical makeup, grain size, non-native species) from that of the target environment. Light availability may be reduced, affecting nearby reefs and submerged aquatic vegetation. Imported sand may contain material toxic to local species. Removing material from near-shore environments may destabilize the shoreline, in part by steepening its submerged slope.
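A quick arithmetic check of the 1998 cost figures quoted in the Costs subsection above; the cubic-yard-to-cubic-metre conversion factor is my assumption and is not stated in the text.

```python
# 1998 U.S. inventory: 648 million cubic yards placed for US$3,387 million
# (1996 price level).
CUBIC_YARD_TO_M3 = 0.764555  # assumed conversion: 1 yd^3 is roughly 0.7646 m^3

volume_m3 = 648e6 * CUBIC_YARD_TO_M3   # roughly 495 million m^3
cost_per_m3 = 3387e6 / volume_m3       # roughly US$6.84 per m^3

print(f"volume ~ {volume_m3 / 1e6:.0f} million m^3, cost ~ ${cost_per_m3:.2f} per m^3")
```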
Related attempts to reduce future erosion may provide a false sense of security that increases development pressure. Sea turtles Newly deposited sand can harden and complicate nest-digging for turtles. However, nourishment can provide more/better habitat for them, as well as for sea birds and beach flora. Florida addressed the concern that dredge pipes would suck turtles into the pumps by adding a special grill to the dredge pipes. Material used The selection of suitable material for a particular project depends upon the design needs, environmental factors and transport costs, considering both short and long-term implications. The most important material characteristic is the sediment grain size, which must closely match the native material. Excess silt and clay fraction (mud) versus the natural turbidity in the nourishment area disqualifies some materials. Projects with unmatched grain sizes performed relatively poorly. Nourishment sand that is only slightly smaller than native sand can result in significantly narrower equilibrated dry beach widths compared to sand the same size as (or larger than) native sand. Evaluating material fit requires a sand survey that usually includes geophysical profiles and surface and core samples. Some beaches were nourished using a finer sand than the original. Thermoluminescence monitoring reveals that storms can erode such beaches far more quickly. This was observed at a Waikiki nourishment project in Hawaii. Profile nourishment Beach Profile Nourishment describes programs that nourish the full beach profile. In this instance, "profile" means the slope of the uneroded beach from above the water out to sea. The Gold Coast profile nourishment program placed 75% of its total sand volume below low water level. Some coastal authorities overnourish the below water beach (aka "nearshore nourishment") so that over time the natural beach increases in size. These approaches do not permanently protect beaches eroded by human activity, which requires that activity to be mitigated. Project impact measurements Nourishment projects usually involve physical, environmental and economic objectives. Typical physical measures include dry beach width/height, post-storm sand volume, post-storm damage avoidance assessments and aqueous sand volume. Environmental measures include marine life distribution, habitat and population counts. Economic impacts include recreation, tourism, flood and "disaster" prevention. Many nourishment projects are advocated via economic impact studies that rely on additional tourist expenditure. This approach is however unsatisfactory. First, nothing proves that these expenditures are incremental (they could shift expenditures from other nearby areas). Second, economic impact does not account for costs and benefits for all economic agents, as cost benefit analysis does. Techniques for incorporating nourishment projects into flood insurance costs and disaster assistance remain controversial. The performance of a beach nourishment project is most predictable for a long, straight shoreline without the complications of inlets or engineered structures. In addition, predictability is better for overall performance, e.g., average shoreline change, rather than shoreline change at a specific location. Nourishment can affect eligibility in the U.S. National Flood Insurance Program and federal disaster assistance. Nourishment may have the unintended consequence of promoting coastal development, which increases risk of other coastal hazards. 
Other shoreline protection approaches
Nourishment is not the only technique used to address eroding beaches. Others can be used singly or in combination with nourishment, driven by economic, environmental and political considerations. Human activities such as dam construction can interfere with natural sediment flows (thereby reducing riverine sediment sources). Construction of littoral barriers such as jetties and deepening of inlets can prevent longshore sediment transport.
Hard engineering or structural approach
The structural approach attempts to prevent erosion. Armoring involves building revetments, seawalls, detached breakwaters, groynes, etc. Structures that run parallel to the shore (seawalls or revetments) prevent erosion. While this protects structures, it doesn't protect the beach that is outside the wall. The beach generally disappears over a period that ranges from months to decades. Groynes and breakwaters that run perpendicular to the shore protect it from erosion. Filling a breakwater with imported sand can stop the breakwater from trapping sand from the littoral stream (the flow of water and sand along the shore). Otherwise the breakwater may deprive downstream beaches of sand and accelerate erosion there. Armoring may restrict beach/ocean access and enhance erosion of adjacent shorelines, and it requires long-term maintenance.
Managed retreat
Managed retreat moves structures and other infrastructure inland as the shoreline erodes. Retreat is more often chosen in areas of rapid erosion and in the presence of little or obsolete development.
Soft engineering approaches
Beach dewatering
Beaches grow and shrink depending on tides, precipitation, wind, waves and currents. Wet beaches tend to lose sand. Waves infiltrate dry beaches easily and deposit sandy sediment. Generally a beach is wet during falling tide, because the sea sinks faster than the beach drains. As a result, most erosion happens during falling tide. Beach drainage (beach dewatering) using Pressure Equalizing Modules (PEMs) allows the beach to drain more effectively during falling tide. Fewer hours of wet beach translate to less erosion. Permeable PEM tubes inserted vertically into the foreshore connect the different layers of groundwater. The groundwater enters the PEM tube, allowing gravity to conduct it to a coarser sand layer, where it can drain more quickly. The PEM modules are placed in a row from the dune to the mean low waterline. The distance between rows is project-specific. PEM systems come in different sizes. Modules connect layers with varying hydraulic conductivity. Air and water can enter and equalize pressure. PEMs are minimally invasive, typically covering approximately 0.00005% of the beach. The tubes are below the beach surface, with no visible presence. PEM systems have been installed on beaches in Denmark, Sweden, Malaysia and Florida. The effectiveness of beach dewatering has not been proven convincingly on full-scale beaches, in particular for sand beaches. Dewatering systems have been shown to lower the water table very significantly, but other morphodynamic effects generally overpower any stabilizing effect of dewatering for fine sediments, although some mixed results, with accretion on the upper beach associated with erosion on the middle and lower beach, have been reported.
This is in line with current knowledge of swash-groundwater sediment dynamics, which holds that the effects of infiltration/exfiltration flows through sand beds in the swash zone (associated with modification of the swash boundary layer, the relative weight of the sediment, and overall volume loss of the swash tongue) are generally smaller than other drivers, at least for fine sediments such as sand.
Recruitment
Appropriately constructed and sited fences can capture blowing sand, building or restoring sand dunes and progressively protecting the beach from the wind and the shore from blowing sand.
Dynamic revetment
Another approach is to create a dynamic revetment, a berm built from unmortared, unsorted rocks (cobbles). Seeds scattered among the cobbles can germinate to anchor the cobbles in place. Sand can collect and recreate a sandy beach. Leaving the rocks loose allows them to migrate and settle in a stable location. Separately, near the highest average waterline, a second berm around a meter in height can accelerate the recovery. This approach was employed at Washaway Beach in North Cove, Washington. Once the berms were in place, the beach expanded by some 15 meters in one year, and continued to grow. Projects in Washington, California, Europe, and Guam have adopted aspects of the technique.
Projects
The setting of a beach nourishment project is key to design and potential performance. Possible settings include a long straight beach, an inlet (either natural or modified), and a pocket beach. Rocky or seawalled shorelines, which otherwise have no sediment, present unique problems.
Cancun, Mexico
Hurricane Wilma hit the beaches of Cancun and the Riviera Maya in 2005. The initial nourishment project, at a cost of $19 million, was unsuccessful, leading to a second round that began in September 2009 and was scheduled to complete in early 2010 at a cost of $70 million. The project designers and the government committed to invest in beach maintenance to address future erosion. Project designers considered factors such as the time of year and sand characteristics such as density. Restoration in Cancun was expected to deliver sand to replenish the coastline.
Northern Gold Coast, Queensland, Australia
Gold Coast beaches in Queensland, Australia, have experienced periods of severe erosion. In 1967 a series of 11 cyclones removed most of the sand from Gold Coast beaches. The Government of Queensland engaged engineers from Delft University in the Netherlands to advise them. The 1971 Delft Report outlined a series of works for Gold Coast beaches, including beach nourishment and an artificial reef. By 2005 most of the recommendations had been implemented. The Northern Gold Coast Beach Protection Strategy (NGCBPS) was an A$10 million investment. The NGCBPS was implemented between 1992 and 1999 and the works were completed between 1999 and 2003. The project included dredging of compatible sand from the Gold Coast Broadwater and delivering it through a pipeline to nourish the beach between Surfers Paradise and Main Beach. The new sand was stabilized by an artificial reef constructed at Narrowneck out of huge geotextile sand bags. The new reef was designed to improve wave conditions for surfing. A key monitoring program for the NGCBPS is the ARGUS coastal camera system.
Netherlands
Background
More than one-quarter of the Netherlands is below sea level. The coastline along the North Sea is protected against flooding by natural sand dunes (only in the estuaries and behind the barrier islands are there no dunes).
This coastline has been eroding for centuries; in the 19th and early 20th centuries attempts were made to stop the erosion by constructing groynes, which was costly and not very successful. Beach nourishment was more successful, but there were questions about how to fund it. In the Coastal Memorandum of 1990 the government decided, after a very detailed study, that all erosion along the full Dutch coastline would be compensated by artificial beach nourishment. The shoreline is closely monitored by yearly recording of the cross section at points apart, to ensure adequate protection. Where long-term erosion is identified, beach nourishment using high-capacity suction dredgers is deployed. In 1990 the Dutch government decided to compensate, in principle, all coastal erosion by nourishment. This policy is still ongoing and successful. All costs are covered by the National Budget. A novel beach nourishment strategy was implemented in South Holland, where a new beach form was created using vast quantities of sand with the expectation that the sand would be distributed by natural processes to nourish the beach over many years (see Sand engine). Basic Coastline The basic coastline in the Netherlands is a representation of the low water line of 1990. This line is used to identify coastal erosion and coastal growth and to take measures if necessary. In the Coastal Memorandum, the Dutch Government decided to maintain the 1990 coastline by beach nourishment. The coastline in question is the low-water line. For practical application this definition turns out not to be unambiguous, which is why the Memorandum also defines the momentary coastline (also called the instantaneous coastline, MKL) and the basic coastline (BKL). Each year, the coastline to be tested (TKL) is determined on the basis of the MKL, and if it threatens to move landward of the BKL, a sand nourishment is carried out. Definition of the instantaneous coastline The problem with the low water line mentioned in the 1990 Coastal Memorandum is that the height of the average low tide is well defined, but the position in the horizontal direction is not. In the accompanying figure, the beach profile crosses the low water line three times. In fact, what matters is not maintaining a line as such, but maintaining the amount of sand in the active beach profile. To determine this volume, two heights are used: the average low water level (glw) and the height of the dune foot (dv). The height of the dune foot is basically determined by finding the intersection of the steep slope of the dune front with that of the dry beach. In general, this theoretical dune foot point will be slightly below the sand. It is very difficult to redefine the height of the dune foot every year. Some administrators define the dune foot line as a certain elevation line on which the dune foot usually lies. In relatively stable coastal sections this is an acceptable approach. The method of determining the MKL is such that it is not very sensitive to the precise choice of the value dv. The location of the dune foot is thus determined by the height above NAP (National Datum, approx. Mean Sea Level) and the distance from that elevation line to the administrative coastline (Xdv). This administrative line has no physical meaning, but is simply the basis for survey work.
The recipe for calculating the position of the MKL is: determine the location of the dune foot; determine the height of the average low water (glw); calculate the height h of the dune foot above average low water; calculate the sand volume A, defined as the volume of sand seaward of the dune foot and above the level (glw-h). The position of the momentary coastline (MKL) is then defined in relation to the national beach pole line as (A/2h) - Xdv. The background of this method is that the thickness of the sand layer to be taken into account should be a function of the governing wave height; however, that height is not known. But because the elevation of the dune foot is also a function of the governing wave height, the value h is a good representation of the effect of both tide and wave influences. For the determination of the beach profiles, the so-called JarKus profiles are measured along the coastline. These profiles are roughly 250 metres apart and are measured annually from around 800 metres offshore to just behind the dunes. These measurements are available along the entire coast from 1965 onwards. From about 1850 onwards profile soundings are also available in some places, but these are often slightly offset from the JarKus transects and are therefore more difficult to analyse. In the case of groynes, the sounding is carried out exactly in the middle between the groynes. The Basic Coastline (BKL) The Basic Coastline is by definition the coastline of 1 January 1990. Of course no measurements were made on exactly that date, and there are always variations in the measurements. The BKL is therefore determined by taking the beach measurements of the approximately 10 years prior to 1990 and by determining the MKL for each of those years. These values are plotted and a regression line is fitted; where this regression line crosses the date 1-1-1990 lies the basic coastline (BKL). In principle, the location of the BKL is immutable. In very special cases, where the coast is substantially altered by an engineering work, it can be decided to shift the BKL. This is not based on a technical or morphological calculation, but is in fact a political decision. An example of this is the Hondsbossche Zeewering, a sea dike near the village of Petten, where the BKL was actually on the toe of the dike. Due to the construction of a new artificial dune in front of this dike (the Hondsbossche Duinen), a piece of dune was added that is intended to be preserved, so the BKL was shifted seaward there. The coastline to be tested (TKL) Within the framework of the coastal policy it is determined annually whether nourishment is required in a given coastal sector. This is done by determining the coastline to be tested (TKL) for the reference date. This is determined in the same way as the BKL, namely by a regression analysis of the MKL values of the previous years. See the attached graph. In this example, a nourishment was carried out in 1990, causing the MKL to shift far seawards. The number of years over which the regression analysis can be carried out is therefore somewhat limited. If there are too few years available, a regression line is usually adopted parallel to the previous regression line (so it is assumed that the erosion before and after the nourishment is approximately the same). Incidentally, erosion in the first year after a nourishment is often above average due to adjustment effects. In this case, it appears that the TKL is still just satisfactory for 1995 and is no longer satisfactory for 1996.
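The volume-based bookkeeping behind the MKL can be illustrated with a short sketch. The sketch below assumes a single JarKus-style profile given as arrays of cross-shore distance and elevation; the function name, the crude dune-foot search and the trapezoidal integration are ours, not part of the official Rijkswaterstaat procedure, and the final line simply follows the (A/2h) - Xdv convention quoted above.

```python
import numpy as np

def mkl_position(x, z, glw, dune_foot_elev, x_dv):
    """Estimate the momentary coastline (MKL) position from one beach profile.

    x              : cross-shore distance [m], increasing seaward
    z              : bed elevation relative to NAP [m] at each x
    glw            : average low water level [m NAP]
    dune_foot_elev : elevation of the dune foot [m NAP]
    x_dv           : distance from the dune-foot elevation line to the administrative line [m]
    """
    h = dune_foot_elev - glw                       # height of the dune foot above average low water
    lower = glw - h                                # lower bound of the computation band
    # Crude dune-foot location: most seaward point still at or above dune-foot level
    above = np.where(z >= dune_foot_elev)[0]
    x_df = x[above[-1]] if above.size else x[0]
    # Sand thickness inside the band [glw - h, dune-foot level], seaward of the dune foot
    sel = x >= x_df
    thickness = np.clip(z[sel], lower, dune_foot_elev) - lower
    # Trapezoidal integration gives the volume A per metre of coastline
    A = float(np.sum(0.5 * (thickness[1:] + thickness[:-1]) * np.diff(x[sel])))
    return A / (2.0 * h) - x_dv                    # (A / 2h) - Xdv, as quoted in the text
```

Repeating this for every JarKus year gives the MKL series from which the BKL and TKL regressions described above are derived. Returning to the TKL example: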
In principle, a nourishment at this location would be required in the course of 1995. The decision to nourish does not, however, depend on a single BKL exceedance, but on whether multiple profiles threaten to become negative. In order to assess this, coastal maps are issued annually by Rijkswaterstaat. These maps indicate with a dark green or light green block whether the coast is growing or eroding. A red block indicates that in that place the TKL has exceeded the BKL, and that something has to happen there. A red hatched indicator means that the TKL has exceeded the BKL, but this coastal section has an accreting tendency, so no urgent works are needed. Beach nourishment design A beach nourishment to broaden the beach and maintain the coastline can be designed using mathematical calculation models or on the basis of beach measurements. In the Netherlands, Belgium and Germany, a nourishment design is mainly based on measurements, while mathematical models are mainly used elsewhere. A nourishment design for coastal maintenance and beach widening can be made much more reliably on the basis of measurement data, provided that they are available. If there are no good, long-term series of measurements of the beach profile, one must make the design using calculation models. In the Netherlands, the coast has been measured annually for years (JarKus measurements) and therefore the very reliable measurement-based method is used there for the design of nourishments to counter erosion. Use of measurements for nourishment design To compensate for coastal erosion, the design of a nourishment is actually very simple: each year the same amount of sand has to be applied as is lost annually to erosion. The assumption is that there is no significant change in the wave climate and the orientation of the coastline. With most nourishments, this is a correct assumption. In case of substantial changes in the coastal orientation, this method is therefore not always usable (e.g. in the design of the sand engine). In practice, the length of the nourishment must be 20-40 times the width in order to apply this method. In short, the method consists of the following steps: make sure there are enough measured profiles (at least 10 years); use these profiles to calculate the annual sand loss (in m3/year) for a coastal section; multiply this amount by an appropriate lifetime (e.g. 5 years); add a loss factor (order 40%); place this amount of sand somewhere on the beach between the low water line and the dune foot. To determine the amount of sand in the profile, the same method can be used as is used for the basic coastline. Since the instantaneous coastline, and thus its decline, has been measured for the necessary years, determining the loss of sand is quite simple. Suppose the decline of the MKL is 5 m/year; the annual sand loss is then 5*(2h) m3 per year per linear meter of coastline. Here 2h is the height of the active beach profile. Along the Dutch coast, near Hoek van Holland, h is on the order of 4 m, so in the above example the erosion would be 40 m3 per year per linear meter of coast. For a nourishment with a length of 4 km and a lifespan of 5 years the required volume is therefore 40*4000*5 = 800 000 m3. Because there is extra sand loss immediately after construction, a good amount is 1.4*800 000 = 1 120 000 m3. This corresponds to a seaward shift of 1.4*5*5 = 35 m. In the practice of beach nourishments (from 1990 onwards), this method appears to work very well.
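The worked example just given (5 m/yr of MKL retreat, an active profile height 2h of 8 m, 4 km of coastline, a 5-year lifetime and a 40% loss factor) amounts to a few multiplications, reproduced below; the function name is ours and the 1.4 factor is the allowance quoted in the text.

```python
def nourishment_design(mkl_retreat, h, length, lifetime, loss_factor=1.4):
    """Design volume for a maintenance nourishment from measured MKL retreat.

    mkl_retreat : annual landward shift of the MKL [m/yr]
    h           : half the active profile height [m] (the profile height is 2h)
    length      : alongshore length of the nourishment [m]
    lifetime    : design lifetime [yr]
    loss_factor : allowance for extra losses just after construction
    """
    annual_loss = mkl_retreat * 2 * h                     # m3 per year per metre of coastline
    volume = loss_factor * annual_loss * length * lifetime
    seaward_shift = loss_factor * mkl_retreat * lifetime  # initial shift of the coastline
    return volume, seaward_shift

volume, shift = nourishment_design(mkl_retreat=5, h=4, length=4000, lifetime=5)
print(f"{volume:,.0f} m3, seaward shift {shift:.0f} m")   # 1,120,000 m3 and 35 m
```

The same simple bookkeeping is what makes the measurement-based design method robust in practice.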
Analyses of nourishments in northern Germany also show that this is a reliable method. The starting point is that the grain size of the nourishment sand is equal to that of the original beach sand. If this is not the case, a correction must be applied: if the sand in the borrow area is finer, the volume of the nourishment will need to be increased. Use of mathematical models for nourishment design Single line model For a relatively wide and short nourishment (such as the sand motor), a single-line model can be used. In this model, the coast is represented by a single line (e.g. the instantaneous coastline) and a constant profile along the entire coastline. For each profile the orientation of the coast is given, and in each profile the sand transport by the surf-induced current is calculated. If the sand transport in profile 1 is larger than in profile 2, there will be sedimentation between profiles 1 and 2. As there is sedimentation, the coastal orientation will change, and thus also the transport of sand. This makes it possible to calculate the coastline change. A classic example is the calculation of a relatively short and wide nourishment under straight waves. The single-line model can predict very well how such a nourishment develops over time. The Unibest calculation model of Deltares is an example of a single-line model. Field models In highly two-dimensional situations, e.g. at a tidal inlet or the mouth of an estuary, or if the nourishment itself has a strong two-dimensional character (as with the Sand Engine), an approach with profile measurements is not possible. A single-line model is often inappropriate. In these cases, a two-dimensional sand transport model is made (usually with models such as Delft3D from Deltares in the Netherlands or Mike 21 of DHI in Denmark). In such a model, the bed of the area is introduced as a depth map. Then a tidal flow calculation and a wave penetration calculation are carried out. After that, the sand transport is calculated at each mesh point, and from the difference in sand transport between the different mesh points the sedimentation and erosion are calculated in all grid cells. It can then be assessed whether a nourishment behaves as intended. The problem with this type of model is that (apart from the fairly long computation times) the results are rather sensitive to inaccuracies in the input. For example, at the edge of the model the water levels and flow rates must be properly specified, and the wave climate must be well known. Variations in the sand composition (grain size) also have a great influence. Channel wall nourishment At some places along the Dutch coast tidal channels are very near to the beach. In the years from around 1990 these beaches were also nourished in the classical way, but the problem was that the beach there is narrow. The amount of sand that can be placed is therefore limited, resulting in a short lifetime of the nourishment. It was found that in such cases it is more effective to nourish the landward wall of the channel, and in some cases to use sand from the seaward side of the channel as a borrow area. This in effect moves the tidal channel further from the coastline. Foreshore nourishments Instead of nourishing the beach directly, it is also possible to nourish the foreshore (the underwater bank). The advantage of this is that the implementation of the nourishment is cheaper, and there is no direct effect of the work on the use of the beach.
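The single-line idea described above (coastline change follows from alongshore gradients in the wave-driven sand transport) can be sketched in a few lines. The following toy scheme is ours: it uses a small-angle transport law with a lumped coefficient and explicit time stepping, and is only meant to show the mechanism, not to reproduce Unibest or any other engineering model.

```python
import numpy as np

def one_line_step(y, dx, dt, wave_angle, q_coeff, profile_height):
    """One explicit time step of a toy single-line (coastline) model.

    y              : coastline position at each alongshore point [m]
    dx             : alongshore grid spacing [m]
    dt             : time step [s]
    wave_angle     : wave direction relative to the shore normal [rad]
    q_coeff        : lumped longshore-transport coefficient [m3/s per rad]
    profile_height : height of the active profile (2h) over which the coast shifts [m]
    """
    orientation = np.arctan(np.gradient(y, dx))        # local coastline orientation
    Q = q_coeff * (wave_angle - orientation)           # toy small-angle transport law
    # Sand conservation: dy/dt = -(1/profile_height) * dQ/dx
    return y - dt / profile_height * np.gradient(Q, dx)

# A short, wide nourishment diffusing under shore-normal waves
x = np.arange(0.0, 10_000.0, 100.0)
y = 200.0 * np.exp(-((x - 5_000.0) / 800.0) ** 2)      # initial seaward bump in the coastline
for _ in range(500):
    y = one_line_step(y, dx=100.0, dt=6 * 3600.0, wave_angle=0.0,
                      q_coeff=0.05, profile_height=8.0)
# y now shows the bump spreading alongshore while its crest lowers, the classic
# diffusive behaviour of a short, wide nourishment in a single-line model.
```

Returning to foreshore nourishments: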
The sand is then transported over time by the waves from deeper water to the coast. A foreshore nourishment is calculated just like a beach nourishment, but the use of measurement data with beach profiles is then less straightforward, as a foreshore nourishment does not give a new beach line. Therefore, in those cases, a single-line model or a field model is usually used. In the period 1990-2020 a total of 236 million cubic meters of sand was placed, mainly as beach nourishment. After 2004, however, the focus shifted more towards foreshore nourishment. In 2006 the costs of a number of nourishments were analysed in detail and compared for foreshore (F), beach (B) and combined (B+F) nourishments (price level 2006, excluding VAT). Hawaii Waikiki Hawaii planned to replenish Waikiki beach in 2010. Budgeted at $2.5 million, the project covered in an attempt to return the beach to its 1985 width. Former opponents supported this project, because the sand was to come from nearby shoals, reopening a blocked channel and leaving the overall local sand volume unchanged, while closely matching the "new" sand to existing materials. The project planned to apply up to of sand from deposits located offshore at a depth of . The project was larger than the prior recycling effort in 2006-07, which moved . Maui Maui, Hawaii illustrated the complexities of even small-scale nourishment projects. A project at Sugar Cove transported upland sand to the beach. The sand allegedly was finer than the original sand and contained excess silt that enveloped coral, smothering it and killing the small animals that lived in and around it. As in other projects, on-shore sand availability was limited, forcing consideration of more expensive offshore sources. A second project, along Stable Road, which attempted to slow rather than halt erosion, was stopped halfway toward its goal of adding of sand. The beaches had been retreating at a "comparatively fast rate" for half a century. The restoration was complicated by the presence of old seawalls, groins, piles of rocks and other structures. This project used sand-filled geotextile tube groins that were originally to remain in place for up to 3 years. A pipe was to transport sand from deeper water to the beach. The pipe was anchored by concrete blocks attached by fibre straps. A video showed the blocks bouncing off the coral in the current, killing whatever they touched. In places the straps broke, allowing the pipe to move across the reef, "planing it down". Bad weather exacerbated the damaging movement and killed the project. The smooth, cylindrical geotextile tubes could be difficult to climb over before they were covered by sand. Supporters claimed that 2010's seasonal summer erosion was less than in prior years, although the beach was narrower after the restoration ended than in 2008. Authorities were studying whether to require the project to remove the groins immediately. Potential alternatives to geotextile tubes for moving sand included floating dredges and/or trucking in sand dredged offshore. A final consideration was sea level rise and the fact that Maui is sinking under its own weight. Both Maui and Hawaii Island are built around massive mountains (Haleakala, Mauna Loa, and Mauna Kea) whose weight is deepening a giant dimple in the ocean floor, some below the mountain summits. The Outer Banks The Outer Banks off the coast of North Carolina and southeastern Virginia include a number of towns. Five of the six towns have undergone beach nourishment since 2011.
The projects were as follows: Duck, North Carolina - the beach nourishment took place in 2017 and cost an estimated $14,057,929. Southern Shores, North Carolina - the Southern Shores project cost an estimated $950,000 and was completed in 2017. There is a proposed additional project to widen the beaches in 2022 with an estimated cost of between $9 million and $13.5 million. Kitty Hawk, North Carolina - the beach nourishment project in Kitty Hawk was completed in 2017, included 3.58 miles of beaches running from Southern Shores to Kitty Hawk, and cost $18.2 million. Kill Devil Hills, North Carolina - the beach nourishment project was completed in 2017. Nags Head, North Carolina - the town's first beach nourishment project took place in 2011 and cost between $36 million and $37 million. The renourishment project in 2019 cost an estimated $25,546,711. Upcoming projects - the towns of Duck, Southern Shores, Kitty Hawk and Kill Devil Hills have secured a contract with Coastal Protection Engineering for tentative re-nourishment projects scheduled for 2022. United States Florida - Ninety PEMs (Pressure Equalizing Modules) were installed in February 2008 at Hillsboro Beach. After 18 months the beach had expanded significantly. Most of the PEMs were removed in 2011. Beach volume expanded by 38,500 cubic yards over 3 years, compared to an average annual loss of 21,000. New Jersey - Over decades, the U.S. Army Corps of Engineers has poured millions of cubic yards of sand slurry along the Jersey Shore. Costs for the project are shared by the Army Corps of Engineers, the state, and local municipalities. Although New Jersey's coastline is 1% of the U.S. coastline, from 1922 to 2022 more than $2.6 billion was expended on beach replenishment projects in the state, about 20% of the nation's total spending on beach replenishment. "Dredge and fill" operations began in 1989. Justifications for the projects, controversial within New Jersey, have included flood control, prevention of damage to waterfront residences, and protection of summer tourism along the shore, as well as public access to beaches. Critics, such as the Sierra Club and Surfrider Foundation, have argued that beach renourishment in the state is wasteful since the sand often washes away quickly; they argue for alternative policies to mitigate the effects of climate change, storm surges and rising sea levels, and argue that renourishment is effectively a subsidy for wealthy homeowners. Hong Kong The beach at Gold Coast was built as an artificial beach in the 1990s at a cost of HK$60 million. Sand is supplied periodically, especially after typhoons, to keep the beach viable.
See also Beach erosion and accretion Beach evolution Beach morphodynamics Raised beach Modern recession of beaches Paleoshoreline Integrated coastal zone management Coastal management, to prevent coastal erosion and creation of beach Coastal and oceanic landforms Coastal development hazards Coastal erosion Coastal geography Coastal engineering Coastal and Estuarine Research Federation (CERF) Sedimentation enhancing strategies Erosion Bioerosion Blowhole Natural arch Wave-cut platform Longshore drift Deposition (sediment) Coastal sediment supply Sand dune stabilization Submersion References External links Beach nourishment /NOAA and NOS / Main Page Beach Nourishment with Emphasis on Geological Characteristics Affecting Project Performance ARGUS Beach Nourishment Monitoring Program at the University of New South Wales USGS assessments and mapping of sand and gravel resources in U.S. offshore environments Beach nourishment, Coastal Care.org Coastal engineering Oceans
Beach nourishment
[ "Engineering" ]
7,878
[ "Coastal engineering", "Civil engineering" ]
1,018,642
https://en.wikipedia.org/wiki/Seawall
A seawall (or sea wall) is a form of coastal defense constructed where the sea, and associated coastal processes, impact directly upon the landforms of the coast. The purpose of a seawall is to protect areas of human habitation, conservation, and leisure activities from the action of tides, waves, or tsunamis. As a seawall is a static feature, it will conflict with the dynamic nature of the coast and impede the exchange of sediment between land and sea. Seawall designs factor in local climate, coastal position, wave regime (determined by wave characteristics and effectors), and value (morphological characteristics) of landform. Seawalls are hard engineering shore-based structures that protect the coast from erosion. Various environmental issues may arise from the construction of a seawall, including the disruption of sediment movement and transport patterns. Combined with a high construction cost, this has led to increasing use of other soft engineering coastal management options such as beach replenishment. Seawalls are constructed from various materials, most commonly reinforced concrete, boulders, steel, or gabions. Other possible construction materials include vinyl, wood, aluminum, fiberglass composite, and biodegradable sandbags made of jute and coir. In the UK, seawall also refers to an earthen bank used to create a polder, or a dike construction. The type of material used for construction is hypothesized to affect the settlement of coastal organisms, although the precise mechanism has yet to be identified. Types A seawall works by reflecting incident wave energy back into the sea, thus reducing the energy available to cause erosion. Seawalls have two specific weaknesses. Wave reflection from the wall may result in hydrodynamic scour and subsequent lowering of the sand level of the fronting beach. Seawalls may also accelerate the erosion of adjacent, unprotected coastal areas by affecting the littoral drift process. Designs of man-made tsunami barriers range from building reefs and forests to above-ground and submerged seawalls. Starting just weeks after the disaster, in January 2005, India began planting Casuarina and coconut saplings on its coast as a natural barrier against future disasters like the 2004 Indian Ocean earthquake. Studies have found that an offshore tsunami wall could reduce tsunami wave heights by up to 83%. The appropriate seawall design relies on location-specific aspects, including surrounding erosion processes. There are three main types of seawalls: vertical, curved or stepped, and mounds (see table below). Natural barriers A report published by the United Nations Environment Programme (UNEP) suggests that the tsunami of 26 December 2004 caused less damage in the areas where natural barriers were present, such as mangroves, coral reefs or coastal vegetation. A Japanese study of this tsunami in Sri Lanka used satellite imagery modelling to establish the parameters of coastal resistance as a function of different types of trees. Natural barriers, such as coral reefs and mangrove forests, prevent the spread of tsunamis and the flow of coastal waters and mitigate the flood and surge of water. Trade-offs A cost-benefit approach is an effective way to determine whether a seawall is appropriate and whether the benefits are worth the expense. Besides controlling erosion, consideration must be given to the effects of hardening a shoreline on natural coastal ecosystems and human property or activities.
A seawall is a static feature which can conflict with the dynamic nature of the coast and impede the exchange of sediment between land and sea. The table below summarizes some positive and negative effects of seawalls which can be used when comparing their effectiveness with other coastal management options, such as beach nourishment. Generally, seawalls can be a successful way to control coastal erosion, but only if they are constructed well and out of materials that can withstand the force of ongoing wave energy. Some understanding is needed of the coastal processes and morphodynamics specific to the seawall location. Seawalls can be very helpful; they can offer a more long-term solution than soft engineering options, additionally providing recreation opportunities and protection from extreme events as well as everyday erosion. Extreme natural events expose weaknesses in the performance of seawalls, and analyses of these can lead to future improvements and reassessment. Issues Sea level rise Sea level rise creates an issue for seawalls worldwide, as it raises both the mean normal water level and the height of waves during extreme weather events, which current seawall heights may be unable to cope with. The most recent analyses of long, good-quality tide gauge records (corrected for GIA and, when possible, for other vertical land motions by the Global Positioning System, GPS) indicate a mean rate of sea level rise of 1.6–1.8 mm/yr over the twentieth century. The Intergovernmental Panel on Climate Change (IPCC) (1997) suggested that sea level rise over the next 50–100 years will accelerate, with a projected increase in global mean sea level of +18 cm by 2050 AD. These data are reinforced by Hannah (1990), who calculated similar statistics, including a rise of between 16 and 19.3 cm over 1900–1988. Superstorm Sandy of 2012 is an example of the devastating effects rising sea levels can cause when mixed with a perfect storm. Superstorm Sandy sent a storm surge of 4–5 m onto New Jersey's and New York's barrier island and urban shorelines, causing an estimated $70 billion in damage. This problem could be addressed by further modeling to determine how much existing seawalls need to be raised and reinforced for safety to be ensured in both situations. Sea level rise will also cause a higher risk of flooding and taller tsunamis. Hydrostatic water pressure Seawalls, like all retaining walls, must relieve the buildup of water pressure. Water pressure buildup is caused when groundwater is not drained from behind the seawall. Groundwater against a seawall can come from the area's natural water table, rain percolating into the ground behind the wall, and waves overtopping the wall. The water table can also rise during periods of high water (high tide). Lack of adequate drainage can cause the seawall to buckle, move, bow, crack, or collapse. Sinkholes may also develop as the escaping water pressure erodes soil through or around the drainage system. Extreme events Extreme events also pose a problem, as it is not easy for people to predict or imagine the strength of hurricane or storm-induced waves compared to normal, expected wave patterns. An extreme event can dissipate hundreds of times more energy than everyday waves, and designing structures that will withstand the force of coastal storms is difficult; often the outcome becomes unaffordable.
For example, the Omaha Beach seawall in New Zealand was designed to prevent erosion from everyday waves only, and when a storm in 1976 carved out ten meters behind the existing seawall, the whole structure was destroyed. Ecosystem impacts The addition of seawalls near marine ecosystems can lead to increased shadowing effects in the waters surrounding the seawall. Shadowing reduces the light and visibility within the water, which may disrupt the distribution as well as foraging capabilities of certain species. The sediment surrounding seawalls tends to have less favorable physical properties (Higher calcification levels, less structural organization of crystalline structure, low silicon content, and less macroscale roughness) when compared to natural shorelines, which can present issues for species that reside on the seafloor. The Living Seawalls project, which was launched in Sydney, Australia, in 2018, aims to help many of the marine species in Sydney Harbour to flourish, thus enhancing its biodiversity, by modifying the design of its seawalls. It entails covering parts of the seawalls with specially-designed tiles that mimic natural microhabitats - with crevices and other features that more closely resemble natural rocks. In September 2021, the Living Seawalls project was announced as a finalist for the international environment award the Earthshot Prize. Since 2022 it has become part of Project Restore, under the auspices of the Sydney Institute of Marine Science. Other issues Some further issues include a lack of long-term trend data of seawall effects due to a relatively short duration of data records; modeling limitations and comparisons of different projects and their effects being invalid or unequal due to different beach types; materials; currents; and environments. Lack of maintenance is also a major issue with seawalls. In 2013, more than 5,000 feet (1,500 m) of seawall was found to be crumbling in Punta Gorda, Florida. Residents of the area pay hundreds of dollars each year for a seawall repair program. The problem is that most of the seawalls are over a half-century old and are being destroyed by only heavy downpours. If not kept in check, seawalls lose effectiveness and become expensive to repair. History and examples Seawall construction has existed since ancient times. In the first century BCE, Romans built a seawall or breakwater at Caesarea Maritima creating an artificial harbor (Sebastos Harbor). The construction used Pozzolana concrete which hardens in contact with seawater. Barges were constructed and filled with the concrete. They were floated into position and sunk. The resulting harbor/breakwater/seawall is still in existence today – more than 2000 years later. The oldest known coastal defense is believed to be a 100-meter row of boulders in the Mediterranean Sea off the coast of Israel. Boulders were positioned in an attempt to protect the coastal settlement of Tel Hreiz from sea rise following the last glacial maximum. Tel Hreiz was discovered in 1960 by divers searching for shipwrecks, but the row of boulders was not found until storms cleared a sand cover in 2012. More recently, seawalls were constructed in 1623 in Canvey Island, UK, when great floods of the Thames estuary occurred, prompting the construction of protection for further events in this flood-prone area. Since then, seawall design has become more complex and intricate in response to an improvement in materials, technology, and an understanding of how coastal processes operate. 
This section outlines some key case studies of seawalls in chronological order and describes how they have performed in response to tsunamis or ongoing natural processes and how effective they were in these situations. Analyzing the successes and shortcomings of seawalls during severe natural events exposes their weaknesses and makes areas for future improvement visible. Canada The Vancouver Seawall is a stone seawall constructed around the perimeter of Stanley Park in Vancouver, British Columbia. The seawall was initially constructed because waves created by ships passing through the First Narrows were eroding the area between Prospect Point and Brockton Point. Construction of the seawall began in 1917, and since then this pathway has become one of the most used features of the park by both locals and tourists; it now extends 22 km in total. The construction of the seawall also provided employment for relief workers during the Great Depression and for seamen on Deadman's Island who were facing punishment detail in the 1950s (Steele, 1985). Overall, the Vancouver Seawall is a prime example of how seawalls can simultaneously provide shoreline protection and a source of recreation which enhances human enjoyment of the coastal environment. It also illustrates that although shoreline erosion is a natural process, human activities, interactions with the coast, and poorly planned shoreline development projects can accelerate natural erosion rates. India On December 26, 2004, towering waves of the 2004 Indian Ocean earthquake tsunami crashed against India's south-eastern coastline, killing thousands. However, the former French colonial enclave of Pondicherry escaped unscathed. This was primarily due to French engineers who had constructed (and maintained) a massive stone seawall during the time when the city was a French colony. This 300-year-old seawall effectively kept Pondicherry's historic center dry even though tsunami waves drove water above the normal high-tide mark. The barrier was initially completed in 1735, and over the years the French continued to fortify the wall, piling huge boulders along its coastline to stop erosion from the waves pounding the harbor. At its highest, the barrier running along the water's edge reaches about above sea level. The boulders, some weighing up to a ton, are weathered black and brown. The seawall is inspected every year, and whenever gaps appear or the stones sink into the sand, the government adds more boulders to keep it strong. The Union Territory of Pondicherry recorded around 600 deaths from the huge tsunami waves that struck India's coast after the mammoth underwater earthquake (which measured 9.0 on the moment magnitude scale) off Indonesia, but most of those killed were fishermen who lived in villages beyond the artificial barrier, which reinforces the effectiveness of seawalls. Japan At least 43 percent of Japan's coastline is lined with concrete seawalls or other structures designed to protect the country against high waves, typhoons, or even tsunamis. During the 2011 Tōhoku earthquake and tsunami, the seawalls in most areas were overwhelmed. In Kamaishi, waves surmounted the seawall – the world's largest, erected a few years earlier in the city's harbor at a depth of , a length of and a cost of $1.5 billion – and eventually submerged the city center.
The risks of dependence on seawalls were most evident in the crisis at the Fukushima Dai-ichi and Fukushima Dai-ni nuclear power plants, both located along the coast close to the earthquake zone, as the tsunami washed over walls that were supposed to protect the plants. Arguably, the additional defense provided by the seawalls gave citizens an extra margin of time to evacuate and also absorbed some of the wave energy that would otherwise have driven the water higher into the backs of coastal valleys. On the other hand, the seawalls also trapped water and delayed its retreat. The failure of the world's largest seawall, which cost $1.5 billion to construct, suggests that building stronger seawalls to protect larger areas would have been even less cost-effective. In the case of the ongoing crisis at the nuclear power plants, higher and stronger seawalls should have been built if power plants were to be built at that site. Fundamentally, the devastation in coastal areas and a final death toll predicted to exceed 10,000 could push Japan to redesign its seawalls or consider more effective alternative methods of coastal protection for extreme events. Such hardened coastlines can also provide a false sense of security to property owners and local residents, as was evident in this situation. Seawalls along the Japanese coast have also been criticized for cutting settlements off from the sea, making beaches unusable, presenting an eyesore, disturbing wildlife, and being unnecessary. United States After 2012's Hurricane Sandy, New York City Mayor Bill de Blasio invested $3 billion in a hurricane restoration fund, with part of the money dedicated to building new seawalls and protection from future hurricanes. A New York Harbor Storm-Surge Barrier has been proposed, but not voted on or funded by Congress or the State of New York. In Florida, tiger dams are used to protect homes near the coast. See also Breakwater (structure) Mole (architecture) (Constantinople seawalls) References External links Channel Coastal Observatory – Seawalls Seawalls and defences on the Isle of Wight MEDUS (Maritime Engineering Division University Salerno) "Japan may rethink seawalls after tsunami", New York Times, March 14, 2011 General overview of residential and small commercial steel seawall construction Coastal engineering
Seawall
[ "Engineering" ]
3,204
[ "Coastal engineering", "Civil engineering" ]
1,018,652
https://en.wikipedia.org/wiki/Breakwater%20%28structure%29
A breakwater is a permanent structure constructed at a coastal area to protect against tides, currents, waves, and storm surges. Breakwaters have been built since antiquity to protect anchorages, helping isolate vessels from marine hazards such as wind-driven waves. A breakwater, also known in some contexts as a jetty or a mole, may be connected to land or freestanding, and may contain a walkway or road for vehicle access. Part of a coastal management system, breakwaters are installed parallel to the shore to minimize erosion. On beaches where longshore drift threatens the erosion of beach material, smaller structures on the beach may be installed, usually perpendicular to the water's edge. Their action on waves and current is intended to slow the longshore drift and discourage mobilisation of beach material. In this usage they are more usually referred to as groynes. Purposes Breakwaters reduce the intensity of wave action in inshore waters and thereby provide safe harbourage. Breakwaters may also be small structures designed to protect a gently sloping beach to reduce coastal erosion; they are placed offshore in relatively shallow water. An anchorage is only safe if ships anchored there are protected from the force of powerful waves by some large structure which they can shelter behind. Natural harbours are formed by such barriers as headlands or reefs. Artificial harbours can be created with the help of breakwaters. Mobile harbours, such as the D-Day Mulberry harbours, were floated into position and acted as breakwaters. Some natural harbours, such as those in Plymouth Sound, Portland Harbour, and Cherbourg, have been enhanced or extended by breakwaters made of rock. Types Types of breakwaters include vertical wall breakwater, mound breakwater and mound with superstructure or composite breakwater. A breakwater structure is designed to absorb the energy of the waves that hit it, either by using mass (e.g. with caissons), or by using a revetment slope (e.g. with rock or concrete armour units). In coastal engineering, a revetment is a land-backed structure whilst a breakwater is a sea-backed structure (i.e. water on both sides). Rubble Rubble mound breakwaters use structural voids to dissipate the wave energy. Rubble mound breakwaters consist of piles of stones more or less sorted according to their unit weight: smaller stones for the core and larger stones as an armour layer protecting the core from wave attack. Rock or concrete armour units on the outside of the structure absorb most of the energy, while gravels or sands prevent the wave energy's continuing through the breakwater core. The slopes of the revetment are typically between 1:1 and 1:2, depending upon the materials used. In shallow water, revetment breakwaters are usually relatively inexpensive. As water depth increases, the material requirements—and hence costs—increase significantly. Caisson Caisson breakwaters typically have vertical sides and are usually erected where it is desirable to berth one or more vessels on the inner face of the breakwater. They use the mass of the caisson and the fill within it to resist the overturning forces applied by waves hitting them. They are relatively expensive to construct in shallow water, but in deeper sites they can offer a significant saving over revetment breakwaters. An additional rubble mound is sometimes placed in front of the vertical structure in order to absorb wave energy and thus reduce wave reflection and horizontal wave pressure on the vertical wall. 
Such a design provides additional protection on the sea side and a quay wall on the inner side of the breakwater, but it can enhance wave overtopping. Wave absorbing caisson A similar but more sophisticated concept is a wave-absorbing caisson, including various types of perforation in the front wall. Such structures have been used successfully in the offshore oil industry, but also on coastal projects requiring rather low-crested structures (e.g. on an urban promenade where the sea view is an important aspect, as seen in Beirut and Monaco). In the latter, a project is presently under way at the Anse du Portier that includes 18 high wave-absorbing caissons. Wave attenuator Wave attenuators consist of concrete elements placed horizontally one foot under the free surface, positioned along a line parallel to the coast. Wave attenuators have four slabs facing the sea, one vertical slab, and two slabs facing the land; each slab is separated from the next by a space of . The row of four sea-facing and two land-facing slabs reflects the offshore wave by the action of the volume of water located under it which, made to oscillate under the effect of the incident wave, creates waves in phase opposition to the incident wave downstream from the slabs. Membrane breakwaters A submerged flexible mound breakwater can be employed for wave control in shallow water as an advanced alternative to conventional rigid submerged designs. In addition to the fact that the construction cost of submerged flexible mound breakwaters is lower than that of conventional submerged breakwaters, ships and marine organisms can pass over them if they are placed deep enough. These structures reduce the energy of the incident waves and prevent the generation of standing waves. Breakwater armour units As design wave heights get larger, rubble mound breakwaters require larger armour units to resist the wave forces. These armour units can be formed of concrete or natural rock. The largest standard grading for rock armour units given in CIRIA 683 "The Rock Manual" is 10–15 tonnes. Larger gradings may be available, but the ultimate size is limited in practice by the natural fracture properties of locally available rock. Shaped concrete armour units (such as Dolos, Xbloc, Tetrapod, etc.) can be provided in sizes up to approximately 40 tonnes (e.g. Jorf Lasfar, Morocco) before they become vulnerable to damage under self weight, wave impact and thermal cracking of the complex shapes during casting/curing. Where the very largest armour units are required for the most exposed locations in very deep water, armour units are most often formed of concrete cubes, which have been used up to ~195 tonnes for the tip of the breakwater at Punta Langosteira near La Coruña, Spain. Preliminary design of armour unit size is often undertaken using Hudson's equation, the Van der Meer formulae and, more recently, those of Van Gent et al.; these methods are all described in CIRIA 683 "The Rock Manual" and the United States Army Corps of Engineers Coastal Engineering Manual (available for free online) and elsewhere. For detailed design the use of scaled physical hydraulic models remains the most reliable method for predicting real-life behavior of these complex structures. Unintended consequences Breakwaters are subject to damage and overtopping in severe storms. Some may also have the effect of creating unique types of waves that attract surfers, such as The Wedge at the Newport breakwater.
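Returning to the armour sizing mentioned above, Hudson's equation gives a quick first estimate of the required median armour unit weight from the design wave height, the armour and water densities, the seaward slope and an empirical stability coefficient. The sketch below uses the classic form of the formula; the input values and the stability coefficient are purely illustrative, and real designs follow the cited manuals and physical model tests.

```python
def hudson_armour_weight(H, rho_rock=2650.0, rho_water=1025.0, kd=4.0, slope_cot=1.5, g=9.81):
    """Median armour unit weight W [N] from Hudson's equation.

    W = (rho_rock * g * H^3) / (K_D * (S_r - 1)^3 * cot(alpha))

    H         : design wave height at the structure [m]
    kd        : empirical stability coefficient (illustrative value here; it depends on
                armour type, breaking conditions and allowable damage)
    slope_cot : cot(alpha) of the seaward armour slope
    """
    s_r = rho_rock / rho_water                      # relative density of the armour material
    return rho_rock * g * H**3 / (kd * (s_r - 1.0)**3 * slope_cot)

weight_newtons = hudson_armour_weight(H=5.0)
print(f"{weight_newtons / (9.81 * 1000):.1f} tonnes")   # roughly 14 tonnes per unit
```

With these illustrative inputs the result falls in the 10–15 tonne range quoted above for the largest standard rock grading, which is one reason larger design wave heights quickly force a move to shaped concrete units or cubes.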
Sediment effects The dissipation of energy and relative calm water created in the lee of the breakwaters often encourage accretion of sediment (as per the design of the breakwater scheme). However, this can lead to excessive salient build up, resulting in tombolo formation, which reduces longshore drift shoreward of the breakwaters. This trapping of sediment can cause adverse effects down-drift of the breakwaters, leading to beach sediment starvation and increased coastal erosion. This may then lead to further engineering protection being needed down-drift of the breakwater development. Sediment accumulation in the areas surrounding breakwaters can cause flat areas with reduced depths, which changes the topographic landscape of the seabed. Salient formations as a result of breakwaters are a function of the distance the breakwaters are built from the coast, the direction at which the wave hits the breakwater, and the angle at which the breakwater is built (relative to the coast). Of these three, the angle at which the breakwater is built is most important in the engineered formation of salients. The angle at which the breakwater is built determines the new direction of the waves (after they've hit the breakwaters), and in turn the direction that sediment will flow and accumulate over time. Environmental effects The reduced heterogeneity in sea floor landscape introduced by breakwaters can lead to reduced species abundance and diversity in the surrounding ecosystems. As a result of the reduced heterogeneity and decreased depths that breakwaters produce due to sediment build up, the UV exposure and temperature in surrounding waters increase, which may disrupt surrounding ecosystems. Construction of detached breakwaters There are two main types of offshore breakwater (also called detached breakwater): single and multiple. Single, as the name suggests, means the breakwater consists of one unbroken barrier, while multiple breakwaters (in numbers anywhere from two to twenty) are positioned with gaps in between (). The length of the gap is largely governed by the interacting wavelengths. Breakwaters may be either fixed or floating, and impermeable or permeable to allow sediment transfer shoreward of the structures, the choice depending on tidal range and water depth. They usually consist of large pieces of rock (granite) weighing up to 10–15 tonnes each, or rubble-mound. Their design is influenced by the angle of wave approach and other environmental parameters. Breakwater construction can be either parallel or perpendicular to the coast, depending on the shoreline requirements. Notable locations UK – The Sound, Plymouth; Sea Palling, Norfolk; Elmer, West Sussex; Brixham, Devon; South Gare, North Yorkshire United States – Long Beach, California; Santa Monica, California; Winthrop Beach, Massachusetts; Colonial Beach, Virginia; Rockland, Maine Japan – Central Breakwater, Tokyo; Ishizaki (檜山石崎郵便局), Hokkaido Prefecture; Kaike, Tottori Prefecture India – Offshore Breakwater at Udangudi for captive coal jetty; Marine Drive, Mumbai; Vizhinjam International Seaport Thiruvananthapuram, Trivandrum UAE - Palm Islands, Dubai; Corniche, Abu Dhabi Hong Kong – Kai Tak Airport; Hong Kong International Airport Kuwait - Sabah Al Ahmad Sea City Australia - Newcastle, NSW Pakistan - Manora Breakwater, Port of Karachi See also Seawall Mole (architecture) Powell River Giant Hulks breakwater Phoenix breakwaters Port References USACE (1984) – Shore protection manual (Volume I and II) N.W.H. 
Allsop (2002) – Breakwaters, coastal structures and coastlines. External links USGS Oblique Aerial Photography — Coastal Erosion from El-Niño Winter Storms October, 1997 & April, 1998 Channel Coastal Observatory — Breakwaters Gallery Shapes of breakwater armour units and year of their introduction SeaBull Marine, Inc. — Shoreline Erosion Reversal Systems WaveBrake – Wave attenuation specialists IAS Breakwater in Facebook Coastal engineering Coastal construction
Breakwater (structure)
[ "Engineering" ]
2,243
[ "Construction", "Coastal engineering", "Coastal construction", "Civil engineering" ]
1,018,676
https://en.wikipedia.org/wiki/Field%20of%20sets
In mathematics, a field of sets is a mathematical structure consisting of a pair (X, F) consisting of a set X and a family F of subsets of X called an algebra over X that contains the empty set as an element, and is closed under the operations of taking complements in X, finite unions, and finite intersections. Fields of sets should not be confused with fields in ring theory nor with fields in physics. Similarly the term "algebra over X" is used in the sense of a Boolean algebra and should not be confused with algebras over fields or rings in ring theory. Fields of sets play an essential role in the representation theory of Boolean algebras. Every Boolean algebra can be represented as a field of sets. Definitions A field of sets is a pair (X, F) consisting of a set X and a family F of subsets of X called an algebra over X that has the following properties: (1) Closed under complementation in X: if A ∈ F then X \ A ∈ F. (2) Contains the empty set as an element: ∅ ∈ F. Assuming that (1) holds, this condition (2) is equivalent to: X ∈ F. (3) Any/all of the following equivalent conditions hold: closed under finite unions: if A, B ∈ F then A ∪ B ∈ F; closed under finite intersections: if A, B ∈ F then A ∩ B ∈ F. In other words, F forms a subalgebra of the power set Boolean algebra of X (with the same identity element X). Many authors refer to F itself as a field of sets. Elements of X are called points while elements of F are called complexes and are said to be the admissible sets of X. A field of sets is called a σ-field of sets and the algebra F is called a σ-algebra if the following additional condition (4) is satisfied: Any/both of the following equivalent conditions hold: closed under countable unions: A1 ∪ A2 ∪ A3 ∪ ... ∈ F for all A1, A2, A3, ... ∈ F; closed under countable intersections: A1 ∩ A2 ∩ A3 ∩ ... ∈ F for all A1, A2, A3, ... ∈ F. Fields of sets in the representation theory of Boolean algebras Stone representation For an arbitrary set Y, its power set 2^Y (or, somewhat pedantically, the pair (Y, 2^Y) of this set and its power set) is a field of sets. If Y is finite (namely, n-element), then 2^Y is finite (namely, 2^n-element). It appears that every finite field of sets (meaning (X, F) with F finite, while X may be infinite) admits a representation of the form (Y, 2^Y) with Y finite; it means a function f : X → Y that establishes a one-to-one correspondence between F and 2^Y via inverse image: S = f⁻¹(B) = {x ∈ X : f(x) ∈ B}, where S ∈ F and B ∈ 2^Y (that is, B ⊆ Y). One notable consequence: the number of complexes, if finite, is always of the form 2^n. To this end one chooses Y to be the set of all atoms of the given field of sets, and defines f by f(x) = S whenever x ∈ S for a point x ∈ X and a complex S ∈ F that is an atom; the latter means that a nonempty subset of S different from S cannot be a complex. In other words: the atoms are a partition of X; Y is the corresponding quotient set; and f is the corresponding canonical surjection. Similarly, every finite Boolean algebra can be represented as a power set – the power set of its set of atoms; each element of the Boolean algebra corresponds to the set of atoms below it (the join of which is the element). This power set representation can be constructed more generally for any complete atomic Boolean algebra. In the case of Boolean algebras which are not complete and atomic we can still generalize the power set representation by considering fields of sets instead of whole power sets. To do this we first observe that the atoms of a finite Boolean algebra correspond to its ultrafilters and that an atom is below an element of a finite Boolean algebra if and only if that element is contained in the ultrafilter corresponding to the atom. This leads us to construct a representation of a Boolean algebra by taking its set of ultrafilters and forming complexes by associating with each element of the Boolean algebra the set of ultrafilters containing that element. This construction does indeed produce a representation of the Boolean algebra as a field of sets and is known as the Stone representation.
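As a small, concrete companion to the definition above, the sketch below checks the closure conditions for a family of subsets of a finite set (for a finite family, closure under pairwise unions and intersections implies closure under all finite ones). The helper names are ours, and the example simply builds the field generated by a partition into atoms, so the number of complexes comes out as a power of two, as noted above.

```python
from itertools import combinations

def is_field_of_sets(X, F):
    """Check whether F, a collection of subsets of the finite set X, is a field of sets over X."""
    X = frozenset(X)
    F = {frozenset(A) for A in F}
    if not all(A <= X for A in F):
        return False                               # every complex must be a subset of X
    if frozenset() not in F:
        return False                               # contains the empty set
    if any(X - A not in F for A in F):
        return False                               # closed under complementation in X
    for A, B in combinations(F, 2):
        if A | B not in F or A & B not in F:
            return False                           # closed under finite unions and intersections
    return True

X = {1, 2, 3, 4}
atoms = [{1, 2}, {3}, {4}]                          # a partition of X
F = [set().union(*sel) for r in range(len(atoms) + 1) for sel in combinations(atoms, r)]
print(is_field_of_sets(X, F), len(F))               # True 8  (8 = 2**3 complexes)
```

The Stone representation just described is the key construction in what follows.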
It is the basis of Stone's representation theorem for Boolean algebras and an example of a completion procedure in order theory based on ideals or filters, similar to Dedekind cuts. Alternatively one can consider the set of homomorphisms onto the two element Boolean algebra and form complexes by associating each element of the Boolean algebra with the set of such homomorphisms that map it to the top element. (The approach is equivalent as the ultrafilters of a Boolean algebra are precisely the pre-images of the top elements under these homomorphisms.) With this approach one sees that Stone representation can also be regarded as a generalization of the representation of finite Boolean algebras by truth tables. Separative and compact fields of sets: towards Stone duality A field of sets is called separative (or differentiated) if and only if for every pair of distinct points there is a complex containing one and not the other. A field of sets is called compact if and only if for every proper filter over the intersection of all the complexes contained in the filter is non-empty. These definitions arise from considering the topology generated by the complexes of a field of sets. (It is just one of notable topologies on the given set of points; it often happens that another topology is given, with quite different properties, in particular, not zero-dimensional). Given a field of sets the complexes form a base for a topology. We denote by the corresponding topological space, where is the topology formed by taking arbitrary unions of complexes. Then is always a zero-dimensional space. is a Hausdorff space if and only if is separative. is a compact space with compact open sets if and only if is compact. is a Boolean space with clopen sets if and only if is both separative and compact (in which case it is described as being descriptive) The Stone representation of a Boolean algebra is always separative and compact; the corresponding Boolean space is known as the Stone space of the Boolean algebra. The clopen sets of the Stone space are then precisely the complexes of the Stone representation. The area of mathematics known as Stone duality is founded on the fact that the Stone representation of a Boolean algebra can be recovered purely from the corresponding Stone space whence a duality exists between Boolean algebras and Boolean spaces. Fields of sets with additional structure Sigma algebras and measure spaces If an algebra over a set is closed under countable unions (hence also under countable intersections), it is called a sigma algebra and the corresponding field of sets is called a measurable space. The complexes of a measurable space are called measurable sets. The Loomis-Sikorski theorem provides a Stone-type duality between countably complete Boolean algebras (which may be called abstract sigma algebras) and measurable spaces. A measure space is a triple where is a measurable space and is a measure defined on it. If is in fact a probability measure we speak of a probability space and call its underlying measurable space a sample space. The points of a sample space are called sample points and represent potential outcomes while the measurable sets (complexes) are called events and represent properties of outcomes for which we wish to assign probabilities. (Many use the term sample space simply for the underlying set of a probability space, particularly in the case where every subset is an event.) 
Measure spaces and probability spaces play a foundational role in measure theory and probability theory respectively. In applications to Physics we often deal with measure spaces and probability spaces derived from rich mathematical structures such as inner product spaces or topological groups which already have a topology associated with them - this should not be confused with the topology generated by taking arbitrary unions of complexes. Topological fields of sets A topological field of sets is a triple where is a topological space and is a field of sets which is closed under the closure operator of or equivalently under the interior operator i.e. the closure and interior of every complex is also a complex. In other words, forms a subalgebra of the power set interior algebra on Topological fields of sets play a fundamental role in the representation theory of interior algebras and Heyting algebras. These two classes of algebraic structures provide the algebraic semantics for the modal logic S4 (a formal mathematical abstraction of epistemic logic) and intuitionistic logic respectively. Topological fields of sets representing these algebraic structures provide a related topological semantics for these logics. Every interior algebra can be represented as a topological field of sets with the underlying Boolean algebra of the interior algebra corresponding to the complexes of the topological field of sets and the interior and closure operators of the interior algebra corresponding to those of the topology. Every Heyting algebra can be represented by a topological field of sets with the underlying lattice of the Heyting algebra corresponding to the lattice of complexes of the topological field of sets that are open in the topology. Moreover the topological field of sets representing a Heyting algebra may be chosen so that the open complexes generate all the complexes as a Boolean algebra. These related representations provide a well defined mathematical apparatus for studying the relationship between truth modalities (possibly true vs necessarily true, studied in modal logic) and notions of provability and refutability (studied in intuitionistic logic) and is thus deeply connected to the theory of modal companions of intermediate logics. Given a topological space the clopen sets trivially form a topological field of sets as each clopen set is its own interior and closure. The Stone representation of a Boolean algebra can be regarded as such a topological field of sets, however in general the topology of a topological field of sets can differ from the topology generated by taking arbitrary unions of complexes and in general the complexes of a topological field of sets need not be open or closed in the topology. Algebraic fields of sets and Stone fields A topological field of sets is called algebraic if and only if there is a base for its topology consisting of complexes. If a topological field of sets is both compact and algebraic then its topology is compact and its compact open sets are precisely the open complexes. Moreover, the open complexes form a base for the topology. Topological fields of sets that are separative, compact and algebraic are called Stone fields and provide a generalization of the Stone representation of Boolean algebras. 
Given an interior algebra we can form the Stone representation of its underlying Boolean algebra and then extend this to a topological field of sets by taking the topology generated by the complexes corresponding to the open elements of the interior algebra (which form a base for a topology). These complexes are then precisely the open complexes and the construction produces a Stone field representing the interior algebra - the Stone representation. (The topology of the Stone representation is also known as the McKinsey–Tarski Stone topology after the mathematicians who first generalized Stone's result for Boolean algebras to interior algebras and should not be confused with the Stone topology of the underlying Boolean algebra of the interior algebra which will be a finer topology). Preorder fields A preorder field is a triple where is a preordered set and is a field of sets. Like the topological fields of sets, preorder fields play an important role in the representation theory of interior algebras. Every interior algebra can be represented as a preorder field with its interior and closure operators corresponding to those of the Alexandrov topology induced by the preorder. In other words, for all : and Similarly to topological fields of sets, preorder fields arise naturally in modal logic where the points represent the possible worlds in the Kripke semantics of a theory in the modal logic S4, the preorder represents the accessibility relation on these possible worlds in this semantics, and the complexes represent sets of possible worlds in which individual sentences in the theory hold, providing a representation of the Lindenbaum–Tarski algebra of the theory. They are a special case of the general modal frames which are fields of sets with an additional accessibility relation providing representations of modal algebras. Algebraic and canonical preorder fields A preorder field is called algebraic (or tight) if and only if it has a set of complexes which determines the preorder in the following manner: if and only if for every complex , implies . The preorder fields obtained from S4 theories are always algebraic, the complexes determining the preorder being the sets of possible worlds in which the sentences of the theory closed under necessity hold. A separative compact algebraic preorder field is said to be canonical. Given an interior algebra, by replacing the topology of its Stone representation with the corresponding canonical preorder (specialization preorder) we obtain a representation of the interior algebra as a canonical preorder field. By replacing the preorder by its corresponding Alexandrov topology we obtain an alternative representation of the interior algebra as a topological field of sets. (The topology of this "Alexandrov representation" is just the Alexandrov bi-coreflection of the topology of the Stone representation.) While representation of modal algebras by general modal frames is possible for any normal modal algebra, it is only in the case of interior algebras (which correspond to the modal logic S4) that the general modal frame corresponds to topological field of sets in this manner. Complex algebras and fields of sets on relational structures The representation of interior algebras by preorder fields can be generalized to a representation theorem for arbitrary (normal) Boolean algebras with operators. For this we consider structures where is a relational structure i.e. a set with an indexed family of relations defined on it, and is a field of sets. 
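For the preorder fields described above, the interior and closure operators induced by the Alexandrov topology of a preorder can be written out explicitly. The sketch below assumes the common convention in which the open sets are the upper (upward-closed) sets; under the opposite convention the inequalities are reversed.

```latex
% Interior and closure of a complex S under the Alexandrov topology of a preorder <=,
% taking the open sets to be the upper sets (one common convention).
\operatorname{int}(S) = \{\, x \in X : \forall y \in X,\ x \le y \implies y \in S \,\}, \qquad
\operatorname{cl}(S)  = \{\, x \in X : \exists y \in S,\ x \le y \,\}.
```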
The complex algebra (or algebra of complexes) determined by a field of sets on a relational structure, is the Boolean algebra with operators where for all if is a relation of arity then is an operator of arity and for all This construction can be generalized to fields of sets on arbitrary algebraic structures having both operators and relations as operators can be viewed as a special case of relations. If is the whole power set of then is called a full complex algebra or power algebra. Every (normal) Boolean algebra with operators can be represented as a field of sets on a relational structure in the sense that it is isomorphic to the complex algebra corresponding to the field. (Historically the term complex was first used in the case where the algebraic structure was a group and has its origins in 19th century group theory where a subset of a group was called a complex.) See also Notes References Goldblatt, R., Algebraic Polymodal Logic: A Survey, Logic Journal of the IGPL, Volume 8, Issue 4, p. 393-450, July 2000 Goldblatt, R., Varieties of complex algebras, Annals of Pure and Applied Logic, 44, p. 173-242, 1989 Naturman, C.A., Interior Algebras and Topology, Ph.D. thesis, University of Cape Town Department of Mathematics, 1991 Patrick Blackburn, Johan F.A.K. van Benthem, Frank Wolter ed., Handbook of Modal Logic, Volume 3 of Studies in Logic and Practical Reasoning, Elsevier, 2006 External links Algebra of sets, Encyclopedia of Mathematics. Boolean algebra Families of sets
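In symbols, the operator attached to each relation in the complex-algebra construction is usually stated as follows; this is a standard formulation rather than a quotation, and conventions for the order of arguments vary between authors.

```latex
% For a field of sets F on a relational structure (X, (R_i)_{i \in I}): each
% (n+1)-ary relation R gives an n-ary operator f_R on complexes S_1, ..., S_n.
f_R(S_1,\dots,S_n)=
  \{\, x\in X : \exists\, s_1\in S_1,\dots,\exists\, s_n\in S_n
     \ \text{with}\ R(s_1,\dots,s_n,x) \,\}.
```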
Field of sets
[ "Mathematics" ]
3,065
[ "Boolean algebra", "Mathematical logic", "Combinatorics", "Fields of abstract algebra", "Basic concepts in set theory", "Families of sets" ]
1,018,774
https://en.wikipedia.org/wiki/Joseph%20M.%20Acaba
Joseph Michael Acabá (born May 17, 1967) is an American educator, hydrogeologist, and NASA astronaut. In May 2004, he became the first person of Puerto Rican ancestry to be named as a NASA astronaut candidate, when he was selected as a member of NASA Astronaut Training Group 19. He completed his training on February 10, 2006, and was assigned to STS-119, which flew from March 15 to 28, 2009, to deliver the final set of solar arrays to the International Space Station. He is the first person of Puerto Rican origin, and the twelfth of fifteen people of Ibero-american heritage to have flown to space. Acabá served as a flight engineer aboard the International Space Station, having launched on May 15, 2012. He arrived at the space station on May 17 and returned to Earth on September 17, 2012. Acaba returned to the International Space Station in 2017 as a member of Expedition 53/54. In 2023, Acaba was appointed the Chief of the Astronaut Office. Early life and education Acaba's parents, Ralph and Elsie Acabá, from Hatillo, Puerto Rico, moved in the mid-1960s to Inglewood, California, where he was born. They later moved to Anaheim, California. Since his childhood, Acaba enjoyed reading, especially science fiction. In school, he excelled in both science and math. As a child, his parents constantly exposed him to educational films, but it was the 8-mm film showing astronaut Neil Armstrong's Moon landing that intrigued him about outer space. During his senior year in high school, Acaba became interested in scuba diving and became a certified scuba diver through a job training program at his school. This experience inspired him to further his academic education in geology. In 1985, he graduated with honors from Esperanza High School in Anaheim. In 1990, Acaba received his bachelor's degree in geology from the University of California, Santa Barbara, and in 1992, he earned his master's degree in geology from the University of Arizona. Acaba was a sergeant in the United States Marine Corps Reserve where he served for six years. He also worked as a hydrogeologist in Los Angeles, California. Acaba spent two years in the United States Peace Corps and trained over 300 teachers in the Dominican Republic in modern teaching methodologies. He then served as island manager of the Caribbean Marine Research at Lee Stocking Island in the Exumas, Bahamas. Upon his return to the United States, Acaba moved to Florida, where he became shoreline revegetation coordinator in Vero Beach. He taught one year of science and math in high school and four years at Dunnellon Middle School. He also briefly taught at Melbourne High School in Melbourne, Florida. Upon his return to Florida in fall 2012, Acaba began coursework in the College of Education at Texas Tech University. He earned his Master of Education, curriculum and instruction from Texas Tech University in 2015. NASA career On May 6, 2004, Acaba and ten other people were selected from 99 applicants by NASA as astronaut candidates. NASA's administrator, Sean O'Keefe, in the presence of John Glenn, announced the members of the "19th group of Astronaut Candidates", an event which has not been repeated since 1958 when the original group of astronauts was presented to the world. Acaba, who was selected as an Educator Mission Specialist, completed his astronaut training on February 10, 2006, along with the other ten astronaut candidates. 
Upon completion of his training, Acaba was assigned to the Hardware Integration Team in the International Space Station branch, working technical issues with European Space Agency (ESA) hardware. STS-119 Acaba was assigned to the crew of STS-119 as mission specialist educator, which was launched on March 15, 2009, at 7:43 p.m., after NASA engineers repaired a leaky gas venting system the previous week, to deliver the final set of solar arrays to the International Space Station. Acaba, who carried on his person a Puerto Rican flag, requested that the crew be awakened on March 19 (Day 5) with the Puerto Rico folklore song "Qué Bonita Bandera" (What a Beautiful Flag) referring to the Puerto Rican flag, written in 1971 by Florencio Morales Ramos (Ramito) and sung by Jose Gonzalez and Banda Criolla. On March 20, he provided support to the first mission spacewalk. On March 21, he performed a spacewalk with Steve Swanson in which he helped to successfully unfurl the final "wings" of the solar array that will augment power to the ISS. 2 days later, Acaba performed his second EVA of the mission, with crew member Ricky Arnold. The main task of the EVA was to help move the CETA carts outside of the station to a different location. On March 28 the and its seven-person crew safely touched down on runway 15 at NASA's Kennedy Space Center in Florida at 3:14 p.m. EDT. Acaba said he was amazed at the views from the space station. Expedition 31/32 On May 15, 2012, Acaba was one of three crew members launching from Kazakhstan aboard the Soyuz TMA-04M spacecraft to the International Space Station. He and his fellow crew members, Gennady Padalka and Sergei Revin, arrived and docked with the space station two days after launch, on May 17 at 4:36 UTC. Acaba, Padalka, and Revin returned to Earth on September 17, 2012, after nearly 125 days in space. Between space missions Acaba served as the Branch Chief of the International Space Station Operations branch. The office is responsible for mission preparation and on-orbit support of space station crews. Until being selected as a flight engineer for Expedition 54\Expedition 55 Acaba served as Director of Operations Russia in Star City supporting crew training in Soyuz and Russian Segment systems. In September 2019 Acaba served as cavenaut in ESA CAVES training (between Italy and Slovenia) spending six nights underground simulating a mission exploring another planet. Expedition 53/54 In 2017 it was announced that Acaba would return to the ISS for his third mission, onboard Soyuz MS-06. The Soyuz vehicle was originally slated to launch with a crew of 2, due to the Russian crew cuts on the ISS for 2017, however, at short notice, it was decided that the 3rd seat would be filled by an experienced astronaut and would be funded by Roscosmos to cancel out owed debts. Acaba's backup for the mission was Shannon Walker, who was scheduled to fly as prime crew on Soyuz MS-12 as part of Expedition 59/60, although as of December 2018, she is not assigned to that crew Acaba launched on Soyuz MS-06 on September 12, 2017, performing a 6-hour rendezvous with the ISS. On October 20, 2017, Acaba and Randy Bresnik performed an EVA to continue with the lubrication of the new end effector on the robotic arm and to install new cameras. The duration was 6 hours and 49 minutes. During the mission Acaba's home in Houston was flooded by Hurricane Harvey and Hurricane Maria struck his native Puerto Rico. 
Statistics Chief of the Astronaut Office In February 2023, Acaba became Chief of the Astronaut Office at NASA. Acaba replaced Drew Feustel who was acting chief after Reid Wiseman stepped down from the position. Recognition On March 18, 2008, Acaba was honored by the Senate of Puerto Rico, which sponsored his first trip to the Commonwealth of Puerto Rico since being selected for space flight. During his visit, which was announced by then President of the Puerto Rican Senate, Kenneth McClintock, he met with schoolchildren at the Capitol, as well as at the Bayamón, Puerto Rico Science Park, which includes a planetarium and several surplus NASA rockets among its exhibits. Acaba returned to Puerto Rico on June 1, 2009. During his visit, he was presented with a proclamation by Governor Luis Fortuño. He spent seven days on the island and came into contact with over 10,000 persons, most of them schoolchildren. He received the Ana G. Mendez University System Presidential Medal and an Honorary Doctorate from the Polytechnic University of Puerto Rico, where he inaugurated a flight simulator on February 7, 2013, during one of his visits to Puerto Rico to promote the study of math and science among students, as well as to visit his relatives. Caras Magazine named him one of the most influential and exciting Puerto Ricans of 2012. See also References External links Spacefacts biography of Joseph Acaba NASA biography Video of NASA HQ Social event December 2012 1967 births American educators American expatriates in the Dominican Republic American people of Puerto Rican descent Aquanauts Crew members of the International Space Station Educator astronauts Esperanza High School alumni Hispanic and Latino American educators Hispanic and Latino American military personnel Hispanic and Latino American scientists Living people Military personnel from California NASA civilian astronauts People from Inglewood, California Puerto Rican aviators Puerto Rican United States Marines Scientists from Anaheim, California Space Shuttle program astronauts Spacewalkers Texas Tech University alumni United States Marine Corps astronauts United States Marine Corps reservists United States Marines University of Arizona alumni University of California, Santa Barbara alumni
Joseph M. Acaba
[ "Astronomy" ]
1,874
[ "Educator astronauts", "Astronomy education" ]
1,018,816
https://en.wikipedia.org/wiki/Thermoprotei
The Thermoprotei is a class of the Thermoproteota. Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI). See also List of Archaea genera References Further reading Scientific journals Scientific books External links Archaea classes Thermoproteota
Thermoprotei
[ "Biology" ]
84
[ "Archaea", "Archaea stubs" ]
1,018,849
https://en.wikipedia.org/wiki/Perseus%20Arm
The Perseus Arm is one of two major spiral arms of the Milky Way galaxy. The second major arm is called the Scutum–Centaurus Arm. The Perseus Arm begins from the distal end of the long Milky Way central bar. Previously thought to be 13,000 light-years away, it is now thought to lie 6,400 light-years from the Solar System. Overview The Milky Way is a barred spiral galaxy with two major arms and a number of minor arms or spurs. The Perseus Spiral Arm, with a radius of approximately 10.7 kiloparsecs, is located between the minor Cygnus and Carina–Sagittarius Arms. It is named after the constellation Perseus, in whose direction it is seen from Earth. Scientists in two large radio astronomy projects, the Bar and Spiral Structure Legacy (BeSSeL) Survey and the Japanese VLBI Exploration of Radio Astrometry (VERA), have spent about 20 years measuring trigonometric parallaxes toward about 200 water vapor (H2O) and methanol (CH3OH) masers in massive star-forming regions of the Milky Way. These parallax measurements have been used to delineate the forms of the spiral arms over Galactic longitudes of 2 to 240 degrees, and the arm traces have been extended into the portion of the Milky Way seen from the Southern Hemisphere using arm tangencies based on carbon monoxide emission. The resulting map presents the Milky Way as a barred spiral galaxy with four fairly symmetric major arms and some extra arm segments and spurs. The Perseus Arm is one of the four major arms. The arm is more than 60,000 light-years long and about 1,000 light-years wide, with a pitch angle of roughly 9 degrees. The local spur known as the Orion–Cygnus Arm, which includes the Solar System and Earth and lies inside the Perseus Arm, has been speculated to be a branch of it, but this is unconfirmed. The Perseus Arm contains the Double Cluster and a number of Messier objects: The Crab Nebula (M1) Open Cluster M36 Open Cluster M37 Open Cluster M38 Open Cluster M52 Open Cluster M103. See also Galactic disc List of Messier objects References External links Messier Objects in the Milky Way Galaxy (SEDS) Milky Way arms Galactic astronomy Spiral galaxies
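As a rough illustration of the trigonometric parallax technique used by the BeSSeL and VERA surveys mentioned above, the distance follows directly from the measured parallax angle; the numbers below are illustrative, not measurements from those surveys.

```latex
% Trigonometric parallax: distance in parsecs is the reciprocal of the parallax
% angle in arcseconds. Illustrative values only:
d=\frac{1}{p[\text{arcsec}]}\ \text{pc};\qquad
p=0.5\ \text{mas}=5\times10^{-4}\ \text{arcsec}
\;\Rightarrow\; d=2000\ \text{pc}\approx 6500\ \text{light-years},
```

which is of the same order as the 6,400 light-year distance quoted above.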
Perseus Arm
[ "Astronomy" ]
496
[ "Galactic astronomy", "Astronomical sub-disciplines" ]
1,018,854
https://en.wikipedia.org/wiki/Scutum%E2%80%93Centaurus%20Arm
The Scutum–Centaurus Arm, also known as the Scutum–Crux Arm, is a long, diffuse curving streamer of stars, gas and dust that spirals outward from the proximate end of the Milky Way's central bar. The Milky Way has been posited since the 1950s to have four spiral arms; numerous studies have contested or qualified this number. In 2008, observations using the Spitzer Space Telescope failed to show the expected density of red clump giants in the direction of the Sagittarius and Norma arms. In January 2014, a 12-year study of the distribution and lifespan of massive stars, together with a study of the distribution of masers and open clusters reported in 2013, found corroborating, though not conclusive, evidence for four principal spiral arms. The Scutum–Centaurus Arm lies between the minor Carina–Sagittarius Arm and the minor Norma Arm. The Scutum–Centaurus Arm starts near the core as the Scutum Arm, then gradually turns into the Centaurus Arm. The region where the Scutum–Centaurus Arm connects to the bar of the galaxy is rich in star-forming regions and open clusters. In 2006 a large cluster of new stars containing 14 red supergiant stars was discovered there and named RSGC1. In 2007 a cluster of approximately 50,000 newly formed stars named RSGC2 was located only a few hundred light-years from RSGC1. It is thought to be less than 20 million years old and contains 26 red supergiant stars, the largest grouping of such stars known. Other clusters in this region include RSGC3 and Alicante 8. See also Galactic disc References Milky Way arms Galactic astronomy Spiral galaxies
Scutum–Centaurus Arm
[ "Astronomy" ]
361
[ "Galactic astronomy", "Astronomical sub-disciplines" ]
1,018,868
https://en.wikipedia.org/wiki/Astrology%20and%20astronomy
Astrology and astronomy were archaically treated together (), but gradually distinguished through the Late Middle Ages into the Age of Reason. Developments in 17th century philosophy resulted in astrology and astronomy operating as independent pursuits by the 18th century. Whereas the academic discipline of astronomy studies observable phenomena beyond the Earth's atmosphere, the pseudoscience of astrology uses the apparent positions of celestial objects as the basis for divination. Overview In pre-modern times, most cultures did not make a clear distinction between the two disciplines, putting them both together as one. In ancient Babylonia, famed for its astrology, there were not separate roles for the astronomer as predictor of celestial phenomena, and the astrologer as their interpreter; both functions were performed by the same person. In ancient Greece, pre-Socratic thinkers such as Anaximander, Xenophanes, Anaximenes, and Heraclides speculated about the nature and substance of the stars and planets. Astronomers such as Eudoxus (contemporary with Plato) observed planetary motions and cycles, and created a geocentric cosmological model that would be accepted by Aristotle. This model generally lasted until Ptolemy, who added epicycles to explain the retrograde motion of Mars. (Around 250 BC, Aristarchus of Samos postulated a proto-heliocentric theory, which would not be reconsidered for nearly two millennia (Copernicus), as Aristotle's geocentric model continued to be favored.) The Platonic school promoted the study of astronomy as a part of philosophy because the motions of the heavens demonstrate an orderly and harmonious cosmos. In the third century BC, Babylonian astrology began to make its presence felt in Greece. Astrology was criticized by Hellenistic philosophers such as the Academic Skeptic Carneades and Middle Stoic Panaetius. However, the notions of the Great Year (when all the planets complete a full cycle and return to their relative positions) and eternal recurrence were Stoic doctrines that made divination and fatalism possible. In the Hellenistic world, the Greek words 'astrologia' and 'astronomia' were often used interchangeably, but they were conceptually not the same. Plato taught about 'astronomia' and stipulated that planetary phenomena should be described by a geometrical model. The first solution was proposed by Eudoxus. Aristotle favored a physical approach and adopted the word 'astrologia'. Eccentrics and epicycles came to be thought of as useful fictions. For a more general public, the distinguishing principle was not evident and either word was acceptable. For the Babylonian horoscopic practice, the words specifically used were 'apotelesma' and 'katarche', but otherwise it was subsumed under the aristotelian term 'astrologia'. In his compilatory work Etymologiae, Isidore of Seville noted explicitly the difference between the terms astronomy and astrology (Etymologiae, III, xxvii) and the same distinction appeared later in the texts of Arabian writers. Isidore identified the two strands entangled in the astrological discipline and called them astrologia naturalis and astrologia superstitiosa. Astrology was widely accepted in medieval Europe as astrological texts from Hellenistic and Arabic astrologers were translated into Latin. In the late Middle Ages, its acceptance or rejection often depended on its reception in the royal courts of Europe. 
Not until the time of Francis Bacon was astrology rejected as a part of scholastic metaphysics rather than empirical observation. A more definitive split between astrology and astronomy in the West took place gradually in the seventeenth and eighteenth centuries, when astrology was increasingly thought of as an occult science or superstition by the intellectual elite. Because of their lengthy shared history, it sometimes happens that the two are confused with one another even today. Many contemporary astrologers, however, do not claim that astrology is a science, but think of it as a form of divination like the I-Ching, an art, or a part of a spiritual belief structure (influenced by trends such as Neoplatonism, Neopaganism, Theosophy, and Hinduism). Distinguishing characteristics The primary goal of astronomy is to understand the physics of the universe. Astrologers use astronomical calculations for the positions of celestial bodies along the ecliptic and attempt to correlate celestial events (astrological aspects, sign positions) with earthly events and human affairs. Astronomers consistently use the scientific method, naturalistic presuppositions and abstract mathematical reasoning to investigate or explain phenomena in the universe. Astrologers use mystical or religious reasoning as well as traditional folklore, symbolism and superstition blended with mathematical predictions to explain phenomena in the universe. The scientific method is not consistently used by astrologers. Astrologers practice their discipline geocentrically and they consider the universe to be harmonious, changeless and static, while astronomers have employed the scientific method to infer that the universe is without a center and is dynamic, expanding outward per the Big Bang theory. Astrologers believe that the position of the stars and planets determine an individual's personality and future. Astronomers study the actual stars and planets, but have found no evidence supporting astrological theories. Psychologists study personality, and while there are many theories of personality, no mainstream theories in that field are based on astrology. (The Myers-Briggs personality typology, based on the works of Carl Jung, has four major categories that correspond to the astrological elements of fire, air, earth, and water. This theory of personality is used by career counselors and life coaches but not by psychologists.) Both astrologers and astronomers see Earth as being an integral part of the universe, that Earth and the universe are interconnected as one cosmos (not as being separate and distinct from each other). However, astrologers philosophically and mystically portray the cosmos as having a supernatural, metaphysical and divine essence that actively influences world events and the personal lives of people. Astronomers, as members of the scientific community, cannot use in their scientific articles explanations that are not derived from empirically reproducible conditions, irrespective of their personal convictions. Historical divergence For a long time the funding from astrology supported some astronomical research, which was in turn used to make more accurate ephemerides for use in astrology. In Medieval Europe the word Astronomia was often used to encompass both disciplines as this included the study of astronomy and astrology jointly and without a real distinction; this was one of the original Seven Liberal Arts. 
Kings and other rulers generally employed court astrologers to aid them in the decision making in their kingdoms, thereby funding astronomical research. University medical students were taught astrology as it was generally used in medical practice. Astronomy and astrology diverged over the course of the 17th through 19th centuries. Copernicus did not practice astrology (nor empirical astronomy; his work was theoretical), but the most important astronomers before Isaac Newton were astrologers by profession—Tycho Brahe, Johannes Kepler, and Galileo Galilei. Also relevant here was the development of better timekeeping instruments, initially for aid in navigation; improved timekeeping made it possible to make more exact astrological predictions—predictions which could be tested, and which consistently proved to be false. By the end of the 18th century, astronomy was one of the major sciences of the Enlightenment model, using the recently codified scientific method, and was altogether distinct from astrology. Astrology and Zodiac Signs in the Modern Age Astrology is considered by many philosophers and astronomers to be a false representation of the universe that individuals may use to associate the movement of the celestial bodies to their own ideas of human life and spirituality. Although many scholars consider astrology to be a pseudoscience, those that believe in zodiac signs and their meanings will argue the opposite, and these followers will support their claims with explanations for how and why the universe is connected to the human condition. The most popular and well-known form of astrology is seen in horoscopes that people are exposed to through social media, popular news outlets, and digital media. The horoscopes allow people interested in astrology and zodiac signs to associate planets like Mars to human emotions such as drive and courage, and further increase the notion that these planets and their motions have an effect on their daily lives. Although astrology was considered factual predictions in ancient science, in modern times it is used as a spiritual belief system for many people. Ancient forms of astrology often combined with astronomy, but eventually split into separate paths during the time of Copernicus, Kepler, and Galileo. Zodiac signs in modern times are constructed from constellations seen across the earth, and they are used to associate human emotions and tendencies with the stars and heavenly bodies. In some ways, astrology has become somewhat of a pseudo-religion, due to the emphasis put on the meanings of constellations and how they relate to each individual. One may "judge" another person based on their zodiac sign, simply because there are unique listed traits carried by each sign, which reflect on the person who it refers to. The signs that are attributed to individuals are based on the time of year that each individual was born in. For example, people born between about 20 April and 20 May will carry the zodiac sign of Taurus, and those born between about 23 July and 22 August carry the Leo sign. Despite the many individuals that consider zodiac astrology to be factual, many consider the horoscope meanings to be false and simply participate in this modern astrology for enjoyment. Lastly, zodiac signs and astrology in the modern era are very different from the astrology of the ancient world. 
The minimal technology, knowledge, and expertise of the ancient world allowed for the combination of astrology and astronomy to become the generally accepted explanation for the universe and its impact on human lives. Whereas in current times, astrology and astronomy are extremely different. Zodiac signs and horoscopes are a product of cultural developments (such as the internet) that allow for easy access to information on the horoscopes through social media, tabloids and news outlets that benefit from promoting these aspects of astrology. Many individuals that are interested in horoscopes are not aware that the signs and their respective dates are inaccurate, and do not have any basis in science. Due to the "trendy" nature of zodiac signs and their popularity, it is widely recognized as part of global culture. * Scorpio is not visible through the full period. Instead, the constellation Ophiuchus is visible during this time and so is a proposed 13th zodiac sign. See also History of astronomy History of astrology Natal chart Panchangam The Sophia Centre Treatise on the Astrolabe References Further reading External links The Geoffrey Chaucer Page: Astrology & Astronomy, Harvard University An Astronomer Looks at Astrology History of astronomy Astronomy Ancient astronomy Philosophy of astronomy History of science
Astrology and astronomy
[ "Astronomy", "Technology" ]
2,251
[ "Astrology", "Philosophy of astronomy", "History of astronomy", "History of science", "History of science and technology", "Ancient astronomy" ]
1,018,876
https://en.wikipedia.org/wiki/Norma%20Arm
The Norma Arm is a minor spiral arm of the Milky Way extending from and around its central hub region. The inner portion of the arm is called the Norma Arm in the narrow sense. Its outer end is identified either with the Cygnus Arm (not to be confused with the local, minor Orion–Cygnus Arm), which lies outside the Perseus Arm, or with the Outer Arm, which is located farther from the center of the Galaxy than the Cygnus Arm. The Norma Arm begins at the Galactic Center and extends outward to a radius of . It is named for the constellation Norma, through which the arm passes as seen from Earth. Like many other galaxies of similar type, the Milky Way consists of a large mass of stars shaped into the form of a relatively flat disc by gravity. The disc is rotating, with the dense central body of stars moving at greater speeds than those toward the rim of the disc. As a result, the pattern of stars within the Galaxy as viewed from directly above or below the disc has formed into a spiral. Because of localised gravitational variations, the spiral pattern has itself formed several distinct 'spiral arms', where particularly large numbers of stars can be found. See also Galactic disc References Milky Way arms Galactic astronomy Spiral galaxies
Norma Arm
[ "Astronomy" ]
259
[ "Galactic astronomy", "Astronomical sub-disciplines" ]
1,018,951
https://en.wikipedia.org/wiki/List%20of%20convexity%20topics
This is a list of convexity topics, by Wikipedia page. Alpha blending - the process of combining a translucent foreground color with a background color, thereby producing a new blended color. This is a convex combination of two colors allowing for transparency effects in computer graphics. Barycentric coordinates - a coordinate system in which the location of a point of a simplex (a triangle, tetrahedron, etc.) is specified as the center of mass, or barycenter, of masses placed at its vertices. The coordinates are non-negative for points in the convex hull. Borsuk's conjecture - a conjecture about the number of pieces required to cover a body with a larger diameter. Solved by Hadwiger for the case of smooth convex bodies. Bond convexity - a measure of the non-linear relationship between price and yield duration of a bond to changes in interest rates, the second derivative of the price of the bond with respect to interest rates. A basic form of convexity in finance. Carathéodory's theorem (convex hull) - If a point x of Rd lies in the convex hull of a set P, there is a subset of P with d+1 or fewer points such that x lies in its convex hull. Choquet theory - an area of functional analysis and convex analysis concerned with measures with support on the extreme points of a convex set C. Roughly speaking, all vectors of C should appear as "averages" of extreme points. Complex convexity — extends the notion of convexity to complex numbers. Convex analysis - the branch of mathematics devoted to the study of properties of convex functions and convex sets, often with applications in convex minimization. Convex combination - a linear combination of points where all coefficients are non-negative and sum to 1. All convex combinations are within the convex hull of the given points. Convex and Concave - a print by Escher in which many of the structure's features can be seen as both convex shapes and concave impressions. Convex body - a compact convex set in a Euclidean space whose interior is non-empty. Convex conjugate - a dual of a real functional in a vector space. Can be interpreted as an encoding of the convex hull of the function's epigraph in terms of its supporting hyperplanes. Convex curve - a plane curve that lies entirely on one side of each of its supporting lines. The interior of a closed convex curve is a convex set. Convex function - a function in which the line segment between any two points on the graph of the function lies above the graph. Closed convex function - a convex function all of whose sublevel sets are closed sets. Proper convex function - a convex function whose effective domain is nonempty and it never attains minus infinity. Concave function - the negative of a convex function. Convex geometry - the branch of geometry studying convex sets, mainly in Euclidean space. Contains three sub-branches: general convexity, polytopes and polyhedra, and discrete geometry. Convex hull (aka convex envelope) - the smallest convex set that contains a given set of points in Euclidean space. Convex lens - a lens in which one or two sides is curved or bowed outwards. Light passing through the lens is converged (or focused) to a spot behind the lens. Convex optimization - a subfield of optimization, studies the problem of minimizing convex functions over convex sets. The convexity property can make optimization in some sense "easier" than the general case - for example, any local minimum must be a global minimum. 
Convex polygon - a 2-dimensional polygon whose interior is a convex set in the Euclidean plane. Convex polytope - an n-dimensional polytope which is also a convex set in Euclidean n-dimensional space. Convex set - a set in Euclidean space that contains every line segment between any two of its points. Convexity (finance) - refers to non-linearities in a financial model. When the price of an underlying variable changes, the price of an output does not change linearly, but depends on the higher-order derivatives of the modeling function. Geometrically, the model is no longer flat but curved, and the degree of curvature is called the convexity. Duality (optimization) Epigraph (mathematics) - for a function f : Rn→R, the set of points lying on or above its graph Extreme point - for a convex set S in a real vector space, a point in S which does not lie in any open line segment joining two points of S. Fenchel conjugate Fenchel's inequality Fixed-point theorems in infinite-dimensional spaces - generalise the Brouwer fixed-point theorem. They have applications, for example, to the proof of existence theorems for partial differential equations Four vertex theorem - every convex curve has at least 4 vertices. Gift wrapping algorithm - an algorithm for computing the convex hull of a given set of points Graham scan - a method of finding the convex hull of a finite set of points in the plane with time complexity O(n log n); a hull-construction sketch follows this list Hadwiger conjecture (combinatorial geometry) - any convex body in n-dimensional Euclidean space can be covered by 2^n or fewer smaller bodies homothetic with the original body. Hadwiger's theorem - a theorem that characterizes the valuations on convex bodies in Rn. Helly's theorem Hyperplane - a subspace whose dimension is one less than that of its ambient space Indifference curve Infimal convolution Interval (mathematics) - a set of real numbers with the property that any number that lies between two numbers in the set is also included in the set Jarvis march Jensen's inequality - relates the value of a convex function of an integral to the integral of the convex function John ellipsoid - E(K) associated to a convex body K in n-dimensional Euclidean space Rn is the ellipsoid of maximal n-dimensional volume contained within K. 
Lagrange multiplier - a strategy for finding the local maxima and minima of a function subject to equality constraints Legendre transformation - an involutive transformation on the real-valued convex functions of one real variable Locally convex topological vector space - example of topological vector spaces (TVS) that generalize normed spaces Macbeath regions Mahler volume - a dimensionless quantity that is associated with a centrally symmetric convex body Minkowski's theorem - any convex set in Rn which is symmetric with respect to the origin and has volume greater than 2^n d(L) contains a non-zero lattice point of L Mixed volume Mixture density Newton polygon - a tool for understanding the behaviour of polynomials over local fields Radon's theorem - on convex sets, that any set of d + 2 points in Rd can be partitioned into two disjoint sets whose convex hulls intersect Separating axis theorem Shapley–Folkman lemma - a result in convex geometry with applications in mathematical economics that describes the Minkowski addition of sets in a vector space Shephard's problem - a geometrical question Simplex - a generalization of the notion of a triangle or tetrahedron to arbitrary dimensions Simplex method - a popular algorithm for linear programming Subdifferential - generalization of the derivative to functions which are not differentiable Supporting hyperplane - a hyperplane meeting certain conditions Supporting hyperplane theorem - the theorem that a convex set in Rn has a supporting hyperplane at every boundary point Mathematics-related lists Mathematical analysis Topics
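As an illustration of hull construction, the following Python sketch implements Andrew's monotone-chain algorithm, a close relative of the Graham scan with the same O(n log n) bound; the function and variable names are illustrative, not taken from any particular library.

```python
# Andrew's monotone-chain convex hull (a close relative of the Graham scan,
# with the same O(n log n) bound); names are illustrative.

def cross(o, a, b):
    """z-component of the cross product (a - o) x (b - o); > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return the vertices of the convex hull in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # drop the duplicated endpoints

if __name__ == "__main__":
    print(convex_hull([(0, 0), (1, 1), (2, 2), (2, 0), (0, 2), (1, 0)]))
    # -> [(0, 0), (2, 0), (2, 2), (0, 2)]
```

The monotone-chain variant avoids the polar-angle sort of the classic Graham scan while keeping the same asymptotic cost.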
List of convexity topics
[ "Mathematics" ]
1,518
[ "Mathematical analysis" ]
1,019,002
https://en.wikipedia.org/wiki/Dirac%20measure
In mathematics, a Dirac measure assigns a size to a set based solely on whether it contains a fixed element x or not. It is one way of formalizing the idea of the Dirac delta function, an important tool in physics and other technical fields. Definition A Dirac measure is a measure on a set (with any -algebra of subsets of ) defined for a given and any (measurable) set by where is the indicator function of . The Dirac measure is a probability measure, and in terms of probability it represents the almost sure outcome in the sample space . We can also say that the measure is a single atom at ; however, treating the Dirac measure as an atomic measure is not correct when we consider the sequential definition of Dirac delta, as the limit of a delta sequence. The Dirac measures are the extreme points of the convex set of probability measures on . The name is a back-formation from the Dirac delta function; considered as a Schwartz distribution, for example on the real line, measures can be taken to be a special kind of distribution. The identity which, in the form is often taken to be part of the definition of the "delta function", holds as a theorem of Lebesgue integration. Properties of the Dirac measure Let denote the Dirac measure centred on some fixed point in some measurable space . is a probability measure, and hence a finite measure. Suppose that is a topological space and that is at least as fine as the Borel -algebra on . is a strictly positive measure if and only if the topology is such that lies within every non-empty open set, e.g. in the case of the trivial topology . Since is probability measure, it is also a locally finite measure. If is a Hausdorff topological space with its Borel -algebra, then satisfies the condition to be an inner regular measure, since singleton sets such as are always compact. Hence, is also a Radon measure. Assuming that the topology is fine enough that is closed, which is the case in most applications, the support of is . (Otherwise, is the closure of in .) Furthermore, is the only probability measure whose support is . If is -dimensional Euclidean space with its usual -algebra and -dimensional Lebesgue measure , then is a singular measure with respect to : simply decompose as and and observe that . The Dirac measure is a sigma-finite measure. Generalizations A discrete measure is similar to the Dirac measure, except that it is concentrated at countably many points instead of a single point. More formally, a measure on the real line is called a discrete measure (in respect to the Lebesgue measure) if its support is at most a countable set. See also Discrete measure Dirac delta function References Measures (measure theory)
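In the standard notation, with X the underlying set, Σ a sigma-algebra on it and x a fixed point, the Dirac measure and the integral identity mentioned above read as follows (the symbols are a conventional choice):

```latex
% The Dirac measure \delta_x on a measurable space (X, \Sigma), for a fixed x in X:
\delta_x(A)=\mathbf{1}_A(x)=
  \begin{cases} 1, & x\in A,\\ 0, & x\notin A, \end{cases}
  \qquad A\in\Sigma,
\qquad\text{and}\qquad
\int_X f\,\mathrm{d}\delta_x=f(x).
```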
Dirac measure
[ "Physics", "Mathematics" ]
575
[ "Measures (measure theory)", "Quantity", "Physical quantities", "Size" ]
1,019,052
https://en.wikipedia.org/wiki/Center%20for%20Alternatives%20to%20Animal%20Testing
The Johns Hopkins University Center for Alternatives to Animal Testing (CAAT) has worked with scientists, since 1981, to find new methods to replace the use of laboratory animals in experiments, reduce the number of animals tested, and refine necessary tests to eliminate pain and distress (the Three Rs as described in Russell and Burch's Principles of Humane Experimental Technique). CAAT is an academic, science-based center affiliated with the Johns Hopkins Bloomberg School of Public Health. CAAT promotes humane science by supporting the creation, development, validation, and use of alternatives to animals in research, product safety testing, and education. It is not an activist group; rather, it seeks to effect change by working with scientists in industry, government, and academia to find new ways to replace animals with non-animal methods, reduce the numbers of animals necessary, or refine methods to make them less painful or stressful to the animals involved. CAAT has offered grants since 1993 that fund development of non-animal in-vitro test methods that may replace the use of laboratory animals in certain tests. Starting in 2013, CAAT has co-sponsored an annual symposium with the Animal Welfare Information Center (National Agricultural Library, USDA) and the Office of Laboratory Animal Welfare (NIH) on the Three Rs. The first six symposia focused on the social housing of laboratory animals, since it has been shown that housing social species with other animals of their kind improves animal welfare. The most recent symposium, "7th Annual 3Rs Symposium: Practical Solutions and Success Stories," occurred virtually on June 4-5, 2020 and addressed topics throughout the spectrum of the Three Rs, including using brain organoids to study infectious diseases such as COVID-19 or Zika, using Grimace Scales to access animal pain, positive reinforcement training of lab animals, and using guidelines such as ARRIVE and PREPARE to design experiments that use fewer animals. CAAT holds an annual Summer School at Johns Hopkins School of Public Health in Baltimore, Maryland, for members of the laboratory animal community to share innovations and techniques in the 3Rs. See also Alternatives to animal testing Canadian Centre for Alternatives to Animal Methods List of animal rights groups Dr Hadwen Trust Henry Spira References External links 7th Annual 3Rs Symposium: Practical Solutions and Success Stories, June 4-5, 2020, Animal Welfare Information Center, National Agricultural Library (videos of presentations from the 7th annual 3Rs symposium) Selected Presentations: Symposiums on Social Housing of Laboratory Animals, Animal Welfare Information Center, National Agricultural Library (videos of presentations from the first six CAAT Social Housing symposia, 2013-2019) Animal research institutes Animal welfare organizations based in the United States Animal testing Anti-vivisection organizations Center for Alternatives to Animal Testing
Center for Alternatives to Animal Testing
[ "Chemistry" ]
555
[ "Animal testing" ]
1,019,092
https://en.wikipedia.org/wiki/Omphalotus%20illudens
Omphalotus illudens, commonly known as the eastern jack-o'lantern mushroom, is a large, orange mushroom that is often found in clumps on decaying stumps, buried roots, or at the base of hardwood trees in eastern North America. Its gills often exhibit a weak green bioluminescence when fresh. This green glow has been mentioned in several journal articles, which state that the phenomenon can persist up to 40–50 hours after the mushroom has been picked. It is believed that this display serves to attract insects to the mushroom's gills during nighttime, which can then distribute its spores across a wider area. Omphalotus illudens is sometimes confused with edible chanterelles, but can be distinguished by its thicker, fleshier appearance, tendency to form large clusters, and clearly separated caps when young. Unlike chanterelles, the Eastern jack-o'-lantern is poisonous to humans when eaten, whether raw or cooked, and typically causes vomiting, cramps, and diarrhea. Although some older literature claims the name is synonymous with Omphalotus olearius, phylogenetic analysis confirms the two as distinct species. Toxicity The poisonous chemical compounds illudin S and illudin M were isolated from Omphalotus illudens. In addition to their antibacterial and antifungal effects, illudins appear to be the cause of human toxicity when these mushrooms are eaten raw or cooked. Muscarine has also been indirectly implicated in toxicity, but modern studies to demonstrate its presence in O. illudens are needed. The cytotoxic effect of illudin is of interest for treating some cancers, but illudin itself is too poisonous to use directly so it must first be chemically modified. Inside human cells, illudin S reacts with DNA and creates a type of DNA damage that blocks transcription. This block can only be relieved by a repair system called nucleotide excision repair. Damage in non-transcribed DNA areas is left unrepaired by the cell. This property was exploited by the company MGI Pharma to develop an illudin-derivative called Irofulven for use as a cancer treatment. Its application is still in the experimental phase. Gallery See also List of bioluminescent fungi Omphalotus olivascens References Bioluminescent fungi illudens Poisonous fungi Fungus species
Omphalotus illudens
[ "Biology", "Environmental_science" ]
493
[ "Poisonous fungi", "Fungi", "Toxicology", "Fungus species" ]
1,019,095
https://en.wikipedia.org/wiki/Route%20summit
A route summit is the highest point on a transportation route crossing higher ground. The term is often used in describing railway routes, less often in road transportation. In canal terminology, the highest pound on a route is called the summit pound. Examples of usage Rail: Beattock Summit; Stainmore Summit, formerly the second highest railway in England until its closure in 1962; Summit Tank, the highest point of the Unanderra - Moss Vale line; Cullerin, the highest point of the Sydney - Albury line; Shap. Transport infrastructure
Route summit
[ "Physics" ]
97
[ "Physical systems", "Transport", "Transport infrastructure" ]
1,019,118
https://en.wikipedia.org/wiki/Crown%20group
In phylogenetics, the crown group or crown assemblage is a collection of species composed of the living representatives of the collection, the most recent common ancestor of the collection, and all descendants of the most recent common ancestor. It is thus a way of defining a clade, a group consisting of a species and all its extant or extinct descendants. For example, Neornithes (birds) can be defined as a crown group, which includes the most recent common ancestor of all modern birds, and all of its extant or extinct descendants. The concept was developed by Willi Hennig, the formulator of phylogenetic systematics, as a way of classifying living organisms relative to their extinct relatives in his "Die Stammesgeschichte der Insekten", and the "crown" and "stem" group terminology was coined by R. P. S. Jefferies in 1979. Though formulated in the 1970s, the term was not commonly used until its reintroduction in 2000 by Graham Budd and Sören Jensen. Contents of the crown group It is not necessary for a species to have living descendants in order for it to be included in the crown group. Extinct side branches on the family tree that are descended from the most recent common ancestor of living members will still be part of a crown group. For example, if we consider the crown-birds (i.e. all extant birds and the rest of the family tree back to their most recent common ancestor), extinct side branches like the dodo or great auk are still descended from the most recent common ancestor of all living birds, so fall within the bird crown group. One very simplified cladogram for birds is shown below: In this diagram, the clade labelled "Neornithes" is the crown group of birds: it includes the most recent common ancestor of all living birds and its descendants, living or not. Although considered to be birds (i.e. members of the clade Aves), Archaeopteryx and other extinct groups are not included in the crown group, as they fall outside the Neornithes clade, being descended from an earlier ancestor. An alternative definition does not require any members of a crown group to be extant, only to have resulted from a "major cladogenesis event". The first definition forms the basis of this article. Often, the crown group is given the designation "crown-", to separate it from the group as commonly defined. Both birds and mammals are traditionally defined by their traits, and contain fossil members that lived before the last common ancestors of the living groups or, like the mammal Haldanodon, were not descended from that ancestor although they lived later. Crown-Aves and Crown-Mammalia therefore differ slightly in content from the common definition of Aves and Mammalia. This has caused some confusion in the literature. Other groups under the crown group concept The cladistic idea of strictly using the topology of the phylogenetic tree to define groups necessitates other definitions than crown groups to adequately define commonly discussed fossil groups. Thus, a host of prefixes have been defined to describe various branches of the phylogenetic tree relative to extant organisms. Pan-group A pan-group or total group is the crown group and all organisms more closely related to it than to any other extant organisms. In a tree analogy, it is the crown group and all branches back to (but not including) the split with the closest branch to have living members. The Pan-Aves thus contain the living birds and all (fossil) organisms more closely related to birds than to crocodilians (their closest living relatives). 
The phylogenetic lineage leading back from Neornithes to the point where it merges with the crocodilian lineage, along with all side branches, constitutes pan-birds. In addition to non-crown group primitive birds like Archaeopteryx, Hesperornis and Confuciusornis, therefore, pan-group birds would include all dinosaurs and pterosaurs as well as an assortment of non-crocodilian animals like Marasuchus. Pan-Mammalia consists of all mammals and their fossil ancestors back to the phylogenetic split from the remaining amniotes (the Sauropsida). Pan-Mammalia is thus an alternative name for Synapsida. Stem groups A stem group is a paraphyletic assemblage composed of the members of a pan-group or total group, above, minus the crown group itself (and therefore minus all living members of the pan-group). This leaves primitive relatives of the crown groups, back along the phylogenetic line to (but not including) the last common ancestor of the crown group and their closest living relatives. It follows from the definition that all members of a stem group are extinct. The "stem group" is the most used and most important of the concepts linked to crown groups, as it offers a means to reify and name paraphyletic assemblages of fossils that otherwise do not fit into systematics based on living organisms. While often attributed to Jefferies (1979), Willmann (2003) traced the origin of the stem group concept to Austrian systematist Othenio Abel (1914), and it was discussed and diagrammed in English as early as 1933 by A. S. Romer. Alternatively, the term "stem group" is sometimes used in a wider sense to cover any members of the traditional taxon falling outside the crown group. Permian synapsids like Dimetrodon or Anteosaurus are stem mammals in the wider sense but not in the narrower one. Often, an (extinct) grouping is identified as belonging together. Later, it may be realized other (extant) groupings actually emerged within such grouping, rendering them a stem grouping. Cladistically, the new groups should then be added to the group, as paraphyletic groupings are not natural. In any case, stem groupings with living descendants should not be viewed as a cohesive group, but their tree should be further resolved to reveal the full bifurcating phylogeny. Examples of stem groups (in the wider sense) Stem birds perhaps constitute the most cited example of a stem group, as the phylogeny of this group is fairly well known. The following cladogram, based on Benton (2005), illustrates the concept: The crown group here is Neornithes, all modern bird lineages back to their last common ancestor. The closest living relatives of birds are crocodilians. If we follow the phylogenetic lineage leading to Neornithes to the left, the line itself and all side branches belong to the stem birds until the lineage merges with that of the crocodilians. In addition to non-crown group primitive birds like Archaeopteryx, Hesperornis and Confuciusornis, stem group birds include the dinosaurs and the pterosaurs. The last common ancestor of birds and crocodilians—the first crown group archosaur—was neither bird nor crocodilian and possessed none of the features unique to either. As the bird stem group evolved, distinctive bird features such as feathers and hollow bones appeared. Finally, at the base of the crown group, all traits common to extant birds were present. 
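The crown-, total- (pan-) and stem-group definitions can be phrased as simple set operations on a tree. The Python sketch below uses a highly simplified, illustrative archosaur tree in the spirit of the cladograms discussed above; the taxon names, tree shape and helper functions are the editor's own illustration, not a published dataset.

```python
# Toy illustration of crown, total (pan-) and stem groups on a simplified tree.
# Names and structure are illustrative only.

PARENT = {
    "pseudosuchia": "archosauria",
    "crocodilia": "pseudosuchia",         # extant crocodilians
    "avemetatarsalia": "archosauria",
    "pterosauria": "avemetatarsalia",     # extinct
    "dinosauria": "avemetatarsalia",
    "non_avian_dinosaurs": "dinosauria",  # extinct
    "aves": "dinosauria",
    "archaeopteryx": "aves",              # extinct early bird
    "neornithes": "aves",                 # extant (crown) birds, collapsed to one tip
}
EXTANT = {"crocodilia", "neornithes"}

CHILDREN = {}
for child, parent in PARENT.items():
    CHILDREN.setdefault(parent, []).append(child)

def descendants(node):
    """The node plus everything below it."""
    out = {node}
    for child in CHILDREN.get(node, []):
        out |= descendants(child)
    return out

def total_group(crown_ancestor):
    """Climb rootward until a sister lineage contains living members."""
    node = crown_ancestor
    while node in PARENT:
        parent = PARENT[node]
        sisters = [c for c in CHILDREN[parent] if c != node]
        if any(EXTANT & descendants(s) for s in sisters):
            break
        node = parent
    return descendants(node)

crown = descendants("neornithes")   # crown birds (collapsed to one tip here)
total = total_group("neornithes")   # pan-birds; Avemetatarsalia in this toy tree
stem = total - crown                # stem birds: extinct relatives only
print(sorted(stem))
# ['archaeopteryx', 'aves', 'avemetatarsalia', 'dinosauria',
#  'non_avian_dinosaurs', 'pterosauria']
```

In this toy tree the total group of the birds comes out as Avemetatarsalia and the stem group contains only extinct lineages, matching the description above.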
Under the widely used total-group perspective, the Crocodylomorpha would become synonymous with the Crocodilia, and the Avemetatarsalia would become synonymous with the birds, and the above tree could be summarized as An advantage of this approach is that declaring Theropoda to be birds (or Pan-aves) is more specific than declaring it to be a member of the Archosauria, which would not exclude it from the Crocodilia branch. Basal branch names such as Avemetatarsalia are usually more obscure. However, not so advantageous are the facts that "Pan-Aves" and "Aves" are not the same group, the circumscription of the concept of "Pan-Aves" (synonymous with Avemetatarsalia) is only evident by examination of the above tree, and calling both groups "birds" is ambiguous. Stem mammals are those in the lineage leading to living mammals, together with side branches, from the divergence of the lineage from the Sauropsida to the last common ancestor of the living mammals. This group includes the synapsids as well as mammaliaforms like the morganucodonts and the docodonts; the latter groups have traditionally and anatomically been considered mammals even though they fall outside the crown group mammals. Stem tetrapods are the animals belonging to the lineage leading to tetrapods from their divergence from the lungfish, our nearest relatives among the fishes. In addition to a series of lobe-finned fishes, they also include some of the early labyrinthodonts. Exactly what labyrinthodonts are in the stem group tetrapods rather than the corresponding crown group is uncertain, as the phylogeny of early tetrapods is not well understood. This example shows that crown and stem group definitions are of limited value when there is no consensus phylogeny. Stem arthropods constitute a group that has seen attention in connection with the Burgess Shale fauna. Several of the finds, including the enigmatic Opabinia and Anomalocaris have some, though not all, features associated with arthropods, and are thus considered stem arthropods. The sorting of the Burgess Shale fauna into various stem groups finally enabled phylogenetic sorting of this enigmatic assemblage and also allowed for identifying velvet worms as the closest living relatives of arthropods. Stem priapulids are other early Cambrian to middle Cambrian faunas, appearing in Chengjiang to Burgess Shale. The genus Ottoia has more or less the same build as modern priapulids, but phylogenetic analysis indicates that it falls outside the crown group, making it a stem priapulid. Plesion-group The name plesion has a long history in biological systematics, and plesion group has acquired several meanings over the years. One use is as "nearby group" (plesion means close to in Greek), i.e. sister group to a given taxon, whether that group is a crown group or not. The term may also mean a group, possibly paraphyletic, defined by primitive traits (i.e. symplesiomorphies). It is generally taken to mean a side branch splitting off earlier on the phylogenetic tree than the group in question. Palaeontological significance of stem and crown groups Placing fossils in their right order in a stem group allows the order of these acquisitions to be established, and thus the ecological and functional setting of the evolution of the major features of the group in question. Stem groups thus offer a route to integrate unique palaeontological data into questions of the evolution of living organisms. 
Furthermore, they show that fossils that were considered to lie in their own separate group because they did not show all the diagnostic features of a living clade, can nevertheless be related to it by lying in its stem group. Such fossils have been of particular importance in considering the origins of the tetrapods, mammals, and animals. The application of the stem group concept also influenced the interpretation of the organisms of the Burgess shale. Their classification in stem groups to extant phyla, rather than in phyla of their own, is thought by some to make the Cambrian explosion easier to understand without invoking unusual evolutionary mechanisms; however, application of the stem group concept does nothing to ameliorate the difficulties that phylogenetic telescoping poses to evolutionary theorists attempting to understand both macroevolutionary change and the abrupt character of the Cambrian explosion. Overemphasis on the stem group concept threatens to delay or obscure proper recognition of new higher taxa. Stem groups in systematics As originally proposed by Karl-Ernst Lauterbach, stem groups should be given the prefix "stem" (i.e. Stem-Aves, Stem-Arthropoda), however the crown group should have no prefix. The latter has not been universally accepted for known groups. A number of paleontologists have opted to apply this approach anyway. See also References Further reading Evolutionary biology Phylogenetics
Crown group
[ "Biology" ]
2,527
[ "Evolutionary biology", "Phylogenetics", "Bioinformatics", "Taxonomy (biology)" ]
1,019,188
https://en.wikipedia.org/wiki/Williamson%20amplifier
The Williamson amplifier is a four-stage, push-pull, Class A triode-output valve audio power amplifier designed by D. T. N. Williamson during World War II. The original circuit, published in 1947 and addressed to the worldwide do it yourself community, set the standard of high fidelity sound reproduction and served as a benchmark or reference amplifier design throughout the 1950s. The original circuit was copied by hundreds of thousands amateurs worldwide. It was an absolute favourite on the DIY scene of the 1950s, and in the beginning of the decade also dominated British and North American markets for factory-assembled amplifiers. The Williamson circuit was based on the 1934 Wireless World Quality Amplifier by Walter Cocking, with an additional error amplifier stage and a global negative feedback loop. Deep feedback, triode-connected KT66 power tetrodes, conservative choice of standing currents, and the use of wide-bandwidth output transformer all contributed to the performance of the Williamson. It had a modest output power rating of but surpassed all contemporary designs in having very low harmonic distortion and intermodulation, flat frequency response throughout the audible frequency range, and effective damping of loudspeaker resonances. The 0.1% distortion figure of the Williamson amplifier became the criterion for high fidelity performance that remains valid in the 21st century. The Williamson amplifier was sensitive to selection and matching of passive components and valves, and prone to unwanted oscillations at infrasonic and ultrasonic frequencies. Enclosing four valve stages and an output transformer in a negative feedback loop was a severe test of design, resulting in a very narrow phase margin or, quite often, no margin at all. Attempts to improve stability of the Williamson could not fix this fundamental flaw. For this reason, and due to high costs of required quality components, manufacturers soon abandoned the Williamson circuit in favour of inherently more stable, cheaper and efficient three-stage, ultralinear or pentode-output designs. Background In 1925 Edward W. Kellogg published the first comprehensive theory of audio power amplifier design. Kellogg proposed that the permissible level of harmonic distortion can reach 5%, provided that distortion rises smoothly rather than abruptly, and that it generates only low-order harmonics. Kellogg's work became the de facto industry standard of the interwar period, when most amplifiers were employed in cinemas. Early sound film and public address requirements were low, and customers were content with crude but efficient and affordable transformer-coupled, class B amplifiers. The best theatre amplifiers, built by Western Electric around their 300A and 300B power triodes, far exceeded the average level but were expensive and rare. By the middle of the 1930s Western Electric and RCA improved performance of their experimental audio equipment to a level approaching modern understanding of high fidelity, but none of these systems could be commercialized yet. They lacked sound sources of matching quality. Industry leaders of the 1930s agreed that the improvement of commercial amplifiers and loudspeakers would make sense only after the introduction of new physical media surpassing low-quality AM broadcasting and shellac records. The Great Depression, World War II and the post-war television boomconsecutively delayed this goal. 
Development of commercial audio equipment came to a standstill; the few enthusiasts seeking higher level of fidelity had to literally do it themselves. American DIYers experimented with novel beam tetrodes. Australians preferred traditional push-pull circuits built around directly-heated triodes and complex, expensive interstage transformers. British school of thought led by Walter Cocking of Wireless World leaned to push-pull, class A, RC-coupled triode output stages. RC coupling, as opposed to transformer coupling, argued Cocking, extended the amplifier's bandwidth beyond the required minimum of 10 kHz and improved its transient response. Tetrodes and pentodes were undesirable due to higher harmonic distortion and higher output impedance that failed to control fundamental resonance of the loudspeaker. Cocking wrote that Kellogg's 5% distortion limit was too high for quality amplification, and outlined a different set of requirements - the first definition of high fidelity. Instead of Kellogg's single figure of merit (harmonic distortion), Cocking set three simultaneous targets - low frequency distortion, low harmonic distortion, and low phase distortion. In 1934 Cocking published his first Quality Amplifier design - a two-stage, RC-coupled triode class A amplifier that achieved no more than 2–3% maximum distortion without using feedback. Feedback appeared in his 1943 Wartime Quality Amplifier, built around American 6V6 beam tetrodes; however, both the input stage and the output transformer were placed outside the feedback loop. Cocking's Quality Amplifier family became the foundation of post-war British and Australian audio industry, including the Williamson amplifier. Development In 1943, in the middle of World War II, twenty-year-old Scotsman Theo Williamson failed mathematics exam and was discharged from the University of Edinburgh. Theo was not physically fit for military service, so instead the authorities drafted him for mandatory civilian work at Marconi-Osram Valve. In April 1944 Williamson transferred from production line to Applications Laboratory of the company, where he had enough free time for his own DIY projects. Management did not object, and by the end of 1944 Williamson had conceived, built and tested the amplifier that would soon be known as the Williamson amplifier. Another wartime projects, a novel magnetic cartridge, would be commercialized in 1948 as the Ferranti ribbon pickup. Design targets Following Cocking's ideas, Williamson devised a different, much stricter set of fidelity requirements: Negligible non-linear distortion (sum of harmonic distortion and intermodulation products) up to the maximum rated output, at all audible frequencies from 10 to 20000 Hz; Linear frequency response and constant output power at all audible frequencies; Negligible phase shift within the audible frequency range; Good transient response which, in addition to above frequency and phase requirements, demands perfectly constant gain when handling complex waveforms and transients; Low output impedance and, inversely, high damping factor. At the very least, output impedance of an amplifier must be lower than the loudspeaker impedance; Output power of 15–20W for reproduction of orchestral music via a dynamic loudspeaker, or for a horn loudspeaker. Williamson reviewed contemporary amplifier configurations, and, just like Cocking, settled on a low distortion push-pull, class A, triode output stage. 
Unlike Cocking, Williamson believed that such a stage can deliver high fidelity sound only when the amplifier is governed by 20–30dB deep negative feedback loop (and thus the complete amplifier must have 20–30dB higher open loop gain to compensate the effect of feedback). Deep feedback inevitably causes sudden, harsh onset of distortion at overload but Williamson was content with this flaw. He argued that it is a price worth paying for an improvement in linearity at medium and high power levels. On the contrary, wrote Williamson, slow but steady rise of distortion to 3–5%, as advocated by Kellogg, is distinctly unwanted in a high fidelity system. Prototypes and tests Valve complement of the original Williamson amplifier was determined by scarce supply in wartime Britain. The two suitable and available output valves were either the PX25 triode, or a triode-connected KT66 beam tetrode. Williamson initially used the PX25, an already obsolete directly-heated triode introduced in 1932. In his second prototype, Williamson used the more efficient KT66, which became the valve of choice in post-war period. Powered from +500 V power supply, the KT66 prototype delivered 20 Watts at no more than 0.1% distortion. A less costly +425V power supply enabled 15 Watt output power at no more than 0.1% distortion; this arrangement became standard for the Williamson amplifier and defined its physical layout. The complete prototype system, including the amplifier, the experimental magnetic pickup and a Goodmans full-range speaker in an acoustical labyrinth enclosure, has proven to Williamson that a low distortion, deep feedback amplifier, indeed, sounded superior to amplifiers without feedback. The difference was particularly audible with the best available shellac records, despite the physical limitations of this low-fidelity format. The prototypes impressed the Marconi management, who granted Williamson unlimited access to the company's test facilities and introduced him to the people from Decca Records. The latter provided Williamson with precious, exclusive test material - sample records of the experimental Decca ffrr system, the first true high fidelity medium in the United Kingdom. These records, which exceeded any preexisting media in sound quality, helped Williamson with fine-tuning his prototypes. He was certain that he was now firmly on the right track, but neither Marconi, nor its parent the General Electric Company were willing to invest in mass production of amplifiers for the civilian market. The design was not interesting to company lawyers either, because it did not contain anything patentable. Williamson merely put together well-known circuits and solutions. Publication In February 1946 Williamson left Marconi, moved to Edinburgh and joined Ferranti. A few months later a senior Marconi salesman, who sought new means of promoting the KT66 to general public, noticed Williamson's 1944 report about his amplifier prototypes, and sent it for publication to Wireless World. Chief editor H. F. Smith knew Williamson for his earlier contributions; he contacted the author directly and requested a detailed article written specifically for the DIY readers. Williamson promptly responded, but for unknown reasons the publication, originally slotted for 1946, was delayed until April–May 1947. While the paper was waiting for print, the magazine had published the new version of Cocking's Quality amplifier. 
Cocking, as the technical editor of Wireless World, certainly had precedence; according to Peter Stinson, he was sceptical about the Williamson amplifier, believing that his own design needed no further improvements. By 1947 British industry had already released two amplifiers of comparable sound quality. Harold Leak announced production of his Leak Point One in September 1945; later in the same year Peter Walker published the first sketch of his distributed-load output stage that would become the Quad II production model. Leak and Walker tried to commercialize their ideas on the meagre post-war British market; their achievements were practically unknown outside of the United Kingdom. Williamson did the opposite: he donated his design to worldwide DIY community, thus securing lasting popular following. In August 1949 Williamson, responding to letters from the readers, published the "New Version" of this amplifier. The article dealt extensively with construction, tuning and troubleshooting issues, however, its main objective was to address stability issues reported in letters from the readers. Apart from the additional frequency compensation network, a biasing potentiometer and a new, indirectly-heated rectifier valve that was not available in 1947, the circuit remained the same. In October 1949 – January 1950 and May 1952 Williamson published a series of articles on matching preamplifier stages and brief "Replies to Queries" concerning assembly and testing. A collection of articles published by Williamson in 1947–1950 was printed as a standalone 36-page brochure in 1952, with a second edition in 1953. The Williamson amplifier itself, as described in the August 1949 issue of Wireless World, remained unchanged. Reception The Williamson amplifier was an instant success. The publication coincided with the resumption of television broadcasting, the beginning of FM broadcasting, the release of the first high fidelity gramophone records (Decca ffrr and the LP record), and the "discovery" of the captured German Magnetophon. The high fidelity media that did not exist in the 1930s became a reality, and the public wanted playback equipment of matching quality. Off-the-shelf amplifiers available in 1947 were not fit for the task. At the same time, electronic components markets were flooded with military surplus, including cheap American 6L6 and 807 power valves. For a while, DIY construction was the only way to obtain high fidelity amplification. Thousands of amateurs began copying the Williamson design; the required transformers and chassis were soon provided by industry. In September 1947 Australians R. H. Astor and Fritz Langford-Smith adapted the Williamson circuit for American 6SN7 and 807 valves; a 6L6 variant followed soon. British and Australian press was unanimously enthusiastic: "by far the best we have ever tested ... extraordinary linearity and lack of harmonic and intermodulation distortion", "amplifier to end [all] amplifiers", "absolute tops for obtaining natural reproduction" and so on. America lagged behind by about two years: first reviews appeared in the second half of 1949, and were just as complimentary. American companies adapted the circuit to locally available components, and soon began importing "premium" British valves and transformers, thus launching the market for British hi-fi in the United States. By the end of 1949 the Williamson amplifier became a universally recognized reference design, and a starting point for all valve designs employing global feedback. 
The spread of DIY construction and the abundance of publications addressed to the amateurs had a solid economic reason: factory-made electronics of the 1940s were too expensive. The industry has not yet reorganized for mass production of affordable consumer products. Home construction of valve electronics was relatively simple and promised considerable savings. The number of home-made Williamson amplifiers is estimated at least in hundreds of thousands; they absolutely dominated the DIY scene in English-speaking countries. Stereo has not been commercialized yet; almost all surviving Williamson amplifiers are monaural. Each one differs in minor details, assembly quality is usually inferior to factory-made models. In the 21st century these monaural amplifiers are commonly sold at online auctions, but finding a matching pair is almost impossible. Small-scale factory production in the United Kingdom began in February 1948; first big manufacturer, Rogers, announced production in October 1948. In the early 1950s the Williamson amplifier dominated factory production in both the United Kingdom and the United States; John Frieborn of Radio-Electronics wrote in 1953 that "since Williamson published the first description of his High-Quality Audio Amplifier, other audio designers had two apparent choices, beating him [Williamson] or joining him." Design features Specifications Tube complement, 1947 version: 4x L63 (each equivalent to 6J5), 2x KT66, 1x U52 directly-heated rectifier. The 1949 version also provided for the use of 6SN7 or B65 double triodes, and replaced rectifier with the 53KU indirectly-heated type; Output power and maximum distortion: 15W RMS at no more than 0.1% THD; Intermodulation: not specified (Williamson did not have the necessary test equipment); Frequency range: 10-20000Hz at ±0.2dB; 3-60000Hz at ±3dB; Phase shift within 10-20000Hz: "never exceeds a few degrees" at the extremes of audio spectrum; Noise and hum: -85 dBbelow maximum output, almost entirely consisting of mains frequency hum. Topology The Williamson amplifier is a four-stage, push-pull, class A triode valve amplifier built around a high quality, wideband output transformer. Its second (concertina-type phase splitter, V1B), third (driver, V2A and V2B) and fourth (output, V3 and V4) stages follow Cocking's Quality Amplifier circuit. The added first stage (V1A) is a dedicated error amplifier, which compensates for the loss of gain caused by negative feedback. Williamson optimized operating points of each stage for best linearity with sufficient overload reserve. The output stage is biased into pure class A; traditionally it used triode-connected beam tetrodes or pentodes. With American 807 or British KT66 valves (Williamson recommended the latter type) and specified power supply the amplifier delivered 15 watts of output power. Further increase in output, according to Williamson, required use of four output valves; his 1947 article mentions construction of a 70-watt prototype. The plate of the first stage and the grid of the phase splitter are connected directly. This configuration, known since 1940, was still uncommon in 1947; American designers considered it a novelty even in the early 1950s. Phase splitter, driver and output stage are capacitively coupled. 
Cathode bypass capacitors are absent: Williamson, like Cocking before him, tried to linearize open-loop performance of each stage, and deliberately sacrificed gain for linearity; he was also concerned with potential low-frequency instability introduced by added capacitances. The circuit in either 1947 or 1949 variant contains no electrolytic capacitors; its power supply uses a CLC π-filter with two 8 μF paper capacitors, with a further LC filter feeding the first three stages. Derivative designs of the 1950s often deviated from Williamson's recommendations while retaining his four-stage topology. According to Peter Stinson, this alone is not sufficient to be called a Williamson amplifier. A true Williamson amplifier must meet five criteria simultaneously: All four stages must use triodes; the output stage may use triode-connected tetrodes or pentodes; Output stage must operate in class A; Phase splitter must be directly coupled to the input stage; High-quality output transformer must conform to the original Williamson specification; Global negative feedback loop must be connected from transformer secondary to the cathode of the input triode, and be exactly 20 dB deep. Feedback The 20 dB (ten-to-one) feedback loop of the Williamson amplifier wraps around all four stages and the output transformer. According to Richard C. Hitchcock, "this is a severe test of design and is one of the outstanding features of the Williamson circuit." Williamson wrote that the depth of feedback can be easily increased from 20 to 30 dB, but the audible improvements of deeper feedback will be diminishingly low. All frequency compensation components are located in the first and second stages of the circuit: their local smoothing RC filters subtly alter frequency response at infrasonic frequencies. An additional RC-filter in the first stage, introduced by Williamson in the 1949 version, prevents oscillations at ultrasonic frequencies. Feedback voltage divider is connected to the transformer secondary, thus feedback depth is dependent on loudspeaker impedance, and setting it at precisely 20 dB requires altering the divider ratio. The voltage divider is purely resistive, with no capacitive or inductive frequency compensation components. According to Williamson, a capacitor shunting the upper leg of the divider is only necessary for inferior-quality transformers; if the transformer matches requirements set by Williamson, the capacitor is useless. Transformer Williamson was confident that the output transformer is the most critical component in any valve amplifier. Even before applying global feedback, the transformer is liable for at least four types of distortion. Their causes cannot be addressed simultaneously, and the designer must make a compromise between conflicting requirements. Global feedback partially suppresses distortion, but also tightens requirements to the bandwidth of the transformer. Stability theory predicted that an amplifier built to Williamson's specifications could only be stable if the bandwidth of its output transformer was no less than 2.5...160000 Hz. This was impractically wide for an audio amplifier, requiring an exceptionally large, complex and expensive transformer. Williamson, seeking a working solution, had to decrease phase margin to a bare minimum; even then, the required bandwidth had to be no less than 3,3...60000 Hz. Such a transformer, driven by a pair of triode-connected KT66, had to have primary winding inductance of at least 100 H, and leakage inductance of no more than 33 mH. 
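The transformer figures quoted above can be tied together with a back-of-the-envelope check. The Python sketch below assumes, purely for illustration, a 10 kΩ anode-to-anode load and roughly 2.5 kΩ of combined plate resistance for the triode-connected KT66 pair - neither value is given in this article - and shows that a 100 H primary inductance and a 33 mH leakage inductance place the output stage's corner frequencies near the 3.3 Hz and 60 kHz bandwidth limits mentioned above.

from math import pi

L_primary = 100.0      # H, minimum primary inductance quoted above
L_leakage = 0.033      # H (33 mH), maximum leakage inductance quoted above
R_source  = 2500.0     # ohm, assumed plate-to-plate resistance of two triode-connected KT66
R_load    = 10000.0    # ohm, assumed anode-to-anode load reflected by the transformer

# Low-frequency corner: the primary inductance shunts the parallel combination of source and load.
R_parallel = R_source * R_load / (R_source + R_load)
f_low = R_parallel / (2 * pi * L_primary)

# High-frequency corner: the leakage inductance sits in series between source and load.
f_high = (R_source + R_load) / (2 * pi * L_leakage)

print(f"low corner  ~ {f_low:.1f} Hz")          # ~3.2 Hz, close to the 3.3 Hz figure
print(f"high corner ~ {f_high / 1000:.0f} kHz")  # ~60 kHz, close to the 60 kHz figure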
These were extremely demanding specifications for the period, far exceeding anything available on the consumer market. The Williamson transformers had to be heavier, larger, more complex and more expensive than typical audio transformers, and yet they could only guarantee minimally acceptable stability. A wider phase margin, wrote Williamson, was highly desirable but required absolutely impractical values of primary inductance. Overload behaviour Valve amplifiers with capacitive coupling between the driver stage and the output stage do not clip in the same manner as transistor amplifiers (e.g. clamping output voltage to one of the supply rails). Instead, they choke when large signal swings intermittently attempt to bias the grids of the output valves above zero. Positively-biased grids begin conducting, but the coupling capacitors cannot deliver the required current. Grid voltages do not reach their target values, and the output waveform flattens. Feedback attempts to overcome choking by increasing the driver voltage swing, but fails because the coupling capacitors cannot physically pass direct current. The resulting distortion pattern, as Williamson proved with photocopies of oscillograms and Lissajous curves, is "of the desirable type", i.e. with abrupt onset of distortion at the extremes of otherwise highly linear response curves. Stability problem The first attempts to build the Williamson amplifier revealed its tendency to oscillate due to its very narrow phase margin. Astor and Langford-Smith, who gave the Williamson excellent ratings, reported that "for fairly large outputs at low frequencies a high frequency oscillation about 60 kC/s [kHz] would commence and be accompanied by a pulsed output of some other frequency". The Australians, armed with first-class test equipment, suppressed the 60 kHz oscillation with small capacitors on screen grids, but could not identify and suppress the cause of "some other" oscillations. Later, technicians of the United States Naval Research Laboratory examined seven different commercially available Williamson amplifiers, and found that all of them oscillated at infrasonic frequencies of 2...3 Hz. Replacement of output transformers affected stability only at audio and ultrasonic frequencies. The best transformers displayed perfectly flat frequency response from 10 to 100,000 Hz, but were also prone to infrasonic "breathing". The worst transformers displayed prominent ultrasonic resonances that, however, did not cause sustained oscillations. Some "ringed" at relatively low frequencies of 30 to 50 kHz, others extended into the 500...700 kHz range. Custom-built Williamson transformers were imperfect, but general-purpose, off-the-shelf transformers used by amateurs were far worse. Their resonances could only be tamed by narrowing the amplifier's bandwidth. The extent of the stability problem in the DIY community remains unknown: the editors of Wireless World were flooded with readers' letters, but preferred to redirect them to Williamson. What is known is that the inventor was compelled to revise and improve the design; he took a leave from his job at Ferranti and presented the second version of the Williamson in 1949. Williamson could not fix the fundamental stability problem; the "New version" was just barely stable. Independent analysis published in December 1950 proved that the revised Williamson amplifier remained prone to both infrasonic and ultrasonic oscillations. 
According to the analysis, the infrasonic open-loop response of the Williamson amplifier is shaped by three high-pass filters: two interstage RC filters, each with a cutoff frequency of 6 Hz, and the output stage RL filter, formed by the valves' output impedances and the transformer's primary inductance. At zero input signal, the nonlinear RL filter has a cutoff frequency of 3 Hz. This combination of cutoff frequencies, wrapped inside a 20–30 dB feedback loop, is unstable. Williamson tried to suppress it with a compensation network, also serving as a smoothing filter. The transformer's nonlinearity also improved stability: at high signal currents the effective inductance of the primary increased, causing a decrease in cutoff frequency and a rise in phase margin. The simplest solution was to spread apart the cutoff frequencies of the RC filters, provided that the output transformer conforms to the Williamson specification. For example, the 1952 Ultralinear Williamson by David Hafler and Herbert Keroes had these frequencies set at 1.3 and 6 Hz. Precise analysis at ultrasonic frequencies is impossible due to the asymmetry of the phase splitter stage, and unknown parasitics and nonlinearities of the output stage. Depending on the chosen analysis model, open-loop response can be roughly approximated with a combination of either four or five low-pass filters. Different authors used different approaches and estimated somewhat different cutoff points of these filters, but in each case at least three of the four or five cutoff frequencies were dangerously close to each other, which was a certain sign of instability. Williamson, again, fixed the problem with an RC compensation network, but even then the phase margin remained dangerously low. DIYers had to tackle oscillations themselves: some added shunting capacitors to the screen grids, others tweaked layout and wiring, or deliberately narrowed the amplifier's bandwidth, negating the benefits of the original circuit. Component problem The Williamson amplifier was very sensitive to the quality and parameters of passive components and valves. Carbon and composition-type resistors generated excessive noise and caused harmonic distortion; American valves used as substitutes for the British types specified by Williamson could not match their performance. Williamson warned that the KT66 had no direct substitutes, and should be preferred over any alternatives. Amateurs who copied the Williamson amplifier were unable to identify and fix its critical weak points. An amateur armed with an analogue multimeter could "see" infrasonic oscillations by watching the instrument needle, but fixing high-frequency issues required an oscilloscope with a bandwidth of at least 1 or 2 MHz. In the 1950s the bandwidths of many commercial oscilloscopes were too narrow for the task, and even these models were too expensive for the DIYers. Articles by professional engineers dealing with analysis and fine tuning of the Williamson amplifier were published relatively late, when the original DIY enthusiasm had already faded - in 1952, 1957 and 1961. Martin Kiebert, who built professional-grade Williamson amplifiers for his laboratory at Bendix Corporation, identified five sources of distortion caused by inferior components other than the transformer: Excessive noise and electromagnetic interference caused by noisy carbon or composition-type resistors and incorrect layout of the first stage. 
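The low-frequency instability described in this analysis can be illustrated with a crude model: treat the loop as a flat 20 dB of midband loop gain in cascade with three first-order high-pass sections at the quoted cutoffs, ignore the compensation network and the transformer's nonlinearity, and check the loop gain at the frequency where the accumulated phase lead reaches 180°. The Python sketch below is purely illustrative, but it reproduces the qualitative result: the 6 Hz / 6 Hz / 3 Hz combination leaves the loop gain just above unity at that frequency, while spreading the RC cutoffs to 1.3 Hz and 6 Hz, as in the Hafler-Keroes design mentioned above, brings it back below unity.

from math import atan, pi, sqrt

def loop_gain_at_180(cutoffs_hz, midband_loop_gain=10.0):
    # Idealized model: flat midband loop gain (10x = 20 dB) in cascade with
    # first-order high-pass sections at the given cutoff frequencies.
    # Returns (frequency, |loop gain|) at the point where the total phase lead is 180 degrees.
    lo, hi = 0.01, 100.0
    f = hi
    for _ in range(60):                      # simple bisection; lead falls as f rises
        f = (lo + hi) / 2
        lead = sum(90 - atan(f / fc) * 180 / pi for fc in cutoffs_hz)
        if lead > 180:
            lo = f
        else:
            hi = f
    mag = midband_loop_gain
    for fc in cutoffs_hz:
        x = f / fc
        mag *= x / sqrt(1 + x * x)           # attenuation of each high-pass section at f
    return f, mag

print(loop_gain_at_180([6, 6, 3]))     # ~ (2.7 Hz, 1.1): gain above unity at 180 deg -> oscillates
print(loop_gain_at_180([1.3, 6, 3]))   # ~ (1.5 Hz, 0.8): gain below unity -> (marginally) stable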
Replacement of resistors specified by Williamson with wirewound resistors could improve signal-to-noise ratio by . Replacement of 6SN7 with low-noise 12AY7 could gain another ; Frequency and harmonic distortion caused by asymmetry of passive components in the two sides of a push-pull circuit. Typical components of the 1950s had 20% tolerances, which was unacceptably high for the Williamson; The 6SN7 driver stage was often unable to properly swing the KT66 grids, causing excessive distortion. According to Kiebert, the American 5687 dual triode was clearly superior. According to Talbot Wright, the 6SN7 was not at fault - distortion was caused by incorrectly set standing current, and could be improved by a simple increase in bias voltage; Distortion in the feedback voltage divider. This critical function required low-distortion wirewound resistors; Distortion was clearly influenced by the choice of output valves; however, Kiebert could not identify any specific rules. Kiebert rated the design positively but warned the readers that following Williamson's instructions is possible only in a laboratory environment. The amplifier reveals its potential only with expensive, properly matched components that were out of reach of an average amateur. Even a perfectly built and tested Williamson amplifier would sooner or later need valve replacement, which would very likely cause an unexpected rise in distortion. Variants and derivatives After 1950 the industry produced numerous derivatives of the Williamson amplifier, often deviating significantly from the principles outlined by its creator. In 1950 Herbert Keroes shunted the common cathode resistor of his 807 amplifier with a large electrolytic capacitor which, according to Keroes, significantly reduced distortion at high output power. Contrary to recommendations by Cocking and Williamson, Keroes and his partner David Hafler used cathode shunt capacitors in most of their designs; by 1956 this approach became the de facto industry standard. In the same year, 1956, Hafler used fixed bias in his EL34 Williamson. Later, fixed bias became a staple of Soviet and Russian Williamson-like designs that employed exotic output valves like the 6C4C directly-heated triode, the GU-50 generator pentode or the 6P45S horizontal deflection tetrode. Throughout the 1950s, as prices of capacitors decreased, designers steadily increased their values. The original Williamson amplifier used paper capacitors; by 1952 Kiebert used electrolytics; the 1955 reference design by Keroes used at least bypass capacitors; the 1961 budget amplifier by Wright employed a total of . Designers of the commercial Bell 2200 amplifier (1953) replaced direct coupling of the first two stages with capacitive coupling; the Stromberg-Carlson AR-425 (also 1953) used a tetrode-mode output stage in an otherwise familiar Williamson topology. Both the Bell and Stromberg-Carlson modifications further worsened stability, and required additional frequency compensation. Designers of the Bogen DB20 (1953) went even further, and combined global and local negative feedback loops with positive feedback in the output stage. In December 1951 Hafler and Keroes began promoting the ultralinear stage - a method of distributing load between the anode and screen grid of a pentode or tetrode, invented by Alan Blumlein in the 1930s. 
An ultralinear stage delivered 50% to 100% more output power than the same stage in triode connection, at roughly the same distortion, and cost less than a pure pentode or tetrode stage (the latter required a separate screen grid supply, the ultralinear did not need it). The first Ultralinear Williamson, employing a pair of 6L6 in a Williamson-like topology, delivered ; their second model, built around more powerful 807 tetrodes, delivered . Very soon the American public acquired taste to high-power amplification, and the industry launched the "race for Watts". By 1955 Hafler and Keroes, now working separately, were offering 60-Watt models employing pairs of 6550 tetrodes or quartets of KT66s. Thus in less than a decade, step by step, the industry abandoned the principles set by Williamson, but continued to use his name as a convenient free trademark. In the 21st century it is even used for amplifiers without global negative feedback; the only thing they have in common with the true Williamson amplifier is the four-stage topology. Following the success of Hafler and Keroes, American manufacturers like Eico, The Fisher, Harman/Kardon and Marantz disposed with "obsolete" power triodes and switched to ultralinear designs. Mullard, Britain's largest valve manufacturer and provider of reference designs to the European industry, publicly supported the novelty. Williamson's former employer, General Electric Company, followed suit and published a reference "30-Watt Williamson" design built around a pair of ultralinear-connected KT88. The original Williamson amplifier lost the race, just like alternative designs by Peter Walker and Frank McIntosh. In September 1952 Williamson and Walker (then business partners in the development of the Quad Electrostatic Loudspeaker) agreed that the ultralinear stage was, indeed, preferable in mass production. Williamson gradually stepped aside from audio engineering. He made his living by designing milling machines and flexible manufacturing systems, which later earned him election to the Royal Society, and never considered audio design a serious occupation for himself. In 1956 most production amplifiers in North America followed the Ultralinear Williamson template, but in the next few years it was retired, too. The new three-stage reference design combined phase splitter and driver functions in one valve, and thus cost proportionally less than four-stage amplifiers. Hafler's Dynaco Stereo 70, which followed this topology, became the most produced valve amplifier in history. North American consumer market was flooded with millions of similar, almost identical amplifiers and receivers claiming 25 to 20W per channel, as well as clones of less powerful British designs like the Mullard 5-10. Advertisements claimed that these models performed as well as the original Williamson, with higher output power and with guaranteed stability. The customers could not verify these claims, and had to rely to listening tests, hearsay and expert advice. The problem was partially addressed by the concept of subjective listening, advanced by Hafler and Keroes back in 1951: "Excellent measurements are a necessary but not a sufficient condition for the quality of sound. The listening test is one of most importance... the most stringent test of all". By the end of the 1960s subjectivist approach was adopted by the audiophiles and marketing people, who eagerly forgot about the objective principles devised by Williamson in the 1940s. 
Objectively, many deep-feedback valve designs of the 1950s matched or exceeded the 0.1% distortion rating of the Williamson amplifier, but none could significantly improve on this figure. Williamson had found that valve amplifier performance was limited mostly by the output transformer. Transistor amplifiers did not have this limitation, and yet it took around 15 years to bring their performance to the level attained by Williamson in 1947. Notes References Sources ; also reprinted as A collection of articles from the late 1940s and early 1950s, including: Vacuum tubes Valve amplifiers 1947 in technology 1947 works 1947 in the United Kingdom
Williamson amplifier
[ "Physics" ]
6,908
[ "Vacuum tubes", "Vacuum", "Matter" ]
1,019,199
https://en.wikipedia.org/wiki/Strange%20particle
A strange particle is an elementary particle with a strangeness quantum number different from zero. Strange particles are members of a large family of elementary particles carrying the quantum number of strangeness, including several cases where the quantum number is hidden in a strange/anti-strange pair, for example in the ϕ meson. The classification of particles, as mesons and baryons, follows the quark/anti-quark and three quark content respectively. Murray Gell-Mann recognized the group structure of elementary particle classification introducing the flavour SU(3) and strangeness as a new quantum number. See also Strange matter Strange quark Hyperon References Particle physics
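Because strangeness is an additive quantum number carried by the strange quark, the bookkeeping can be shown in a few lines. In the Python sketch below, each s quark contributes −1 and each anti-s quark +1; the quark assignments are standard textbook values used only for illustration, and the ϕ meson case shows the "hidden" strangeness mentioned above.

# Additive strangeness from quark content: s contributes -1, anti-s (sbar) +1, other flavours 0.
S = {"s": -1, "sbar": +1}

def strangeness(quarks):
    return sum(S.get(q, 0) for q in quarks)

print(strangeness(["u", "sbar"]))      # K+ meson: +1
print(strangeness(["u", "d", "s"]))    # Lambda baryon: -1
print(strangeness(["s", "sbar"]))      # phi meson: 0 -> strangeness "hidden" in the pair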
Strange particle
[ "Physics" ]
134
[ "Particle physics stubs", "Particle physics" ]
1,019,232
https://en.wikipedia.org/wiki/Fechner%20color
The Fechner color effect is an illusion of color seen when looking at certain rapidly changing or moving black-and-white patterns. They are also called pattern induced flicker colors (PIFCs). The effect is most commonly demonstrated with a device known as Benham's top (also called Benham's disk). When the top is spun, arcs of pale color are visible at different places on the disk that forms its upper surface. The effect can also be seen in stroboscopic lights when flashes are set at certain critical speeds. Rotating fan blades, particularly aluminum ones, can also demonstrate the effect; as the fan accelerates or decelerates, the colors appear, drift, change and disappear. The stable running speed of the fan does not (normally) produce colors, suggesting that it is not an interference effect with the frequency of the illumination flicker. The effect was noted by Gustav Fechner and Hermann von Helmholtz and propagated to English-speakers through Charles Benham's invention of his top. Florence Winger Bagley was one of the early investigators of this phenomenon. The perceptual mechanism of Fechner color is not entirely understood. One possible reason people see colors may be that the color receptors in the human eye respond at different rates to red, green, and blue. Or, more specifically, that the latencies of the center and the surrounding mechanisms differ for the different types of color-specific ganglion cells. The phenomenon originates from neural activity in the retina and spatial interactions in the primary visual cortex, which plays a role in encoding low-level image features, such as edges and spatiotemporal frequency components. Research indicates that the blue–yellow opponent process accounts for all the different PIFCs. Research has been done into the use of Benham's top and other PIFCs as diagnostic tools for diseases of the eye and the visual track. It has shown particular promise in detecting optic neuritis. Benham's top Benham's top is named after the English newspaper-man, amateur scientist, and toymaker Charles Benham, who in 1895 sold a top painted with the pattern shown. Benham was inspired to propagate the Fechner color effect through his top after his correspondence with Gustav Theodor Fechner, who had observed and demonstrated the said effect. Benham's top made it possible for speakers of the English language to learn of the Fechner color effect, about which Fechner's original reports were written in German. See also Newton disc References External links Online Java demonstrations of Fechner color Benham's Disk Interactive version () by Michael Bach A more convincing interactive version from Michael Bach Optical illusions Color
Fechner color
[ "Physics" ]
549
[ "Optical phenomena", "Physical phenomena", "Optical illusions" ]
1,019,358
https://en.wikipedia.org/wiki/Stefan%20Drzewiecki
Stefan Drzewiecki (Polish: ; ; ; 26 July 1844, Kunka (ru), Podolia, Russian Empire (today Kunka (uk), Ukraine) – 23 April 1938, Paris) was a Polish scientist, journalist, engineer, constructor and inventor, known for designing and constructing the world's first electric-powered submarine. He worked mainly in France and the Russian Empire. He built the first submarine in the world with electric battery-powered propulsion in 1884. He also independently developed the blade element theory (BET), a mathematical process used to determine the behavior of propellers. Early life Drzewiecki was born into a Polish aristocratic (szlachta) family of national patriots. His grandfather Józef Drzewiecki served under generals Kościuszko and Dąbrowski. His father Karol Drzewiecki took part in the November Uprising against Russia. Young Stefan was sent away from partitioned Poland by his father to complete his education in France. He graduated in mathematics from the École Centrale Paris and received his engineering diploma. At the age of 19, he returned to Poland to take part in the January Uprising (1863-1864) against Russia. A few years later, he came back to Paris to finish his studies. Career In 1867, Drzewiecki made his first invention, a kilometre counter for horse-drawn carriages. After the fall of the Paris Commune in 1871, he left Paris for Vienna, where he settled at 2 Lindengasse and entirely focused on inventing. After the Vienna World's Fair in 1873, he travelled to St. Petersburg at the invitation of Grand Duke Konstantin, where he pursued a fruitful career as a mechanical engineer. His inventions from that period include an instrument which automatically drew on a map the route traveled by a ship at sea. Drzewiecki distinguished himself mainly in aviation and shipbuilding. Beginning in 1877, during the Russo-Turkish War, he developed several models of propeller-driven submarines that evolved from single-person vessels to a four-man model. He personally took part in the war, for which he received the Order of St. George (4th Class). In 1884, he converted two of his mechanical submarines, installing on each a 1 hp engine powered by what was then a new source of energy - batteries. In tests, the submarine travelled underwater against the current of the Neva River at a rate of 4 knots. It was the first submarine in the world with electric battery-powered propulsion. He developed the theory of gliding flight, developed a method for the manufacture of ship and plane propellers (1892), and presented a general theory for screw-propeller thrust (1920). He is known for developing several models of early submarines for the Russian Navy. In 1902, Drzewiecki designed the submarine Pocztowyj, which was powered by two combustion engines that operated both underwater and after surfacing. He also devised a torpedo-launching system for ships and submarines that bears his name, the Drzewiecki drop collar. He also made an instrument that drew the precise routes of ships onto a nautical chart. His work Théorie générale de l'hélice (1920) was honored by the French Academy of Science as fundamental in the development of modern propellers. In it he presented a complete theory about the moving propeller based directly on the general laws of the resistance of fluids. Remembrance Streets named in honour of Drzewiecki are located in a number of Polish cities including Warsaw, Wrocław, Gdańsk, Poznań, Biała Podlaska, Mielec and Szczecin. 
In 1973, Drzewiecki and his submarine were featured on a postage stamp issued by the Polish Post in the PLN 2.70 denomination. In 1991, a monument commemorating Drzewiecki was unveiled in the Ukrainian port city of Odesa where the inventor stayed for a period of time and tested his submarine in 1878. In 2020, Drzewiecki was featured in a promotional publication titled Polacy światu. Znani i nieznani (Poles to the World. Known and Unknown) published by the Department of Cooperation with the Polish Diaspora and Poles Abroad of the Ministry of Foreign Affairs. See also List of Poles List of Polish inventors and discoverers Timeline of Polish science and technology List of pre-20th century submarines Notes Słownik polskich pionierów techniki pod redakcją Bolesława Orłowskiego. Katowice: Wydawnictwo „Śląsk”, 1986, s. 57. . Alfred Liebfeld, Polacy na szlakach techniki. Warszawa: Wydawnictwa Szkolne i Pedagogiczne, 1985, s. 215–225. . Krzysztof Kubiak, Wielki błękit wynalazców, biuletyn „Rzeczpospolitej” 11 grudnia 2010, Nr 47 Jerzy Pertek, Polscy pionierzy podwodnej żeglugi, seria wydawnicza Wydawnictwa Morskiego Miniatury Morskie zeszyt 3: Polskie tradycje morskie, s. 26–49. Blade element theory designed by William Froude (1878), David W. Taylor (1893) and Stefan Drzewiecki to determine the behavior of propellers. References 1844 births 1938 deaths Polish inventors Polish engineers Recipients of the Cross of St. George Submarine pioneers Marine engineers People from Vinnytsia Oblast Russian military personnel of the Russo-Turkish War (1877–1878)
Stefan Drzewiecki
[ "Engineering" ]
1,173
[ "Marine engineers", "Marine engineering" ]
1,019,406
https://en.wikipedia.org/wiki/Cuthill%E2%80%93McKee%20algorithm
In numerical linear algebra, the Cuthill–McKee algorithm (CM), named after Elizabeth Cuthill and James McKee, is an algorithm to permute a sparse matrix that has a symmetric sparsity pattern into a band matrix form with a small bandwidth. The reverse Cuthill–McKee algorithm (RCM) due to Alan George and Joseph Liu is the same algorithm but with the resulting index numbers reversed. In practice this generally results in less fill-in than the CM ordering when Gaussian elimination is applied. The Cuthill–McKee algorithm is a variant of the standard breadth-first search algorithm used in graph algorithms. It starts with a peripheral node and then generates levels R_i for i = 1, 2, ... until all nodes are exhausted. The set R_{i+1} is created from the set R_i by listing all vertices adjacent to all nodes in R_i. These nodes are ordered according to predecessors and degree. Algorithm Given a symmetric n×n matrix, we visualize the matrix as the adjacency matrix of a graph. The Cuthill–McKee algorithm is then a relabeling of the vertices of the graph to reduce the bandwidth of the adjacency matrix. The algorithm produces an ordered n-tuple R of vertices which is the new order of the vertices. First we choose a peripheral vertex x (the vertex with the lowest degree) and set R := ({x}). Then, for i = 1, 2, ... we iterate the following steps while |R| < n: Construct the adjacency set A_i of R_i (with R_i the i-th component of R) and exclude the vertices already in R; Sort A_i ascending by minimum predecessor (the already-visited neighbor with the earliest position in R), and as a tiebreak ascending by vertex degree; Append A_i to the result set R. In other words, number the vertices according to a particular level structure (computed by breadth-first search) where the vertices in each level are visited in order of their predecessor's numbering from lowest to highest. Where the predecessors are the same, vertices are distinguished by degree (again ordered from lowest to highest). See also Graph bandwidth Sparse matrix References Cuthill–McKee documentation for the Boost C++ Libraries. A detailed description of the Cuthill–McKee algorithm. symrcm MATLAB's implementation of RCM. reverse_cuthill_mckee RCM routine from SciPy written in Cython. Matrix theory Graph algorithms Sparse matrices
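A minimal, self-contained Python sketch of the ordering described above follows. It uses the common simplified variant in which each vertex's unvisited neighbours are enqueued in order of increasing degree (which reproduces the predecessor-then-degree ordering), starts each component from a lowest-degree vertex rather than a true pseudo-peripheral one, and reverses the result to obtain RCM; production implementations such as SciPy's reverse_cuthill_mckee or MATLAB's symrcm choose the starting vertex more carefully.

from collections import deque

def cuthill_mckee(adj, reverse=True):
    # adj: dict mapping vertex -> iterable of neighbours (symmetric sparsity pattern).
    # Returns a list of vertices in (reverse) Cuthill-McKee order.
    degree = {v: len(set(adj[v])) for v in adj}
    visited = set()
    order = []
    # Handle disconnected graphs: start each component from its lowest-degree vertex.
    for start in sorted(adj, key=lambda v: degree[v]):
        if start in visited:
            continue
        visited.add(start)
        queue = deque([start])
        while queue:
            v = queue.popleft()
            order.append(v)
            # Enqueue unvisited neighbours in order of increasing degree.
            for w in sorted(set(adj[v]) - visited, key=lambda u: degree[u]):
                visited.add(w)
                queue.append(w)
    return order[::-1] if reverse else order

# Toy example (vertex labels are arbitrary):
adj = {
    0: [3, 5], 1: [2, 4], 2: [1, 4], 3: [0, 5],
    4: [1, 2, 6], 5: [0, 3, 6], 6: [4, 5],
}
print(cuthill_mckee(adj))   # a relabelling that clusters connected vertices, shrinking the matrix bandwidth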
Cuthill–McKee algorithm
[ "Mathematics" ]
467
[ "Matrices (mathematics)", "Sparse matrices", "Mathematical objects", "Combinatorics" ]
1,019,659
https://en.wikipedia.org/wiki/Aleksander%20Jab%C5%82o%C5%84ski
Aleksander Jabłoński (born 26 February 1898 in Woskresenówka, in Imperial Russia; died 9 September 1980 in Skierniewice, Poland) was a Polish physicist and member of the Polish Academy of Sciences. His research was in molecular spectroscopy and photophysics. Life and career He was born on 26 February 1898 in Woskresenówka near Kharkiv in Imperial Russia. He attended Gymnasium high school in Kharkiv as well as a music school where he learned to play the violin under supervision of Konstanty Gorski. In 1916, he started to study physics at the University of Kharkiv. During the World War I he served in the Polish I Corps in Russia. After the war he settled in Warsaw in 1918. In 1919–1920 he fought for Poland against aggression by Soviet Russia (and was consequently decorated with the Polish Cross of Valour). Jabłoński initially studied the violin at Warsaw Conservatory, under the virtuoso Stanisław Barcewicz, but later switched to science. He received a Ph.D. from the University of Warsaw in 1930, writing a thesis On the influence of the change of the wavelength of excitation light on the fluorescence spectra. He then went to Friedrich-Wilhelms-Universität in Berlin, Germany for two years (1930–31) as a fellow of the Rockefeller Foundation. He worked with Peter Pringsheim at the FWU and later with Otto Stern in Hamburg. In 1934 Jabłoński returned to Poland to receive habilitation from the University of Warsaw. His thesis was On the influence of intermolecular interactions on the absorption and emission of light, the subject to which he would devote the rest of his life. He served as president of the Polish Physical Society between 1957 and 1961. Jabłoński was a pioneer of molecular photophysics, creating the concept of the "luminescent centre" and his own theories of concentrational quenching and depolarization of photoluminescence. He also worked on pressure broadening of emission spectra lines and was the first to recognize the analogy between pressure broadening and molecular spectra. This led to development of the quantum-mechanical pressure broadening theory. Fluorescence is illustrated schematically with the classical Jablonski diagram, first proposed by Jabłoński in 1933 to describe absorption and emission of light. In 1946, he settled in Toruń where he was appointed Head of the Faculty of Physics at the Nicolaus Copernicus University. Awards and honours Cross of Valour (1920) Fellow of the Rockefeller Foundation (1930–31) Golden Cross of Merit (1951) Marian Smoluchowski Medal (1968) Honorary degree of the University of Windsor (1973) Honorary degree of the Nicolaus Copernicus University in Toruń (1973) Honorary degree of the University of Gdańsk (1975) References Complete list of papers published by Professor Aleksander Jablonski A short biography of Aleksander Jabłoński Kompletna lista prac Aleksandra Jabłońskiego Aleksander Jabłoński fulltext articles in Kujawsko-Pomorska Digital Library Notes 1898 births 1980 deaths University of Warsaw alumni Humboldt University of Berlin alumni Academic staff of Nicolaus Copernicus University in Toruń 20th-century Polish physicists Spectroscopists
Aleksander Jabłoński
[ "Physics", "Chemistry" ]
701
[ "Physical chemists", "Spectrum (physical sciences)", "Analytical chemists", "Spectroscopists", "Spectroscopy" ]
1,019,760
https://en.wikipedia.org/wiki/Weight%20transfer
Weight transfer and load transfer are two expressions used somewhat confusingly to describe two distinct effects: (1) the change in load borne by different wheels of even perfectly rigid vehicles during acceleration, and (2) the change in center of mass (CoM) location relative to the wheels because of suspension compliance or cargo shifting or sloshing. In the automobile industry, weight transfer customarily refers to the change in load borne by different wheels during acceleration. This would be more properly referred to as load transfer, and that is the expression used in the motorcycle industry, while weight transfer on motorcycles, to a lesser extent on automobiles, and cargo movement on either is due to a change in the CoM location relative to the wheels. This article uses this latter pair of definitions. Load transfer In wheeled vehicles, load transfer is the measurable change of load borne by different wheels during acceleration (both longitudinal and lateral). This includes braking and deceleration (which is an acceleration at a negative rate). No motion of the center of mass relative to the wheels is necessary, and so load transfer may be experienced by vehicles with no suspension at all. Load transfer is a crucial concept in understanding vehicle dynamics. The same is true in bikes, though only longitudinally. Cause The major forces that accelerate a vehicle occur at the tires' contact patches. Since these forces are not directed through the vehicle's CoM, one or more moments are generated: one force of each couple is the tires' traction force at pavement level, the other (equal but opposed) is the inertial force acting at the CoM, and the moment arm is the distance from the pavement surface to the CoM. It is these moments that cause variation in the load distributed between the tires. Often this is interpreted by the casual observer as a pitching or rolling motion of the vehicle's body. A perfectly rigid vehicle, without suspension, that would not exhibit pitching or rolling of the body still undergoes load transfer. However, the pitching and rolling of the body of a non-rigid vehicle adds some (small) weight transfer due to the (small) CoM horizontal displacement with respect to the wheel axes during suspension vertical travel, and also due to deformation of the tires, i.e. contact patch displacement relative to the wheel. Lowering the CoM towards the ground is one method of reducing load transfer. As a result load transfer is reduced in both the longitudinal and lateral directions. Another method of reducing load transfer is by increasing the wheel spacings. Increasing the vehicle's wheelbase (length) reduces longitudinal load transfer while increasing the vehicle's track (width) reduces lateral load transfer. Most high performance automobiles are designed to sit as low as possible and usually have an extended wheelbase and track. One way to calculate the effect of load transfer, keeping in mind that this article uses "load transfer" to mean the phenomenon commonly referred to as "weight transfer" in the automotive world, is with the so-called "weight transfer equation": ΔW_f = (a/g)·(h/b)·W, or equivalently ΔW_f = a·(h/b)·m, where ΔW_f is the change in load borne by the front wheels, a is the longitudinal acceleration, g is the acceleration of gravity, h is the center of mass height, b is the wheelbase, m is the total vehicle mass, and W is the total vehicle weight. 
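As a numerical illustration of the equation (the figures below are invented for the example, not taken from this article): under 0.8 g of braking, a car with a 0.55 m CoM height, a 2.7 m wheelbase and a mass of 1500 kg shifts roughly 2.4 kN of load onto the front axle, about 16% of its weight.

# Illustrative braking example with invented numbers (not from the article).
g = 9.81          # m/s^2
a = 0.8 * g       # longitudinal deceleration
h = 0.55          # CoM height, m
b = 2.70          # wheelbase, m
m = 1500.0        # vehicle mass, kg

delta_Wf = a * (h / b) * m     # extra load on the front axle, in newtons
W = m * g                      # total vehicle weight, in newtons
print(delta_Wf)                # ~2398 N
print(delta_Wf / W)            # ~0.163 -> about 16% of the weight shifts forward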
Weight transfer involves the actual (relatively small) movement of the vehicle CoM relative to the wheel axes due to displacement of the chassis as the suspension complies, or of cargo or liquids within the vehicle, which results in a redistribution of the total vehicle load between the individual tires. Center of mass Weight transfer occurs as the vehicle's CoM shifts during automotive maneuvers. Acceleration causes the sprung mass to rotate about a geometric axis resulting in relocation of the CoM. Front-back weight transfer is proportional to the change in the longitudinal location of the CoM to the vehicle's wheelbase, and side-to-side weight transfer (summed over front and rear) is proportional to the ratio of the change in the CoM's lateral location to the vehicle's track. Liquids, such as fuel, readily flow within their containers, causing changes in the vehicle's CoM. As fuel is consumed, not only does the position of the CoM change, but the total weight of the vehicle is also reduced. By way of example, when a vehicle accelerates, a weight transfer toward the rear wheels can occur. An outside observer might witness this as the vehicle visibly leans to the back, or squats. Conversely, under braking, weight transfer toward the front of the car can occur. Under hard braking it might be clearly visible even from inside the vehicle as the nose dives toward the ground (most of this will be due to load transfer). Similarly, during changes in direction (lateral acceleration), weight transfer to the outside of the direction of the turn can occur. Weight transfer is generally of far less practical importance than load transfer, for cars and SUVs at least. For instance in a 0.9g turn, a car with a track of 1650 mm and a CoM height of 550 mm will see a load transfer of 30% of the vehicle weight, that is the outer wheels will see 60% more load than before, and the inners 60% less. Total available grip will drop by around 6% as a result of this load transfer. At the same time, the CoM of the vehicle will typically move laterally and vertically, relative to the contact patch by no more than 30 mm, leading to a weight transfer of less than 2%, and a corresponding reduction in grip of 0.01%. Traction Load transfer causes the available traction at all four wheels to vary as the car brakes, accelerates, or turns. This bias to one pair of tires doing more "work" than the other pair results in a net loss of total available traction. The net loss can be attributed to the phenomenon known as tire load sensitivity. An exception is during positive acceleration when the engine power is driving two or fewer wheels. In this situation where all the tires are not being utilized load transfer can be advantageous. As such, the most powerful cars are almost never front wheel drive, as the acceleration itself causes the front wheels' traction to decrease. This is why sports cars usually have either rear wheel drive or all wheel drive (and in the all wheel drive case, the power tends to be biased toward the rear wheels under normal conditions). Rollover If (lateral) load transfer reaches the tire loading on one end of a vehicle, the inside wheel on that end will lift, causing a change in handling characteristic. If it reaches half the weight of the vehicle it will start to roll over. Some large trucks will roll over before skidding, while passenger vehicles and small trucks usually roll over only when they leave the road. 
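The cornering figures above can be checked directly, and the same geometry gives the rollover threshold discussed in this section - the point at which the transferred load equals half the vehicle weight and the inside wheels unload:

# Check of the 0.9 g cornering example above and the rollover threshold, using the same numbers.
a_over_g = 0.9    # lateral acceleration, in g
track = 1.650     # m
h = 0.550         # CoM height, m

lateral_transfer_fraction = a_over_g * h / track
print(lateral_transfer_fraction)     # 0.30 -> 30% of the vehicle weight, as stated above

# The transfer reaches half the vehicle weight (wheel lift / rollover threshold) at:
rollover_threshold_g = track / (2 * h)
print(rollover_threshold_g)          # 1.5 -> this car would need about 1.5 g before lifting its inside wheels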
Fitting racing tires to a tall or narrow vehicle and then driving it hard may lead to rollover. See also Anti-roll bar Bicycle and motorcycle dynamics Car handling References External links DOT rollover ratings by vehicle type Automotive engineering Vehicle dynamics
Weight transfer
[ "Engineering" ]
1,402
[ "Automotive engineering", "Mechanical engineering by discipline" ]
1,019,822
https://en.wikipedia.org/wiki/Ximelagatran
{{Drugbox | Verifiedfields = changed | Watchedfields = changed | verifiedrevid = 470633864 | IUPAC_name = ethyl 2-[[(1R)-1-cyclohexyl-2-[(2S)-2-[[4-(N-hydroxycarbamimidoyl)phenyl]methylcarbamoyl]azetidin-1-yl]-2-oxo-ethyl]amino]acetate | image = Ximelagatran.svg | width = 230 | tradename = Exanta | pregnancy_category = Uncategorized | legal_status = Withdrawn from market | routes_of_administration = Oral (tablets) | bioavailability = 20% | metabolism = to melagatran | elimination_half-life = 3–5 hours | excretion = Renal (80%) | IUPHAR_ligand = 6381 | CAS_number_Ref = | CAS_number = 192939-46-1 | ATC_prefix = B01 | ATC_suffix = AE05 | ChEMBL_Ref = | ChEMBL = 522038 | PubChem = 9574101 | DrugBank_Ref = | DrugBank = DB04898 | ChemSpiderID_Ref = | ChemSpiderID = 7848559 | UNII_Ref = | UNII = 49HFB70472 | KEGG_Ref = | KEGG = D01981 | C=24 | H=35 | N=5 | O=5 | molecular_weight_comment = (429 g/mol after conversion) | smiles = O=C(NCc1ccc(C(=N\O)\N)cc1)[C@H]3N(C(=O)[C@H](NCC(=O)OCC)C2CCCCC2)CC3 | StdInChI_Ref = | StdInChI = 1S/C24H35N5O5/c1-2-34-20(30)15-26-21(17-6-4-3-5-7-17)24(32)29-13-12-19(29)23(31)27-14-16-8-10-18(11-9-16)22(25)28-33/h8-11,17,19,21,26,33H,2-7,12-15H2,1H3,(H2,25,28)(H,27,31)/t19-,21+/m0/s1 | StdInChIKey_Ref = | StdInChIKey = ZXIBCJHYVWYIKI-PZJWPPBQSA-N }}Ximelagatran (Exanta or Exarta''', H 376/95) is an anticoagulant that has been investigated extensively as a replacement for warfarin that would overcome the problematic dietary, drug interaction, and monitoring issues associated with warfarin therapy. In 2006, its manufacturer AstraZeneca announced that it would withdraw pending applications for marketing approval after reports of hepatotoxicity (liver damage) during trials, and discontinue its distribution in countries where the drug had been approved (Germany, Portugal, Sweden, Finland, Norway, Iceland, Austria, Denmark, France, Switzerland, Argentina and Brazil). Method of action Ximelagatran, a direct thrombin inhibitor, was the first member of this class that can be taken orally. It acts solely by inhibiting the actions of thrombin. It is taken orally twice daily, and rapidly absorbed by the small intestine. Ximelagatran is a prodrug, being converted in vivo'' to the active agent melagatran. This conversion takes place in the liver and many other tissues through hydrolysis and dehydroxylation (replacing the ethyl and hydroxyl groups with hydrogen). Uses Ximelagatran was expected to replace warfarin and sometimes aspirin and heparin in many therapeutic settings, including deep venous thrombosis, prevention of secondary venous thromboembolism and complications of atrial fibrillation such as stroke. The efficacy of ximelagatran for these indications had been well documented, except for non valvular atrial fibrillation. An advantage, according to early reports by its manufacturer, was that it could be taken orally without any monitoring of its anticoagulant properties. This would have set it apart from warfarin and heparin, which require monitoring of the international normalized ratio (INR) and the partial thromboplastin time (PTT), respectively. A disadvantage recognised early was the absence of an antidote in case acute bleeding develops, while warfarin can be antagonised by prothrombin complex concentrate and/or vitamin K and heparin by protamine sulfate. 
Side effects Ximelagatran was generally well tolerated in the trial populations, but a small proportion (5–6%) developed elevated liver enzyme levels, which prompted the FDA to reject an initial application for approval in 2004. Further development was discontinued in 2006 following reports of hepatotoxicity. Subsequent analysis of Phase 2 clinical study data using extreme value modelling showed that the elevated liver enzyme levels observed in Phase 3 clinical studies could have been predicted; if this had been known at the time, it might have affected decisions on future development of the compound. A chemically different but pharmacologically similar substance, AZD-0837, was developed by AstraZeneca for similar indications. It is a prodrug of a potent, competitive, reversible inhibitor of free and fibrin-bound thrombin called ARH0637. The development of AZD-0837 has been discontinued. Due to a limitation identified in the long-term stability of the extended-release AZD-0837 drug product, a follow-up study from ASSURE on stroke prevention in patients with non-valvular atrial fibrillation was prematurely closed in 2010 after 2 years. Mortality was also numerically higher than with warfarin. In a Phase 2 trial for atrial fibrillation, the mean serum creatinine concentration increased by about 10% from baseline in patients treated with AZD-0837 and returned to baseline after cessation of therapy. References External links Direct thrombin inhibitors Abandoned drugs Prodrugs Hepatotoxins Drugs developed by AstraZeneca Azetidines
Ximelagatran
[ "Chemistry" ]
1,420
[ "Chemicals in medicine", "Drug safety", "Prodrugs", "Abandoned drugs" ]
1,019,925
https://en.wikipedia.org/wiki/Level%20structure
In the mathematical subfield of graph theory, a level structure of a rooted graph is a partition of the vertices into subsets that have the same distance from a given root vertex. Definition and construction Given a connected graph G = (V, E) with V the set of vertices and E the set of edges, and with a root vertex r, the level structure is a partition of the vertices into subsets Li called levels, consisting of the vertices at distance i from r. Equivalently, this set may be defined by setting L0 = {r}, and then, for i > 0, defining Li to be the set of vertices that are neighbors to vertices in Li − 1 but are not themselves in any earlier level. The level structure of a graph can be computed by a variant of breadth-first search:

    algorithm level-BFS(G, r):
        Q ← {r}
        for ℓ from 0 to ∞:
            process(Q, ℓ)   // the set Q holds all vertices at level ℓ
            mark all vertices in Q as discovered
            Q' ← {}
            for u in Q:
                for each edge (u, v):
                    if v is not yet marked:
                        add v to Q'
            if Q' is empty:
                return
            Q ← Q'

Properties In a level structure, each edge of G either has both of its endpoints within the same level, or its two endpoints are in consecutive levels. Applications The partition of a graph into its level structure may be used as a heuristic for graph layout problems such as graph bandwidth. The Cuthill–McKee algorithm is a refinement of this idea, based on an additional sorting step within each level. Level structures are also used in algorithms for sparse matrices, and for constructing separators of planar graphs. References Graph theory objects
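A runnable Python sketch of the level-BFS procedure described above (the function name is illustrative, and the graph is assumed to be given as an adjacency-list dictionary):

    def level_structure(graph, root):
        """Return the list of levels [L0, L1, ...], where Li holds the vertices at distance i from root."""
        levels = []
        discovered = {root}
        current = {root}            # Q in the pseudocode: all vertices at the current level
        while current:
            levels.append(current)
            nxt = set()             # Q' in the pseudocode
            for u in current:
                for v in graph[u]:
                    if v not in discovered:
                        discovered.add(v)
                        nxt.add(v)
            current = nxt
        return levels

    # Example: a 4-cycle rooted at "a".
    g = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"], "d": ["b", "c"]}
    print(level_structure(g, "a"))  # [{'a'}, {'b', 'c'}, {'d'}]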
Level structure
[ "Mathematics" ]
366
[ "Mathematical relations", "Graph theory", "Graph theory objects" ]
1,019,993
https://en.wikipedia.org/wiki/Thermus
Thermus is a genus of thermophilic bacteria. It is one of several genera belonging to the phylum Deinococcota. According to comparative analysis of 16S rRNA, this is one of the most ancient groups of bacteria. Thermus species can be distinguished from other genera in the family Thermaceae, as well as from all other bacteria, by the presence of eight conserved signature indels found in proteins such as adenylate kinase and replicative DNA helicase, as well as 14 conserved signature proteins that are exclusively shared by members of this genus. Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and the National Center for Biotechnology Information (NCBI). Among its species, T. thermophilus has special importance as a model organism for basic and applied research. Species incertae sedis: "T. anatoliensis" Kacagan et al. 2016 "T. caldophilus" Taguchi et al. 1983 "T. eggertssonii" Peters 2008 "T. murrieta" Benner et al. 2006 "T. nonproteolyticus" 1992 "T. rehai" Lin et al. 2002 "T. yunnanensis" Gong et al. 2005 Habitats The strains of the genus Thermus are generally isolated from hydrothermal areas where the water temperature ranges from 55 to 70 °C and the pH from 5.0 to 10.5. The first isolate of the genus Thermus was obtained from hydrothermal areas in Yellowstone National Park. Later, more isolates were obtained from several hydrothermal areas worldwide, such as in Japan, Iceland, New Zealand, New Mexico, and the Australian Artesian Basin. See also Bacteria Biotechnology Thermophiles Geyser List of bacteria genera List of bacterial orders References Deinococcota Thermophiles Thermozoa Organisms living on hydrothermal vents Bacteria genera
Thermus
[ "Biology" ]
412
[ "Organisms by adaptation", "Organisms by habitat", "Organisms living on hydrothermal vents" ]
1,020,021
https://en.wikipedia.org/wiki/Distance%20%28graph%20theory%29
In the mathematical field of graph theory, the distance between two vertices in a graph is the number of edges in a shortest path (also called a graph geodesic) connecting them. This is also known as the geodesic distance or shortest-path distance. Notice that there may be more than one shortest path between two vertices. If there is no path connecting the two vertices, i.e., if they belong to different connected components, then conventionally the distance is defined as infinite. In the case of a directed graph the distance d(u,v) between two vertices u and v is defined as the length of a shortest directed path from u to v consisting of arcs, provided at least one such path exists. Notice that, in contrast with the case of undirected graphs, d(u,v) does not necessarily coincide with d(v,u), so it is just a quasi-metric, and it might be the case that one is defined while the other is not. Related concepts A metric space defined over a set of points in terms of distances in a graph defined over the set is called a graph metric. The vertex set (of an undirected graph) and the distance function form a metric space, if and only if the graph is connected. The eccentricity ε(v) of a vertex v is the greatest distance between v and any other vertex; in symbols, ε(v) = max{d(v,u) : u ∈ V}. It can be thought of as how far a node is from the node most distant from it in the graph. The radius r of a graph is the minimum eccentricity of any vertex or, in symbols, r = min{ε(v) : v ∈ V}. The diameter d of a graph is the maximum eccentricity of any vertex in the graph. That is, d is the greatest distance between any pair of vertices or, alternatively, d = max{ε(v) : v ∈ V}. To find the diameter of a graph, first find the shortest path between each pair of vertices. The greatest length of any of these paths is the diameter of the graph. A central vertex in a graph of radius r is one whose eccentricity is r, that is, a vertex whose distance from its furthest vertex is equal to the radius; equivalently, a vertex v such that ε(v) = r. A peripheral vertex in a graph of diameter d is one whose eccentricity is d, that is, a vertex whose distance from its furthest vertex is equal to the diameter. Formally, v is peripheral if ε(v) = d. A pseudo-peripheral vertex v has the property that, for any vertex u, if u is as far away from v as possible, then v is as far away from u as possible. Formally, a vertex v is pseudo-peripheral if, for each vertex u with d(u,v) = ε(v), it holds that ε(u) = ε(v). A level structure of the graph, given a starting vertex, is a partition of the graph's vertices into subsets by their distances from the starting vertex. A geodetic graph is one for which every pair of vertices has a unique shortest path connecting them. For example, all trees are geodetic. The weighted shortest-path distance generalises the geodesic distance to weighted graphs. In this case it is assumed that the weight of an edge represents its length or, for complex networks, the cost of the interaction, and the weighted shortest-path distance d_W(u,v) is the minimum sum of weights across all the paths connecting u and v. See the shortest path problem for more details and algorithms. Algorithm for finding pseudo-peripheral vertices Often peripheral sparse matrix algorithms need a starting vertex with a high eccentricity. A peripheral vertex would be perfect, but is often hard to calculate. In most circumstances a pseudo-peripheral vertex can be used. A pseudo-peripheral vertex can easily be found with the following algorithm: Choose a vertex u. Among all the vertices that are as far from u as possible, let v be one with minimal degree. If ε(v) > ε(u) then set u = v and repeat with step 2, else u is a pseudo-peripheral vertex.
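To make the definitions and the pseudo-peripheral search above concrete, here is a small self-contained Python sketch (names are illustrative; the graph is assumed to be an unweighted adjacency-list dictionary, and the graph is assumed connected so that all distances are finite):

    from collections import deque

    def bfs_distances(graph, source):
        """Unweighted shortest-path distances from source within its connected component."""
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in graph[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    def eccentricity(graph, v):
        return max(bfs_distances(graph, v).values())

    def radius_and_diameter(graph):
        eccs = [eccentricity(graph, v) for v in graph]
        return min(eccs), max(eccs)

    def pseudo_peripheral_vertex(graph, start):
        """Iterative improvement described above: move to a minimum-degree farthest vertex while eccentricity grows."""
        u, ecc_u = start, eccentricity(graph, start)
        while True:
            dist = bfs_distances(graph, u)
            farthest = [w for w, d in dist.items() if d == ecc_u]
            v = min(farthest, key=lambda w: len(graph[w]))  # minimal degree among the farthest vertices
            ecc_v = eccentricity(graph, v)
            if ecc_v > ecc_u:
                u, ecc_u = v, ecc_v
            else:
                return u

    # Example: the path a-b-c-d has radius 2 (centres b and c) and diameter 3 (ends a and d).
    path = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
    print(radius_and_diameter(path))            # (2, 3)
    print(pseudo_peripheral_vertex(path, "b"))  # "d", an end of the path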
See also Distance matrix Resistance distance Betweenness centrality Centrality Closeness Degree diameter problem for graphs and digraphs Metric graph Notes Graph theory Metric geometry
Distance (graph theory)
[ "Mathematics" ]
766
[ "Discrete mathematics", "Graph theory", "Combinatorics", "Mathematical relations", "Graph distance" ]
1,020,537
https://en.wikipedia.org/wiki/Bow%20shock
In astrophysics, bow shocks are shock waves in regions where the conditions of density and pressure change dramatically due to the blowing stellar wind. A bow shock occurs when the magnetosphere of an astrophysical object interacts with the nearby flowing ambient plasma such as the solar wind. For Earth and other magnetized planets, it is the boundary at which the speed of the stellar wind abruptly drops as a result of its approach to the magnetopause. For stars, this boundary is typically the edge of the astrosphere, where the stellar wind meets the interstellar medium. Description The defining criterion of a shock wave is that the bulk velocity of the plasma drops from "supersonic" to "subsonic", where the speed of sound c_s is defined by c_s = √(γp/ρ), where γ is the ratio of specific heats, p is the pressure, and ρ is the density of the plasma. A common complication in astrophysics is the presence of a magnetic field. For instance, the charged particles making up the solar wind follow spiral paths along magnetic field lines. The velocity of each particle as it gyrates around a field line can be treated similarly to a thermal velocity in an ordinary gas, and in an ordinary gas the mean thermal velocity is roughly the speed of sound. At the bow shock, the bulk forward velocity of the wind (which is the component of the velocity parallel to the field lines about which the particles gyrate) drops below the speed at which the particles are gyrating. Around the Earth The best-studied example of a bow shock is that occurring where the Sun's wind encounters Earth's magnetopause, although bow shocks occur around all planets, both unmagnetized, such as Mars and Venus, and magnetized, such as Jupiter or Saturn. Earth's bow shock is about thick and located about from the planet. At comets Bow shocks form at comets as a result of the interaction between the solar wind and the cometary ionosphere. Far away from the Sun, a comet is an icy boulder without an atmosphere. As it approaches the Sun, the heat of the sunlight causes gas to be released from the cometary nucleus, creating an atmosphere called a coma. The coma is partially ionized by the sunlight, and when the solar wind passes through this ion coma, the bow shock appears. The first observations were made in the 1980s and 90s as several spacecraft flew by comets 21P/Giacobini–Zinner, 1P/Halley, and 26P/Grigg–Skjellerup. It was then found that the bow shocks at comets are wider and more gradual than the sharp planetary bow shocks seen at Earth, for example. These observations were all made near perihelion, when the bow shocks were already fully developed. The Rosetta spacecraft followed comet 67P/Churyumov–Gerasimenko from far out in the solar system, at a heliocentric distance of 3.6 AU, in toward perihelion at 1.24 AU, and back out again. This allowed Rosetta to observe the bow shock as it formed when the outgassing increased during the comet's journey toward the Sun. In this early state of development the shock was called the "infant bow shock". The infant bow shock is asymmetric and, relative to the distance to the nucleus, wider than fully developed bow shocks. Around the Sun For several decades, the solar wind has been thought to form a bow shock at the edge of the heliosphere, where it collides with the surrounding interstellar medium.
Moving away from the Sun, the point where the solar wind flow becomes subsonic is the termination shock, the point where the interstellar medium and solar wind pressures balance is the heliopause, and the point where the flow of the interstellar medium becomes subsonic would be the bow shock. This solar bow shock was thought to lie at a distance around 230 AU from the Sun – more than twice the distance of the termination shock as encountered by the Voyager spacecraft. However, data obtained in 2012 from NASA's Interstellar Boundary Explorer (IBEX) indicates the lack of any solar bow shock. Along with corroborating results from the Voyager spacecraft, these findings have motivated some theoretical refinements; current thinking is that formation of a bow shock is prevented, at least in the galactic region through which the Sun is passing, by a combination of the strength of the local interstellar magnetic-field and of the relative velocity of the heliosphere. Around other stars In 2006, a far infrared bow shock was detected near the AGB star R Hydrae. Bow shocks are also a common feature in Herbig Haro objects, in which a much stronger collimated outflow of gas and dust from the star interacts with the interstellar medium, producing bright bow shocks that are visible at optical wavelengths. The Hubble Space Telescope captured these images of bow shocks made of dense gasses and plasma in the Orion Nebula. Around massive stars If a massive star is a runaway star, it can form an infrared bow-shock that is detectable in 24 μm and sometimes in 8μm of the Spitzer Space Telescope or the W3/W4-channels of WISE. In 2016 Kobulnicky et al. created the largest spitzer/WISE bow-shock catalog to date with 709 bow-shock candidates. To get a larger bow-shock catalog The Milky Way Project (a Citizen Science project) aims to map infrared bow-shocks in the galactic plane. This larger catalog will help to understand the stellar wind of massive stars. The closest stars with infrared bow-shocks are: Most of them belong to the Scorpius–Centaurus association and Theta Carinae, which is the brightest star of IC 2602, might also belong to the Lower Centaurus–Crux subgroup. Epsilon Persei does not belong to this stellar association. Magnetic draping effect A similar effect, known as the magnetic draping effect, occurs when a super-Alfvénic plasma flow impacts an unmagnetized object such as what happens when the solar wind reaches the ionosphere of Venus: the flow deflects around the object draping the magnetic field along the wake flow. The condition for the flow to be super-Alfvénic means that the relative velocity between the flow and object, , is larger than the local Alfvén velocity which means a large Alfvénic Mach number: . For unmagnetized and electrically conductive objects, the ambient field creates electric currents inside the object, and into the surrounding plasma, such that the flow is deflected and slowed as the time scale of magnetic dissipation is much longer than the time scale of magnetic field advection. The induced currents in turn generate magnetic fields that deflect the flow creating a bow shock. For example, the ionospheres of Mars and Venus provide the conductive environments for the interaction with the solar wind. Without an ionosphere, the flowing magnetized plasma is absorbed by the non-conductive body. The latter occurs, for example, when the solar wind interacts with the Moon which has no ionosphere. 
In magnetic draping, the field lines are wrapped and draped around the leading side of the object, creating a narrow sheath which is similar to the bow shocks in the planetary magnetospheres. The concentrated magnetic field increases until the ram pressure becomes comparable to the magnetic pressure in the sheath: ρv² ≈ B²/(2μ₀), where ρ is the density of the plasma, B is the draped magnetic field near the object, and v is the relative speed between the plasma and the object. Magnetic draping has been detected around planets, moons, solar coronal mass ejections, and galaxies. See also Shock wave Shock waves in astrophysics Heliosheath Fermi glow Bow shock (aerodynamics) IRC -10414 Notes References External links NASA Astronomy Picture of the Day: Zeta Oph: Runaway Star (8 April 2017) Bow shock image (HD77581) Bow shock image (LL Ori) More on the Voyagers Hear the Jovian bow shock (from the University of Iowa) Cluster spacecraft makes a shocking discovery (Planetary Bow Shock) Planetary science Sun Shock waves Waves in plasmas Concepts in astronomy
Bow shock
[ "Physics", "Astronomy" ]
1,673
[ "Waves in plasmas", "Physical phenomena", "Shock waves", "Concepts in astronomy", "Plasma phenomena", "Waves", "Planetary science", "Astronomical sub-disciplines" ]
1,020,602
https://en.wikipedia.org/wiki/Lucida
Lucida (pronunciation: ) is an extended family of related typefaces designed by Charles Bigelow and Kris Holmes and released from 1984 onwards. The family is intended to be extremely legible when printed at small size or displayed on a low-resolution display – hence the name, from 'lucid' (clear or easy to understand). There are many variants of Lucida, including serif (Fax, Bright), sans-serif (Sans, Sans Unicode, Grande, Sans Typewriter) and scripts (Blackletter, Calligraphy, Handwriting). Many are released with other software, most notably Microsoft Office. Bigelow and Holmes, together with the (now defunct) TeX vendor Y&Y, extended the Lucida family with a full set of TeX mathematical symbols, making it one of the few typefaces that provide full-featured text and mathematical typesetting within TeX. Lucida is still licensed commercially through the TUG store as well through their own web store. The fonts are occasionally updated. Key features The Lucida fonts have a large x-height (tall lower-case letters), open apertures and quite widely spaced letters, classic features of fonts designed for legibility in body text. Capital letters were designed to be somewhat narrow and short in order to make all-caps acronyms blend in. Bigelow has said in interview that the characters were designed based on hand-drawn bitmaps to see what parts of letters needed to be clear in bitmap, before creating outlines that would render as clear bitmaps. The fonts include ligatures, but these are not needed for text, allowing use on simplistic typesetting systems. x-heights are consistent between the fonts. Hinting was used to allow onscreen display. Lucida Arrows A family of fonts containing arrows. Lucida Blackletter A family of cursive blackletter fonts released in 1992. Lucida Bright Based on Lucida Serif, it features more contrasted strokes and serifs. The font was first used as the text face for Scientific American magazine, and its letter-spacing was tightened to give it a slightly closer fit for use in two and three column formats. Lucida Calligraphy A script font developed from Chancery cursive, released in 1991. Lucida Casual A casual font, released in 1994. Similar to Lucida Handwriting, but without connecting strokes. In 2014, Bigelow & Holmes released additional weights in normal and narrow widths. Lucida Console A monospaced font that is a variant of Lucida Sans Typewriter, with smaller line spacing and the addition of the WGL4 character set. In 2014, Bigelow & Holmes released bold weights and italics in normal and narrow widths. Lucida Console was the default font in Microsoft Notepad from Windows 2000 through Windows 7, its replacement being Consolas. This was also the font for the blue screen of death from Windows XP to Windows 7. Lucida Fax A slab serif font family released in 1992. Derived from Lucida, and specifically designed for telefaxing. Lucida Handwriting A font, released in 1992, designed to resemble informal cursive handwriting with modern plastic-tipped or felt-tipped pens or markers. In 2014, Bigelow & Holmes added additional weights and widths to the family. Lucida Icons A family of fonts for ornament and decoration uses. It contains ampersands, interrobangs, asterisms, circled Lucida Sans numerals, etc. Lucida Math A family of fonts for mathematical expressions. Lucida Math Extension contains only mathematical symbols. Lucida Math Italic contains Latin characters from Lucida Serif Italic, but with smaller line spacing, and added Greek letters. 
Lucida Math contains mathematical symbols, and blackletter (from Lucida Blackletter) and script letters in (from Lucida Calligraphy Italic) Letterlike Symbols region. Lucida OpenType First released in March 2012, this collection includes OpenType math fonts in regular and bold weights, and Lucida Bright, Lucida Sans Typewriter, and Lucida Sans text fonts in the usual four variants (regular, italic, bold, bold italic). The regular math font includes an entirely new math script alphabet in Roundhand style, among other new characters. The Lucida Bright text fonts include Unicode Latin character blocks including Basic Latin, Latin-1, and Latin Extended-A characters for American, Western European, Central European, Turkish, and other Latin-based orthographies. Lucida Sans A family of humanist sans-serif fonts complementing Lucida Serif. The italic is a "true italic" rather than a "sloped roman", inspired by chancery cursive handwriting of the Italian renaissance, which Bigelow and Holmes studied while at Reed College in the 1960s. Lucida Grande A version of Lucida Sans with expanded character sets, released around 2000. It supports Latin, Greek, Cyrillic, Arabic, Hebrew, Thai scripts. It is most notable for having been used as the system font for macOS until version 10.10. Lucida Sans Typewriter Also called Lucida Typewriter Sans, this is a sans-serif monospaced font family, designed for typewriters. Its styling is reminiscent of Letter Gothic and Andalé Mono; a variant, Lucida Console , replaced those two fonts on Microsoft Windows systems. Lucida Sans Unicode Based on Lucida Sans Regular, this version added characters in Arrows, Block Elements, Box Drawing, Combining Diacritical Marks, Control Pictures, Currency Symbols, Cyrillic, General Punctuation, Geometric Shapes, Greek and Coptic, Hebrew, IPA Extensions, Latin Extended-A, Latin Extended-B, Letterlike Symbols, Mathematical Operators, Miscellaneous Symbols, Miscellaneous Technical, Spacing Modifier Letters, Superscripts and Subscripts regions. Lucida Serif The original Lucida font designed in 1985, featuring a thickened serif. It was simply called Lucida when it was first released. Lucida Typewriter Serif Also called Lucida Typewriter, this font is a slab serif monospaced version of Lucida Fax, but with wider serifs. The letters are wider than Lucida Sans Typewriter. Usages Lucida Console is used in various parts of Microsoft Windows. From Windows 2000 until Windows 7, Lucida Console is used as the default typeface of Notepad. In Windows 2000 until Windows 7, and in Windows CE, Lucida Console is used as the typeface of the Blue Screen of Death. Lucida Grande, as well as Lucida Sans Demibold (identical outlines to Lucida Grande Bold but with tighter spacing of numerals), were used as the primary user interface font in Apple Inc.'s Mac OS X operating system until OS X Yosemite, as well as many programs including Front Row. Lucida is also used in the logo for Air Canada. A collection of Lucida variants are included in the Oracle JRE 9. Lucida Calligraphy was used in the logo for Gladden Entertainment. In April 2012, Lucida Sans was selected by GfK Blue Moon as the font for a package design as part of a proposed law in Australia banning logos on cigarette packaging. The proposed law requires cigarettes to be sold in dark olive-brown packages that depict graphic images of the effects of smoking and the cigarette's brand printed in Lucida Sans. 
According to Tom Delaney, a senior designer with New York design consultant Muts & Joy, "Lucida Sans is one of the least graceful sans-serif typefaces designed. It’s clumsy in its line construction." On August 15, 2012, the Australian government approved the ban on cigarette logos, effectively replacing them with the unattractive packaging. See also MathTime Wingdings References External links Lucida and TeX (TeX Users Group) Lucida Font Family Group - by Kris Holmes, Charles Bigelow (Linotype corporation) Notes on Lucida, by Charles Bigelow Lucida Family Overview by Charles Bigelow and Kris Holmes Lucida Calligraphy Text Samples - Thin, Lite, Normal, Bold, UltraBlack Lucida Handwriting Text Samples - Thin, Lite, Normal, Bold, UltraBlack Lucida Casual Text Samples - Thin, Lite, Normal, Bold, UltraBlack Lucida Grande Text Samples - Light, Normal, Bold, Black Lucida OpenType font set Lucida Bright Math OT Ulrik Vieth and Mojca Miklavec, Another incarnation of Lucida: Towards Lucida OpenType, TUGboat, Volume 32 (2011), No. 2 All Lucida fonts by Charles Bigelow and Kris Holmes Interview with Charles Bigelow (Yue Wang) Unified serif and sans-serif typeface families Symbol typefaces TeX Typefaces and fonts introduced in 1984 Mathematical OpenType typefaces Humanist sans-serif typefaces Typefaces designed by Charles Bigelow (type designer) Typefaces designed by Kris Holmes
Lucida
[ "Mathematics" ]
1,875
[ "Mathematical OpenType typefaces", "TeX", "Mathematical markup languages", "Mathematical typefaces" ]
1,020,605
https://en.wikipedia.org/wiki/Chaotropic%20agent
A chaotropic agent is a molecule in water solution that can disrupt the hydrogen bonding network between water molecules (i.e. exerts chaotropic activity). This has an effect on the stability of the native state of other molecules in the solution, mainly macromolecules (proteins, nucleic acids) by weakening the hydrophobic effect. For example, a chaotropic agent reduces the amount of order in the structure of a protein formed by water molecules, both in the bulk and the hydration shells around hydrophobic amino acids, and may cause its denaturation. Conversely, an antichaotropic agent (kosmotropic) is a molecule in an aqueous solution that will increase the hydrophobic effects within the solution. Antichaotropic salts such as ammonium sulphate can be used to precipitate substances from the impure mixture. This is used in protein purification processes, to remove undesired proteins from solution. Overview A chaotropic agent is a substance which disrupts the structure of, and denatures, macromolecules such as proteins and nucleic acids (e.g. DNA and RNA). Chaotropic solutes increase the entropy of the system by interfering with intermolecular interactions mediated by non-covalent forces such as hydrogen bonds, van der Waals forces, and hydrophobic effects. Macromolecular structure and function is dependent on the net effect of these forces (see protein folding), therefore it follows that an increase in chaotropic solutes in a biological system will denature macromolecules, reduce enzymatic activity and induce stress on a cell (i.e., a cell will have to synthesize stress protectants). Tertiary protein folding is dependent on hydrophobic forces from amino acids throughout the sequence of the protein. Chaotropic solutes decrease the net hydrophobic effect of hydrophobic regions because of a disordering of water molecules adjacent to the protein. This solubilises the hydrophobic region in the solution, thereby denaturing the protein. This is also directly applicable to the hydrophobic region in lipid bilayers; if a critical concentration of a chaotropic solute is reached (in the hydrophobic region of the bilayer) then membrane integrity will be compromised, and the cell will lyse. Chaotropic salts that dissociate in solution exert chaotropic effects via different mechanisms. Whereas chaotropic compounds such as ethanol interfere with non-covalent intramolecular forces as outlined above, salts can have chaotropic properties by shielding charges and preventing the stabilization of salt bridges. Hydrogen bonding is stronger in non-polar media, so salts, which increase the chemical polarity of the solvent, can also destabilize hydrogen bonding. Mechanistically this is because there are insufficient water molecules to effectively solvate the ions. This can result in ion-dipole interactions between the salts and hydrogen bonding species which are more favorable than normal hydrogen bonds. Common chaotropic agents include n-butanol, ethanol, guanidinium chloride, lithium perchlorate, lithium acetate, magnesium chloride, phenol, 2-propanol, sodium dodecyl sulfate, thiourea, and urea. See also Boom method Chaotropic activity Denaturation (biochemistry) DNA separation by silica adsorption Hofmeister series Kosmotropic Minicolumn purification References Biomolecules Entropy
Chaotropic agent
[ "Physics", "Chemistry", "Mathematics", "Biology" ]
727
[ "Thermodynamic properties", "Natural products", "Physical quantities", "Biochemistry", "Dynamical systems", "Quantity", "Organic compounds", "Entropy", "Biomolecules", "Asymmetry", "Structural biology", "Wikipedia categories named after physical quantities", "Symmetry", "Molecular biology"...
1,020,643
https://en.wikipedia.org/wiki/Hydrophobic%20effect
The hydrophobic effect is the observed tendency of nonpolar substances to aggregate in an aqueous solution and to be excluded by water. The word hydrophobic literally means "water-fearing", and it describes the segregation of water and nonpolar substances, which maximizes the entropy of water and minimizes the area of contact between water and nonpolar molecules. In terms of thermodynamics, the hydrophobic effect is the free energy change of water surrounding a solute. A positive free energy change of the surrounding solvent indicates hydrophobicity, whereas a negative free energy change implies hydrophilicity. The hydrophobic effect is responsible for the separation of a mixture of oil and water into its two components. It is also responsible for effects related to biology, including: cell membrane and vesicle formation, protein folding, insertion of membrane proteins into the nonpolar lipid environment and protein-small molecule associations. Hence the hydrophobic effect is essential to life. Substances for which this effect is observed are known as hydrophobes. Amphiphiles Amphiphiles are molecules that have both hydrophobic and hydrophilic domains. Detergents are composed of amphiphiles that allow hydrophobic molecules to be solubilized in water by forming micelles and bilayers (as in soap bubbles). They are also important to cell membranes composed of amphiphilic phospholipids that prevent the internal aqueous environment of a cell from mixing with external water. Folding of macromolecules In the case of protein folding, the hydrophobic effect is important to understanding the structure of proteins that have hydrophobic amino acids (such as valine, leucine, isoleucine, phenylalanine, tryptophan and methionine) clustered together within the protein. Structures of globular proteins have a hydrophobic core in which hydrophobic side chains are buried from water, which stabilizes the folded state. Charged and polar side chains are situated on the solvent-exposed surface where they interact with surrounding water molecules. Minimizing the number of hydrophobic side chains exposed to water is the principal driving force behind the folding process, although formation of hydrogen bonds within the protein also stabilizes protein structure. The energetics of DNA tertiary-structure assembly were determined to be driven by the hydrophobic effect, in addition to Watson–Crick base pairing, which is responsible for sequence selectivity, and stacking interactions between the aromatic bases. Protein purification In biochemistry, the hydrophobic effect can be used to separate mixtures of proteins based on their hydrophobicity. Column chromatography with a hydrophobic stationary phase such as phenyl-sepharose will cause more hydrophobic proteins to travel more slowly, while less hydrophobic ones elute from the column sooner. To achieve better separation, a salt may be added (higher concentrations of salt increase the hydrophobic effect) and its concentration decreased as the separation progresses. Cause The origin of the hydrophobic effect is not fully understood. Some argue that the hydrophobic interaction is mostly an entropic effect originating from the disruption of highly dynamic hydrogen bonds between molecules of liquid water by the nonpolar solute. A hydrocarbon chain or a similar nonpolar region of a large molecule is incapable of forming hydrogen bonds with water. 
Introduction of such a non-hydrogen bonding surface into water causes disruption of the hydrogen bonding network between water molecules. The hydrogen bonds are reoriented tangentially to such surface to minimize disruption of the hydrogen bonded 3D network of water molecules, and this leads to a structured water "cage" around the nonpolar surface. The water molecules that form the "cage" (or clathrate) have restricted mobility. In the solvation shell of small nonpolar particles, the restriction amounts to some 10%. For example, in the case of dissolved xenon at room temperature a mobility restriction of 30% has been found. In the case of larger nonpolar molecules, the reorientational and translational motion of the water molecules in the solvation shell may be restricted by a factor of two to four; thus, at 25 °C the reorientational correlation time of water increases from 2 to 4-8 picoseconds. Generally, this leads to significant losses in translational and rotational entropy of water molecules and makes the process unfavorable in terms of the free energy in the system. By aggregating together, nonpolar molecules reduce the surface area exposed to water and minimize their disruptive effect. The hydrophobic effect can be quantified by measuring the partition coefficients of non-polar molecules between water and non-polar solvents. The partition coefficients can be transformed to free energy of transfer which includes enthalpic and entropic components, ΔG = ΔH - TΔS. These components are experimentally determined by calorimetry. The hydrophobic effect was found to be entropy-driven at room temperature because of the reduced mobility of water molecules in the solvation shell of the non-polar solute; however, the enthalpic component of transfer energy was found to be favorable, meaning it strengthened water-water hydrogen bonds in the solvation shell due to the reduced mobility of water molecules. At the higher temperature, when water molecules become more mobile, this energy gain decreases along with the entropic component. The hydrophobic effect depends on the temperature, which leads to "cold denaturation" of proteins. The hydrophobic effect can be calculated by comparing the free energy of solvation with bulk water. In this way, the hydrophobic effect not only can be localized but also decomposed into enthalpic and entropic contributions. See also Entropic force Hydrophobe Hydrophile Hydrophobicity scales Interfacial tension Superhydrophobe Superhydrophobic coating References Chemical bonding Supramolecular chemistry Intermolecular forces
Hydrophobic effect
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,216
[ "Molecular physics", "Materials science", "Intermolecular forces", "Condensed matter physics", "nan", "Nanotechnology", "Chemical bonding", "Supramolecular chemistry" ]
1,020,980
https://en.wikipedia.org/wiki/Starling%20equation
The Starling principle holds that extracellular fluid movements between blood and tissues are determined by differences in hydrostatic pressure and colloid osmotic pressure (oncotic pressure) between plasma inside microvessels and interstitial fluid outside them. The Starling equation, proposed many years after the death of Starling, describes that relationship in mathematical form and can be applied to many biological and non-biological semipermeable membranes. The classic Starling principle and the equation that describes it have in recent years been revised and extended. Every day around 8 litres of water (solvent) containing a variety of small molecules (solutes) leaves the blood stream of an adult human and perfuses the cells of the various body tissues. Interstitial fluid drains by afferent lymph vessels to one of the regional lymph node groups, where around 4 litres per day is reabsorbed to the blood stream. The remainder of the lymphatic fluid is rich in proteins and other large molecules and rejoins the blood stream via the thoracic duct, which empties into the great veins close to the heart. Filtration from plasma to interstitial (or tissue) fluid occurs in microvascular capillaries and post-capillary venules. In most tissues the micro vessels are invested with a continuous internal surface layer that includes a fibre matrix now known as the endothelial glycocalyx, whose interpolymer spaces function as a system of small pores, radius circa 5 nm. Where the endothelial glycocalyx overlies a gap in the junction molecules that bind endothelial cells together (inter endothelial cell cleft), the plasma ultrafiltrate may pass to the interstitial space, leaving larger molecules reflected back into the plasma. A small number of continuous capillaries are specialised to absorb solvent and solutes from interstitial fluid back into the blood stream through fenestrations in endothelial cells, but the volume of solvent absorbed every day is small. Discontinuous capillaries as found in the sinusoidal tissues of bone marrow, liver and spleen have little or no filter function. The rate at which fluid is filtered across vascular endothelium (transendothelial filtration) is determined by the sum of two outward forces, capillary pressure (P_c) and interstitial protein osmotic pressure (π_i), and two absorptive forces, plasma protein osmotic pressure (π_p) and interstitial pressure (P_i). The Starling equation describes these forces in mathematical terms. It is one of the Kedem–Katchalsky equations which bring nonsteady state thermodynamics to the theory of osmotic pressure across membranes that are at least partly permeable to the solute responsible for the osmotic pressure difference. The second Kedem–Katchalsky equation explains the trans endothelial transport of solutes, J_s. The equation The classic Starling equation reads as follows: J_v = L_p S [(P_c − P_i) − σ(π_p − π_i)] where: J_v is the trans endothelial solvent filtration volume per second (SI units of m3·s−1).
[(P_c − P_i) − σ(π_p − π_i)] is the net driving force (SI units of Pa = kg·m−1·s−2, often expressed as mmHg), P_c is the capillary hydrostatic pressure, P_i is the interstitial hydrostatic pressure, π_p is the plasma protein oncotic pressure, π_i is the interstitial oncotic pressure, L_p is the hydraulic conductivity of the membrane (SI units of m2·s·kg−1, equivalent to m·s−1·mmHg−1), S is the surface area for filtration (SI units of m2); the product L_p·S is defined as the filtration coefficient (SI units of m4·s·kg−1, or equivalently m3·s−1·mmHg−1), and σ is Staverman's reflection coefficient (adimensional). By convention, outward force is defined as positive, and inward force is defined as negative. If J_v is positive, solvent is leaving the capillary (filtration). If negative, solvent is entering the capillary (absorption). Applying the classic Starling equation, it had long been taught that continuous capillaries filter out fluid in their arteriolar section and reabsorb most of it in their venular section, as shown by the diagram. However, empirical evidence shows that, in most tissues, the flux of the intraluminal fluid of capillaries is continuous and, primarily, effluent. Efflux occurs along the whole length of a capillary. Fluid filtered to the space outside a capillary is mostly returned to the circulation via lymph nodes and the thoracic duct. A mechanism for this phenomenon is the Michel–Weinbaum model, in honour of two scientists who, independently, described the filtration function of the glycocalyx. Briefly, the colloid osmotic pressure π_i of the interstitial fluid has been found to have no effect on J_v, and the colloid osmotic pressure difference that opposes filtration is now known to be π_p minus the subglycocalyx oncotic pressure (π_g), which is close to zero while there is adequate filtration to flush interstitial proteins out of the interendothelial cleft. Consequently, J_v is much less than previously calculated, and the unopposed diffusion of interstitial proteins to the subglycocalyx space if and when filtration falls wipes out the colloid osmotic pressure difference necessary for reabsorption of fluid to the capillary. The revised Starling equation is compatible with the steady-state Starling principle: J_v = L_p S [(P_c − P_i) − σ(π_p − π_g)] where: J_v is the trans endothelial solvent filtration volume per second, [(P_c − P_i) − σ(π_p − π_g)] is the net driving force, P_c is the capillary hydrostatic pressure, P_i is the interstitial hydrostatic pressure, π_p is the plasma protein oncotic pressure, π_g is the subglycocalyx oncotic pressure, L_p is the hydraulic conductivity of the membrane, S is the surface area for filtration, and σ is Staverman's reflection coefficient. Pressures are often measured in millimetres of mercury (mmHg), and the filtration coefficient in millilitres per minute per millimetre of mercury (ml·min−1·mmHg−1). Filtration coefficient In some texts the product of hydraulic conductivity and surface area is called the filtration coefficient K_fc. Reflection coefficient Staverman's reflection coefficient, σ, is a unitless constant that is specific to the permeability of a membrane to a given solute. The Starling equation, written without σ, describes the flow of a solvent across a membrane that is impermeable to the solutes contained within the solution. σ_n corrects for the partial permeability of a semipermeable membrane to a solute n.
Where σ is close to 1, the plasma membrane is less permeable to the denoted species (for example, larger molecules such as albumin and other plasma proteins), which may flow across the endothelial lining, from higher to lower concentrations, more slowly, while allowing water and smaller solutes through the glycocalyx filter to the extravascular space. Glomerular capillaries have a reflection coefficient close to 1, as normally no protein crosses into the glomerular filtrate. In contrast, hepatic sinusoids have no reflection coefficient as they are fully permeable to protein. Hepatic interstitial fluid within the space of Disse has the same colloid osmotic pressure as plasma, and so hepatocyte synthesis of albumin can be regulated. Albumin and other proteins in the interstitial spaces return to the circulation via lymph. Approximated values Following are typically quoted values for the variables in the classic Starling equation: It is reasoned that some albumin escapes from the capillaries and enters the interstitial fluid, where it would produce a flow of water equivalent to that produced by a hydrostatic pressure of +3 mmHg. Thus, the difference in protein concentration would produce a flow of fluid into the vessel at the venous end equivalent to 28 − 3 = 25 mmHg of hydrostatic pressure. The total oncotic pressure present at the venous end could be considered as +25 mmHg. At the beginning (arteriolar end) of a capillary, there is a net driving force outwards from the capillary of +9 mmHg. At the end (venular end), on the other hand, there is a net driving force of −8 mmHg. Assuming that the net driving force declines linearly, there is then a mean net driving force outwards from the capillary as a whole, which results in more fluid exiting a capillary than re-entering it. The lymphatic system drains this excess. J. Rodney Levick argues in his textbook that the interstitial force is often underestimated, and measurements used to populate the revised Starling equation show the absorbing forces to be consistently less than capillary or venular pressures. Specific organs Kidneys Glomerular capillaries have a continuous glycocalyx layer in health, and the total transendothelial filtration rate of solvent (J_v) to the renal tubules is normally around 125 ml/min (about 180 litres/day). Glomerular capillary J_v is more familiarly known as the glomerular filtration rate (GFR). In the rest of the body's capillaries, J_v is typically 5 ml/min (around 8 litres/day), and the fluid is returned to the circulation via afferent and efferent lymphatics. Lungs The Starling equation can describe the movement of fluid from pulmonary capillaries to the alveolar air space. Clinical significance Woodcock and Woodcock showed in 2012 that the revised Starling equation (steady-state Starling principle) provides scientific explanations for clinical observations concerning intravenous fluid therapy. Traditional teaching of both filtration and absorption of fluid occurring in a single capillary has been superseded by the concept of a vital circulation of extracellular fluid running parallel to the circulation of blood. New approaches to the treatment of oedema (tissue swelling) are suggested. History The Starling equation is named for the British physiologist Ernest Starling, who is also recognised for the Frank–Starling law of the heart.
Starling can be credited with identifying that the "absorption of isotonic salt solutions (from the extravascular space) by the blood vessels is determined by this osmotic pressure of the serum proteins" in 1896. See also Renal function References External links Derangedphysiology.com: Starling's Principle of Transvascular Fluid Dynamics Starling's principle of transvascular fluid dynamics | Deranged Physiology Eponymous equations of physics Equations of fluid dynamics Cardiovascular physiology Mathematics in medicine
Starling equation
[ "Physics", "Chemistry", "Mathematics" ]
2,276
[ "Equations of fluid dynamics", "Equations of physics", "Applied mathematics", "Eponymous equations of physics", "Mathematics in medicine", "Fluid dynamics" ]
1,021,118
https://en.wikipedia.org/wiki/Fast%20fracture
In structural engineering and materials science, fast fracture is a phenomenon in which a flaw (such as a crack) in a material expands quickly and leads to catastrophic failure of the material. It proceeds at high speed and requires a relatively small amount of accumulated strain energy, making it a dangerous failure mode. Flaw The stress acting on a material when fast fracture occurs is less than the material's yield stress. A representative example is what happens when a blown-up balloon is poked with a needle, that is, fast fracture of the balloon's material. The energy in the balloon comes from the compressed gas inside it and the energy stored in the rubber membrane itself. The introduction of the flaw, which in this case is the pin prick, leads to the explosion as the membrane fails by fast fracture. However, if the same flaw is introduced to a balloon with less energy, as in the case of a partially inflated balloon, fast fracture will not occur unless the pressure is progressively increased to a critical value at which fast fracture occurs. The occurrence of fast fracture can depend on the material. For instance, it occurs in brittle materials, which have little capacity for deformation, even if the flaw only involves small defects caused by the manufacturing process. See also Yield (engineering) References Mechanical failure
Fast fracture
[ "Materials_science", "Engineering" ]
268
[ "Mechanical failure", "Materials science", "Mechanical engineering" ]
1,021,187
https://en.wikipedia.org/wiki/Horst%20Berger
Horst Berger (1928-2019) was a structural engineer and designer known for his work with lightweight tensile architecture. After receiving a degree in Civil Engineering in 1954 from Stuttgart University in Stuttgart, Germany, he began working in 1955 at the Bridge and Special Structures Department of Wayss and Freitag in Frankfurt. In 1960, he joined Severud Associates in New York city and worked on projects such as the St. Louis Arch, Madison Square Garden, and Toronto City Hall. After forming Geiger Berger Associates in 1968 with air supported roof inventor David Geiger, the firm gained international recognition for its incorporation of lightweight fabric structures into permanent architectural designs. During his time at Geiger Berger Associates, Horst Berger had the challenge of engineering the roof designed by architect Fazlur Rahman Khan of Skidmore, Owings and Merrill for the Haj Terminal at the Jeddah Airport. This tensile fabric structure consists of 210 roof units contained in ten modules that are supported on steel pylons. In 1990 Horst Berger was asked to create a tensile fabric roof for the Denver International Airport. Challenges of snow loading and attaching the rigid walls to the fabric roof made it one of Berger’s toughest projects. The unique design with the roofing structure gave the terminal a more spacious layout. In 1990 he became a professor at the School of Architecture of the City College of New York. While studying and working in New York, Berger married an American woman Gretchen (Gay) Smart. They had four children, Ralf, Susie, Paul and Barbara; Barbara died in 2011, aged 53. Notable works Hajj terminal, King Abdul Aziz International Airport, Jeddah, Saudi Arabia King Fahd International Stadium, Riyadh, Saudi Arabia Seaworld Pavilion, San Diego, California San Diego Convention Center, San Diego, California Wimbledon Tennis Arena, Wimbledon, London, United Kingdom Great Hall, Alexandra, London, United Kingdom Shoreline Amphitheater, Mountain View, California Whale Pool Enclosure for the New York Aquarium, Brooklyn, New York Denver International Airport, Colorado Eilat Performing Arts Center, Elat, Israel Metrodome, Minneapolis References See also Tensile architecture Tensile and membrane structures City College of New York faculty Tensile architecture Tensile membrane structures 1928 births 2019 deaths
Horst Berger
[ "Technology" ]
458
[ "Structural system", "Tensile architecture" ]
1,021,210
https://en.wikipedia.org/wiki/Immunohistochemistry
Immunohistochemistry is a form of immunostaining. It involves the process of selectively identifying antigens (proteins) in cells and tissue, by exploiting the principle of antibodies binding specifically to antigens in biological tissues. Albert Hewett Coons, Ernest Berliner, Norman Jones and Hugh J Creech were the first to develop immunofluorescence, in 1941. This led to the later development of immunohistochemistry. Immunohistochemical staining is widely used in the diagnosis of abnormal cells such as those found in cancerous tumors. In some cancer cells, certain tumor antigens are expressed, which makes their detection possible. Immunohistochemistry is also widely used in basic research, to understand the distribution and localization of biomarkers and differentially expressed proteins in different parts of a biological tissue. Sample preparation Immunohistochemistry can be performed on tissue that has been fixed and embedded in paraffin, and also on cryopreserved (frozen) tissue. Depending on the way the tissue is preserved, there are different steps to prepare the tissue for immunohistochemistry, but the general method includes proper fixation, antigen retrieval, incubation with a primary antibody, and then incubation with a secondary antibody. Tissue preparation and fixation Fixation of the tissue is important to preserve the tissue and maintain cellular morphology. The fixation formula, the ratio of fixative to tissue, and the time in the fixative will all affect the result. The fixation solution (fixative) is often 10% neutral buffered formalin. Normal fixation time is 24 hours at room temperature. The ratio of fixative to tissue ranges from 1:1 to 1:20. After the tissue is fixed it can be embedded in paraffin wax. For frozen sections, fixation is usually performed after sectioning, unless new antibodies are going to be tested; acetone or formalin can then be used. Sectioning Sectioning of the tissue sample is done using a microtome. For paraffin-embedded tissue 4 μm is the normal thickness, and for frozen sections 4–6 μm. The thickness of the sliced sections matters and is an important factor in immunohistochemistry. If a section of brain tissue measuring 4 μm is compared with a section measuring 7 μm, some of what is visible in the 7 μm thick section might be lacking in the 4 μm section. This shows the importance of reporting the methodological details. Paraffin-embedded tissues should be deparaffinized in xylene or a good substitute, followed by alcohol, to remove all the paraffin on and around the tissue sample. Antigen retrieval Antigen retrieval is required to make the epitopes accessible for immunohistochemical staining for most formalin-fixed tissue sections. The epitopes are the binding sites for the antibodies used to visualize the targeted antigen, and they may be masked due to the fixation. Fixation of the tissue may cause formation of methylene bridges or crosslinking of amino groups, so that the epitopes are no longer available. Antigen retrieval can restore the masked antigenicity, possibly by breaking down the crosslinks caused by fixation. The most common way to perform antigen retrieval is by using high-temperature heating while soaking the slides in a buffer solution. This can be done in different ways, for example by using a microwave oven, autoclave, heating plate or water bath. For frozen sections, antigen retrieval is generally not necessary, but for frozen sections that have been fixed in acetone or formalin, antigen retrieval can improve the immunohistochemistry signal.
Blocking Non-specific binding of antibodies can cause background staining. Although antibodies bind to specific epitopes, they may also partially or weakly bind to sites on nonspecific proteins that are similar to the binding site on the target protein. By incubating the tissue with normal serum isolated from the species in which the secondary antibody was produced, the background staining can be reduced. It is also possible to use commercially available universal blocking buffers. Other common blocking buffers include normal serum, non-fat dry milk, BSA, or gelatin. Endogenous enzyme activity may also cause background staining but can be reduced if the tissue is treated with hydrogen peroxide. Sample labeling After preparing the sample, the target can be visualized by using antibodies labeled with fluorescent compounds, metals or enzymes. There are direct and indirect methods for labeling the sample. Antibody types The antibodies used for detection can be polyclonal or monoclonal. Polyclonal antibodies are made by using animals such as guinea pig, rabbit, mouse, rat, or goat. The animal is injected with the antigen of interest, which triggers an immune response. The antibodies can then be isolated from the animal's whole serum. Polyclonal antibody production results in a mixture of different antibodies that recognize multiple epitopes. Monoclonal antibodies are made by injecting the animal with the antigen of interest and then isolating an antibody-producing B cell, typically from the spleen. The antibody-producing cell is then fused with a cancer cell line, and the resulting antibodies show specificity for a single epitope. For immunohistochemical detection strategies, antibodies are classified as primary or secondary reagents. Primary antibodies are raised against an antigen of interest and are typically unconjugated (unlabeled). Secondary antibodies are raised against immunoglobulins of the primary antibody species. The secondary antibody is usually conjugated to a linker molecule, such as biotin, that then recruits reporter molecules, or the secondary antibody itself is directly bound to the reporter molecule. Detection methods The direct method is a one-step staining method and involves a labeled antibody reacting directly with the antigen in tissue sections. While this technique utilizes only one antibody and therefore is simple and rapid, the sensitivity is lower due to little signal amplification, in contrast to indirect approaches. The indirect method involves an unlabeled primary antibody that binds to the target antigen in the tissue. A secondary antibody, which binds to the primary antibody, is then added as a second layer. As mentioned, the secondary antibody must be raised against the IgG of the animal species in which the primary antibody has been raised. This method is more sensitive than direct detection strategies because of signal amplification due to the binding of several secondary antibodies to each primary antibody. The indirect method, aside from its greater sensitivity, also has the advantage that only a relatively small number of standard conjugated (labeled) secondary antibodies needs to be generated. For example, a labeled secondary antibody raised against rabbit IgG is useful with any primary antibody raised in rabbit. This is particularly useful when a researcher is labeling more than one primary antibody, whether due to polyclonal selection producing an array of primary antibodies for a single antigen or when there is interest in multiple antigens. 
With the direct method, it would be necessary to label each primary antibody for every antigen of interest. Reporter molecules Reporter molecules vary based on the nature of the detection method, the most common being chromogenic and fluorescence detection. In chromogenic immunohistochemistry an antibody is conjugated to an enzyme, such as alkaline phosphatase or horseradish peroxidase, that can catalyze a color-producing reaction in the presence of a chromogenic substrate like diaminobenzidine. The colored product can be analyzed with an ordinary light microscope. In immunofluorescence the antibody is tagged with a fluorophore, such as fluorescein isothiocyanate, tetramethylrhodamine isothiocyanate, aminomethyl coumarin acetate or Cyanine5. Synthetic fluorochromes such as the Alexa Fluor dyes are also commonly used. The fluorochromes can be visualized by a fluorescence or confocal microscope. For chromogenic and fluorescent detection methods, densitometric analysis of the signal can provide semi- and fully quantitative data, respectively, to correlate the level of reporter signal to the level of protein expression or localization. Counterstains After immunohistochemical staining of the target antigen, another stain is often applied. The counterstain provides contrast that helps the primary stain stand out and makes it easier to examine the tissue morphology. It also helps with orientation and visualization of the tissue section. Hematoxylin is commonly used. Troubleshooting In immunohistochemical techniques, there are several steps prior to the final staining of the tissue that can cause a variety of problems. These include strong background staining, weak target antigen staining and the presence of artifacts. It is important that antibody quality and the immunohistochemistry techniques are optimized. Endogenous biotin, reporter enzymes or primary/secondary antibody cross-reactivity are common causes of strong background staining. Weak or absent staining may be caused by inadequate fixation of the tissue or by low antigen levels. These aspects of immunohistochemistry tissue preparation and antibody staining must be systematically addressed to identify and overcome staining issues. Methods to eliminate background staining include dilution of the primary or secondary antibodies, changing the time or temperature of incubation, and using a different detection system or a different primary antibody. Quality control should, as a minimum, include a tissue known to express the antigen as a positive control and negative controls of tissue known not to express the antigen, as well as the test tissue probed in the same way with omission of the primary antibody (or better, absorption of the primary antibody). Diagnostic immunohistochemistry markers Immunohistochemistry is an excellent detection technique and has the tremendous advantage of being able to show exactly where a given protein is located within the tissue examined. It is also an effective way to examine the tissues. This has made it a widely used technique in neuroscience, enabling researchers to examine protein expression within specific brain structures. Its major disadvantage is that, unlike immunoblotting techniques where staining is checked against a molecular weight ladder, it is impossible to show in immunohistochemistry that the staining corresponds with the protein of interest. For this reason, primary antibodies must be well-validated in a Western blot or similar procedure. 
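The densitometric analysis mentioned above is typically performed with image-analysis software. The following Python sketch is an illustration only, not a method described in this article: it uses scikit-image's colour-deconvolution routine to separate an RGB image of a DAB-developed, hematoxylin-counterstained section into stain channels and derives a simple semi-quantitative score. The file name, the Otsu threshold and the scoring choices are all assumptions made for the example.

```python
# Minimal sketch of semi-quantitative DAB densitometry (illustrative assumptions only).
import numpy as np
from skimage import io
from skimage.color import rgb2hed          # Ruifrok & Johnston colour deconvolution
from skimage.filters import threshold_otsu

image = io.imread("ihc_section.png")[:, :, :3]   # hypothetical RGB input image
hed = rgb2hed(image)                             # unmix Hematoxylin / Eosin / DAB
dab = hed[:, :, 2]                               # DAB (brown chromogen) channel

# Fraction of the field that is DAB-positive, using an automatic Otsu threshold.
mask = dab > threshold_otsu(dab)
positive_fraction = mask.mean()

# Mean DAB signal within the positive area, a crude semi-quantitative score.
mean_dab = dab[mask].mean() if mask.any() else 0.0
print(f"DAB-positive area: {positive_fraction:.1%}, mean DAB signal: {mean_dab:.3f}")
```

Scores of this kind are only comparable across slides when staining, imaging and exposure conditions are kept constant, which is one reason the controls described in the troubleshooting section matter.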
The technique is even more widely used in diagnostic surgical pathology for immunophenotyping tumors (e.g. immunostaining for E-cadherin to differentiate between ductal carcinoma in situ (stains positive) and lobular carcinoma in situ (does not stain positive)). More recently, immunohistochemical techniques have been useful in the differential diagnosis of multiple forms of salivary gland, head, and neck carcinomas. The diversity of immunohistochemistry markers used in diagnostic surgical pathology is substantial. Many clinical laboratories in tertiary hospitals will have menus of over 200 antibodies used as diagnostic, prognostic and predictive biomarkers. Examples of some commonly used markers include: BrdU: used to identify replicating cells. Used to identify tumors as well as in neuroscience research. Cytokeratins: used for identification of carcinomas but may also be expressed in some sarcomas. CD15 and CD30: used for Hodgkin's disease. Alpha fetoprotein: for yolk sac tumors and hepatocellular carcinoma. CD117 (KIT): for gastrointestinal stromal tumors (GIST) and mast cell tumors. CD10 (CALLA): for renal cell carcinoma and acute lymphoblastic leukemia. Prostate specific antigen (PSA): for prostate cancer. Estrogen and progesterone receptor (ER & PR): staining is used both diagnostically (breast and gynecological tumors) and prognostically in breast cancer, and is predictive of response to therapy (estrogen receptor). Identification of B-cell lymphomas using CD20. Identification of T-cell lymphomas using CD3. PIN-4 cocktail: targets p63, CK-5, CK-14 and AMACR (the latter also known as P504S), and is used to distinguish prostate adenocarcinoma from benign glands. Directing therapy A variety of molecular pathways are altered in cancer and some of the alterations can be targeted in cancer therapy. Immunohistochemistry can be used to assess which tumors are likely to respond to therapy, by detecting the presence or elevated levels of the molecular target. Chemical inhibitors Tumor biology allows for a number of potential intracellular targets. Many tumors are hormone dependent. The presence of hormone receptors can be used to determine if a tumor is potentially responsive to antihormonal therapy. One of the first such therapies was the antiestrogen tamoxifen, used to treat breast cancer. Such hormone receptors can be detected by immunohistochemistry. Imatinib, an intracellular tyrosine kinase inhibitor, was developed to treat chronic myelogenous leukemia, a disease characterized by the formation of a specific abnormal tyrosine kinase. Imatinib has also proven effective in tumors that express other tyrosine kinases, most notably KIT. Most gastrointestinal stromal tumors express KIT, which can be detected by immunohistochemistry. Monoclonal antibodies Many proteins shown by immunohistochemistry to be highly upregulated in pathological states are potential targets for therapies utilising monoclonal antibodies. Monoclonal antibodies, due to their size, are utilized against cell surface targets. Among the overexpressed targets are members of the EGFR family, transmembrane proteins with an extracellular receptor domain regulating an intracellular tyrosine kinase. Of these, HER2/neu (also known as Erb-B2) was the first to be targeted. The molecule is highly expressed in a variety of cancer cell types, most notably breast cancer. As such, antibodies against HER2/neu have been FDA approved for clinical treatment of cancer under the drug name Herceptin. 
There are commercially available immunohistochemical tests, Dako HercepTest, Leica Biosystems Oracle and Ventana Pathway. Similarly, epidermal growth factor receptor (HER-1) is overexpressed in a variety of cancers including head and neck and colon. Immunohistochemistry is used to determine patients who may benefit from therapeutic antibodies such as Erbitux (cetuximab). Commercial systems to detect epidermal growth factor receptor by immunohistochemistry include the Dako pharmDx. Mapping protein expression Immunohistochemistry can also be used for a more general protein profiling, provided the availability of antibodies validated for immunohistochemistry. The Human Protein Atlas displays a map of protein expression in normal human organs and tissues. The combination of immunohistochemistry and tissue microarrays provides protein expression patterns in a large number of different tissue types. Immunohistochemistry is also used for protein profiling in the most common forms of human cancer. See also Cutaneous conditions with immunofluorescence findings Chromogenic in situ hybridization Tissue Cytometry, a technique that brings the concept of flow cytometry to tissue section, in situ, and helps to perform whole slide scanning and quantification of markers by maintaining the spatial context using machine learning and AI. References Further reading External links The Human Protein Atlas Overview of Immunohistochemistry--describes all aspects of immunohistochemistry including sample prep, staining and troubleshooting Immunofluorescent Staining of Paraffin-Embedded Tissue (IF-P) IHC Tip 1: Antigen retrieval - should I do PIER or HIER? Histochemical Staining Methods - University of Rochester Department of Pathology Immunohistochemistry Staining Protocol Histology Immunologic tests Protein methods Anatomical pathology Staining Laboratory techniques Pathology
Immunohistochemistry
[ "Chemistry", "Biology" ]
3,317
[ "Biochemistry methods", "Pathology", "Staining", "Protein methods", "Protein biochemistry", "Immunologic tests", "Histology", "Microbiology techniques", "nan", "Microscopy", "Cell imaging" ]
1,021,334
https://en.wikipedia.org/wiki/J%C3%B6rg%20Schlaich
Jörg Schlaich (17 October 1934 – 4 September 2021) was a German structural engineer and is known internationally for his ground-breaking work in the creative design of bridges, long-span roofs, and other complex structures. He was a co-founder of the structural engineering and consulting firm Schlaich Bergermann Partner. He was the brother of the architect Brigitte Schlaich Peterhans. Early career Jörg Schlaich studied architecture and civil engineering from 1953 to 1955 at Stuttgart University before completing his studies at Technische Universität Berlin in 1959. He spent 1959 and 1960 at the Case Western Reserve University in Cleveland, United States. In 1963, he joined the firm Leonhardt & Andrä, the firm founded by Fritz Leonhardt. Later career Schlaich was made a partner and was responsible for the Alster-Schwimmhalle in Hamburg, and more importantly, the Olympic Stadium in Munich. He stayed with the firm until 1969. In 1974 he became an academic at Stuttgart University, and in 1980 he founded his own firm, Schlaich Bergermann Partner. In 1993, with the roof of the Gottlieb-Daimler-Stadion (since 2023 MHPArena) in Stuttgart, he introduced the "speichenrad" principle to structural engineering. Indeed, this principle was employed for the first time in the history of Structural Engineering by the Italian engineer Massimo Majowiecki, the designer of the roof of the Olympic Stadium, Rome (built in 1990, three years before the Gottlieb-Daimler-Stadion). Since then, his company has successfully employed it in stadium projects across the globe. Other structures include the observation tower at the Killesbergpark in Stuttgart. Most of his work as well of that of his company is documented on their website. He was also the developer of the solar tower (or solar chimney) and is largely credited with inventing the strut and tie model for reinforced concrete. Further reading Schlaich, Jörg; Bergermann, Rudolf. Leicht Weit (Light Structures) . Holgate, Alan. The Art of Structural Engineering: The Work of Jorg Schlaich and his Team (Books Britain, 1996) . Schlaich, Jörg. The Solar Chimney: Electricity from the Sun . Schlaich, Jörg; Rudolf Bergermann, Wolfgang Schiel & Gerhard Weinrebe (February 2005). " ". Journal of Solar Energy Engineering 127 (1): 117-124. DOI:10.1115/1.1823493. Retrieved on 2007-03-15. External links schlaich bergermann partner (sbp) site References 1934 births 2021 deaths People from Rems-Murr-Kreis People from the Free People's State of Württemberg Bridge engineers German civil engineers Structural engineers Tensile architecture Tensile membrane structures Members of the Academy of Arts, Berlin IStructE Gold Medal winners Werner von Siemens Ring laureates Technische Universität Berlin alumni
Jörg Schlaich
[ "Technology", "Engineering" ]
633
[ "Structural system", "Structural engineering", "Tensile architecture", "Structural engineers" ]
1,021,521
https://en.wikipedia.org/wiki/Equity%20premium%20puzzle
The equity premium puzzle refers to the inability of an important class of economic models to explain the average equity risk premium (ERP) provided by a diversified portfolio of equities over that of government bonds, which has been observed for more than 100 years. There is a significant disparity between the returns produced by stocks and the returns produced by government treasury bills. The equity premium puzzle addresses the difficulty in understanding and explaining this disparity. The disparity is measured by the equity risk premium, the difference between equity returns and returns from government bonds, which is equal to around 5% to 8% in the United States. The risk premium represents the compensation awarded to the equity holder for taking on the higher risk of investing in equities rather than government bonds. However, the 5% to 8% premium is considered to be an implausibly high difference, and the equity premium puzzle refers to the unexplained reasons driving this disparity. Description The term was coined by Rajnish Mehra and Edward C. Prescott in a study published in 1985 titled "The Equity Premium: A Puzzle". An earlier version of the paper was published in 1982 under the title "A test of the intertemporal asset pricing model". The authors found that a standard general equilibrium model, calibrated to display key U.S. business cycle fluctuations, generated an equity premium of less than 1% for reasonable risk aversion levels. This result stood in sharp contrast with the average equity premium of 6% observed during the historical period. In 1982, Robert J. Shiller published the first calculation showing that either a large risk aversion coefficient or counterfactually large consumption variability was required to explain the means and variances of asset returns. Azeredo (2014) shows, however, that increasing the risk aversion level may produce a negative equity premium in an Arrow-Debreu economy constructed to mimic the persistence in U.S. consumption growth observed in the data since 1929. The intuitive notion that stocks are much riskier than bonds is not a sufficient explanation of the observation that the magnitude of the disparity between the two returns, the equity risk premium (ERP), is so great that it implies an implausibly high level of investor risk aversion that is fundamentally incompatible with other branches of economics, particularly macroeconomics and financial economics. The process of calculating the equity risk premium, and the selection of the data used, depend heavily on the study in question, but the premium is generally accepted to be in the range of 3–7% in the long run. Dimson et al. (2006) calculated a premium of "around 3–3.5% on a geometric mean basis" for global equity markets during 1900–2005. However, over any one decade, the premium shows great variability—from over 19% in the 1950s to 0.3% in the 1970s. In 1997, Siegel found that the actual standard deviation of the 20-year rate of return was only 2.76%. This means that for long-term investors, the risk that stocks return less than expected can be judged only by looking at the standard deviation of annual earnings. For long-term investors, the actual risks of fixed-income securities are higher. By this line of reasoning, the equity premium should be negative. 
To quantify the level of risk aversion implied: if these figures represented the expected outperformance of equities over bonds, investors with the implied degree of risk aversion would prefer a certain payoff of $51,300 to a 50/50 bet paying either $50,000 or $100,000. The puzzle has led to an extensive research effort in both macroeconomics and finance. So far a range of useful theoretical tools and numerically plausible explanations have been presented, but no one solution is generally accepted by economists. Theory The economy has a single representative household whose preferences over stochastic consumption paths are given by E_0 [ Σ_{t=0}^∞ β^t U(c_t) ], where 0 < β < 1 is the subjective discount factor, c_t is the per capita consumption at time t, and U(·) is an increasing and concave utility function. In the Mehra and Prescott (1985) economy, the utility function belongs to the constant relative risk aversion class, U(c, α) = c^(1−α) / (1−α), where 0 < α < ∞ is the constant relative risk aversion parameter. When α = 1, the utility function is the natural logarithmic function. Weil (1989) replaced the constant relative risk aversion utility function with the Kreps-Porteus nonexpected utility preferences. The Kreps-Porteus utility function has a constant intertemporal elasticity of substitution and a constant coefficient of relative risk aversion which are not required to be inversely related – a restriction imposed by the constant relative risk aversion utility function. The Mehra and Prescott (1985) and Weil (1989) economies are variations of the Lucas (1978) pure exchange economy. In their economies the growth rate of the endowment process, x_{t+1} = y_{t+1} / y_t, follows an ergodic Markov process. This assumption is the key difference between Mehra and Prescott's economy and Lucas' economy, where the level of the endowment process follows a Markov process. There is a single firm producing the perishable consumption good. At any given time t, the firm's output must be less than or equal to y_t, which is stochastic and follows y_{t+1} = x_{t+1} · y_t. There is only one equity share held by the representative household. Working out the household's intertemporal choice problem leads to the fundamental pricing equation p_t · U'(c_t) = β E_t [ (p_{t+1} + y_{t+1}) · U'(c_{t+1}) ], where p_t is the price of the equity share. Stock returns are then computed from R_{e,t+1} = (p_{t+1} + y_{t+1}) / p_t. The derivative of the Lagrangian with respect to the percentage of stock held must equal zero to satisfy necessary conditions for optimality under the assumptions of no arbitrage and the law of one price. Data Much data exists showing that stocks have had higher returns. For example, Jeremy Siegel says that stocks in the United States have returned 6.8% per year over a 130-year period. Proponents of the capital asset pricing model say that this is due to the higher beta of stocks, and that higher-beta stocks should return even more. Others have countered that the period used in Siegel's data is not typical, or that the country is not typical. Possible explanations A large number of explanations for the puzzle have been proposed. These include: the rare events hypothesis, myopic loss aversion, rejection of the Arrow-Debreu model in favor of different models, modifications to the assumed preferences of investors, imperfections in the model of risk aversion, the observation that the excess premium for the risky-assets equation results when exceedingly low consumption/income ratios are assumed, and a contention that the equity premium does not exist: that the puzzle is a statistical illusion. Kocherlakota (1996) and Mehra and Prescott (2003) present a detailed analysis of these explanations in financial markets and conclude that the puzzle is real and remains unexplained. 
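The scale of the problem can be illustrated with a short numerical sketch. The calculation below is not from Mehra and Prescott's paper; it uses the special case of i.i.d. lognormal consumption growth, in which the log equity premium on a claim to aggregate consumption reduces to α·σ², and all parameter values are illustrative assumptions roughly in line with long-run U.S. consumption data. It also checks the certainty-equivalent figure quoted above.

```python
# Back-of-envelope sketch of the Mehra-Prescott logic (illustrative assumptions only).
# Assumes i.i.d. lognormal consumption growth and CRRA utility; the parameter values
# below are assumptions, roughly matching long-run U.S. data (growth ~1.8%, std ~3.6%).
import numpy as np

beta = 0.99                 # subjective discount factor (assumed)
mu, sigma = 0.018, 0.036    # mean and std dev of log consumption growth (assumed)

def log_equity_premium(alpha):
    """Log premium of the consumption claim over the risk-free asset: alpha * sigma^2."""
    return alpha * sigma ** 2

def log_risk_free_rate(alpha):
    """Log risk-free rate implied by the Euler equation with CRRA utility."""
    return -np.log(beta) + alpha * mu - 0.5 * alpha ** 2 * sigma ** 2

for alpha in (1, 2, 5, 10, 46):
    print(f"alpha = {alpha:>2}: premium ~ {log_equity_premium(alpha):6.2%}, "
          f"risk-free rate ~ {log_risk_free_rate(alpha):7.2%}")
# Even alpha = 10 yields a premium of only ~1.3%; matching the observed ~6% needs
# alpha near 46, which pushes the implied risk-free rate to implausible values.

def certainty_equivalent(alpha, outcomes=(50_000, 100_000)):
    """CRRA certainty equivalent of a 50/50 gamble over the given outcomes."""
    return np.mean([o ** (1.0 - alpha) for o in outcomes]) ** (1.0 / (1.0 - alpha))

# With alpha around 28 (an assumed value), the certainty equivalent of the 50/50
# bet over $50,000 and $100,000 is roughly $51,300 -- the figure quoted above.
print(certainty_equivalent(28))
```

The point of the sketch is not the exact numbers, which depend on the assumed calibration, but the order of magnitude: only extreme curvature in the utility function reproduces the observed premium in this class of models.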
Subsequent reviews of the literature have similarly found no agreed resolution. The mystery of the equity premium occupies a special place in financial and economic theory, and more progress is needed to understand the spread of stock returns over bond returns. Determining how the premium evolves over time, and which factors drive it in various countries and regions, remains an active research agenda. A 2023 paper by Edward McQuarrie argues the equity risk premium may not exist, at least not as is commonly understood, and is furthermore based on data from too narrow a time period in the late 20th century. He argues a more detailed examination of historical data finds "over multi-decade periods, sometimes stocks outperformed bonds, sometimes bonds outperformed stocks and sometimes they performed about the same. New international data confirm this pattern. Asset returns in the US in the 20th century do not generalize." The equity premium: a deeper puzzle Azeredo (2014) showed that traditional pre-1930 consumption measures understate the extent of serial correlation in the U.S. annual real growth rate of per capita consumption of non-durables and services ("consumption growth"). Under alternative measures proposed in the study, the serial correlation of consumption growth is found to be positive. This new evidence implies that an important subclass of dynamic general equilibrium models studied by Mehra and Prescott (1985) generates a negative equity premium for reasonable risk-aversion levels, thus further exacerbating the equity premium puzzle. Rare events hypothesis One possible solution to the equity premium puzzle considered by Julliard and Ghosh (2008) is whether it can be explained by the rare events hypothesis, first proposed by Rietz (1988). They hypothesized that extreme economic events such as the Great Depression, the World Wars and the Great Financial Crisis resulted in equity holders demanding high equity premiums to account for the possibility of the significant loss they could suffer if these events were to materialise. As such, when these extreme economic events do not occur, equity holders are rewarded with higher returns. However, Julliard and Ghosh concluded that rare events are unlikely to explain the equity premium puzzle, because the Consumption Capital Asset Pricing Model was rejected by their data and much greater risk aversion levels were required to explain the equity premium puzzle. Moreover, extreme economic events affect all assets (both equity and bonds) and they all yield low returns. For example, the equity premium persisted during the Great Depression, and this suggests that an even greater catastrophic economic event is required, and it must be one which only affects stocks, not bonds. Myopic loss aversion Benartzi & Thaler (1995) contend that the equity premium puzzle can be explained by myopic loss aversion, and their explanation is based on Kahneman and Tversky's prospect theory. They rely on two assumptions about decision-making to support their theory: loss aversion and mental accounting. Loss aversion refers to the assumption that investors are more sensitive to losses than gains; in fact, research calculates the utility of losses felt by investors to be twice that of the utility of a gain. The second assumption is that investors frequently evaluate their stocks even when the purpose of the investment is to fund retirement or other long-term goals. This makes investors more risk averse than if they evaluated their stocks less frequently. 
Their study found that the difference between returns gained from stocks and returns gained from bonds decreases when stocks are evaluated less frequently. The two assumptions combined create myopic loss aversion, and Benartzi & Thaler concluded that the equity premium puzzle can be explained by this theory. Individual characteristics Some explanations rely on assumptions about individual behavior and preferences different from those made by Mehra and Prescott. Examples include the prospect theory model of Benartzi and Thaler (1995) based on loss aversion. A problem for this model is the lack of a general model of portfolio choice and asset valuation for prospect theory. A second class of explanations is based on relaxation of the optimization assumptions of the standard model. The standard model represents consumers as continuously-optimizing dynamically-consistent expected-utility maximizers. These assumptions provide a tight link between attitudes to risk and attitudes to variations in intertemporal consumption, which is crucial in deriving the equity premium puzzle. Solutions of this kind work by weakening the assumption of continuous optimization, for example by supposing that consumers adopt satisficing rules rather than optimizing. An example is info-gap decision theory, based on a non-probabilistic treatment of uncertainty, which leads to the adoption of a robust satisficing approach to asset allocation. Equity characteristics Another explanation of the equity premium puzzle focuses on characteristics of equity that cannot be captured by typical models but are still consistent with optimisation by investors. The most significant characteristic that is not typically considered is the requirement for equity holders to monitor their activity and have a manager to assist them. Therefore, the principal-agent relationship is very prevalent between corporation managers and equity holders. If an investor chooses not to have a manager, it is likely costly for them to monitor the activity of the corporations that they invest in, so they often rely heavily on auditors, or they look to the market hypothesis in which information about asset values in the equity markets is exposed. This hypothesis is based on the theory that an investor who is inexperienced and uninformed can bank on getting average market returns in an identifiable market portfolio, though it is questionable whether this can be done by an uninformed investor. Although, as far as the characteristics of equity go in explaining the premium, it is only necessary to hypothesise that people looking to invest do not think they can reach the same level of performance as the market. Another explanation related to the characteristics of equity was explored by a variety of studies, including Holmstrom and Tirole (1998), Bansal and Coleman (1996) and Palomino (1996), and relates to liquidity. Palomino described a noise trader model in which the market for equities is thin and imperfectly competitive, and the lower its equilibrium price drops, the higher the premium over risk-free bonds rises. Holmstrom and Tirole, in their studies, developed another role for liquidity in the equity market, in which firms are willing to pay a premium for bonds over private claims when facing uncertainty over liquidity needs. 
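The myopic loss aversion mechanism described above can be illustrated with a small Monte Carlo sketch. The code below is only a caricature of Benartzi and Thaler's calibration: the loss-aversion coefficient, the normal return distribution and the piecewise-linear value function are simplifying assumptions introduced for the example.

```python
# Illustrative simulation of myopic loss aversion (all parameters are assumptions).
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0                   # loss-aversion coefficient: losses weighted ~2x (assumed)
mu, sigma = 0.07, 0.20      # assumed annual mean and std dev of equity returns

def felt_value_per_year(horizon_years, n_sims=200_000):
    """Average loss-averse 'felt' value per year of holding equities, when the
    portfolio is only evaluated once every `horizon_years` years."""
    # cumulative return approximated as the sum of annual returns (log-return style)
    r = rng.normal(mu, sigma, size=(n_sims, horizon_years)).sum(axis=1)
    v = np.where(r >= 0.0, r, lam * r)      # piecewise-linear value function
    return v.mean() / horizon_years

for h in (1, 5, 10, 20):
    print(f"evaluation every {h:>2} years: felt value ~ {felt_value_per_year(h):+.3f} per year")
# With annual evaluation the felt value is only a small fraction of the 7% expected
# return, because losses are seen often; with long evaluation horizons it approaches
# the expected return, making equities look far more attractive.
```

Benartzi and Thaler's own calibration uses the full prospect-theory value and weighting functions and compares stocks against bonds; under assumptions of that kind they report that an evaluation period of roughly one year is consistent with the observed premium.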
Tax distortions Another explanation related to the observed growing equity premium was argued by McGrattan and Prescott (2001) to be a result of variations over time of taxes and particularly its effect on interest and dividend income. It is difficult however to give credibility to this analysis due to the difficulties in calibration utilised as well as ambiguity surrounding the existence of any noticeable equity premium before 1945. Even given this, it is evident that the observation that equity premium changes arising from the distortion of taxes over time should be taken into account and give more validity to the equity premium itself. Related data is mentioned in the Handbook of the Equity Risk Premium. Beginning in 1919, it captured the post-World War I recovery, while omitting wartime losses and low pre-war returns. After adding these earlier years, the arithmetic average of the British stock premium for the entire 20th century is 6.6%, which is about 21/4% lower than the incorrect data inferred from 1919-1999. Implied volatility Graham and Harvey have estimated that, for the United States, the expected average premium during the period June 2000 to November 2006 ranged between 4.65 and 2.50. They found a modest correlation of 0.62 between the 10-year equity premium and a measure of implied volatility (in this case VIX, the Chicago Board Options Exchange Volatility Index). Dennis, Mayhew & Stivers (2006) find that changes in implied volatility have an asymmetric effect on stock returns. They found that negative changes in implied volatility have a stronger impact on stock returns than positive changes in implied volatility. The authors argue that such an asymmetric volatility effect can be explained by the fact that investors are more concerned with downside risk than upside potential. That is, investors are more likely to react to negative news and expect negative changes in implied volatility to have a stronger impact on stock returns. The authors also find that changes in implied volatility can predict future stock returns. Stocks that experience negative changes in implied volatility have higher expected returns in the future. The authors state that this relationship is caused by the association between negative changes in implied volatility and market downturns. Yan (2011) presents an explanation for the equity premium puzzle using the slope of the implied volatility smile. The implied volatility smile refers to the pattern of implied volatilities for options contracts with the same expiration date but different strike prices. The slope of the implied volatility smile reflects the market's expectations for future changes in the stock price, with a steeper slope indicating higher expected volatility. The author shows that the slope of the implied volatility smile is a significant predictor of stock returns, even after controlling for traditional risk factors. Specifically, stocks with steeper implied volatility smiles (i.e., higher jump risk) have higher expected returns, consistent with the equity premium puzzle. The author argues that this relationship between the slope of the implied volatility smile and stock returns can be explained by investors' preference for jump risk. Jump risk refers to the risk of sudden, large movements in the stock price, which are not fully captured by traditional measures of volatility. 
Yan argues that investors are willing to accept lower average returns on stocks that have higher jump risk, because they expect to be compensated with higher returns during times of market stress. Information derivatives The simplest scientific interpretation of the puzzle suggests that consumption optimization is not responsible for the equity premium. More precisely, the timeseries of aggregate consumption is not a leading explanatory factor of the equity premium. The human brain is (simultaneously) engaged in many strategies. Each of these strategies has a goal. While individually rational, the strategies are in constant competition for limited resources. Even within a single person this competition produces a highly complex behavior which does not fit any simple model. Nevertheless, the individual strategies can be understood. In finance this is equivalent to understanding different financial products as information derivatives i.e. as products which are derived from all the relevant information available to the customer. If the numerical values for the equity premium are unknown, the rational examination of the equity product would have accurately predicted the observed ballpark values. From the information derivatives viewpoint consumption optimization is just one possible goal (which never really comes up in practice in its pure academic form). To a classically trained economist this may feel like a loss of a fundamental principle. But it may also be a much needed connection to reality (capturing the real behavior of live investors). Viewing equities as a stand-alone product (information derivative) does not isolate them from the wider economic picture. Equity investments survive in competition with other strategies. The popularity of equities as an investment strategy demands an explanation. In terms of data this means that the information derivatives approach needs to explain not just the realized equities performance but also the investor-expected equity premia. The data suggest the long-term equity investments have been very good at delivering on the theoretical expectations. This explains the viability of the strategy in addition to its performance (i.e. in addition to the equity premium). Market failure explanations Two broad classes of market failure have been considered as explanations of the equity premium. First, problems of adverse selection and moral hazard may result in the absence of markets in which individuals can insure themselves against systematic risk in labor income and noncorporate profits. Second, transaction costs or liquidity constraints may prevent individuals from smoothing consumption over time. In relation to transaction costs, there are significantly greater costs associated with trading stocks than trading bonds. These include costs to acquire information, broker fees, taxes, load fees and the bid-ask spread. As such, when shareholders attempt to capitalise on the equity premium by adjusting their asset allocation and purchasing more stocks, they incur significant trading costs which eliminate the gains from the equity premium. However, Kocherlakota (1996) contends that there is insufficient evidence to support this proposition and further data about the size and sources of trading costs need to be collected before this proposition could be validated. Denial of equity premium A final possible explanation is that there is no puzzle to explain: that there is no equity premium. 
This can be argued in a number of ways, all of them being different forms of the argument that we don't have enough statistical power to distinguish the equity premium from zero: Selection bias of the US market in studies. The US market was the most successful stock market in the 20th century. Other countries' markets displayed lower long-run returns (but still with positive equity premiums). Picking the best observation (US) from a sample leads to upwardly biased estimates of the premium. Survivorship bias of exchanges: This refers to the equity holder's fear about an economic crash such as the 1929 stock market crash eventuating, even when the probability of that event occurring is minute. The justification here is that over half of the stock exchanges that were operating in the early 1900s were discontinued, and the equity risk premium calculated does not account for this. As such, the equity risk premium is "calculated for a survivor", such that if returns from these stock exchanges were included in the calculations, there may not have been such a great disparity between returns gleaned from bonds compared to stocks. However, this hypothesis cannot be easily proven, and Mehra and Prescott (1985), in their studies, included the effect on equity returns following the Great Depression. Although shares lost 80% of their value, comparisons of returns from stocks against bonds showed that even in those periods, significantly higher returns were gained from investing in stocks. Low number of data points: the period 1900–2005 provides only 105 years, which is not a large enough sample size to run statistical analyses with full confidence, especially in view of the black swan effect (see the illustrative calculation below). Windowing: returns of equities (and relative returns) vary greatly depending on which points are included. Using data starting from the top of the market in 1929 or starting from the bottom of the market in 1932 (leading to estimates of equity premium of 1% lower per year), or ending at the top in 2000 (vs. bottom in 2002) or top in 2007 (vs. bottom in 2009 or beyond) completely changes the overall conclusion. However, in all windows considered, the equity premium is always greater than zero. A related criticism is that the apparent equity premium is an artifact of observing stock market bubbles in progress. David Blitz, head of Quant Research at Robeco, suggested that the size of the equity premium is not as large as widely believed. It is usually calculated, he said, on the presumption that the true risk-free asset is the one-month T-bill. If one recalculates, taking the five-year T-bond as the risk-free asset, the equity premium is smaller and the risk-return relation becomes more positive. Note however that most mainstream economists agree that the evidence shows substantial statistical power. Benartzi & Thaler analyzed equity returns over a 200-year period, between 1802 and 1990, and found that whilst equity returns remained stable between 5.5% and 6.5%, returns on government bonds fell significantly from around 5% to 0.5%. Moreover, analysis of how faculty members funded their retirement showed that people who had invested in stocks received much higher returns than people who had invested in government bonds. 
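The "low number of data points" item in the list above can be made concrete with a rough back-of-envelope calculation. The sketch below is illustrative only and assumes, purely for the sake of the example, that annual excess returns are independent draws with a standard deviation of about 20%.

```python
# Rough statistical-power illustration (assumptions, not data from the article).
import math

sigma_annual = 0.20    # assumed std dev of annual equity-minus-bond excess returns
n_years = 105          # length of the 1900-2005 sample mentioned above

std_err = sigma_annual / math.sqrt(n_years)
print(f"standard error of the estimated mean premium: {std_err:.2%}")   # ~1.95%
# A measured premium of ~4% is then only about two standard errors above zero,
# and shifting the sample window by a few years can move the estimate by a
# comparable amount, which is why the window and sample length matter so much.
```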
Implications Implications for the Individual Investor For the individual investor, the equity premium may represent a reasonable reward for taking on the risk of buying shares, so that they base their decision to allocate assets to shares or bonds on how risk tolerant or risk averse they are. On the other hand, if the investor believes that the equity premium arises from mistakes and fears, they would capitalize on that fear and mistake and invest considerable portions of their assets in shares. Here, it is prudent to note that economists more commonly allocate significant portions of their assets to shares. Currently, the equity premium is estimated to be 3%. Although this is lower than historical rates, it is still significantly more advantageous than bonds for investors investing in their retirement funds and other long-term funds. The magnitude of the equity premium brings about substantial implications for policy, welfare and also resource allocation. Policy and Welfare Implications Campbell and Cochrane (1995) found, in a study of a model that generates equity premium values consistent with asset prices, that welfare costs are similar in magnitude to welfare benefits. Essentially, a large risk premium in a society where asset prices reflect consumer preferences implies that the welfare cost is also large. It also means that in recessions, welfare costs are excessive regardless of aggregate consumption. As the equity premium rises, the marginal value of income in recession states steadily increases, further raising the welfare costs of recessions. This also raises questions regarding the need for microeconomic policies that operate by way of higher productivity in the long run by trading off short-term pain in the form of adjustment costs. Given the impact on welfare through recessions and the large equity premium, it is evident that these short-term trade-offs resulting from economic policy are likely not ideal and would preferably take place in times of normal economic activity. Resource Allocation When there is a large risk premium associated with equity, there is a high cost of systematic risk in returns. One consequence is its potential implications for individual portfolio decisions. Some research has argued that high rates of return are just signs of misplaced risk aversion, in which investors are able to earn high returns with little risk by switching from stocks to other assets such as bonds. Research to the contrary indicates that a large percentage of the general public believes that the stock market is best for investors who are in it for the long haul, which may also link to another implication, namely trends in the equity premium. Some claims have been made that the equity premium has declined over the past few years; these may be supported by other studies claiming that tax declines may continue to reduce the premium, and by the fact that declining transaction costs in securities markets are consistent with a declining premium. The trend implication is also supported by models such as 'noise traders' that create a cyclical premium: when noise traders are excessively optimistic the premium declines, and vice versa when the optimism is replaced with pessimism; this would explain the ongoing decline of the equity premium as a stock price bubble. 
See also Ellsberg paradox Fed model Loss aversion Risk aversion List of cognitive biases Economic puzzle Forward premium anomaly Real exchange-rate puzzles References Further reading Behavioral finance Finance theories Stock market 1985 introductions Economic puzzles Prospect theory
Equity premium puzzle
[ "Biology" ]
5,294
[ "Behavioral finance", "Behavior", "Human behavior" ]
1,021,622
https://en.wikipedia.org/wiki/Abulia
In neurology, abulia, or aboulia (from , meaning "will"), refers to a lack of will or initiative and can be seen as a disorder of diminished motivation. Abulia falls in the middle of the spectrum of diminished motivation, with apathy being less extreme and akinetic mutism being more extreme than abulia. The condition was originally considered to be a disorder of the will, and aboulic individuals are unable to act or make decisions independently; and their condition may range in severity from subtle to overwhelming. In the case of akinetic mutism, many patients describe that as soon as they "will" or attempt a movement, a "counter-will" or "resistance" rises up to meet them. Symptoms and signs The clinical condition denoted abulia was first described in 1838; however, since that time, a number of different, some contradictory, definitions have emerged. Abulia has been described as a loss of drive, expression, behavior and speech output, with slowing and prolonged speech latency, and reduction of spontaneous thought content and initiative, being considered more recently as 'a reduction in action emotion and cognition'. The clinical features most commonly associated with abulia are: Difficulty in initiating and sustaining purposeful movements Lack of spontaneous movement Reduced spontaneous speech Increased response-time to queries Passivity Reduced emotional responsiveness and spontaneity Reduced social interactions Reduced interest in usual pastimes Especially in patients with progressive dementia, it may affect feeding. Patients may continue to chew or hold food in their mouths for hours without swallowing it. The behavior may be most evident after these patients have eaten part of their meals and no longer have strong appetites. Differentiation from other disorders Both neurologists and psychiatrists recognize abulia to be a distinct clinical entity, but its status as a syndrome is unclear. Although abulia has been known to clinicians since 1838, it has been subjected to different interpretations – from 'a pure lack of will', in the absence of motor paralysis to, more recently, being considered 'a reduction in action emotion and cognition'. As a result of the changing definition of abulia, there is currently a debate on whether or not abulia is a sign or a symptom of another disease, or its own disease that seems to appear in the presence of other more well-researched diseases, such as Alzheimer's disease. A 2002 survey of two movement disorder experts, two neuropsychiatrists, and two rehabilitation experts, did not seem to shed any light on the matter of differentiating abulia from other DDMs. The experts used the terms "apathy" and "abulia" interchangeably and debated whether or not abulia was a discrete entity, or just a hazy gray area on a spectrum of more defined disorders. Four of the experts said abulia was a sign and a symptom, and the group was split on whether or not it was a syndrome. Another survey, which consisted of true and false questions about what abulia is distinct from, whether it is a sign, symptom, or syndrome, where lesions are present in cases of abulia, what diseases are commonly associated with abulia, and what current treatments are used for abulia, was sent to 15 neurologists and 10 psychiatrists. Most experts agreed that abulia is clinically distinct from depression, akinetic mutism, and alexithymia. However, only 32% believed abulia was different from apathy, while 44% said they were not different, and 24% were unsure. 
Yet again, there was disagreement about whether or not abulia is a sign, symptom, or syndrome. The study of motivation has been mostly about how stimuli come to acquire significance for animals. Only recently has the study of motivational processes been extended to integrate biological drives and emotional states in the explanation of purposeful behavior in human beings. Considering the number of disorders attributed to a lack of will and motivation, it is essential that abulia and apathy be defined more precisely to avoid confusion. Causes Many different causes of abulia have been suggested. While there is some debate about the validity of abulia as a separate disease, experts mostly agree that abulia is the result of frontal lesions and not of cerebellar or brainstem lesions. As a result of more and more evidence showing that the mesolimbic and mesocortical dopamine systems are key to motivation and responsiveness to reward, abulia may be a dopamine-related dysfunction. Abulia may also result from a variety of brain injuries which cause personality change, such as dementing illnesses, trauma, or intracerebral hemorrhage (stroke), especially stroke causing diffuse injury to the right hemisphere. Damage to the basal ganglia Injuries to the frontal lobe and/or the basal ganglia can interfere with an individual's ability to initiate speech, movement, and social interaction. Studies have shown that 5–67% of all patients with traumatic brain injuries and 13% of patients with lesions on their basal ganglia experience some form of diminished motivation. It may complicate rehabilitation when a stroke patient is uninterested in performing tasks like walking despite being capable of doing so. It should be differentiated from apraxia, when a brain-injured patient has impairment in comprehending the movements necessary to perform a motor task despite not having any paralysis that prevents performing the task; that condition can also result in lack of initiation of activity. Damage to the capsular genu A case study involving two patients with acute confusional state and abulia was conducted to see if these symptoms were the result of an infarct in the capsular genu. Clinical neuropsychological and MRI evaluations at baseline and one year later showed that the cognitive impairment was still present one year after the stroke. Cognitive and behavioral alterations due to a genu infarct are most likely because the thalamo-cortical projection fibers that originate from the ventral-anterior and medial-dorsal nuclei traverse the internal capsule genu. These tracts are part of a complex system of cortical and subcortical frontal circuits through which the flow of information from the entire cortex takes place before reaching the basal ganglia. Cognitive deterioration could have occurred through the genu infarcts affecting the inferior and anterior thalamic peduncles. In this case study the patients did not show any functional deficits at the follow-up one year after the stroke and were not depressed, but did show diminished motivation. This result supports the idea that abulia may exist independently of depression as its own syndrome. Damage to anterior cingulate circuit The anterior cingulate circuit consists of the anterior cingulate cortex, also referred to as Brodmann area 24, and its projections to the ventral striatum, which includes the ventromedial caudate. The loop continues to connect to the ventral pallidum, which connects to the ventral anterior nucleus of the thalamus. 
This circuit is essential for the initiation of behavior, motivation and goal orientation, which are the very things missing from a patient with a disorder of diminished motivation. Unilateral injury, or injury at any point along the circuit, leads to abulia regardless of the side of the injury, but if there is bilateral damage, the patient will exhibit a more extreme case of diminished motivation, akinetic mutism. Acute caudate vascular lesions It is well documented that the caudate nucleus is involved in degenerative diseases of the central nervous system such as Huntington's disease. In a case study of 32 acute caudate stroke patients, 48% were found to be experiencing abulia. Most of the cases where abulia was present were those in which the patients had a left caudate infarct that extended into the putamen, as seen through a CT or MRI scan. Diagnosis Diagnosis of abulia can be quite difficult because it falls between two other disorders of diminished motivation, and one could easily see an extreme case of abulia as akinetic mutism or a lesser case of abulia as apathy and therefore not treat the patient appropriately. If it were to be confused with apathy, it might lead to attempts to involve the patient with physical rehabilitation or other interventions where a source of strong motivation would be necessary to succeed but would still be absent. The best way to diagnose abulia is through clinical observation of the patient as well as questioning of close relatives and loved ones, to give the doctor a frame of reference with which they can compare the patient's new behavior and see whether there is in fact a case of diminished motivation. In recent years, imaging studies using a CT or MRI scan have been shown to be quite helpful in localizing brain lesions, which have been shown to be one of the main causes of abulia. Conditions where abulia may be present Normal pressure hydrocephalus Major depressive disorder Persistent depressive disorder Attention deficit hyperactivity disorder Schizophrenia Frontotemporal dementia Parkinson's disease Huntington's disease Pick's disease Progressive supranuclear palsy Traumatic brain injury Stroke Alzheimer's disease A lack of motivation has been reported in 25–50% of patients with Alzheimer's disease. While depression is also common in patients with this disease, abulia is not a mere symptom of depression, because more than half of the patients with Alzheimer's disease and abulia do not have depression. Several studies have shown that abulia is most prevalent in cases of severe dementia, which may result from reduced metabolic activity in the prefrontal regions of the brain. Patients with Alzheimer's disease and abulia are significantly older than patients with Alzheimer's who do not lack motivation. In line with that, the prevalence of abulia increased from 14% in patients with a mild case of Alzheimer's disease to 61% in patients with a severe case of Alzheimer's disease, which most likely developed over time as the patient got older. Treatment Most current treatments for abulia are pharmacological, including the use of antidepressants. However, antidepressant treatment is not always successful and this has opened the door to alternative methods of treatment. The first step to successful treatment of abulia, or any other DDM, is a preliminary evaluation of the patient's general medical condition and fixing the problems that can be fixed easily. 
This may mean controlling seizures or headaches, arranging physical or cognitive rehabilitation for cognitive and sensorimotor loss, or ensuring optimal hearing, vision, and speech. These elementary steps also increase motivation because improved physical status may enhance functional capacity, drive, and energy and thereby increase the patient's expectation that initiative and effort will be successful. There are 5 steps to pharmacological treatment: Optimize medical status. Diagnose and treat other conditions more specifically associated with diminished motivation (e.g., apathetic hyperthyroidism, Parkinson's disease). Eliminate or reduce doses of psychotropics and other agents that aggravate motivational loss (e.g., SSRIs, dopamine antagonists). Treat depression efficaciously when both DDM and depression are present. Increase motivation through use of stimulants, dopamine agonists, or other agents such as cholinesterase inhibitors. See also Avolition References External links Am.-english dictionary www.merriam-webster.com refers to J.C.A. Heinroth (textbook 1818). Symptoms and signs: Nervous system Symptoms and signs of mental disorders Motivation Disorders of diminished motivation
Abulia
[ "Biology" ]
2,357
[ "Ethology", "Behavior", "Motivation", "Human behavior" ]
1,021,628
https://en.wikipedia.org/wiki/Chernobyl%20exclusion%20zone
The Chernobyl Nuclear Power Plant Zone of Alienation, also called the 30-Kilometre Zone or simply The Zone, was established shortly after the 1986 Chernobyl disaster in the Ukrainian SSR of the Soviet Union. Initially, Soviet authorities declared an exclusion zone spanning a 30-kilometre radius around the Chernobyl Nuclear Power Plant, designating the area for evacuations and placing it under military control. Its borders have since been altered to cover a larger area of Ukraine: it includes the northernmost part of Vyshhorod Raion in Kyiv Oblast, and also adjoins the Polesie State Radioecological Reserve in neighbouring Belarus. The Chernobyl exclusion zone is managed by an agency of the State Emergency Service of Ukraine, while the power plant, its sarcophagus and the New Safe Confinement are administered separately. The current zone covers the area of Ukraine where radioactive contamination is the highest, and public access and habitation are accordingly restricted. Other areas of compulsory resettlement and voluntary relocation not part of the restricted exclusion zone exist in the surrounding areas and throughout Ukraine. In February 2019, it was revealed that talks were underway to re-adjust the exclusion zone's boundaries to reflect the declining radioactivity of its outer areas. Public access to the exclusion zone is restricted in order to prevent access to hazardous areas, reduce the spread of radiological contamination, and conduct radiological and ecological monitoring activities. Today, the Chernobyl exclusion zone is one of the most radioactively contaminated areas on Earth and draws significant scientific interest for the high levels of radiation exposure in the environment, as well as increasing interest from disaster tourists. It has become a thriving sanctuary, with natural flora and fauna and some of the highest biodiversity and thickest forests in all of Ukraine. This is primarily due to the lack of human activity in the exclusion zone since 1986, in spite of the radioactive fallout. Since the beginning of the Russian invasion of Ukraine in February 2022, the Chernobyl exclusion zone has been the site of fighting with neighbouring Russia, which captured Chernobyl on 24 February 2022. By April 2022, however, as the Kyiv offensive failed, the Russian military withdrew from the region. Ukrainian authorities have continued to keep the exclusion zone closed to tourists, pending the eventual cessation of hostilities in the Russo-Ukrainian War. History Pre-1986: Before the Chernobyl nuclear disaster Historically and geographically, the zone is the heartland of the Polesia region. This predominantly rural woodland and marshland area was once home to 120,000 people living in the cities of Chernobyl and Pripyat as well as 187 smaller communities, but is now mostly uninhabited. All settlements remain designated on geographic maps, but are marked as "uninhabited". The woodland in the area around Pripyat was a focal point of partisan resistance during the Second World War, which allowed evacuated residents to evade guards and return into the woods. In the woodland near the Chernobyl Nuclear Power Plant stood the "Partisan's Tree" or "Cross Tree", which was used to hang captured partisans. The tree fell down due to age in 1996 and a memorial now stands at its location. 
1986: Soviet exclusion zones 10-kilometre and 30-kilometre radii The Exclusion Zone was established soon after the Chernobyl disaster, when a Soviet government commission headed by Nikolai Ryzhkov decided on a "rather arbitrary" area of a 30 km radius from Reactor 4 as the designated evacuation area. The 30 km Zone was initially divided into three subzones: the area immediately adjacent to Reactor 4, an area of approximately 10 km radius from the reactor, and the remaining 30 km zone. Protective clothing and available facilities varied between these subzones. Later in 1986, after updated maps of the contaminated areas were produced, the zone was split into three areas to designate further evacuation areas based on the revised dose limit of 100 mSv: the "Black Zone" (over 200 μSv·h−1), to which evacuees were never to return; the "Red Zone" (50–200 μSv·h−1), where evacuees might return once radiation levels normalized; and the "Blue Zone" (30–50 μSv·h−1), where children and pregnant women were evacuated starting in the summer of 1986. Special permission for access and full military control was put in place in late 1986. Although evacuations were not immediate, 91,200 people were eventually evacuated from these zones. In November 1986, control over activities in the zone was given to the new production association Kombinat. Based in the evacuated city of Chernobyl, the association's responsibility was to operate the power plant, decontaminate the 30 km zone, supply materials and goods to the zone, and construct housing outside the zone, in the new town of Slavutych, for the power plant personnel and their families. In March 1989, a "Safe Living Concept" was created for people living in contaminated zones beyond the Exclusion Zone in Belarus, Ukraine, and Russia. In October 1989, the Soviet government requested assistance from the International Atomic Energy Agency (IAEA) to assess the "Soviet Safe Living Concept" for inhabitants of contaminated areas. "Throughout the Soviet period, an image of containment was partially achieved through selective resettlements and territorial delineations of contaminated zones." Post-1991: Independent Ukraine In February 1991, the law On The Legal Status of the Territory Exposed to the Radioactive Contamination resulting from the ChNPP Accident was passed, updating the borders of the Exclusion Zone and defining obligatory and voluntary resettlement areas, and areas for enhanced monitoring. The borders were based on soil deposits of strontium-90, caesium-137, and plutonium as well as the calculated dose rate (sieverts/h) as identified by the National Commission for Radiation Protection of Ukraine. Responsibility for monitoring and coordination of activities in the Exclusion Zone was given to the Ministry of Chernobyl Affairs. In-depth studies were conducted from 1992 to 1993, culminating in the updating of the 1991 law, followed by further evacuations from the Polesia area. A number of evacuation zones were determined: the "Exclusion Zone", the "Zone of Absolute (Mandatory) Resettlement", and the "Zone of Guaranteed Voluntary Resettlement", as well as many areas throughout Ukraine designated as areas for radiation monitoring. The evacuation of contaminated areas outside of the Exclusion Zone continued in both the compulsory and voluntary resettlement areas, with 53,000 people evacuated from areas in Ukraine from 1990 to 1995. 
After Ukrainian Independence, funding for the policing and protection of the zone was initially limited, resulting in even further settling by samosely (returnees) and other illegal intrusion. In 1997, the areas of Poliske and Narodychi, which had been evacuated, were added to the existing area of the Exclusion Zone; the zone now encompasses the original exclusion zone and parts of the Zone of Absolute (Mandatory) Resettlement. This Zone was placed under management of the 'Administration of the exclusion zone and the zone of absolute (mandatory) resettlement' within the Ministry of Emergencies. On 15 December 2000, all nuclear power production at the power plant ceased after an official ceremony with then-President Leonid Kuchma when the last remaining operational reactor, number 3, was shut down. Russian invasion of Ukraine (2022–present) The Chernobyl Exclusion Zone was the site of fighting between Russian and Ukrainian forces during the Battle of Chernobyl on 24 February 2022, as part of the Russian invasion of Ukraine. Russian forces reportedly captured the plant the same day. Facilities at Chernobyl still require ongoing management, in part to ensure the continued cooling of spent nuclear fuel. An estimated 100 plant workers and 200 Ukrainian guards who were at the Chernobyl Nuclear Power Plant when the Russians arrived had been unable to leave. Normally they would change shifts daily and would not live at the site. They had limited supplies of medication, food, and electricity. According to Ukrainian reports, the radiation levels in the exclusion zone increased after the invasion. The higher levels are believed to be a result of disturbance of radioactive dust by the military activity or possibly incorrect readings caused by cyberattacks. On 10 March, the International Atomic Energy Agency stated that it had lost all contact with Chernobyl. On 22 March, the Ukrainian state agency responsible for the Chernobyl exclusion zone reported that Russian forces had destroyed a new laboratory at the Chernobyl nuclear power plant. The laboratory, which opened in 2015, worked to improve the management of radioactive waste, among other things. "The laboratory contained highly active samples and samples of radionuclides that are now in the hands of the enemy, which we hope will harm itself and not the civilized world", the agency said in its statement. On 27 March, Lyudmila Denisova, then–Verkhovna Rada Commissioner for Human Rights, said that 31 known individual fires covering 10,000 hectares were burning in the zone. These fires caused "...an increased level of radioactive air pollution", according to Denisova. Firefighters were unable to reach the fires due to the Russian forces in the area. These wildfires are seasonal; one fire that was 11,500 hectares in size took place in 2020, and a series of several smaller fires occurred throughout the 2010s. On 31 March, it was reported that most of the Russian troops occupying Chernobyl had withdrawn. An Exclusion Zone employee made a post on Facebook suggesting that Russian troops were suffering from acute radiation sickness, based on a photo of military buses unloading near a radiation hospital in Belarus. Chernobyl operator Energoatom claimed that Russian troops had dug trenches in the most contaminated part of the Chernobyl exclusion zone, receiving "significant doses" of radiation. BBC News relayed unconfirmed reports that some were being treated in Belarus. On 3 April, Ukrainian forces retook the Chernobyl power plant. 
Population The 30-kilometre zone is estimated to be home to 197 Samosely living in 11 villages as well as in the town of Chernobyl. This number is in decline, down from previous estimates of 314 in 2007 and 1,200 in 1986. These residents are senior citizens, with an average age of 63. After repeated attempts at expulsion, the authorities have accepted their presence and allowed them to stay with limited supporting services. Residence is now informally permitted by the Ukrainian government. Approximately 3,000 people work in the Zone of Alienation on various tasks, such as the construction of the New Safe Confinement, the ongoing decommissioning of the reactors, and assessment and monitoring of the conditions in the zone. Employees do not live inside the zone, but work shifts there. Some of the workers work "4-3" shifts (four days on, three days off), while others work 15 days on and 15 days off. Other workers commute into the zone daily from Slavutych. The duration of shifts is counted strictly for reasons involving pension and healthcare. Everyone employed in the Zone is monitored for internal bioaccumulation of radioactive elements. The town of Chernobyl, located outside of the 10-kilometre Exclusion Zone, was evacuated following the accident but now serves as a base to support the workers within the Exclusion Zone. Its amenities include administrative buildings, general stores, a canteen, a hotel, and a bus station. Unlike other areas within the Exclusion Zone, the town is actively maintained by workers, such as lawn areas being mowed and autumn leaves being collected. Access and tourism Prior to the COVID-19 pandemic and Russian invasion there were many visitors to the Exclusion Zone annually, and daily tours from Kyiv. In addition, multiple-day excursions can be easily arranged with Ukrainian tour operators. Most overnight tourists stay in a hotel within the town of Chernobyl, which is located within the Exclusion Zone. According to an exclusion area tour guide, as of 2017, there are approximately 50 licensed exclusion area tour guides in total, working for approximately nine companies. Visitors must present their passports when entering the Exclusion Zone and are screened for radiation when exiting, both at the 10 km checkpoint and at the 30 km checkpoint. The Exclusion Zone can also be entered if an application is made directly to the zone administration department. Some evacuated residents of Pripyat have established a remembrance tradition, which includes annual visits to former homes and schools. In the Chernobyl zone, there is one operating Eastern Orthodox church, St. Elijah Church. According to Chernobyl disaster liquidators, the radiation levels there are "well below the level across the zone", a fact that president of the Ukrainian Chernobyl Union Yury Andreyev considers miraculous. The Chernobyl Exclusion Zone has been accessible to interested parties such as scientists and journalists since the zone was created. An early example was Elena Filatova's online account of her alleged solo bike ride through the zone. This gained her Internet fame, but was later alleged to be fictional, as a guide claimed Filatova was part of an official tour group. Regardless, her story drew the attention of millions to the nuclear catastrophe. After Filatova's visit in 2004, a number of papers such as The Guardian and The New York Times began to produce reports on tours to the zone. 
Tourism to the area became more common after Pripyat was featured in popular video games S.T.A.L.K.E.R.: Shadow of Chernobyl and Call of Duty 4: Modern Warfare. Fans of the S.T.A.L.K.E.R. franchise, who refer to themselves as "stalkers", often gain access to the Zone. ("The Zone" and "stalker" derive from Arkady and Boris Strugatsky's science fiction novel Roadside Picnic, which preceded the accident but which described the evacuation of part of Russia after the appearance of dangerous alien artifacts. It served as the basis for the classic film Stalker.) Prosecution of trespassers became more severe after a significant increase in trespassing in the Exclusion Zone. An article in the penal code of Ukraine was specially introduced, and horse patrols were added to protect the zone's perimeter. In 2012, journalist Andrew Blackwell published Visit Sunny Chernobyl: And Other Adventures in the World's Most Polluted Places. Blackwell recounts his visit to the Exclusion Zone, when a guide and driver took him through the zone and to the reactor site. On 14 April 2013, the 32nd episode of the wildlife documentary TV program River Monsters (Atomic Assassin, Season 5, Episode 1) was broadcast, featuring the host Jeremy Wade catching a wels catfish in the cooling pools of the Chernobyl power plant at the heart of the Exclusion Zone. On 16 February 2014, an episode of the British motoring TV programme Top Gear was broadcast, featuring two of the presenters, Jeremy Clarkson and James May, driving into the Exclusion Zone. A portion of the finale of the Netflix documentary Our Planet, released in 2019, was filmed in the Exclusion Zone. The area was used as the primary example of how quickly an ecosystem can recover and thrive in the absence of human interference. In 2019, Chernobyl Spirit Company released Atomik Vodka, the first consumer product made from materials grown and cultivated in the exclusion zone. On 11 April 2022, the zone administration department suspended the validity of passes that allowed access to the exclusion zone, for the duration of martial law in Ukraine. Illegal activities The poaching of game, illegal logging, and metal salvage have been problems within the zone. Despite police control, intruders started infiltrating the perimeter to remove potentially contaminated materials, from televisions to toilet seats, especially in Pripyat, where the residents of about 30 high-rise apartment buildings had to leave all of their belongings behind. In 2007, the Ukrainian government adopted more severe criminal and administrative penalties for illegal activities in the alienation zone, as well as reinforced units assigned to these tasks. The population of Przewalski's horse, introduced to the Exclusion Zone in 1998, has reportedly fallen since 2005 due to poaching. Administration Government agencies In April 2011, the State Agency of Ukraine on the Exclusion Zone Management (SAUEZM) became the successor to the State Department – Administration of the exclusion zone and the zone of absolute (mandatory) resettlement according to presidential decree. The SAUEZM is, as its predecessor, an agency within the State Emergency Service of Ukraine. Policing of the Zone is conducted by special units of the Ministry of Internal Affairs of Ukraine and, along the border with Belarus, by the State Border Guard Service of Ukraine. 
The SAUEZM is tasked with: Conducting environmental and radioactivity monitoring in the zone Management of long-term storage and disposal of radioactive waste Leasing of land in the exclusion zone and the zone of absolute (mandatory) resettlement Administering of state funds for radioactive waste management Monitoring and preservation of documentation on the subject of radioactivity Coordination of the decommissioning of the nuclear power plant Maintenance of a register of persons who have suffered as a result of the disaster The Chernobyl Nuclear Power Plant is located inside the zone but is administered separately. Plant personnel, 3,800 workers , reside primarily in Slavutych, a specially-built remote city in Kyiv Oblast outside of the Exclusion Zone, east of the accident site. Checkpoints There are 11 checkpoints. Dytiatky, near the village of Dytiatky Stari Sokoly, near the village of Zelenyi Mys, near the village of Poliske, near the village of Ovruch, near the village of Davydky, Narodychi settlement hromada, Korosten Raion Vilcha, near the village of Dibrova, near the village of Benivka, near the city of Pripyat The city of Pripyat itself Leliv, near the city of Chernobyl Paryshiv, between the city of Chernobyl and the border with Belarus (route P56) Development and recovery projects The Chernobyl Exclusion Zone is an environmental recovery area, with efforts devoted to remediation and safeguarding of the reactor site. At the same time, projects for wider economic and social revival of the territories around the disaster zone have been envisioned or implemented. In November 2007, the United Nations General Assembly adopted a resolution calling for "recovery and sustainable development" of the areas affected by the Chernobyl accident. Commenting on the issue, UN Development Programme officials mentioned the plans to achieve "self-reliance" of the local population, "agriculture revival" and development of ecotourism. However, it is not clear whether such plans, made by the UN and then-President Victor Yushchenko, deal with the zone of alienation proper, or only with the other three zones around the disaster site where contamination is less intense and restrictions on the population are looser (such as the district of Narodychi in Zhytomyr Oblast). Since 2011, tour operators have been bringing tourists inside the Exclusion Zone (illegal tours may have started even before). Tourists are accompanied by tour guides at all times and are not able to wander too far on their own due to the presence of several radioactive "hot spots". Pripyat was deemed safe for tourists to visit for a short period of time in the late 2010s, although certain precautions must be taken. In 2016, the Ukrainian government declared the part of the exclusion zone on its territory the Chernobyl Radiation and Environmental Biosphere Reserve. It was reported in 2016 that "A heavily contaminated area within a 10-kilometer radius" of the plant would be used for the storage of nuclear waste. The IAEA carried out a feasibility study in 2018 to assess the prospect of expanding the local waste management infrastructure. In 2017, three companies were reported developing plans for solar farms within the Chernobyl Exclusion Zone. The high feed-in tariffs offered, the availability of land, and easy access to transmission lines (which formerly ran to the nuclear power station) have all been noted as beneficial to siting a solar farm. The solar plant began operations in October 2018. 
In 2019, following a three-year research project into the transfer of radioactivity to crops grown in the exclusion zone conducted by scientists from UK and Ukrainian universities, one bottle of vodka using grain from the zone was produced. The vodka did not contain abnormal levels of radiation because of the distillation process. The researchers consider the production of vodka, and its sales profits, a means to aid economic recovery of the communities most adversely affected by the disaster. The project later switched to producing and exporting "Atomik" apple spirit, made from apples grown in the Narodychi District. Radioactive contamination The territory of the zone is polluted unevenly. Spots of hyperintensive pollution were created first by wind and rain spreading radioactive dust at the time of the accident, and subsequently by numerous burial sites for various material and equipment used in decontamination. Zone authorities pay attention to protecting such spots from tourists, scrap hunters, and wildfires, but admit that some dangerous burial sites remain unmapped, and only recorded in the memories of the (aging) Chernobyl liquidators. Flora and fauna There has been an ongoing scientific debate about the extent to which flora and fauna of the zone were affected by the radioactive contamination that followed the accident. As noted by Baker and Wickliffe, one of many issues is differentiating between negative effects of Chernobyl radiation and effects of changes in farming activities resulting from human evacuation. Near the facility, a dense cloud of radioactive dust killed off a large area of Scots pine trees; the rusty orange color of the dead trees led to the nickname "The Red Forest" (Рудий ліс). The Red Forest was among the world's most radioactive places; to reduce the hazard, the Red Forest was bulldozed and the highly radioactive wood was buried, though the soil continues to emit significant radiation. Other species in the same area, such as birch trees, survived, indicating that plant species may vary considerably in their sensitivity to radiation. Cases of mutant deformity in animals of the zone include partial albinism and other external malformations in swallows and insect mutations. A study of several hundred birds belonging to 48 different species also demonstrated that birds inhabiting highly radioactively contaminated areas had smaller brains compared to birds from clean areas. A reduction in the density and the abundance of animals in highly radioactively contaminated areas has been reported for several taxa, including birds, insects, spiders, and mammals. In birds, which are an efficient bioindicator, a negative correlation has been reported between background radiation and bird species richness. Scientists such as Anders Pape Møller (University of Paris-Sud) and Timothy Mousseau (University of South Carolina) report that birds and smaller animals such as voles may be particularly affected by radioactivity. Møller is the first author on 9 of the 20 most-cited articles relating to the ecology, evolution and non-human biology in the Chernobyl area. However, some of Møller's research has been criticized as flawed. Prior to his work at Chernobyl, Møller was accused of falsifying data in a 1998 paper about asymmetry in oak leaves, which he retracted in 2001. In 2004, the Danish Committees on Scientific Dishonesty (DCSD) reported that Møller was guilty of "scientific dishonesty". 
The French National Centre for Scientific Research (CNRS) subsequently concluded that there was insufficient evidence to establish either guilt or innocence. Strongly held opinions about Møller and his work have contributed to the difficulty of reaching a scientific consensus on the effects of radiation on wildlife in the Exclusion Zone. More recently, the populations of large mammals have increased due to a significant reduction of human interference. The populations of traditional Polesian animals (such as the gray wolf, badger, wild boar, roe deer, white-tailed eagle, black stork, western marsh harrier, short-eared owl, red deer, moose, great egret, whooper swan, least weasel, common kestrel, and beaver) have multiplied enormously and begun expanding outside the zone. The zone is considered a classic example of an involuntary park. The return of wolves and other animals to the area is being studied by scientists such as Marina Shkvyria (National Academy of Sciences of Ukraine), Sergey Gaschak (Chernobyl Centre in Ukraine), and Jim Beasley (University of Georgia). Camera traps have been installed and are used to record the presence of species. Studies of wolves, which are concentrated in higher-radiation areas near the center of the exclusion zone, may enable researchers to better assess relationships between radiation levels, animal health, and population dynamics. The area also houses herds of European bison (native to the area) and Przewalski's horses (foreign to the area, as the extinct tarpan was the native wild horse) released there after the accident. Some accounts refer to the reappearance of extremely rare native lynx, and there are videos of brown bears and their cubs, an animal not seen in the area for more than a century. Special game warden units are organized to protect and control them. No scientific study has been conducted on the population dynamics of these species. The rivers and lakes of the zone pose a significant threat of spreading polluted silt during spring floods. They are systematically secured by dikes. Grass and forest fires It is known that fires can make contamination mobile again. In particular, V.I. Yoschenko et al. reported on the possibility of increased mobility of caesium, strontium, and plutonium due to grass and forest fires. As an experiment, fires were set and the levels of the radioactivity in the air downwind of these fires were measured. Grass and forest fires have happened inside the contaminated zone, releasing radioactive fallout into the atmosphere. In 1986, a series of fires destroyed large areas of forest, and several other fires have since burned within the zone. A serious fire in early May 1992 affected a large area of land, including forest. This resulted in a great increase in the levels of caesium-137 in airborne dust. In 2010, a series of wildfires affected contaminated areas, specifically the surroundings of Bryansk and border regions with Belarus and Ukraine. The Russian government claimed that there was no discernible increase in radiation levels, while Greenpeace accused the government of denial. On 4 April 2020, a fire broke out in the Zone, burning at least 20 hectares of Ukrainian forest. Approximately 90 firefighters were deployed to extinguish the blaze, as well as a helicopter and two aircraft. Radiation is still present in these forests, making firefighting more difficult; authorities stated that there was no danger to the surrounding population. The previous reported fire was in June 2018. 
Current state of the ecosystem Despite the negative effect of the disaster on human life, many scientists see an overall beneficial effect to the ecosystem. Though the immediate effects of the accident were negative, the area quickly recovered and is today seen as very healthy. The lack of people in the area has increased the biodiversity of the Exclusion Zone in the years since the disaster. In the aftermath of the disaster, radioactive contamination in the air had a decidedly negative effect on the fauna, vegetation, rivers, lakes, and groundwater of the area. The radiation resulted in deaths among coniferous plants, soil invertebrates, and mammals, as well as a decline in reproductive numbers among both plants and animals. The surrounding forest was covered in radioactive particles, resulting in the death of 400 hectares of the most immediate pine trees, though radiation damage can be found in an area of tens of thousands of hectares. An additional concern is that as the dead trees in the Red Forest (named for the color of the dead pines) decay, contamination is leaking into the groundwater. Despite all this, Professor Nick Beresford, an expert on Chernobyl and ecology, said that "the overall effect was positive" for the wildlife in the area. The impact of radiation on individual animals has not been studied, but cameras in the area have captured evidence of a resurgence of the mammalian population – including rare animals such as the lynx and the vulnerable European bison. Research on the health of Chernobyl's wildlife is ongoing, and there is concern that the wildlife still suffers from some of the negative effects of the radiation exposure. Though it will be years before researchers collect the necessary data to fully understand the effects, for now, the area is essentially one of Europe's largest nature preserves. Overall, an assessment by plant biochemist Stuart Thompson concluded, "the burden brought by radiation at Chernobyl is less severe than the benefits reaped from humans leaving the area." In fact, the ecosystem around the power plant "supports more life than before". Infrastructure The industrial, transport, and residential infrastructure has been largely crumbling since the 1986 evacuation. There are at least 800 known "burial grounds" (Ukrainian singular: mohyl'nyk) for the contaminated vehicles with hundreds of abandoned military vehicles and helicopters. River ships and barges lie in the abandoned port of Chernobyl. The port can easily be seen in satellite images of the area. The Jupiter Factory, one of the largest buildings in the zone, was in use until 1996 but has since been abandoned and its condition is deteriorating. The infrastructure immediately used by the existing nuclear-related installations is maintained and developed, such as the railway link to the outside world from the Semykhody station used by the power plant. Chernobyl-2 The Chernobyl-2 site (a.k.a. the "Russian Woodpecker") is a former Soviet military installation relatively close to the power plant, consisting of a gigantic transmitter and receiver belonging to the Duga-1 over-the-horizon radar system. Located from the surface area of Chernobyl-2 is a large underground complex that was used for anti-missile defense, space surveillance and communication, and research. Military units were stationed there. 
In popular culture Immediately after the explosion on 26 April 1986, Russian photographer Igor Kostin photographed and reported on the event, getting the first pictures from the air. For the next 20 years he continued visiting the area to document the political and personal stories of those affected by the disaster, publishing the photo book Chernobyl: Confessions of a Reporter. The official 2014 video for Pink Floyd's "Marooned" features scenes of the town of Pripyat. In an opening scene of the 1998 film Godzilla, the main character, scientist Nick Tatopoulos, is in the Chernobyl Exclusion Zone, researching the effects of environmental radiation on earthworms. British photographer John Darwell was among the first foreigners to photograph within the Chernobyl Exclusion Zone, working for three weeks in late 1999 in Pripyat, in numerous villages, at a landfill site, and with people continuing to live within the Zone. This resulted in an exhibition and the book Legacy: Photographs inside the Chernobyl Exclusion Zone (Stockport: Dewi Lewis, 2001). Visits have since been made by numerous other documentary and art photographers. In A Good Day to Die Hard, a 2013 American action thriller film, the protagonists steal a car and drive to Pripyat to retrieve a secret file from a safe deposit box, only to find many men loading containers into vehicles. The safe deposit box turns out to conceal a secret passage to a Chernobyl-era vault containing €1 billion worth of weapons-grade uranium. There is no secret file: the antagonists have concocted a scheme to steal the uranium and sell it on the black market. In a 2014 episode of Top Gear, the hosts were challenged with making their cars run out of fuel before they could reach the Exclusion Zone. Jeremy Wade, of the fishing documentary River Monsters, risks his life to catch a river monster that supposedly lives near or in the cooling ponds of the Chernobyl power plant near Pripyat. A large fraction of Martin Cruz Smith's 2004 crime novel Wolves Eat Dogs (the fifth in his series starring Russian detective Arkady Renko) is set in the Exclusion Zone. The opening scene of the 2005 horror film Return of the Living Dead: Necropolis takes place within Chernobyl, where canisters of the zombie chemical 2-4-5 Trioxin are found to be stored. The video game franchise S.T.A.L.K.E.R., released in 2007, recreates parts of the zone from source photographs and in-person visits (bridges, railways, buildings, compounds, abandoned vehicles), albeit taking some artistic license regarding the geography of the Zone for gameplay reasons. In the 2007 video game Call of Duty 4: Modern Warfare, two missions, "All Ghillied Up" and "One Shot, One Kill", take place in Pripyat. A 2009 episode of Destination Truth depicts Josh Gates and the Destination Truth team exploring the ruins of Pripyat for signs of paranormal activity. In 2011, Guillaume Herbaut and Bruno Masi created the web documentary La Zone, funded by CNC, LeMonde.fr and Agat Films. The documentary explores the communities and individuals that still inhabit or visit the Exclusion Zone. The PBS program Nature aired its documentary Radioactive Wolves on 19 October 2011, exploring the return to nature that has occurred in the Exclusion Zone among wolves and other wildlife. In the 2011 film Transformers: Dark of the Moon, Chernobyl is depicted when the Autobots investigate suspected alien activity. 
2011: the award-winning short film Seven Years of Winter was filmed under the direction of Marcus Schwenzel in 2011. In his short film the filmmaker tells the drama of the orphan Andrej, which is sent into the nuclear environment by his brother Artjom in order to ransack the abandoned homes. In 2015 the film received the Award for Best Film from the Uranium International Film Festival. The 2012 film Chernobyl Diaries is set in the Exclusion Zone. The horror movie follows a tour group that become stranded in Pripyat, and their encounters with creatures mutated by radioactive exposure. The 2015 documentary The Russian Woodpecker, which won the Grand Jury Prize for World Documentary at the Sundance Film Festival, has extensive footage from the Chernobyl Exclusion Zone and focuses on a conspiracy theory behind the disaster and the nearby Duga radar installation. Markiyan Kamysh's 2015 book, Stalking the Atomic City: Life Among the Decadent and the Depraved of Chornobyl, about illegal pilgrimage in the Chernobyl Exclusion Zone. The 2015 documentary The Babushkas Of Chernobyl directed by Anne Bogart and Holly Morris focuses on elderly residents who remain in the Exclusion Zone. These people, a majority of whom are women, are self-sufficient farmers who receive routine visits from officials to check on their health and radiation levels. The film won several awards. The five-part HBO miniseries Chernobyl was aired in 2019, dramatizing the events of the explosion and relief efforts after the fact. It was primarily shot in Lithuania. In 2019, the Spintires video game released a DLC where players can drive around the Exclusion Zone behind the wheel of a Russian truck to hunt down prize logging sites, while also trying to avoid getting blasted by radiation. The power plant, Pripyat, Red Forest, Kupsta Lake and the Duga Radar have all been recreated, so players can also go on a sightseeing tour from the truck. The survival horror video game Chernobylite by The Farm 51 is set in the Chernobyl Exclusion Zone. In Chris Tarrant: Extreme Railways Season 5 Episode - "Extreme Nuclear Railway: A Journey Too Far?" (episode 22), Chris Tarrant visits Chernobyl on his journey through Ukraine. See also 2020 Chernobyl Exclusion Zone wildfires Effects of the Chernobyl disaster List of Chernobyl-related articles Polesie State Radioecological Reserve Area 51 Notes References External links State Agency of Ukraine on Exclusion Zone Management (SAUEZM) website – the central executive body over the zone (formerly under the Ministry of Emergencies of Ukraine) Conservation, Optimization and Management of Carbon and Biodiversity in the Chornobyl Exclusion Zone – a project of SAUEZM, UNEP, GEF, and the Ministry of Ecology and Natural Resources of Ukraine Chernobyl Radiation and Ecological Biosphere Reserve Chernobyl Center – research institution working in the zone Official radiation measurements – SUAEZM. Online map News and publications Wildlife defies Chernobyl radiation - by BBC News, 20 April 2006 Radioactive Wolves - by PBS Documentary aired in the U.S. 
on 19 October 2011 Inside the Forbidden Forests 1993 The Guardian article about the zone The zone as a wildlife reserve Images from inside the Zone ChernobylGallery.com - Photographs of Chernobyl and Pripyat Lacourphotos.com - Pripyat in Wintertime (Urban photos) Images from inside the Zone Exclusion Zone Exclusion Zone Environment of Ukraine Administrative divisions of Ukraine Radioactively contaminated areas Belarus–Ukraine border 1986 establishments in Ukraine History of Kyiv Oblast History of Zhytomyr Oblast
Chernobyl exclusion zone
[ "Chemistry", "Technology" ]
7,609
[ "Radioactively contaminated areas", "Radioactive contamination", "Aftermath of the Chernobyl disaster", "Soil contamination", "Environmental impact of nuclear power" ]
1,021,753
https://en.wikipedia.org/wiki/Variety%20%28universal%20algebra%29
In universal algebra, a variety of algebras or equational class is the class of all algebraic structures of a given signature satisfying a given set of identities. For example, the groups form a variety of algebras, as do the abelian groups, the rings, the monoids etc. According to Birkhoff's theorem, a class of algebraic structures of the same signature is a variety if and only if it is closed under the taking of homomorphic images, subalgebras, and (direct) products. In the context of category theory, a variety of algebras, together with its homomorphisms, forms a category; these are usually called finitary algebraic categories. A covariety is the class of all coalgebraic structures of a given signature. Terminology A variety of algebras should not be confused with an algebraic variety, which means a set of solutions to a system of polynomial equations. They are formally quite distinct and their theories have little in common. The term "variety of algebras" refers to algebras in the general sense of universal algebra; there is also a more specific sense of algebra, namely as algebra over a field, i.e. a vector space equipped with a bilinear multiplication. Definition A signature (in this context) is a set, whose elements are called operations, each of which is assigned a natural number (0, 1, 2, ...) called its arity. Given a signature σ and a set V, whose elements are called variables, a word is a finite rooted tree in which each node is labelled by either a variable or an operation, such that every node labelled by a variable has no branches away from the root and every node labelled by an operation o has as many branches away from the root as the arity of o. An equational law is a pair of such words; the axiom consisting of the words v and w is written as v = w. A theory consists of a signature, a set of variables, and a set of equational laws. Any theory gives a variety of algebras as follows. Given a theory T, an algebra of T consists of a set A together with, for each operation o of T with arity n, a function o_A : A^n → A, such that for each axiom v = w and each assignment of elements of A to the variables in that axiom, the equation v = w holds when the operations are applied to the elements of A as indicated by the trees defining v and w. The class of algebras of a given theory T is called a variety of algebras. Given two algebras of a theory T, say A and B, a homomorphism is a function f : A → B such that f(o_A(a_1, ..., a_n)) = o_B(f(a_1), ..., f(a_n)) for every operation o of arity n. Any theory gives a category where the objects are algebras of that theory and the morphisms are homomorphisms. Examples The class of all semigroups forms a variety of algebras of signature (2), meaning that a semigroup has a single binary operation. A sufficient defining equation is the associative law x(yz) = (xy)z. The class of groups forms a variety of algebras of signature (2,0,1), the three operations being respectively multiplication (binary), identity (nullary, a constant) and inversion (unary). The familiar axioms of associativity, identity and inverse form one suitable set of identities: x(yz) = (xy)z, ex = x, xe = x, x⁻¹x = e, xx⁻¹ = e. The class of rings also forms a variety of algebras. The signature here is (2,2,0,0,1) (two binary operations, two constants, and one unary operation). If we fix a specific ring R, we can consider the class of left R-modules. To express the scalar multiplication with elements from R, we need one unary operation for each element of R. If the ring is infinite, we will thus have infinitely many operations, which is allowed by the definition of an algebraic structure in universal algebra. 
We will then also need infinitely many identities to express the module axioms, which is allowed by the definition of a variety of algebras. So the left R-modules do form a variety of algebras. The fields do not form a variety of algebras; the requirement that all non-zero elements be invertible cannot be expressed as a universally satisfied identity (see below). The cancellative semigroups also do not form a variety of algebras, since the cancellation property is not an equation but an implication that is not equivalent to any set of equations. However, they do form a quasivariety, as the implication defining the cancellation property is an example of a quasi-identity. Birkhoff's variety theorem Given a class of algebraic structures of the same signature, we can define the notions of homomorphism, subalgebra, and product. Garrett Birkhoff proved that a class of algebraic structures of the same signature is a variety if and only if it is closed under the taking of homomorphic images, subalgebras and arbitrary products. This is a result of fundamental importance to universal algebra and known as Birkhoff's variety theorem or as the HSP theorem. H, S, and P stand, respectively, for the operations of homomorphism, subalgebra, and product. One direction of the equivalence mentioned above, namely that a class of algebras satisfying some set of identities must be closed under the HSP operations, follows immediately from the definitions. Proving the converse, that classes of algebras closed under the HSP operations must be equational, is more difficult. Using the easy direction of Birkhoff's theorem, we can for example verify the claim made above, that the field axioms are not expressible by any possible set of identities: the product of fields is not a field, so fields do not form a variety. Subvarieties A subvariety of a variety of algebras V is a subclass of V that has the same signature as V and is itself a variety, i.e., is defined by a set of identities. Notice that although every group becomes a semigroup when the identity as a constant is omitted (and/or the inverse operation is omitted), the class of groups does not form a subvariety of the variety of semigroups because the signatures are different. Similarly, the class of semigroups that are groups is not a subvariety of the variety of semigroups. The class of monoids that are groups contains the monoid of integers under addition but does not contain its subalgebra (more precisely, submonoid) of natural numbers under addition, so it is not closed under subalgebras. However, the class of abelian groups is a subvariety of the variety of groups because it consists of those groups satisfying xy = yx, with no change of signature. The finitely generated abelian groups do not form a subvariety, since by Birkhoff's theorem they don't form a variety, as an arbitrary product of finitely generated abelian groups is not finitely generated. Viewing a variety V and its homomorphisms as a category, a subvariety U of V is a full subcategory of V, meaning that for any objects a, b in U, the homomorphisms from a to b in U are exactly those from a to b in V. Free objects Suppose V is a non-trivial variety of algebras, i.e. V contains algebras with more than one element. One can show that for every set S, the variety V contains a free algebra F_S on S. This means that there is an injective set map i : S → F_S that satisfies the following universal property: given any algebra A in V and any map k : S → A, there exists a unique V-homomorphism f : F_S → A such that f ∘ i = k. This generalizes the notions of free group, free abelian group, free algebra, free module etc. 
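To make the universal property concrete, here is a small worked instance in the variety of abelian groups; the notation F_S, i, k and f matches the statement above, and the specific example is an illustration chosen here rather than one drawn from a particular source.

```latex
% Free algebra in the variety of abelian groups: F_S is the free abelian
% group on S (a direct sum of copies of the integers), and the unique
% homomorphism f extending a map k : S -> A acts by integer linear
% combination.
\[
  F_S \;\cong\; \bigoplus_{s \in S} \mathbb{Z},
  \qquad
  f\Big(\sum_{s \in S} n_s\, s\Big) \;=\; \sum_{s \in S} n_s\, k(s),
  \quad n_s \in \mathbb{Z},\ \text{almost all } n_s = 0 .
\]
```

This is one instance of the general universal property just stated; the free group and free module constructions mentioned above arise in the same way in their respective varieties.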
It has the consequence that every algebra in a variety is a homomorphic image of a free algebra. Category theory Besides varieties, category theorists use two other frameworks that are equivalent in terms of the kinds of algebras they describe: finitary monads and Lawvere theories. We may go from a variety to a finitary monad as follows. A category with some variety of algebras as objects and homomorphisms as morphisms is called a finitary algebraic category. For any finitary algebraic category V, the forgetful functor has a left adjoint , namely the functor that assigns to each set the free algebra on that set. This adjunction is monadic, meaning that the category V is equivalent to the Eilenberg–Moore category SetT for the monad . Moreover the monad T is finitary, meaning it commutes with filtered colimits. The monad is thus enough to recover the finitary algebraic category. Indeed, finitary algebraic categories are precisely those categories equivalent to the Eilenberg-Moore categories of finitary monads. Both these, in turn, are equivalent to categories of algebras of Lawvere theories. Working with monads permits the following generalization. One says a category is an algebraic category if it is monadic over Set. This is a more general notion than "finitary algebraic category" because it admits such categories as CABA (complete atomic Boolean algebras) and CSLat (complete semilattices) whose signatures include infinitary operations. In those two cases the signature is large, meaning that it forms not a set but a proper class, because its operations are of unbounded arity. The algebraic category of sigma algebras also has infinitary operations, but their arity is countable whence its signature is small (forms a set). Every finitary algebraic category is a locally presentable category. Pseudovariety of finite algebras Since varieties are closed under arbitrary direct products, all non-trivial varieties contain infinite algebras. Attempts have been made to develop a finitary analogue of the theory of varieties. This led, e.g., to the notion of variety of finite semigroups. This kind of variety uses only finitary products. However, it uses a more general kind of identities. A pseudovariety is usually defined to be a class of algebras of a given signature, closed under the taking of homomorphic images, subalgebras and finitary direct products. Not every author assumes that all algebras of a pseudovariety are finite; if this is the case, one sometimes talks of a variety of finite algebras. For pseudovarieties, there is no general finitary counterpart to Birkhoff's theorem, but in many cases the introduction of a more complex notion of equations allows similar results to be derived. Pseudovarieties are of particular importance in the study of finite semigroups and hence in formal language theory. Eilenberg's theorem, often referred to as the variety theorem, describes a natural correspondence between varieties of regular languages and pseudovarieties of finite semigroups. See also Quasivariety Notes External links Two monographs available free online: Stanley N. Burris and H.P. Sankappanavar (1981), A Course in Universal Algebra. Springer-Verlag. . [Proof of Birkhoff's Theorem is in II§11.] Peter Jipsen and Henry Rose (1992), Varieties of Lattices, Lecture Notes in Mathematics 1533. Springer Verlag. . Universal algebra
Variety (universal algebra)
[ "Mathematics" ]
2,342
[ "Fields of abstract algebra", "Universal algebra" ]
1,021,754
https://en.wikipedia.org/wiki/Sound%20localization
Sound localization is a listener's ability to identify the location or origin of a detected sound in direction and distance. The sound localization mechanisms of the mammalian auditory system have been extensively studied. The auditory system uses several cues for sound source localization, including time difference and level difference (or intensity difference) between the ears, and spectral information. Other animals, such as birds and reptiles, also use them but they may use them differently, and some also have localization cues which are absent in the human auditory system, such as the effects of ear movements. Animals with the ability to localize sound have a clear evolutionary advantage. How sound reaches the brain Sound is the perceptual result of mechanical vibrations traveling through a medium such as air or water. Through the mechanisms of compression and rarefaction, sound waves travel through the air, bounce off the pinna and concha of the exterior ear, and enter the ear canal. In mammals, the sound waves vibrate the tympanic membrane (ear drum), causing the three bones of the middle ear to vibrate, which then sends the energy through the oval window and into the cochlea where it is changed into a chemical signal by hair cells in the organ of Corti, which synapse onto spiral ganglion fibers that travel through the cochlear nerve into the brain. Neural interactions In vertebrates, interaural time differences are known to be calculated in the superior olivary nucleus of the brainstem. According to Jeffress, this calculation relies on delay lines: neurons in the superior olive which accept innervation from each ear with different connecting axon lengths. Some cells are more directly connected to one ear than the other, thus they are specific for a particular interaural time difference. This theory is equivalent to the mathematical procedure of cross-correlation. However, because Jeffress's theory is unable to account for the precedence effect, in which only the first of multiple identical sounds is used to determine the sounds' location (thus avoiding confusion caused by echoes), it cannot be entirely used to explain the response. Furthermore, a number of recent physiological observations made in the midbrain and brainstem of small mammals have shed considerable doubt on the validity of Jeffress's original ideas. Neurons sensitive to interaural level differences (ILDs) are excited by stimulation of one ear and inhibited by stimulation of the other ear, such that the response magnitude of the cell depends on the relative strengths of the two inputs, which in turn, depends on the sound intensities at the ears. In the auditory midbrain nucleus, the inferior colliculus (IC), many ILD sensitive neurons have response functions that decline steeply from maximum to zero spikes as a function of ILD. However, there are also many neurons with much more shallow response functions that do not decline to zero spikes. Human auditory system Sound localization is the process of determining the location of a sound source. The brain utilizes subtle differences in intensity, spectral, and timing cues to localize sound sources. Localization can be described in terms of three-dimensional position: the azimuth or horizontal angle, the elevation or vertical angle, and the distance (for static sounds) or velocity (for moving sounds). 
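The cross-correlation computation to which the Jeffress model is compared above can be sketched briefly. The following Python example is an illustrative sketch, not a model of the auditory system: the synthetic ear signals, the sampling rate, the assumed ear spacing of 17.5 cm and the simple two-receiver geometry are all assumptions made here for the demonstration.

```python
import numpy as np

def estimate_itd(left: np.ndarray, right: np.ndarray, fs: float) -> float:
    """Estimate the interaural time difference (in seconds) as the lag that
    maximizes the cross-correlation of the two ear signals. A positive value
    means the left-ear signal lags the right-ear signal (source to the right)."""
    corr = np.correlate(left, right, mode="full")
    lags = np.arange(-(len(right) - 1), len(left))
    return lags[np.argmax(corr)] / fs

# Toy demonstration: a noise burst that reaches the right ear 20 samples
# before the left ear.
fs = 44100
rng = np.random.default_rng(0)
burst = rng.standard_normal(2048)
delay = 20
left = np.concatenate([np.zeros(delay), burst])
right = np.concatenate([burst, np.zeros(delay)])

itd = estimate_itd(left, right, fs)

# Convert the ITD to an azimuth estimate with the simple two-receiver
# approximation ITD ~= (d / c) * sin(theta), where d is the ear spacing.
d, c = 0.175, 343.0          # assumed ear spacing (m) and speed of sound (m/s)
azimuth = np.degrees(np.arcsin(np.clip(itd * c / d, -1.0, 1.0)))
print(f"ITD = {itd * 1e6:.0f} microseconds, azimuth estimate = {azimuth:.1f} degrees")
```

The same peak-picking idea underlies the delay-line picture: each candidate lag corresponds to one hypothetical source direction, and the lag with the strongest match wins.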
The azimuth of a sound is signaled by the difference in arrival times between the ears, by the relative amplitude of high-frequency sounds (the shadow effect), and by the asymmetrical spectral reflections from various parts of our bodies, including torso, shoulders, and pinnae. The distance cues are the loss of amplitude, the loss of high frequencies, and the ratio of the direct signal to the reverberated signal. Depending on where the source is located, our head acts as a barrier to change the timbre, intensity, and spectral qualities of the sound, helping the brain orient where the sound emanated from. These minute differences between the two ears are known as interaural cues. Lower frequencies, with longer wavelengths, diffract around the head, forcing the brain to rely only on the phase cues from the source. Helmut Haas found that listeners can discern the sound source from the earliest-arriving wave front even when reflections are up to 10 decibels louder than the original wave front. This principle is known as the Haas effect, a specific version of the precedence effect. Haas measured that even a 1 millisecond difference in timing between the original sound and the reflected sound increased the perceived spaciousness, while still allowing the brain to discern the true location of the original sound. The nervous system combines all early reflections into a single perceptual whole, allowing the brain to process multiple different sounds at once. The nervous system will combine reflections that are within about 35 milliseconds of each other and that have a similar intensity. Duplex theory To determine the lateral input direction (left, front, right), the auditory system analyzes the following ear signal information: In 1907, Lord Rayleigh used tuning forks to generate monophonic excitation and studied lateral sound localization on a human head model without an auricle. He was the first to present a theory of sound localization based on interaural cue differences, which is known as duplex theory. Human ears are on different sides of the head, and thus have different coordinates in space. Since the distances between the acoustic source and the two ears differ, there is a time difference and an intensity difference between the sound signals arriving at the two ears. These differences are called the interaural time difference (ITD) and the interaural intensity difference (IID), respectively. For a source off to one side there is a propagation delay between the two ears, which generates the ITD; simultaneously, the head and ears shadow high-frequency signals, which generates the IID. Interaural time difference (ITD) – Sound from the right side reaches the right ear earlier than the left ear. The auditory system evaluates interaural time differences from: (a) phase delays at low frequencies and (b) group delays at high frequencies. Theory and experiments show that the ITD also depends on the signal frequency f. If the angular position of the acoustic source is θ, the head radius is r and the speed of sound is c, the ITD is given approximately by ITD ≈ 3(r/c)·sin θ at low frequencies and ITD ≈ 2(r/c)·sin θ at high frequencies. In this closed form, 0 degrees is taken to be straight ahead of the head and counter-clockwise angles are positive. Interaural intensity difference (IID) or interaural level difference (ILD) – Sound from the right side has a higher level at the right ear than at the left ear, because the head shadows the left ear. 
These level differences are highly frequency dependent and they increase with increasing frequency. Extensive theoretical work shows that the IID depends on both the signal frequency f and the angular position θ of the acoustic source. For frequencies below 1000 Hz, mainly ITDs are evaluated (phase delays); for frequencies above 1500 Hz, mainly IIDs are evaluated. Between 1000 Hz and 1500 Hz there is a transition zone, where both mechanisms play a role. Localization accuracy is 1 degree for sources in front of the listener and 15 degrees for sources to the sides. Humans can discern interaural time differences of 10 microseconds or less. For frequencies below 800 Hz, the dimensions of the head (ear distance 21.5 cm, corresponding to an interaural time delay of 626 μs) are smaller than the half wavelength of the sound waves. So the auditory system can determine phase delays between both ears without confusion. Interaural level differences are very low in this frequency range, especially below about 200 Hz, so a precise evaluation of the input direction is nearly impossible on the basis of level differences alone. As the frequency drops below 80 Hz it becomes difficult or impossible to use either time difference or level difference to determine a sound's lateral source, because the phase difference between the ears becomes too small for a directional evaluation. For frequencies above 1600 Hz the dimensions of the head are greater than the length of the sound waves. An unambiguous determination of the input direction based on interaural phase alone is not possible at these frequencies. However, the interaural level differences become larger, and these level differences are evaluated by the auditory system. Also, delays between the ears can still be detected via some combination of phase differences and group delays, which are more pronounced at higher frequencies; that is, if there is a sound onset, the delay of this onset between the ears can be used to determine the input direction of the corresponding sound source. This mechanism becomes especially important in reverberant environments. After a sound onset there is a short time frame where the direct sound reaches the ears, but not yet the reflected sound. The auditory system uses this short time frame for evaluating the sound source direction, and keeps this detected direction as long as reflections and reverberation prevent an unambiguous direction estimation. The mechanisms described above cannot be used to differentiate between a sound source ahead of the hearer or behind the hearer; therefore additional cues have to be evaluated. Pinna filtering effect Duplex theory shows that ITD and IID play significant roles in sound localization, but they can only deal with lateral localization problems. For example, if two acoustic sources are placed symmetrically at the front and back of the right side of the human head, they will generate equal ITDs and IIDs, a configuration known as the cone of confusion. However, human ears can still distinguish between these sources. Moreover, in natural hearing a single ear alone, with no ITD or IID available, can distinguish between such sources with high accuracy. Because of these limitations of duplex theory, researchers proposed the pinna filtering theory. The shape of the human pinna is concave with complex folds and asymmetrical both horizontally and vertically. Reflected and direct waves generate a frequency spectrum on the eardrum, relating to the acoustic sources. 
The auditory system then localizes the sources using this frequency spectrum. These spectral cues generated by the pinna filtering effect can be represented as a head-related transfer function (HRTF). The corresponding time-domain expression is called the head-related impulse response (HRIR). The HRTF is also described as the transfer function from the free field to a specific point in the ear canal. HRTFs are usually treated as linear time-invariant (LTI) systems: H_L = P_L / P_0 and H_R = P_R / P_0, where L and R denote the left ear and right ear respectively, P_L and P_R are the amplitudes of the sound pressure at the entrances to the left and right ear canals, and P_0 is the amplitude of the sound pressure at the center of the head position when the listener is absent. In general, the HRTFs H_L and H_R are functions of the source azimuth, the elevation angle, the distance between the source and the center of the head, the angular frequency and the equivalent dimension of the head. At present, the main institutions that maintain measured HRTF databases include CIPIC International Lab, MIT Media Lab, the Graduate School in Psychoacoustics at the University of Oldenburg, the Neurophysiology Lab at the University of Wisconsin–Madison and Ames Lab of NASA. Databases of HRIRs from humans with normal and impaired hearing and from animals are publicly available. Other cues The human outer ear, i.e. the structures of the pinna and the external ear canal, form direction-selective filters. Depending on the sound input direction, different filter resonances become active. These resonances implant direction-specific patterns into the frequency responses of the ears, which can be evaluated by the auditory system for sound localization. Together with other direction-selective reflections at the head, shoulders and torso, they form the outer ear transfer functions. These patterns in the ear's frequency responses are highly individual, depending on the shape and size of the outer ear. If sound is presented through headphones, and has been recorded via another head with different-shaped outer ear surfaces, the directional patterns differ from the listener's own, and problems will appear when trying to evaluate directions in the median plane with these foreign ears. As a consequence, front–back permutations or inside-the-head localization can appear when listening to dummy head recordings, also referred to as binaural recordings. It has been shown that human subjects can monaurally localize high frequency sound but not low frequency sound. Binaural localization, however, was possible with lower frequencies. This is likely due to the pinna being small enough to only interact with sound waves of high frequency. It seems that people can only accurately localize the elevation of sounds that are complex and include frequencies above 7,000 Hz, and a pinna must be present. When the head is stationary, the binaural cues for lateral sound localization (interaural time difference and interaural level difference) do not give information about the location of a sound in the median plane. Identical ITDs and ILDs can be produced by sounds at eye level or at any elevation, as long as the lateral direction is constant. However, if the head is rotated, the ITD and ILD change dynamically, and those changes are different for sounds at different elevations. For example, if an eye-level sound source is straight ahead and the head turns to the left, the sound becomes louder (and arrives sooner) at the right ear than at the left. 
Other cues

The human outer ear, i.e. the structures of the pinna and the external ear canal, form direction-selective filters. Depending on the sound input direction, different filter resonances become active. These resonances implant direction-specific patterns into the frequency responses of the ears, which can be evaluated by the auditory system for sound localization. Together with other direction-selective reflections at the head, shoulders and torso, they form the outer ear transfer functions. These patterns in the ear's frequency responses are highly individual, depending on the shape and size of the outer ear. If sound is presented through headphones, and has been recorded via another head with different-shaped outer ear surfaces, the directional patterns differ from the listener's own, and problems will appear when trying to evaluate directions in the median plane with these foreign ears. As a consequence, front–back permutations or inside-the-head localization can appear when listening to dummy head recordings, also referred to as binaural recordings.

It has been shown that human subjects can monaurally localize high frequency sound but not low frequency sound; binaural localization, however, was possible with lower frequencies. This is likely because the pinna is small enough to interact only with sound waves of high frequency. It seems that people can only accurately localize the elevation of sounds that are complex and include frequencies above 7,000 Hz, and a pinna must be present.

When the head is stationary, the binaural cues for lateral sound localization (interaural time difference and interaural level difference) do not give information about the location of a sound in the median plane. Identical ITDs and ILDs can be produced by sounds at eye level or at any elevation, as long as the lateral direction is constant. However, if the head is rotated, the ITD and ILD change dynamically, and those changes are different for sounds at different elevations. For example, if an eye-level sound source is straight ahead and the head turns to the left, the sound becomes louder (and arrives sooner) at the right ear than at the left. But if the sound source is directly overhead, there will be no change in the ITD and ILD as the head turns. Intermediate elevations will produce intermediate degrees of change, and if the presentation of binaural cues to the two ears during head movement is reversed, the sound will be heard behind the listener.

Hans Wallach artificially altered a sound's binaural cues during movements of the head. Although the sound was objectively placed at eye level, the dynamic changes to ITD and ILD as the head rotated were those that would be produced if the sound source had been elevated. In this situation, the sound was heard at the synthesized elevation. The fact that the sound sources objectively remained at eye level prevented monaural cues from specifying the elevation, showing that it was the dynamic change in the binaural cues during head movement that allowed the sound to be correctly localized in the vertical dimension. The head movements need not be actively produced; accurate vertical localization occurred in a similar setup when the head rotation was produced passively, by seating the blindfolded subject in a rotating chair. As long as the dynamic changes in binaural cues accompanied a perceived head rotation, the synthesized elevation was perceived. In the 1960s Batteau showed that the pinna also enhances horizontal localization.

Distance of the sound source

The human auditory system has only limited means of determining the distance of a sound source. In the close-up range there are some indications for distance determination, such as extreme level differences (e.g. when whispering into one ear) or specific pinna (the visible part of the ear) resonances. The auditory system uses the following cues to estimate the distance to a sound source:

Direct/reflection ratio: In enclosed rooms, two types of sound arrive at a listener: the direct sound reaches the listener's ears without being reflected at a wall, while reflected sound has been reflected at least once at a wall before arriving. The ratio between direct and reflected sound gives an indication of the distance of the sound source.

Loudness: Distant sound sources have a lower loudness than close ones. This aspect can be evaluated especially for well-known sound sources.

Sound spectrum: High frequencies are damped by the air more quickly than low frequencies. Therefore, a distant sound source sounds more muffled than a close one, because the high frequencies are attenuated. For sound with a known spectrum (e.g. speech) the distance can be estimated roughly from the perceived sound.

ITDG: The initial time delay gap describes the time difference between the arrival of the direct wave and of the first strong reflection at the listener. Nearby sources create a relatively large ITDG, because the first reflections have a much longer path to take, possibly many times longer. When the source is far away, the direct and the reflected sound waves have similar path lengths.

Movement: Similar to the visual system, there is also a phenomenon of motion parallax in acoustic perception. For a moving listener, nearby sound sources pass by faster than distant sound sources.

Level difference: Very close sound sources cause a different level between the ears.

Signal processing

Sound processing in the human auditory system is performed in so-called critical bands. The hearing range is segmented into 24 critical bands, each with a width of 1 Bark or 100 Mel.
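The Bark scale mentioned above can be approximated numerically. The sketch below uses Traunmüller's published approximation, one of several slightly different formulas in the literature, so the exact band edges should be treated as approximate.

```python
def hz_to_bark(f_hz: float) -> float:
    """Convert frequency in Hz to critical-band rate in Bark (Traunmueller's approximation)."""
    z = 26.81 * f_hz / (1960.0 + f_hz) - 0.53
    if z < 2.0:          # low-frequency correction
        z += 0.15 * (2.0 - z)
    elif z > 20.1:       # high-frequency correction
        z += 0.22 * (z - 20.1)
    return z

for f in (100, 500, 1000, 4000, 16000):
    print(f"{f:5d} Hz -> {hz_to_bark(f):5.2f} Bark")
# The audible range maps onto roughly 24 Bark, matching the 24 critical bands
# described above.
```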
For a directional analysis the signals inside a critical band are analyzed together. The auditory system can extract the sound of a desired sound source out of interfering noise. This allows the listener to concentrate on only one speaker when other speakers are also talking (the cocktail party effect). With the help of the cocktail party effect, sound from interfering directions is perceived as attenuated compared with the sound from the desired direction. The auditory system can increase the signal-to-noise ratio by up to 15 dB, which means that interfering sound is perceived to be attenuated to half (or less) of its actual loudness.

In enclosed rooms, not only the direct sound from a sound source arrives at the listener's ears, but also sound that has been reflected at the walls. The auditory system analyses only the direct sound, which arrives first, for sound localization, and not the reflected sound, which arrives later (law of the first wave front). Sound localization therefore remains possible even in an echoic environment. This echo cancellation occurs in the dorsal nucleus of the lateral lemniscus (DNLL).

In order to determine the time periods in which the direct sound prevails and which can be used for directional evaluation, the auditory system analyzes loudness changes in different critical bands as well as the stability of the perceived direction. If there is a strong onset of loudness in several critical bands and if the perceived direction is stable, this onset is in all probability caused by the direct sound of a sound source that is newly entering or is changing its signal characteristics. This short time period is used by the auditory system for directional and loudness analysis of this sound. When reflections arrive a little later, they do not enhance the loudness inside the critical bands as strongly, but the directional cues become unstable, because there is a mix of sound from several reflection directions. As a result, no new directional analysis is triggered by the auditory system. This first detected direction from the direct sound is taken as the sound source direction until other strong loudness onsets, combined with stable directional information, indicate that a new directional analysis is possible (see Franssen effect).

Specific techniques with applications

Auditory transmission stereo system

This kind of sound localization technique provides a true virtual stereo system. It uses "smart" manikins, such as KEMAR, to pick up the ear signals, or DSP methods to simulate the transmission process from sources to ears. After amplification, recording and transmission, the two channels of received signals are reproduced through earphones or loudspeakers. This localization approach uses electroacoustic methods to obtain the spatial information of the original sound field by transferring the listener's auditory apparatus to the original sound field. Its most notable advantage is that its acoustic images are lively and natural. In addition, it needs only two independent transmitted signals to reproduce the acoustic image of a 3D sound field.
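Given a two-channel recording of the kind such a dummy head produces, a crude estimate of source laterality can be obtained by cross-correlating the two ear signals and reading off the lag of maximum correlation as the ITD. A minimal sketch follows; it is an engineering approximation, not a model of the neural processing described above, and the ±0.7 ms search range is an assumed bound based on the maximum interaural delay of a human head.

```python
import numpy as np

def estimate_itd(left: np.ndarray, right: np.ndarray, fs: int,
                 max_itd_s: float = 0.0007) -> float:
    """Estimate the ITD in seconds; positive means the right-ear signal lags
    (i.e. the source is toward the listener's left)."""
    max_lag = int(round(max_itd_s * fs))
    corr = np.correlate(right, left, mode="full")   # equal-length signals assumed
    mid = len(corr) // 2                            # index of zero lag
    window = corr[mid - max_lag: mid + max_lag + 1]
    best_lag = int(np.argmax(window)) - max_lag
    return best_lag / fs

# Self-test with white noise delayed by 12 samples (~250 microseconds at 48 kHz).
fs = 48000
rng = np.random.default_rng(0)
src = rng.standard_normal(fs)
delay = 12
left = src
right = np.concatenate([np.zeros(delay), src[:-delay]])
print(estimate_itd(left, right, fs))                # approximately 0.00025
```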
3D para-virtualization stereo system

The representatives of this kind of system are SRS Audio Sandbox, Spatializer Audio Lab and Qsound Qxpander. They use HRTFs to simulate the acoustic signals received at the ears from different directions with common two-channel stereo reproduction, and can therefore simulate reflected sound waves and improve the subjective sense of space and envelopment. Since they are para-virtualization stereo systems, their major goal is to simulate stereo sound information. Traditional stereo systems use sensors that are quite different from human ears. Although those sensors can receive acoustic information from different directions, they do not have the same frequency response as the human auditory system. Therefore, when a two-channel mode is applied, the human auditory system still does not perceive a convincing 3D sound field. A 3D para-virtualization stereo system overcomes these disadvantages: it uses HRTF principles to capture acoustic information from the original sound field and then produces a lively 3D sound field through common earphones or speakers.

Multichannel stereo virtual reproduction

Since multichannel stereo systems require many reproduction channels, some researchers have adopted HRTF simulation technologies to reduce the number of reproduction channels, using only two speakers to simulate multiple speakers in a multichannel system. This process is called virtual reproduction. Essentially, such an approach uses both the interaural difference principle and the pinna filtering effect theory. Unfortunately, this kind of approach cannot perfectly substitute for a traditional multichannel stereo system, such as a 5.1/7.1 surround sound system, because when the listening zone is relatively large, reproduction through HRTFs may cause inverted acoustic images at symmetric positions.

Animals

Since most animals have two ears, many of the effects of the human auditory system can also be found in other animals. Interaural time differences (interaural phase differences) and interaural level differences therefore play a role for the hearing of many animals, but the influence of these effects on localization depends on head size, ear distance, ear position and ear orientation. Smaller animals such as insects use different techniques, as the separation of their ears is too small. For the process of animals emitting sound to improve localization, a biological form of active sonar, see animal echolocation.

Lateral information (left, ahead, right)

If the ears are located at the side of the head, similar lateral localization cues as for the human auditory system can be used: evaluation of interaural time differences (interaural phase differences) for lower frequencies and evaluation of interaural level differences for higher frequencies. The evaluation of interaural phase differences is useful as long as it gives unambiguous results. This is the case as long as the ear distance is smaller than half the wavelength (at most one wavelength) of the sound waves. For animals with a larger head than humans the evaluation range for interaural phase differences is shifted towards lower frequencies; for animals with a smaller head, this range is shifted towards higher frequencies. The lowest frequency that can be localized depends on the ear distance: animals with a greater ear distance can localize lower frequencies than humans can, while for animals with a smaller ear distance the lowest localizable frequency is higher than for humans. If the ears are located at the side of the head, interaural level differences appear at higher frequencies and can be evaluated for localization tasks. For animals with ears at the top of the head, no shadowing by the head appears, and therefore there are much smaller interaural level differences that could be evaluated.
Many of these animals can move their ears, and these ear movements can be used as a lateral localization cue.

In the median plane (front, above, back, below)

For many mammals there are also pronounced structures in the pinna near the entry of the ear canal. As a consequence, direction-dependent resonances can appear, which could be used as an additional localization cue, similar to the localization in the median plane in the human auditory system. There are additional localization cues which are also used by animals.

Head tilting

For sound localization in the median plane (elevation of the sound) also two detectors can be used, which are positioned at different heights. In animals, however, rough elevation information is gained simply by tilting the head, provided that the sound lasts long enough to complete the movement. This explains the innate behavior of cocking the head to one side when trying to localize a sound precisely. To get instantaneous localization in more than two dimensions from time-difference or amplitude-difference cues requires more than two detectors.

Localization with coupled ears (flies)

The tiny parasitic fly Ormia ochracea has become a model organism in sound localization experiments because of its unique ear. The animal is too small for the time difference of sound arriving at the two ears to be calculated in the usual way, yet it can determine the direction of sound sources with exquisite precision. The tympanic membranes of opposite ears are directly connected mechanically, allowing resolution of sub-microsecond time differences and requiring a new neural coding strategy. Ho showed that the coupled-eardrum system in frogs can produce increased interaural vibration disparities when only small arrival time and sound level differences were available to the animal's head. Efforts to build directional microphones based on the coupled-eardrum structure are underway.

Bi-coordinate sound localization (owls)

Most owls are nocturnal or crepuscular birds of prey. Because they hunt at night, they must rely on non-visual senses. Experiments by Roger Payne have shown that owls are sensitive to the sounds made by their prey, not the heat or the smell. In fact, the sound cues are both necessary and sufficient for localization of mice from a distant location where they are perched. For this to work, the owls must be able to accurately localize both the azimuth and the elevation of the sound source.

Dolphins

Dolphins (and other odontocetes) rely on echolocation to aid in detecting, identifying, localizing, and capturing prey. Dolphin sonar signals are well suited for localizing multiple, small targets in a three-dimensional aquatic environment by utilizing highly directional (3 dB beamwidth of about 10 deg), broadband (3 dB bandwidth typically of about 40 kHz; peak frequencies between 40 kHz and 120 kHz), short duration clicks (about 40 μs). Dolphins can localize sounds both passively and actively (echolocation) with a resolution of about 1 deg. Cross-modal matching (between vision and echolocation) suggests dolphins perceive the spatial structure of complex objects interrogated through echolocation, a feat that likely requires spatially resolving individual object features and integration into a holistic representation of object shape.
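The click duration and directivity figures quoted above give a feel for the ranging side of echolocation. The sketch below simply turns them into target range and range resolution; the sound speed is an assumed round value for seawater, and since real clicks are broadband this is only an order-of-magnitude illustration.

```python
SPEED_OF_SOUND_WATER = 1500.0   # m/s; assumed round value for seawater
CLICK_DURATION = 40e-6          # s; "about 40 microseconds", as quoted above

def echo_range(two_way_delay_s: float) -> float:
    """Target range implied by the delay between emitting a click and hearing its echo."""
    return SPEED_OF_SOUND_WATER * two_way_delay_s / 2.0

def range_resolution(pulse_duration_s: float) -> float:
    """Smallest range separation at which two echoes from a simple pulse no longer overlap."""
    return SPEED_OF_SOUND_WATER * pulse_duration_s / 2.0

print(f"echo heard after 20 ms -> target at about {echo_range(0.020):.1f} m")
print(f"40 microsecond click   -> range resolution of about {range_resolution(CLICK_DURATION) * 100:.0f} cm")
```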
Although dolphins are sensitive to small binaural intensity and time differences, mounting evidence suggests that dolphins employ position-dependent spectral cues derived from well-developed head-related transfer functions for sound localization in both the horizontal and vertical planes. A very small temporal integration time (264 μs) allows localization of multiple targets at varying distances. Localization adaptations include pronounced asymmetry of the skull, nasal sacs, and specialized lipid structures in the forehead and jaws, as well as acoustically isolated middle and inner ears.

The role of Prestin in sound localization

The Prestin gene has emerged as a pivotal player in mammalian sound localization, particularly in the echolocation employed by bats and dolphins. Discovered just over a decade ago, Prestin encodes a protein located in the inner ear's hair cells that facilitates their rapid contractions and expansions. This mechanism operates somewhat like an antique phonograph horn, amplifying sound waves within the cochlea and raising the overall sensitivity of hearing. In 2014, Liu and colleagues examined the evolutionary adaptations of Prestin and described its critical role in the ultrasonic hearing range essential for animal sonar, specifically in the context of echolocation. This adaptation is instrumental for dolphins navigating turbid waters and for bats hunting in darkness. Toothed whales and echolocating bats emit high-frequency echolocation calls that vary in shape, duration and amplitude, but it is their high-frequency hearing that is paramount, because it enables the reception and analysis of echoes reflected from objects in their environment. A careful dissection of Prestin protein function in sonar-guided bats and bottlenose dolphins, compared with nonsonar mammals, sheds light on this process. Evolutionary analyses of Prestin protein sequences revealed a single amino acid shift, from threonine (Thr or T) in sonar mammals to asparagine (Asn or N) in nonsonar mammals. This specific alteration, which arose through parallel evolution, appears to be central to mammalian echolocation. Subsequent experiments supported this hypothesis and identified four key amino acid distinctions in sonar mammals that likely contribute to their distinctive echolocation features. Together, the evolutionary analyses and experimental findings provide strong evidence for the Prestin gene's role in the evolution of mammalian echolocation, and offer insight into the genetic foundations of sound localization in bats and dolphins.

History

The term 'binaural' literally signifies 'to hear with two ears', and was introduced in 1859 to signify the practice of listening to the same sound through both ears, or to two discrete sounds, one through each ear. It was not until 1916 that Carl Stumpf (1848–1936), a German philosopher and psychologist, distinguished between dichotic listening, which refers to the stimulation of each ear with a different stimulus, and diotic listening, the simultaneous stimulation of both ears with the same stimulus.
Later, it would become apparent that binaural hearing, whether dichotic or diotic, is the means by which sound localization occurs.

Scientific consideration of binaural hearing began before the phenomenon was so named, with speculations published in 1792 by William Charles Wells (1757–1817) based on his research into binocular vision. Giovanni Battista Venturi (1746–1822) conducted and described experiments in which people tried to localize a sound using both ears, or with one ear blocked by a finger. This work was not followed up on, and was only recovered after others had worked out how human sound localization works. Lord Rayleigh (1842–1919) carried out the same experiments and arrived at the same results almost seventy-five years later, without knowing that Venturi had done them first. Charles Wheatstone (1802–1875) did work on optics and color mixing, and also explored hearing. He invented a device he called a "microphone" that involved a metal plate over each ear, each connected to metal rods; he used this device to amplify sound. He also did experiments holding tuning forks to both ears at the same time, or separately, trying to work out how the sense of hearing works, which he published in 1827. Ernst Heinrich Weber (1795–1878), August Seebeck (1805–1849) and William Charles Wells also attempted to compare and contrast what would become known as binaural hearing with the principles of binocular integration generally.

Understanding of how the differences in sound signals between the two ears contribute to auditory processing so as to enable sound localization was considerably advanced after the invention of the stethophone by Somerville Scott Alison in 1859, who coined the term 'binaural'. Alison based the stethophone on the stethoscope, which had been invented by René Théophile Hyacinthe Laennec (1781–1826); the stethophone had two separate "pickups", allowing the user to hear and compare sounds derived from two discrete locations.

See also
Acoustic location
Animal echolocation
Binaural fusion
Coincidence detection in neurobiology
Human echolocation
Perceptual-based 3D sound localization
Psychoacoustics
Spatial hearing loss

References

External links
auditoryneuroscience.com: Collection of multimedia files and flash demonstrations related to spatial hearing
Collection of references about sound localization
Interaural Intensity Difference Processing in Auditory Midbrain Neurons: Effects of a Transient Early Inhibitory Input
Online learning center - Hearing and Listening
HearCom: Hearing in the Communication Society, an EU research project
Research on "Non-line-of-sight (NLOS) Localisation for Indoor Environments" by CMR at UNSW
An introduction to sound localization
Sound and Room
An introduction to acoustic holography
An introduction to acoustic beamforming

Acoustics Neuroethology Hearing Sound Spatial cognition
Sound localization
[ "Physics", "Biology" ]
6,812
[ "Behavior", "Ethology", "Neuroethology", "Classical mechanics", "Acoustics", "Space", "Spatial cognition", "Spacetime" ]
1,021,879
https://en.wikipedia.org/wiki/Virgin%20Galactic
Virgin Galactic Holdings, Inc. is a British-American spaceflight company founded by Richard Branson and the Virgin Group conglomerate which retains an 11.9% stake through Virgin Investments Limited. It is headquartered in California, and operates from New Mexico. The company develops commercial spacecraft and provides suborbital spaceflights to space tourists. Virgin Galactic's suborbital spacecraft are air launched from beneath a carrier airplane known as White Knight Two. Virgin Galactic's maiden spaceflight occurred in 2018 with its VSS Unity spaceship. Branson had originally hoped to see a maiden spaceflight by 2010, but the date was delayed, primarily due to the October 2014 crash of VSS Enterprise. The company did the early work on the satellite launch development of LauncherOne before this was hived off to a separate company, Virgin Orbit, in 2017. The company also has aspirations for suborbital transport, to provide rocket-powered, point-to-point air travel. The spin-off company, Virgin Orbit was shut down in May 2023. On 13 December 2018, VSS Unity achieved the project's first suborbital space flight, VSS Unity VP-03, with two pilots, reaching an altitude of , and officially entering outer space by U.S. standards. In February 2019, the project carried three people, including a passenger, on VSS Unity VF-01, with a member of the team floating within the cabin during a spaceflight that reached . On 11 July 2021, founder Richard Branson and three other employees rode on VSS Unity 22 as passengers, marking the first time a spaceflight company founder has travelled on his own ship into outer space. In February 2022, Virgin Galactic announced that it was opening ticket sales to the public. The price of a reservation was $450,000. In June 2023, Virgin Galactic launched its first commercial space tourism flight called Galactic 01. Galactic 07 in June 2024 was the final flight of Unity as the company shifted focus to its Delta class vehicles and a higher launch cadence. Structure and history Formation and early activities Virgin Galactic was founded in 2004 by British entrepreneur Sir Richard Branson, who had previously founded the Virgin Group and the Virgin Atlantic airline, and who had a long personal history of balloon and surface record-breaking activities. As part of Branson's promotion of the firm, he has added a variation of the Virgin Galactic livery to his personal business jet, the Dassault Falcon 900EX "Galactic girl" (G-GALX). The Spaceship Company The Spaceship Company (TSC) was founded by Richard Branson through Virgin Group (which owned 70%) and Burt Rutan through Scaled Composites (which owned 30%) to build commercial spaceships and launch aircraft for space travel. From the time of TSC's formation in 2005, the launch customer was Virgin Galactic, which contracted to purchase five SpaceShipTwos and two WhiteKnightTwos. Scaled Composites was contracted to develop and build the initial prototypes of WhiteKnightTwo and SpaceShipTwo, and then TSC began production of the follow-on vehicles beginning in 2008. In 2012, after Northrop Grumman acquired Scaled Composites, Virgin Galactic acquired the remaining 30% of The Spaceship Company. Investors After a claimed investment by Virgin Group of , in 2010 the sovereign wealth fund of Abu Dhabi, Aabar Investments group, acquired a 31.8% stake in Virgin Galactic for , receiving exclusive regional rights to launch tourism and scientific research space flights from the United Arab Emirates capital. 
In July 2011, Aabar invested a further to develop a program to launch small satellites into low Earth orbit, raising their equity share to 37.8%. Virgin announced in June 2014 that they were in talks with Google about the injection of capital to fund both development and operations. The New Mexico government has invested approximately $200m (£121m) in the Spaceport America facility, for which Virgin Galactic is the anchor tenant; other commercial space companies also use the site. On Monday 28 October 2019, Virgin Galactic listed into the New York Stock Exchange, trading under the ticker symbol 'SPCE', the first publicly traded space tourism company (i.e., company whose primary business is space tourism). The company raised $450 million through a SPAC merger listing, and company's market value after listing was more than $2.4 billion. At the time, the company claimed to have over 600 customer reservations representing approximately $80 million in total collected deposits and more than $120 million in "potential revenue". Retail interest After its listing, SPCE was a popular stock for many retail investors and was often mentioned on the subreddit r/wallstreetbets. Aims Early history and background The Ansari X Prize was a space competition in which the X Prize Foundation offered a US$10,000,000 prize for the first non-government organization to launch a reusable crewed spacecraft into space twice within two weeks. It was modeled after early 20th-century aviation prizes, and aimed to spur development of low-cost spaceflight. Created in May 1996 and initially called just the "X Prize", it was renamed the "Ansari X Prize" on 6 May 2004 following a multimillion-dollar donation from entrepreneurs Anousheh Ansari and Amir Ansari. The prize was won on 4 October 2004, the 47th anniversary of the Sputnik 1 launch, by the Tier One project designed by Burt Rutan and financed by Microsoft co-founder Paul Allen, using the experimental spaceplane SpaceShipOne. $10 million was awarded to the winner, and more than $100 million was invested in new technologies in pursuit of the prize. Overview of the flights to be developed The spacecraft initially called SpaceShipTwo was planned to achieve a suborbital journey with a short period of weightlessness. Carried to about 16 kilometers, or 52,000 ft, underneath a carrier aircraft, White Knight Two, after separation the vehicle was to continue to over 100 km (the Kármán line, a common definition of where "space" begins). The time from liftoff of the White Knight Two mothership carrying SpaceShipTwo until the touchdown of the spacecraft after the suborbital flight would be about 2.5 hours. The suborbital flight itself would be only a small fraction of that time, with weightlessness lasting approximately 6 minutes. Passengers were to be able to release themselves from their seats during these six minutes and float around the cabin. Development operations 2007 Scaled Composites fuel tank testing explosion In July 2007, three Scaled Composites employees were killed and three critically injured at the Mojave spaceport while testing components of the rocket motor for SpaceShipTwo. An explosion occurred during a cold fire test, which involved nitrous oxide flowing through fuel injectors. The procedure had been expected to be safe. Commencement of sub-space test flights Just a year later, in July 2008, Richard Branson predicted the maiden space voyage would take place within 18 months. 
In October 2009, Virgin Galactic announced that initial flights would take place from Spaceport America "within two years." Later that year, Scaled Composites announced that White Knight Two's first SpaceShipTwo captive flights would be in early 2010. Both aircraft did fly together in March 2010. The credibility of the earlier promises of launch dates by Virgin Galactic were brought into question in October 2014 by its chief executive, George T. Whitesides, when he told The Guardian: "We've changed dramatically as a company. When I joined in 2010 we were mostly a marketing organisation. Right now we can design, build, test, and fly a rocket motor all by ourselves and all in Mojave, which I don't think is done anywhere else on the planet". On 7 December 2009, SpaceShipTwo was unveiled at the Mojave Spaceport. Branson told the 300 people attending, each of whom had booked rides at $200,000 each, that flights would begin "in 2011." However, in April 2011, Branson announced further delays, saying "I hope 18 months from now, we'll be sitting in our spaceship and heading off into space." By February 2012, SpaceShipTwo had completed 15 test flights attached to White Knight Two and an additional 16 glide tests, the last of which took place in September 2011. A rocket-powered test flight of SpaceShipTwo took place on 29 April 2013, with an engine burn of 16 seconds duration. The brief flight began at an altitude of 47,000 feet and reached a maximum altitude of 55,000 feet. While the SS2 achieved a speed of Mach 1.2 (920 mph), this was less than half the 2,000 mph speed predicted by Richard Branson. SpaceShipTwo's second supersonic flight achieved a speed of 1,100 mph for 20 seconds; while this was an improvement, it fell far short of the 2,500 mph for 70 seconds required to carry six passengers into space. However, Branson still announced his spaceship would be capable of "launching 100 satellites every day." In addition to the suborbital passenger business, Virgin Galactic intended to market SpaceShipTwo for suborbital space science missions and market White Knight Two for "small satellite" launch services. It had planned to initiate RFPs for the satellite business in early 2010, but flights had not materialized as of 2014. On 14 May 2013, Richard Branson stated on Virgin Radio Dubai's Kris Fade Morning Show that he would be aboard the first public flight of SpaceShipTwo, which had again been rescheduled, this time to 25 December 2013. "Maybe I'll dress up as Father Christmas", Branson said. The third rocket-powered test flight of SpaceShipTwo took place on 10 January 2014 and successfully tested the spaceship's Reaction Control System (RCS) and the newly installed thermal protection coating on the vehicle's tail booms. Virgin Galactic CEO George Whitesides said "We are progressively closer to our target of starting commercial service in 2014". Interviewed by The Observer at the time of her 90th birthday in July 2014, Branson's mother, Eve, told reporter Elizabeth Day of her intention of going to space herself. Asked when that might be, she replied: "I think it's the end of the year", adding after a pause, "It's always 'the end of the year' ". In February 2014, cracks in WhiteKnightTwo, where the spars connect with the fuselage, were discovered during an inspection conducted after Virgin Galactic took possession of the aircraft from builder Scaled Composites. 
In September 2014, Richard Branson described the intended date for the first commercial flight as February or March 2015; by the time of this announcement, a new plastic-based fuel had yet to be ignited in-flight. By September 2014, the three test flights of the SS2 had only reached an altitude of around 71,000 ft, approximately 13 miles; in order to receive a Federal Aviation Administration license to carry passengers, the craft needs to complete test missions at full speed and 62-mile height. Following the announcement of further delays, UK newspaper The Sunday Times reported that Branson faced a backlash from those who had booked flights with Virgin Galactic, with the company having received $80 million in fares and deposits. Tom Bower, author of Branson: The Man behind the Mask, told the Sunday Times: "They spent 10 years trying to perfect one engine and failed. They are now trying to use a different engine and get into space in six months. It's just not feasible." BBC science editor David Shukman commented in October 2014, that "[Branson's] enthusiasm and determination [are] undoubted. But his most recent promises of launching the first passenger trip by the end of this year had already started to look unrealistic some months ago." VSS Enterprise crash At 10:51 PST 31 October 2014, the fourth rocket-powered test flight of the company's first SpaceShipTwo craft, VSS Enterprise, ended in disaster, as it broke apart in mid-air, with the debris falling into the Mojave desert in California, shortly after being released from the mothership. Initial reports attributed the loss to an unidentified "in-flight anomaly". The flight was the first test of SpaceShipTwo with new plastic-based fuel, replacing the original—a rubber-based solid fuel that had not met expectations. 39-year-old co-pilot Michael Alsbury was killed and 43-year-old pilot Peter Siebold was seriously injured. Investigation and media comment Initial investigations found that the engine and propellant tanks were intact, showing that there had not been a fuel explosion. Telemetry data and cockpit video showed that instead, the air braking system appeared to have deployed incorrectly and too early, for unknown reasons, and that the craft had violently broken apart in mid-air seconds later. U.S. National Transportation Safety Board Chairman Christopher Hart said on 2 November 2014 that investigators had determined SpaceShipTwo's tail system was supposed to have been released for deployment as the craft was traveling about 1.4 times the speed of sound; instead, the tail section began pivoting when the vehicle was flying at Mach 1. "I'm not stating that this is the cause of the mishap. We have months and months of investigation to determine what the cause was." Asked if pilot error was a possible factor, Hart said: "We are looking at all of these issues to determine what was the root cause of this mishap." He noted that it was also unclear how the tail mechanism began to rotate once it was unlocked, since that maneuver requires a separate pilot command that was never given, and whether the craft's position in the air and its speed somehow enabled the tail section to swing free on its own. In November 2014, Branson and Virgin Galactic came under criticism for their attempts to distance the company from the disaster by referring to the test pilots as Scaled Composites employees. 
Virgin Galactic's official statement on 31 October 2014 said: "Virgin Galactic's partner Scaled Composites conducted a powered test flight of SpaceShipTwo earlier today. [...] Local authorities have confirmed that one of the two Scaled Composites pilots died during the accident". This was in strong contrast to public communications previously released concerning the group's successful flights, which had routinely presented pilots, craft, and projects within the same organizational structures, as being "Virgin Galactic" flights or activities of "the Galactic team". The BBC's David Shukman commented that: "Even as details emerge of what went wrong, this is clearly a massive setback to a company hoping to pioneer a new industry of space tourism. Confidence is everything and this will not encourage the long list of celebrity and millionaire customers waiting for their first flight". At a hearing in Washington D.C. on 28 July 2015, and a press release on the same day the NTSB cited inadequate design safeguards, poor pilot training, lack of rigorous FAA oversight and a potentially anxious co-pilot without recent flight experience as important factors in the 2014 crash. They determined that the co-pilot, who died in the accident, prematurely unlocked a movable tail section some ten seconds after SpaceShip Two fired its rocket engine and was breaking the sound barrier, resulting in the craft's breaking apart. But the Board also found that the Scaled Composites unit of Northrop Grumman, which designed and flew the prototype space tourism vehicle, did not properly prepare for potential human slip-ups by providing a fail-safe system that could have guarded against such premature deployment. "A single-point human failure has to be anticipated," board member Robert Sumwalt said. Instead, Scaled Composites "put all their eggs in the basket of the pilots doing it correctly." NTSB Chairman Christopher Hart emphasized that consideration of human factors, which was not emphasized in the design, safety assessment, and operation of SpaceShipTwo's feather system, was critical to safe human spaceflight to mitigate the potential consequences of human error. "Manned commercial spaceflight is a new frontier, with many unknown risks and hazards. In such an environment, safety margins around known hazards must be rigorously established and, where possible, expanded. For commercial spaceflight to successfully mature, we must meticulously seek out and mitigate known hazards, as a prerequisite to identifying and mitigating new hazards." In its submission to the NTSB, Virgin Galactic reported that the second SS2, at the time nearing completion, had been modified with an automatic mechanical inhibit device to prevent locking or unlocking of the feather during safety-critical phases. An explicit warning about the dangers of premature unlocking had also been added to the checklist and operating handbook, and a formalized crew resource management (CRM) approach, already used by Virgin for its WK2 operations, was being adopted for SS2. However, despite CRM issues being cited as a likely contributing cause, Virgin confirmed that it would not modify the cockpit display system. While Virgin had been pursuing the development of a smallsat launch vehicle since 2012, the company began in 2015 to make the smallsat launch business a larger part of Virgin's core business plan, as the Virgin human spaceflight program had experienced multiple delays. This part of the business was spun off into a new company called Virgin Orbit in 2017. 
VSS Unity Following the crash of VSS Enterprise, the replacement SpaceShipTwo named VSS Unity was rolled out on 19 February 2016. Test flights were set to begin after ground tests completed in August 2016. VSS Unity completed its first flight (first free flight; captive carry flights had taken place since September 2016), a successful glide test, in December 2016. The glide lasted ten minutes. By January 2018, seven glide tests had been completed, and on 5 April 2018 it performed a powered test flight, the first since 2014. By July 2018, Unity had gone considerably higher and faster in its testing program than had its predecessor. On 13 December 2018, VSS Unity reached a height of 82.7 km (51.4 miles) above the Earth at speeds close to three times the speed of sound. The two pilots, Mark "Forger" Stucky and Frederick "CJ" Sturckow earned commercial astronaut wings from the US government for the accomplishment. Another flight in February 2019 carried third crew member (1 in the passenger cabin) for the first time. After transfer to Spaceport America in New Mexico in February 2020, a couple of 15 km altitude test flights were carried out. Due to a surge in the number of Covid-19 cases in New Mexico, Virgin Galactic had to postpone a key test flight of its spacecraft in November 2020, and then in December 2020, a computer connection issue prevented engine ignition. On 22 May 2021, VSS Unity flew its sixth powered test flight reaching an altitude of 89 km [55 mi]. This suborbital flight marked the first ever human space flight from New Mexico; it was piloted by CJ Sturkow (pilot-in-command) and Dave MacKay. The VSS Unity was carried to 44,000 feet by the jet powered launch aircraft Mothership Eve, where it was released to reach its suborbital altitude over New Mexico. A fully crewed test flight took place on 11 July 2021 with two pilots Dave Mackay and Michael Masucci and the four passengers were Richard Branson, Beth Moses, Colin Bennett and Sirisha Bandla. The flight was initially claimed to be successful but it was later revealed Unity briefly stepped outside the airspace that had been reserved for it and the FAA were not informed as required. The FAA grounded Virgin Galactic's space planes before allowing a resumption of flights after some changes to procedures including reserving a larger volume of airspace. On 14 October 2021, Virgin Galactic announced that an upgrade program for Unity and Eve would begin, delaying future commercial flights to mid 2022. This followed material analysis that required further analysis. Spaceship III The first Spaceship III, VSS Imagine, was rolled out on 30 March 2021 and it was indicated there is ground testing to do before glide test flights should commence not earlier than Summer 2021. List of launches Collaborations Potential collaboration with NASA In February 2007, Virgin announced that they had signed a memorandum of understanding with NASA to explore the potential for collaboration, but this produced only a relatively small contract in 2011 of up to $4.5 million for research flights. OneWeb satellite Internet access provider Virgin Group in January 2015 announced an investment into the OneWeb satellite constellation providing world Internet access service of WorldVu. Virgin Galactic would take a share of the launch contracts to launch the satellites into their orbits. The prospective launches were to use the LauncherOne system. In 2017 the LauncherOne business was spun off into Virgin Orbit, which ceased operations in 2023 following bankruptcy. 
Collaboration with Boom Technology Virgin Galactic and the Virgin Group collaborated with Boom Technology in order to create a new supersonic passenger transporter as a successor to the Concorde. This new supersonic plane would fly at Mach 2.2 (similar to Concorde) for a 3-hour trans-Atlantic flight (half of standard), projected to cost $2,500–10,000 per seat (half of Concorde) for a load of 45 passengers (the Concorde held 100). It was anticipated that with the accumulation of knowledge since the design of Concorde, the new plane would be safer and cheaper with better fuel economy, operating costs, and aerodynamics. Boom would collaborate with Virgin's The Spaceship Company for design, engineering, and flight-test support, and manufacturing. The initial model would be the Boom Technology XB-1 "Baby Boom" Supersonic Demonstrator 1/3-size prototype. It would be capable of trans-Pacific flight, LA-to-Sydney in 6.75 hours, traveling at . XB-1 would be equipped with General Electric J85 engines, Honeywell avionics, with composite structures fabricated by Blue Force using TenCate Advanced Composites carbon fibre products. Virgin Galactic had optioned 10 units. These options expired in 2020. Collaboration with Under Armour On 24 January 2019, Virgin Galactic announced a partnership with Under Armour for the fabrication of space suits for passengers and pilots of SpaceShipTwo. Under Armour would also create uniforms for Virgin Galactic employees working at Spaceport America. The full range known as the UA | VG (Under Armour | Virgin Galactic) built with UA's new Intelliknit fabric was revealed later, ahead of Richard Branson's inaugural commercial flight. This range included a base layer, the space suit and footwear. It was said that the base layer would enhance performance and blood flow during the high and zero G portions of flight and the liner of the spacesuit was made of new fabrics such as Tencel Luxe, SpinIt and Nomex, used for temperature control and moisture management. Personnel and passengers Key personnel David Mackay, former RAF test pilot, was named chief pilot for Virgin Galactic in 2011 and chief test-pilot. Steve Isakowitz was appointed as Virgin Galactic's president in June 2013. In October 2016, Mike Moses replaced Steve Isakowitz as president; Isakowitz moved to Aerospace Corp. to become president and CEO; Moses was promoted from VP Operations, and was once a NASA flight director and shuttle integration manager. Personnel Founder: Richard Branson Interim Chairman: Ray Mabus CEO: Michael Colglazier CFO: Doug Ahrens President – Safety: Mike Moses Pilot corps Chief pilot: Dave "Mac" Mackay Chief flight instructor: Mike "Sooch" Masucci Test pilot: Kelly Latimer Pilot: Rick "CJ" Sturckow Pilot: Nicola Pecile Chief space flight participant instructor: Beth Moses Space flight participant instructor: Colin Bennett Aircraft and spacecraft Motherships White Knight Two The White Knight Two is a special aeroplane built as the mothership and launch-platform for the spacecraft SpaceShipTwo and the uncrewed launch vehicle LauncherOne (LauncherOne never launched from underneath a White Knight Two). The mothership is a large fixed-wing aircraft with two hulls linked together by a central wing. Two aircraft were planned – VMS Eve and VMS Spirit of Steve Fossett. On 22 May 2021 Mothership Eve was used to carry VSS Unity to a launch altitude of 44,000 feet. Boeing 747 The LauncherOne system used a Boeing 747-400 aircraft, renamed Cosmic Girl, which was acquired from Virgin Atlantic. 
This was spun off into Virgin Orbit with the LauncherOne business in 2017. Generation II mothership Virgin Galactic plans to have generation 2 motherships ready for 2025, for the next-generation Delta-class spaceplanes. In July 2022, Virgin announced it would partner with Boeing's Aurora Flight Sciences to design and build the next generation of mothership. Boeing ended work on the contract in 2023 and has now filed suit against Virgin Galactic over unpaid bills according to a report in SpaceNews. Spaceships SpaceShip Two Richard Branson unveiled the rocket plane on 7 December 2009, announcing that, after testing, the plane would carry fare-paying passengers ticketed for short duration journeys just above the atmosphere. Virgin Group would initially launch from a base in New Mexico before extending operations around the globe. Built from lightweight carbon-composite materials and powered by a hybrid rocket motor, SS2 was based on the Ansari X Prize-winning SpaceShipOne concept – a rocket plane that was lifted initially by a carrier aircraft before independent launch. SS1 became the world's first private spaceship with a series of high-altitude flights in 2004. The programme was delayed after three Scaled Composites employees – Todd Ivens, Eric Blackwell and Charles May – were killed in an accident in Mojave on 26 July 2007, where the detonation of a tank of nitrous oxide destroyed a test stand. They had been observing the test from behind a chain-link fence that offered no protection from the shrapnel and debris when the tank exploded. Three other employees were injured in the blast and the company was fined for breaches of health and safety rules. The cause of the accident has never been made public. The successor to SS1, SS2 was twice as large, measuring 18 m (60 ft) in length; whereas SpaceShipOne could carry a single pilot and two passengers, SS2 was planned to have a crew of two and room for six passengers. By August 2013, 640 customers had signed up for a flight, initially at a ticket price of $200,000 per person, but raised to $250,000 in May 2013. Tickets were available from more than 140 "space agents" worldwide. SpaceShipTwo's projected performance SpaceShipTwo was designed to fly to a height of 110 km, going beyond the defined boundary of space (100 km) and lengthening the experience of weightlessness for its passengers. The spacecraft would reach a top speed of 4000 km/h (2485 mph). On 23 May 2014, Virgin Galactic announced that they had abandoned use of the Sierra Nevada Corporation (SNC) nitrous-oxide-rubber motor for SpaceShipTwo; on 24 July 2014, SNC confirmed that they had also abandoned use of this motor for their Dream Chaser space shuttle. Future testing was to see SpaceShipTwo powered by a polyamide grain powered motor. As of July 2021 the maximum height reached has been 89.9 km. In honor of the science-fiction series Star Trek, the first ship was named after the fictional starship Enterprise. To reenter the atmosphere, SpaceShipTwo folded its wings and then returned them to their original position for an unpowered descent flight back onto the runway. The craft had a very limited cross-range capability, and until other planned spaceports would be built worldwide, it had to land in the area where it started. Further spaceports were planned in Abu Dhabi and elsewhere, with the intention that the spaceline would have a worldwide availability and commodity in the future. 
There was a series of delays to the SS2 flight test vehicle becoming operational, amidst repeated assurances from Virgin Galactic marketing that operational flights were only a year or two out. The Wall Street Journal reported in November 2014 that there has been "tension between Mr. Branson's upbeat projections and the persistent hurdles that challenged the company's hundreds of technical experts." The company responded that "the company and its contractors 'have internal milestones, such as schedule estimates and goals, but the companies are driven by safety and the completion of the flight test program before moving into commercial service.' Virgin Galactic's schedules have always been consistent with internal schedules of its contractors and changes have 'never impacted flight safety'." SpaceShip III SpaceShip III is an evolved version of SpaceShipTwo. Delta-class spaceship Virgin Galactic plans to have its third generation spaceship, the Delta class, ready for testing in 2025 and commercial flight in 2026, along with the next generation of mothership. The Delta class is to be functionally the same as the SpaceShip III class, but it has been redesigned for higher production volumes. Fleet Commercial spaceflight locations In 2008 it was announced that test launches for its fleet of two White Knight Two mother ships and five or more SpaceShipTwo tourist suborbital spacecraft would take place from the Mojave Spaceport, where Scaled Composites was constructing the spacecraft. An international architectural competition for the design of Virgin Galactic's operating base, Spaceport America in New Mexico, saw the contract awarded to URS and Foster + Partners architects. In the same year Virgin Galactic announced that it would eventually operate in Europe out of Spaceport Sweden or even from RAF Lossiemouth in Scotland. While the original plan called for flight operations to transfer from the California desert to the new spaceport upon completion of the spaceport, at the time Virgin Galactic had yet to complete the development and test program of SpaceShipTwo. In October 2010, the 3,000 m (10,000 ft) runway at Spaceport America was opened, with SpaceShipTwo "VSS Enterprise" shipped to the site carried underneath the fuselage of Virgin Galactic's mothership Eve. Other operations and aspirations LauncherOne LauncherOne was an orbital launch vehicle that Virgin Galactic had begun working on by late 2008, with the technical specifications defined in some detail in late 2009. The LauncherOne configuration was proposed to be an expendable, two-stage, liquid-fueled rocket, envisaged to be air-launched from a White Knight Two. This would make it a similar configuration to that used by Orbital Sciences' Pegasus, or a smaller version of the StratoLaunch. LauncherOne was publicly announced in July 2012. It was intended to launch "smallsat" payloads of into Earth orbit. Several commercial customers initially contracted for launches, including GeoOptics, Skybox Imaging, Spaceflight Services, and Planetary Resources. Both Surrey Satellite Technology and Sierra Nevada Space Systems began developing satellite buses "optimized to the design of LauncherOne". In 2015, Virgin Galactic established a research, development, and manufacturing center for LauncherOne at the Long Beach Airport. The company reported in March 2015 that they were on schedule to begin test flights of LauncherOne with its Newton 3 engine by the end of 2016. On 25 June 2015, the company signed a contract with OneWeb Ltd. 
for 39 satellite launches for its satellite constellation with an option for an additional 100 launches. In March 2017, Virgin Galactic spun off its 200-member LauncherOne team into a new company called Virgin Orbit. Virgin Orbit went bankrupt in 2023 after a few space launches. Point to point suborbital travel In 2016 TSC, Virgin Galactic and the Virgin Group began a collaboration with Boom Technology to develop a supersonic trans-oceanic passenger jetliner. A mission concept review of a Mach 3 vehicle design was carried out. Competition Virgin Galactic is not the only corporation pursuing suborbital spacecraft for tourism. Blue Origin Blue Origin was developing suborbital flights with its New Shepard spacecraft at the same time as Virgin Galactic developed their vehicles. Although initially more secretive about its plans, Jeff Bezos' company ended up developing a spacecraft that takes off and lands vertically and could carry six (at sometime planned four) people to the edge of space. New Shepard first flew unmanned above the Karman line and landed in 2015 and the same vehicle was reflown unmanned to above the Karman line again in 2016. On 20 July 2021, Blue Origin flew their first crewed flight and first paying customer, Dutch teenager Oliver Daemen. Also on the flight were Bezos himself, his younger brother, and aviation legend Wally Funk. Commercial Crew Program On 16 September 2014, SpaceX and Boeing were awarded contracts as part of NASA's CCtCap program to develop their Crew Dragon and CST-100 Starliner spacecraft, respectively. Both are capsule designs to bring crew to orbit, a different commercial market than that addressed by Virgin Galactic. XCOR Now-defunct XCOR Aerospace had also worked on rocket-powered aircraft during many of the years that Virgin Galactic had; XCOR's Lynx suborbital vehicle was under development for more than a decade, and its predecessor, the XCOR EZ-Rocket experimental rocket powered airplane did actually take flight, but the company closed its doors in 2017. Notable accomplishments First launch of founder into space On 11 July 2021 Virgin Galactic became the first spaceflight company to independently launch a founder of the company into space, using the high US definition of space, having flown founder Richard Branson above the mark on flight Unity 22. This suborbital flight was accomplished using the twin-fuselage aircraft launch platform VMS Eve, coupled together with VSS Unity, enabling Branson, three other employee passengers and the two pilots to experience approximately three minutes of weightlessness above Earth's atmosphere. The entire flight lasted approximately one hour, taking off and landing at Spaceport America facility near Truth or Consequences, New Mexico. This flight had originally been scheduled to occur later in the summer; however, shortly after the announcement of competitor Blue Origin's plans to fly Amazon founder Jeff Bezos into space on 20 July 2021, the Virgin Galactic flight was rescheduled to occur on 11 July 2021. At the time Virgin Galactic had been certified by the FAA to provide commercial spaceflight travel, and its accounts reported that over 600 commercial passengers had already signed up. The August 2021 price was US$450,000 per person. First commercial flight Virgin Galactic (at some point) planned to begin commercial spaceflight service in 2022; and said it was in the final phases of returning its suborbital spaceplane to commercial service in Feb 2022. 
The first commercial flight took place on 29 June 2023 with three outside passengers (people not employed by Virgin Galactic). The 70-minute mission was purchased for the Italian Air Force and the Italian National Research Council. The company at the time had a backlog of 800 or so individuals who've bought tickets to ride on Unity. The approximate launch cadence at the time was about one launch a month. Pause of commercial flights Virgin Galactic ceased flights of its VSS Unity spaceplane in mid-2024 to focus on developing its next-generation Delta-class spacecraft. This strategic shift aims to enhance flight frequency and operational efficiency, with the Delta-class vehicles expected to commence commercial service by 2026. See also Virgin Orbit Dennis Tito List of crewed spacecraft New Mexico Spaceport Authority NewSpace X Prize Foundation Billionaire space race Notes References External links The Spaceship Company Virgin Galactic's SpaceShipTwo Mothership Makes Maiden Flight Virgin Galactic:Let the Journey Begin (Video) Branson And Rutan Launch New Spaceship Manufacturing Company U.S. Okays Virgin Galactic Spaceship Plans New Mexico Spaceport Bills Signed Lloyds Eyes Covering Virgin Spaceflights Virgin Galactic Rolls Out Mothership "Eve" Episode 38: 23 January 2011: Want to be an Astronaut? Book a ticket online Failure to launch? Spaceport America takes a couple of hits Companies listed on the New York Stock Exchange 2017 initial public offerings Aerospace companies of the United States Private spaceflight companies Commercial spaceflight Human spaceflight programs Space tourism Technology companies based in Greater Los Angeles Companies based in Orange County, California Airlines established in 2004 Technology companies established in 2004 2004 establishments in California Aabar Investments G 2019 mergers and acquisitions Special-purpose acquisition companies American companies established in 2004
Virgin Galactic
[ "Engineering" ]
7,447
[ "Space programs", "Human spaceflight programs" ]
1,021,897
https://en.wikipedia.org/wiki/CERGA%20Observatory
The CERGA Observatory (obs. code: 010) was a scientific department and astronomical station of the Côte d'Azur Observatory in southern France, where several asteroids were discovered during 1984–1993. Description CERGA included 28 researchers and as many engineers and technicians, located at the Observatory sites of Nice, Grasse and Calern (Caussols). The scientific activities covered fields as diverse as fundamental astronomy, celestial mechanics, and space geodesy. CERGA was in charge of several observing facilities of the Lunar Laser Ranging experiment, for example the lunar-laser ranging telescope and the two satellite laser stations. By nature, the scientific activity involved the acquisition and processing of data, dedicated instrumental development and a close relationship with the more theoretical aspects of dynamics and observation modelling. CERGA was dissolved in 2004 when the parent Côte d'Azur Observatory re-organized. The main-belt asteroid 2252 CERGA was named for the observatory, where this asteroid was discovered by Kōichirō Tomita. List of discovered minor planets The Minor Planet Center directly credits the CERGA observatory with the discovery of 21 asteroids made during 1984–1993. The discoveries were made using the observatory's 0.9-meter Schmidt telescope. See also OCA–DLR Asteroid Survey References External links CERGA English home page http://www.oca.eu/ CERGA observatory, www.beyond.fr Astronomy institutes and departments Asteroid surveys Year of establishment missing 2004 disestablishments in France Minor-planet discovering observatories French UMR
CERGA Observatory
[ "Astronomy" ]
319
[ "Astronomy organizations", "Astronomy institutes and departments" ]
4,190,476
https://en.wikipedia.org/wiki/Phytogeography
Phytogeography (from Greek φυτόν, phytón = "plant" and γεωγραφία, geographía = "geography", which also means distribution) or botanical geography is the branch of biogeography that is concerned with the geographic distribution of plant species and their influence on the earth's surface. Phytogeography is concerned with all aspects of plant distribution, from the controls on the distribution of individual species ranges (at both large and small scales, see species distribution) to the factors that govern the composition of entire communities and floras. Geobotany, by contrast, focuses on the geographic space's influence on plants. Fields Phytogeography is part of a more general science known as biogeography. Phytogeographers are concerned with patterns and processes in plant distribution. Most of the major questions and kinds of approaches taken to answer such questions are held in common between phyto- and zoogeographers. Phytogeography in the wider sense (or geobotany, in German literature) encompasses four fields, according to the aspect of focus (environment, flora (taxa), vegetation (plant community) and origin, respectively): plant ecology (or mesology; however, the physiognomic-ecological approach to vegetation and biome study is also generally associated with this field); plant geography (or phytogeography in the strict sense, chorology, floristics); plant sociology (or phytosociology, synecology; however, this field does not disregard flora study, as its approach to studying vegetation relies upon a fundamental unit, the plant association, which is defined upon flora). historical plant geography (or paleobotany, paleogeobotany) Phytogeography is often divided into two main branches: ecological phytogeography and historical phytogeography. The former investigates the role of current-day biotic and abiotic interactions in influencing plant distributions; the latter is concerned with historical reconstruction of the origin, dispersal, and extinction of taxa. Overview The basic data elements of phytogeography are occurrence records (presence or absence of a species) with operational geographic units such as political units or geographical coordinates. These data are often used to construct phytogeographic provinces (floristic provinces) and elements. The questions and approaches in phytogeography are largely shared with zoogeography, except that zoogeography is concerned with animal distribution rather than plant distribution. The term phytogeography itself suggests a broad meaning. How the term is actually applied by practicing scientists is apparent in the way periodicals use the term. The American Journal of Botany, a monthly primary research journal, frequently publishes a section titled "Systematics, Phytogeography, and Evolution." Topics covered in the American Journal of Botany's "Systematics and Phytogeography" section include phylogeography, distribution of genetic variation, historical biogeography, and general plant species distribution patterns. Biodiversity patterns are not heavily covered. A flora is the group of all plant species in a specific period of time or area, in which each species is independent in abundance and relationships to the other species. The group or the flora can be assembled in accordance with floral elements, which are based on common features. A flora element can be a genetic element, in which the group of species shares similar genetic information, i.e. 
a common evolutionary origin; a migration element has a common route of access into a habitat; a historical element groups species that share certain past events; and an ecological element is grouped based on similar environmental factors. A population is the collection of all interacting individuals of a given species in an area. An area is the entire location where a species, an element or an entire flora can occur. Aerography studies the description of such areas; chorology studies their development. The local distribution within the area as a whole, as that of a swamp shrub, is the topography of that area. Areas are an important factor in forming an image of how species interactions result in their geography. The nature of an area's margins, their continuity, and their general shape and size relative to other areas make the study of areas crucial in identifying these types of information. For example, a relict area is an area surviving from an earlier and more exclusive occurrence. Mutually exclusive plants are called vicarious (areas containing such plants are also called vicarious). The earth's surface is divided into floristic regions, each region associated with a distinctive flora. History Phytogeography has a long history. One of the subject's earliest proponents was Prussian naturalist Alexander von Humboldt, who is often referred to as the "father of phytogeography". Von Humboldt advocated a quantitative approach to phytogeography that has characterized modern plant geography. Gross patterns of the distribution of plants became apparent early on in the study of plant geography. For example, Alfred Russel Wallace, co-discoverer of the principle of natural selection, discussed the latitudinal gradients in species diversity, a pattern observed in other organisms as well. Much research effort in plant geography has since then been devoted to understanding this pattern and describing it in more detail. In 1890, the United States Congress passed an act that appropriated funds to send expeditions to discover the geographic distributions of plants (and animals) in the United States. The first of these was The Death Valley Expedition, including Frederick Vernon Coville, Frederick Funston, Clinton Hart Merriam, and others. Research in plant geography has also been directed to understanding the patterns of adaptation of species to the environment. This is done chiefly by describing geographical patterns of trait/environment relationships. These patterns, termed ecogeographical rules when applied to plants, represent another area of phytogeography. Floristic regions Floristics is the study of the flora of some territory or area. Traditional phytogeography concerns itself largely with floristics and floristic classification. China has been a focus for botanists because of its rich biota, as it holds the record for the earliest known angiosperm megafossil. See also Biogeography Botany Geobotanical prospecting indicator value Species distribution Zoogeography Association (ecology) References Bibliography External links Biogeography
Phytogeography
[ "Biology" ]
1,284
[ "Biogeography" ]
4,190,859
https://en.wikipedia.org/wiki/Dimethyl%20dicarbonate
Dimethyl dicarbonate (DMDC) is a colorless liquid with a pungent odor at high concentration at room temperature. It is primarily used as a beverage preservative, processing aid, or sterilant (INS No. 242), being highly active against typical beverage-spoiling microorganisms such as yeasts, bacteria, and moulds. Usage Dimethyl dicarbonate is used to stabilize beverages by preventing microbial spoilage. It can be used in various non-alcoholic as well as alcoholic drinks such as wine, cider, beer-mix beverages or hard seltzers. Beverage-spoiling microbes are killed by methoxycarbonylation of proteins. It acts by inhibiting enzymes involved in microbial metabolism, e.g. acetate kinase and L-glutamic acid decarboxylase. It has also been proposed that DMDC inhibits the enzymes alcohol dehydrogenase and glyceraldehyde 3-phosphate dehydrogenase by causing the methoxycarbonylation of their histidine components. In wine, it is often used to replace potassium sorbate, as it inactivates wine spoilage yeasts such as Brettanomyces. Once it has been added to beverages, the efficacy of the chemical is provided by the following reactions: DMDC + water → methanol + carbon dioxide DMDC + ethanol → ethyl methyl carbonate DMDC + ammonia → methyl carbamate DMDC + amino acid → methoxycarbonylated derivative The application of DMDC is particularly useful when wine needs to be sterilized but cannot be sterile filtered, pasteurized, or sulfured. DMDC is also used to stabilize non-alcoholic beverages such as carbonated or non-carbonated juice beverages, isotonic sports beverages, iced teas and flavored waters. DMDC is produced by Lanxess under the trade name Velcorin®. DMDC is added before the filling of the beverage. It then breaks down into small amounts of methanol and carbon dioxide, which are both natural constituents of fruit and vegetable juices. The EU Scientific Committee on Food, the FDA in the United States and the JECFA of the WHO have confirmed its safe use in beverages. The FDA approved its use in wines in 1988, with the maximum permitted level set at 200 mg/L, and only if there were fewer than 500 yeast cells/mL at the time of dosage. It is also approved in the EU, where it is listed under E number E242, as well as in Australia and New Zealand. See also Dimethyl carbonate References External links Dimethyl dicarbonate at EPA SRS Dimethyl dicarbonate technical data at FAO Dimethyl dicarbonate and microbiological stability Winemaking Preservatives Methyl esters Methylating agents Dicarbonates E-number additives
Dimethyl dicarbonate
[ "Chemistry" ]
602
[ "Methylation", "Methylating agents" ]
4,191,602
https://en.wikipedia.org/wiki/PNP%20agar
PNP Agar is an agar medium used in microbiology to identify Staphylococcus species that have phosphatase activity. The medium changes color when p-nitrophenylphosphate disodium (PNP) is dephosphorylated. PNP agar is composed of Mueller–Hinton agar buffered to pH 5.6 to 5.8, with the addition of 0.495 mg/mL PNP. References Microbiological media Cell culture media
PNP agar
[ "Biology" ]
107
[ "Microbiological media", "Microbiology equipment" ]
4,191,649
https://en.wikipedia.org/wiki/Pall%20Corporation
Pall Corporation, headquartered in Port Washington, New York and a wholly owned subsidiary of Danaher Corporation since 2015, is a global supplier of filtration, separations and purification products. Total revenues for fiscal year 2014 were $2.8 billion, with $103 million spent on R&D. Pall Corporation's business is split into two broad groups: Life Sciences (c.51%) and Industrial (c.49%). These business groups provide fluid management products and systems to customers in biotechnology, pharmaceutical, transfusion medicine, energy, electronics, municipal and industrial water purification, aerospace, transportation and broad industrial markets. The company was founded by David B. Pall in 1946 as Micro Metallic Corporation. History The company was founded in 1946 as Micro Metallic Corporation. In 1953, Pall purchased an industrial building at 30 Sea Cliff Ave, Glen Cove, NY (occupied until 1999). In 1958, Pall Corporation constructed a building at 36 Sea Cliff Ave (occupied until 1971, when Pall Corporation sold the building to August Thomsen). The company was renamed Pall Corp in 1957. In 1958 Pall began to develop filters for use in aircraft hydraulics, applied to the landing gears of American Airlines Boeing 707s. Then, Pall developed filters to purify jet fuel. Through the 1960s, the business expanded, with sales of $6.7 million in 1960. Pall Europe Limited was formed in 1966. Pall Cortland was established in 1961, purchased from Trinity Equipment Company. In the 1970s, Pall became a leader in fine filtration. Sales reached $88 million in 1978, and the company made major contributions in medical applications. Pall played a major role in the cleanup of the 1979 Three Mile Island nuclear accident. The company continued to grow in the 1980s and 1990s, adding applications and products. In the mid-1980s, Pall contributed to the construction of the Eurotunnel under the English Channel, providing solutions for the hydraulic operations needed to bore through the channel bedrock. In 1988, the company began selling a filter for blood transfusions that reduced leukocyte levels below all other existing filters. Centrisep air cleaners were integrated into U.S. Army and Royal Air Force (UK) helicopters to keep sand and dust out of engines during Operation Desert Storm in 1991. In 1997 the company acquired Gelman Sciences, and in 1998, Pall acquired the German company Rochem. In response to an article in Forbes magazine about dioxane in Michigan, Farsad Fotouhi, VP of the Life Sciences division, responded "Pall is in full compliance with the Consent Judgment it entered with the Michigan Department of Environmental Quality (MDEQ), which serves as the legal framework for the cleanup." Later in 2013, Scio Township Supervisor Clark said he had heard from Fotouhi that a staff of about 20 people would remain on the site. On May 31, 2015, Danaher Corporation announced it would acquire Pall Corporation. The transaction closed on August 31, 2015, with Danaher paying $127.20 per share, or about $13.8 billion. In September 2022, it was announced that Pall Life Sciences would be merging with Cytiva to create a new Biotechnology Group within Danaher. Details Today, the company is divided into two separate businesses: Pall Life Sciences and Pall Industries. The Scientific & Lab Services group employed 175 people worldwide at 29 locations in 2011. The R&D group has 12 sites, with seven in the United States. The main industrial technical center is at Cortland, NY. 
Pall has plants in New Port Richey and DeLand, Florida; Cortland, New York; Timonium, Maryland; Fajardo, Puerto Rico; Ilfracombe; Portsmouth; and other locations around the world. In 2013 it announced plans to close its plants in Ann Arbor and Fort Myers, Florida. Achievements 1990: Dr. Pall is awarded the National Medal of Technology for patenting and commercializing over 100 filtration and other fluid clarification products beneficial to society and for building Pall Corporation into a global company. 2008: Dr. Pall is posthumously inducted into the National Inventors Hall of Fame for his invention of the leukoreduction filter. 2009: Pall Corporation is named one of the greenest companies in America in Newsweek's September 28 issue. The company was ranked second in the industrial goods sector and 47th among America's largest companies. 2011: Pall Corporation is awarded the Engineering Materials Achievement Award (EMAA) by ASM International. The company was recognized for its porous iron aluminide technology. 2011: Pall Corporation is named a top green company in Newsweek's third annual Green Rankings. The company was ranked fifth in the capital goods sector and 69th on the U.S. 500 list. References External links Pall Corporation official site 50 years of Pall Companies formerly listed on the New York Stock Exchange Industrial machine manufacturers Manufacturing companies established in 1946 American brands 2015 mergers and acquisitions Pall Corp. Manufacturing companies based in New York (state) 1946 establishments in New York (state) Danaher subsidiaries
Pall Corporation
[ "Engineering" ]
1,070
[ "Industrial machine manufacturers", "Industrial machinery" ]
4,191,785
https://en.wikipedia.org/wiki/Aigo
Beijing Huaqi Information Digital Technology Co., Ltd, trading as Aigo (stylized as aigo), is a Chinese consumer electronics company. It is headquartered in the Ideal Plaza in Haidian District, Beijing. History Beijing Huaqi Information Digital Technology Co Ltd (北京华旗资讯科技发展有限公司) is a consumer electronics manufacturer headquartered in Beijing. It was founded in 1993 by Féng Jūn, who is the current president. The company initially produced keyboards. aigo may be participating in a trend that sees Chinese nationals preferring to purchase locally produced durable goods. Products Aigo's products include MIDs, digital media players, computer cases, digital cameras, CPU cooling fans, computer peripherals, monitors and computer mice. Subsidiaries Aigo has 27 subsidiaries and several R&D facilities. The following is an incomplete list of aigo's subsidiaries. Aigo Music Established in 1993 and located in Beijing, aigo Music operates a digital music service much like iTunes. The first of its kind in China, it is, as of 2009, the biggest portal for legal downloading of music in the country. Strategic partnerships with Warner Music, EMI and Sony allow a wide range of music to be offered at 0.99 yuan per song. Beijing aifly Education and Technology Co Ltd aigo set up this English as a Second Language brand with help from Crazy English founder Li Yang. Beijing aigo Digital Animation Institution An aigo subsidiary that specializes in 3D animated films. Huaqi Information Technology (Singapore) Pte Ltd Set up in October 2003, it operates two official aigo outlet stores in Singapore. Shenzhen aigo R&D Co Ltd Established in 2006, this Shenzhen-based research and development facility focuses on the development of mobile multimedia software. Sponsorships aigo is a sponsor of a number of sporting events, the majority involving automobile racing. Motorsport aigo was an official partner of the Vodafone McLaren Mercedes Formula One team. As of 2008, aigo sponsored Chinese driver "Frankie" Cheng Congfu in A1GP racing. aigo was an official partner of the 2007 Race of Champions, a racing competition that uses a variety of different vehicles. aigo was one of the sponsors of Bryan Herta Autosport during the Indianapolis 500. Football aigo, as of 2009, had a global strategic cooperation effort with Manchester United. Notes References External links Aigo Aigo aigo tagged posts @ gizmodo.com aigo tagged posts @ engadget.com Electronics companies of China Chinese brands Consumer electronics brands Companies established in 1993 Computer hardware companies Computer storage companies Computer systems companies Manufacturing companies based in Beijing Privately held companies of China 1993 establishments in China 1993 in Beijing
Aigo
[ "Technology" ]
550
[ "Computer hardware companies", "Computer systems companies", "Computers", "Computer systems" ]
4,192,379
https://en.wikipedia.org/wiki/Institute%20for%20Transuranium%20Elements
The Institute for Transuranium Elements (ITU) is a nuclear research institute in Karlsruhe, Germany. The ITU is one of the seven institutes of the Joint Research Centre, a Directorate-General of the European Commission. The ITU has about 300 staff. Its specialists have access to an extensive range of advanced facilities, many unavailable elsewhere in Europe. Mission statement The Directorate-General Joint Research Centre is the European Commission's science and knowledge service. Its mission is to support EU policies with independent evidence throughout the whole policy cycle. Its work has a direct impact on the lives of citizens by contributing, through its research outcomes, to a healthy and safe environment, secure energy supplies, sustainable mobility and consumer health and safety. The JRC hosts specialist laboratories and unique research facilities and is home to thousands of scientists working to support EU policy. The JRC has ten Directorates and is located across five EU Member States (Belgium, Germany, Italy, the Netherlands and Spain). The directorate responsible for this work is Directorate G – Nuclear Safety and Security, within which the JRC's nuclear work programme, funded by the EURATOM Research and Training Programme, is carried out. It contributes to the scientific foundation for the protection of the European citizen against risks associated with the handling and storage of highly radioactive material, and provides scientific and technical support for the conception, development, implementation, and monitoring of community policies related to nuclear energy. Research and policy support activities of Directorate G contribute towards achieving effective safety and safeguards systems for the nuclear fuel cycle and towards enhancing nuclear security, thereby contributing to the goal of low-carbon energy production. The research programmes are carried out at the JRC sites in Germany (Karlsruhe), Belgium (Geel), the Netherlands (Petten) and Italy (Ispra) and consist of research, knowledge management and training activities on nuclear safety and security. They are performed in collaboration with, and/or in support of, the EU Member States and relevant international organizations. Today Directorate G is one of the leading nuclear research establishments for nuclear science and technology and a unique provider of nuclear data measurements. Typical research and policy support activities are experimental and modelling studies covering nuclear reactor and fuel cycle safety, including current and innovative nuclear energy systems. Fundamental properties, irradiation effects and behaviour under normal and accident conditions of nuclear fuels and structural materials are studied. The activities also cover studies of the structural integrity and functioning of nuclear components, emergency preparedness and radioactivity environmental monitoring, nuclear waste management and decommissioning, as well as the study of non-energy technological and medical applications of radionuclides. A dedicated functional entity is devoted to the management and dissemination of knowledge and to facilitating open access to JRC nuclear facilities, including training and education. Security Normally, entry for visitors to the ITU is by prior invitation only, for security reasons; a person wishing to enter the site as a visitor is required to hand over their passport before passing through a combined metal and radiation detector. 
The details of the devices used to test visitors for radioactive and nuclear materials are not public knowledge (for security reasons). Also, on entry visitors are subject to a search by a security officer. All bags are examined using an x-ray machine similar to that used in an airport. Activities The work of the ITU can be divided into a series of smaller activities. Alpha-immunotherapy A cancer treatment involving the production of antibodies bearing alpha particle-emitting radioisotopes which bind to cancer cells. The idea is to create a "magic bullet" which will seek and destroy cancer wherever it is hidden within the body. This treatment has reached clinical trials. Bismuth-213 is one of the isotopes which has been used: this is made by the alpha decay of actinium-225, which in turn is made by the irradiation of radium-226 with a cyclotron. Basic actinide research Work has included the superconductivity and magnetic properties of actinides such as plutonium and americium. Safety of nuclear fuel The ITU is involved in a range of different areas of research on nuclear safety. Accidents The ITU's work includes the study of fuel behaviour during out-of-control nuclear-reactor conditions. In the 2004 annual report from the ITU, some results of the post-irradiation examination (PIE) of PHEBUS (FPT2) fuel are reported. PHEBUS is a series of experiments in which fuel was overheated and damaged under very strictly controlled conditions, in order to obtain data on what would happen in a serious nuclear power reactor accident. Waste forms The long-term performance of waste, and of the systems designed to isolate it from "man and his environment", is studied here. For instance, the corrosion of uranium dioxide is studied at the ITU. Spent fuel characterisation The ITU performs post-irradiation examination of spent nuclear fuel. Partitioning and transmutation Partitioning is the separation of nuclear wastes into different elements; see nuclear reprocessing for more details. The ITU is involved in both aqueous and pyro separation methods, and has published papers on the DIAMEX process. See nuclear transmutation for details. Measurement of radioactivity in the environment The ITU is funded by the European Union, and theoretically has no "pro-" or "anti-nuclear" policy. The ITU is able to examine environmental samples in order to determine whether dangerous levels of radioactive contamination are present. For instance, hot particles found on a beach in Scotland near Dounreay were examined at the ITU (page 375 of http://www-pub.iaea.org/MTCD/Publications/PDF/Pub1169_web.pdf). Much of this work is aimed at the measurement of very low levels of radioactivity; the ITU's analytical service uses inductively coupled plasma mass spectrometry to measure most radioactive isotopes with greater sensitivity than is possible with direct radiometric measurements. Nuclear security and safeguards The ITU has a service which assists police and other law enforcement organisations by examining any seized radioactive or nuclear material. Materials are analysed to discover what they are, where they come from, and what their possible intended use might have been. Karlsruhe Nuclide Chart The ITU manages the various versions and editions of the Karlsruhe Nuclide Chart. References External links Institute for Transuranium Elements Karlsruhe Nuclide Chart Nuclear medicine organizations International research institutes Research institutes in Germany Nuclear reprocessing sites Nuclear technology in Germany Nuclear research institutes
Institute for Transuranium Elements
[ "Engineering" ]
1,326
[ "Nuclear research institutes", "Nuclear medicine organizations", "Nuclear organizations" ]
4,192,777
https://en.wikipedia.org/wiki/History%20of%20the%20World%20Wide%20Web
The World Wide Web ("WWW", "W3" or simply "the Web") is a global information medium that users can access via computers connected to the Internet. The term is often mistakenly used as a synonym for the Internet, but the Web is a service that operates over the Internet, just as email and Usenet do. The history of the Internet and the history of hypertext date back significantly further than that of the World Wide Web. Tim Berners-Lee invented the World Wide Web while working at CERN in 1989. He proposed a "universal linked information system" using several concepts and technologies, the most fundamental of which was the connections that existed between information. He developed the first web server, the first web browser, and a document formatting language called Hypertext Markup Language (HTML). After publishing the markup language in 1991, and releasing the browser source code for public use in 1993, many other web browsers were soon developed, with Marc Andreessen's Mosaic (later Netscape Navigator) being particularly easy to use and install, and often credited with sparking the Internet boom of the 1990s. It was a graphical browser which ran on several popular office and home computers, bringing multimedia content to non-technical users by including images and text on the same page. Websites for use by the general public began to emerge in 1993–94. This spurred competition in server and browser software, highlighted in the browser wars, which were initially dominated by Netscape Navigator and Internet Explorer. Following the complete removal of commercial restrictions on Internet use by 1995, commercialization of the Web amidst macroeconomic factors led to the dot-com boom and bust in the late 1990s and early 2000s. The features of HTML evolved over time, leading to HTML version 2 in 1995, HTML3 and HTML4 in 1997, and HTML5 in 2014. The language was extended with advanced formatting in Cascading Style Sheets (CSS) and with programming capability by JavaScript. AJAX programming delivered dynamic content to users, which sparked a new era in Web design, styled Web 2.0. The use of social media, becoming commonplace in the 2010s, allowed users to compose multimedia content without programming skills, making the Web ubiquitous in everyday life. Background The underlying concept of hypertext as a user interface paradigm originated in projects in the 1960s, from research such as the Hypertext Editing System (HES) by Andries van Dam at Brown University, IBM Generalized Markup Language, Ted Nelson's Project Xanadu, and Douglas Engelbart's oN-Line System (NLS). Both Nelson and Engelbart were in turn inspired by Vannevar Bush's microfilm-based memex, which was described in the 1945 essay "As We May Think". Other precursors were FRESS and Intermedia. Paul Otlet's project Mundaneum has also been named as an early 20th-century precursor of the Web. In 1980, Tim Berners-Lee, at the European Organization for Nuclear Research (CERN) in Switzerland, built ENQUIRE as a personal database of people and software models, but also as a way to experiment with hypertext; each new page of information in ENQUIRE had to be linked to another page. When Berners-Lee built ENQUIRE, the ideas developed by Bush, Engelbart, and Nelson did not influence his work, since he was not aware of them. However, as Berners-Lee began to refine his ideas, the work of these predecessors would later help to confirm the legitimacy of his concept. 
During the 1980s, many packet-switched data networks emerged based on various communication protocols (see Protocol Wars). One of these standards was the Internet protocol suite, which is often referred to as TCP/IP. As the Internet grew through the 1980s, many people realized the increasing need to be able to find and organize files and use information. By 1985, the Domain Name System (upon which the Uniform Resource Locator is built) came into being. Many small, self-contained hypertext systems were created, such as Apple Computer's HyperCard (1987). Berners-Lee's contract in 1980 was from June to December, but in 1984 he returned to CERN in a permanent role, and considered its problems of information management: physicists from around the world needed to share data, yet they lacked common machines and any shared presentation software. Shortly after Berners-Lee's return to CERN, TCP/IP protocols were installed on Unix machines at the institution, turning it into the largest Internet site in Europe. In 1988, the first direct IP connection between Europe and North America was established and Berners-Lee began to openly discuss the possibility of a web-like system at CERN. He was inspired by a book, Enquire Within upon Everything. Many online services existed before the creation of the World Wide Web, such as for example CompuServe, Usenet and bulletin board systems. 1989–1991: Origins CERN While working at CERN, Tim Berners-Lee became frustrated with the inefficiencies and difficulties posed by finding information stored on different computers. On 12 March 1989, he submitted a memorandum, titled "Information Management: A Proposal", to the management at CERN. The proposal used the term "web" and was based on "a large hypertext database with typed links". It described a system called "Mesh" that referenced ENQUIRE, the database and software project he had built in 1980, with a more elaborate information management system based on links embedded as text: "Imagine, then, the references in this document all being associated with the network address of the thing to which they referred, so that while reading this document, you could skip to them with a click of the mouse." Such a system, he explained, could be referred to using one of the existing meanings of the word hypertext, a term that he says was coined in the 1950s. Berners-Lee notes the possibility of multimedia documents that include graphics, speech and video, which he terms hypermedia. Although the proposal attracted little interest, Berners-Lee was encouraged by his manager, Mike Sendall, to begin implementing his system on a newly acquired NeXT workstation. He considered several names, including Information Mesh, The Information Mine or Mine of Information, but settled on World Wide Web. Berners-Lee found an enthusiastic supporter in his colleague and fellow hypertext enthusiast Robert Cailliau who began to promote the proposed system throughout CERN. Berners-Lee and Cailliau pitched Berners-Lee's ideas to the European Conference on Hypertext Technology in September 1990, but found no vendors who could appreciate his vision. Berners-Lee's breakthrough was to marry hypertext to the Internet. In his book Weaving The Web, he explains that he had repeatedly suggested to members of both technical communities that a marriage between the two technologies was possible. But, when no one took up his invitation, he finally assumed the project himself. 
In the process, he developed three essential technologies: a system of globally unique identifiers for resources on the Web and elsewhere, the universal document identifier (UDI), later known as uniform resource locator (URL); the publishing language Hypertext Markup Language (HTML); the Hypertext Transfer Protocol (HTTP). With help from Cailliau he published a more formal proposal on 12 November 1990 to build a "hypertext project" called World Wide Web (abbreviated "W3") as a "web" of "hypertext documents" to be viewed by "browsers" using a client–server architecture. The proposal was modelled after the Standard Generalized Markup Language (SGML) reader Dynatext by Electronic Book Technology, a spin-off from the Institute for Research in Information and Scholarship at Brown University. The Dynatext system, licensed by CERN, was considered too expensive and had an inappropriate licensing policy for use in the general high energy physics community, namely a fee for each document and each document alteration. At this point HTML and HTTP had already been in development for about two months and the first web server was about a month from completing its first successful test. Berners-Lee's proposal estimated that a read-only Web would be developed within three months and that it would take six months to achieve "the creation of new links and new material by readers, [so that] authorship becomes universal" as well as "the automatic notification of a reader when new material of interest to him/her has become available". By December 1990, Berners-Lee and his work team had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP), the HyperText Markup Language (HTML), the first web browser (named WorldWideWeb, which was also a web editor), the first web server (later known as CERN httpd) and the first web site (https://info.cern.ch/) containing the first web pages that described the project itself was published on 20 December 1990. The browser could access Usenet newsgroups and FTP files as well. A NeXT Computer was used by Berners-Lee as the web server and also to write the web browser. Working with Berners-Lee at CERN, Nicola Pellow developed the first cross-platform web browser, the Line Mode Browser. 1991–1994: The Web goes public, early growth Initial launch In January 1991, the first web servers outside CERN were switched on. On 6 August 1991, Berners-Lee published a short summary of the World Wide Web project on the newsgroup alt.hypertext, inviting collaborators. Paul Kunz from the Stanford Linear Accelerator Center (SLAC) visited CERN in September 1991, and was captivated by the Web. He brought the NeXT software back to SLAC, where librarian Louise Addis adapted it for the VM/CMS operating system on the IBM mainframe as a way to host the SPIRES-HEP database and display SLAC's catalog of online documents. This was the first web server outside of Europe and the first in North America. The World Wide Web had several differences from other hypertext systems available at the time. The Web required only unidirectional links rather than bidirectional ones, making it possible for someone to link to another resource without action by the owner of that resource. It also significantly reduced the difficulty of implementing web servers and browsers (in comparison to earlier systems), but in turn, presented the chronic problem of link rot. Early browsers The WorldWideWeb browser only ran on NeXTSTEP operating system. 
This shortcoming was discussed in January 1992, and alleviated in April 1992 by the release of Erwise, an application developed at the Helsinki University of Technology, and in May by ViolaWWW, created by Pei-Yuan Wei, which included advanced features such as embedded graphics, scripting, and animation. ViolaWWW was originally an application for HyperCard. Both programs ran on the X Window System for Unix. In 1992, the first tests between browsers on different platforms were concluded successfully between buildings 513 and 31 in CERN, between browsers on the NexT station and the X11-ported Mosaic browser. ViolaWWW became the recommended browser at CERN. To encourage use within CERN, Bernd Pollermann put the CERN telephone directory on the web—previously users had to log onto the mainframe in order to look up phone numbers. The Web was successful at CERN and spread to other scientific and academic institutions. Students at the University of Kansas adapted an existing text-only hypertext browser, Lynx, to access the web in 1992. Lynx was available on Unix and DOS, and some web designers, unimpressed with glossy graphical websites, held that a website not accessible through Lynx was not worth visiting. In these earliest browsers, images opened in a separate "helper" application. From Gopher to the WWW In the early 1990s, Internet-based projects such as Archie, Gopher, Wide Area Information Servers (WAIS), and the FTP Archive list attempted to create ways to organize distributed data. Gopher was a document browsing system for the Internet, released in 1991 by the University of Minnesota. Invented by Mark P. McCahill, it became the first commonly used hypertext interface to the Internet. While Gopher menu items were examples of hypertext, they were not commonly perceived in that way. In less than a year, there were hundreds of Gopher servers. It offered a viable alternative to the World Wide Web in the early 1990s and the consensus was that Gopher would be the primary way that people would interact with the Internet. However, in 1993, the University of Minnesota declared that Gopher was proprietary and would have to be licensed. In response, on 30 April 1993, CERN announced that the World Wide Web would be free to anyone, with no fees due, and released their code into the public domain. This made it possible to develop servers and clients independently and to add extensions without licensing restrictions. Coming two months after the announcement that the server implementation of the Gopher protocol was no longer free to use, this spurred the development of various browsers which precipitated a rapid shift away from Gopher. By releasing Berners-Lee's invention for public use, CERN encouraged and enabled its widespread use. Early websites intermingled links for both the HTTP web protocol and the Gopher protocol, which provided access to content through hypertext menus presented as a file system rather than through HTML files. Early Web users would navigate either by bookmarking popular directory pages or by consulting updated lists such as the NCSA "What's New" page. Some sites were also indexed by WAIS, enabling users to submit full-text searches similar to the capability later provided by search engines. After 1993 the World Wide Web saw many advances to indexing and ease of access through search engines, which often neglected Gopher and Gopherspace. As its popularity increased through ease of use, incentives for commercial investment in the Web also grew. 
By the middle of 1994, the Web was outcompeting Gopher and the other browsing systems for the Internet. NCSA The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana–Champaign (UIUC) established a website in November 1992. After Marc Andreessen, a student at UIUC, was shown ViolaWWW in late 1992, he began work on Mosaic with another UIUC student Eric Bina, using funding from the High-Performance Computing and Communications Initiative, a US-federal research and development program initiated by US Senator Al Gore. Andreessen and Bina released a Unix version of the browser in February 1993; Mac and Windows versions followed in August 1993. The browser gained popularity due to its strong support of integrated multimedia, and the authors' rapid response to user bug reports and recommendations for new features. Historians generally agree that the 1993 introduction of the Mosaic web browser was a turning point for the World Wide Web. Before the release of Mosaic in 1993, graphics were not commonly mixed with text in web pages, and the Web was less popular than older protocols such as Gopher and WAIS. Mosaic could display inline images and submit forms for Windows, Macintosh and X-Windows. NCSA also developed HTTPd, a Unix web server that used the Common Gateway Interface to process forms and Server Side Includes for dynamic content. Both the client and server were free to use with no restrictions. Mosaic was an immediate hit; its graphical user interface allowed the Web to become by far the most popular protocol on the Internet. Within a year, web traffic surpassed Gopher's. Wired declared that Mosaic made non-Internet online services obsolete, and the Web became the preferred interface for accessing the Internet. Early growth The World Wide Web enabled the spread of information over the Internet through an easy-to-use and flexible format. It thus played an important role in popularising use of the Internet. Although the two terms are sometimes conflated in popular use, World Wide Web is not synonymous with Internet. The Web is an information space containing hyperlinked documents and other resources, identified by their URIs. It is implemented as both client and server software using Internet protocols such as TCP/IP and HTTP. In keeping with its origins at CERN, early adopters of the Web were primarily university-based scientific departments or physics laboratories such as SLAC and Fermilab. By January 1993 there were fifty web servers across the world. By October 1993 there were over five hundred servers online, including some notable websites. Practical media distribution and streaming media over the Web was made possible by advances in data compression, due to the impractically high bandwidth requirements of uncompressed media. Following the introduction of the Web, several media formats based on discrete cosine transform (DCT) were introduced for practical media distribution and streaming over the Web, including the MPEG video format in 1991 and the JPEG image format in 1992. The high level of image compression made JPEG a good format for compensating slow Internet access speeds, typical in the age of dial-up Internet access. JPEG became the most widely used image format for the World Wide Web. A DCT variation, the modified discrete cosine transform (MDCT) algorithm, led to the development of MP3, which was introduced in 1991 and became the first popular audio format on the Web. 
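As a point of reference for the compression these formats share (a textbook statement of the transform, not specific to any one codec, and with normalization conventions that vary between implementations), the common building block is the type-II discrete cosine transform of a block of N samples:

X_k = \sum_{n=0}^{N-1} x_n \cos\!\left[\frac{\pi}{N}\left(n + \tfrac{1}{2}\right)k\right], \qquad k = 0, \ldots, N-1

JPEG applies this transform along both dimensions of 8×8 pixel blocks; because most of the signal energy concentrates in the low-frequency coefficients, the remainder can be quantized coarsely or discarded, which is what made image and audio files small enough for the dial-up connections of the era.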
In 1992 the Computing and Networking Department of CERN, headed by David Williams, withdrew support of Berners-Lee's work. A two-page email sent by Williams stated that the work of Berners-Lee, with the goal of creating a facility to exchange information such as results and comments from CERN experiments to the scientific community, was not the core activity of CERN and was a misallocation of CERN's IT resources. Following this decision, Tim Berners-Lee left CERN for the Massachusetts Institute of Technology (MIT), where he continued to develop HTTP. The first Microsoft Windows browser was Cello, written by Thomas R. Bruce for the Legal Information Institute at Cornell Law School to provide legal information, since access to Windows was more widespread amongst lawyers than access to Unix. Cello was released in June 1993. 1994–2004: Open standards, going global The rate of web site deployment increased sharply around the world, and fostered development of international standards for protocols and content formatting. Berners-Lee continued to stay involved in guiding web standards, such as the markup languages to compose web pages, and he advocated his vision of a Semantic Web (sometimes known as Web 3.0) based around machine-readability and interoperability standards. World Wide Web Conference In May 1994, the first International WWW Conference, organized by Robert Cailliau, was held at CERN; the conference has been held every year since. World Wide Web Consortium The World Wide Web Consortium (W3C) was founded by Tim Berners-Lee after he left the European Organization for Nuclear Research (CERN) in September/October 1994 in order to create open standards for the Web. It was founded at the Massachusetts Institute of Technology Laboratory for Computer Science (MIT/LCS) with support from the Defense Advanced Research Projects Agency (DARPA), which had pioneered the Internet. A year later, a second site was founded at INRIA (a French national computer research lab) with support from the European Commission; and in 1996, a third continental site was created in Japan at Keio University. W3C comprised various companies that were willing to create standards and recommendations to improve the quality of the Web. Berners-Lee made the Web available freely, with no patent and no royalties due. The W3C decided that its standards must be based on royalty-free technology, so they can be easily adopted by anyone. Netscape and Microsoft, in the middle of a browser war, ignored the W3C and added elements to HTML ad hoc (e.g., blink and marquee). Finally, in 1995, Netscape and Microsoft came to their senses and agreed to abide by the W3C's standard. The W3C published the standard for HTML 4 in 1997, which included Cascading Style Sheets (CSS), giving designers more control over the appearance of web pages without the need for additional HTML tags. The W3C could not enforce compliance so none of the browsers were fully compliant. This frustrated web designers who formed the Web Standards Project (WaSP) in 1998 with the goal of cajoling compliance with standards. A List Apart and CSS Zen Garden were influential websites that promoted good design and adherence to standards. Nevertheless, AOL halted development of Netscape and Microsoft was slow to update IE. Mozilla and Apple both released browsers that aimed to be more standards compliant (Firefox and Safari), but were unable to dislodge IE as the dominant browser. 
Commercialization, dot-com boom and bust, aftermath As the Web grew in the mid-1990s, web directories and primitive search engines were created to index pages and allow people to find things. Commercial use restrictions on the Internet were lifted in 1995 when NSFNET was shut down. In the US, the online service America Online (AOL) offered their users a connection to the Internet via their own internal browser, using a dial-up Internet connection. In January 1994, Yahoo! was founded by Jerry Yang and David Filo, then students at Stanford University. Yahoo! Directory became the first popular web directory. Yahoo! Search, launched the same year, was the first popular search engine on the World Wide Web. Yahoo! became the quintessential example of a first mover on the Web. Online shopping began to emerge with the launch of Amazon's shopping site by Jeff Bezos in 1995 and eBay by Pierre Omidyar the same year. By 1994, Marc Andreessen's Netscape Navigator superseded Mosaic in popularity, holding the position for some time. Bill Gates outlined Microsoft's strategy to dominate the Internet in his Tidal Wave memo in 1995. With the release of Windows 95 and the popular Internet Explorer browser, many public companies began to develop a Web presence. At first, people mainly anticipated the possibilities of free publishing and instant worldwide information. By the late 1990s, the directory model had given way to search engines, corresponding with the rise of Google Search, which developed new approaches to relevancy ranking. Directory features, while still commonly available, became after-thoughts to search engines. Netscape had a very successful IPO valuing the company at $2.9 billion despite the lack of profits and triggering the dot-com bubble. Increasing familiarity with the Web led to the growth of direct Web-based commerce (e-commerce) and instantaneous group communications worldwide. Many dot-com companies, displaying products on hypertext webpages, were added into the Web. Over the next 5 years, over a trillion dollars was raised to fund thousands of startups consisting of little more than a website. During the dot-com boom, many companies vied to create a dominant web portal in the belief that such a website would best be able to attract a large audience that in turn would attract online advertising revenue. While most of these portals offered a search engine, they were not interested in encouraging users to find other websites and leave the portal and instead concentrated on "sticky" content. In contrast, Google was a stripped-down search engine that delivered superior results. It was a hit with users who switched from portals to Google. Furthermore, with AdWords, Google had an effective business model. AOL bought Netscape in 1998. In spite of their early success, Netscape was unable to fend off Microsoft. Internet Explorer and a variety of other browsers almost completely replaced it. Faster broadband internet connections replaced many dial-up connections from the beginning of the 2000s. With the bursting of the dot-com bubble, many web portals either scaled back operations, floundered, or shut down entirely. AOL disbanded Netscape in 2003. Web server software Web server software was developed to allow computers to act as web servers. The first web servers supported only static files, such as HTML (and images), but now they commonly allow embedding of server side applications. Web framework software enabled building and deploying web applications. 
Content management systems (CMS) were developed to organize and facilitate collaborative content creation. Many of them were built on top of separate content management frameworks. After Robert McCool joined Netscape, development on the NCSA HTTPd server languished. In 1995, Brian Behlendorf and Cliff Skolnick created a mailing list to coordinate efforts to fix bugs and make improvements to HTTPd. They called their version of HTTPd Apache. Apache quickly became the dominant server on the Web. After adding support for modules, Apache was able to allow developers to handle web requests with a variety of languages including Perl, PHP and Python. Together with Linux and MySQL, it became known as the LAMP platform. Following the success of Apache, the Apache Software Foundation was founded in 1999 and produced many open source web software projects in the same collaborative spirit. Browser wars After graduating from UIUC, Andreessen and Jim Clark, former CEO of Silicon Graphics, met and formed Mosaic Communications Corporation in April 1994 to develop the Mosaic Netscape browser commercially. The company later changed its name to Netscape, and the browser was developed further as Netscape Navigator, which soon became the dominant web client. They also released the Netsite Commerce web server, which could handle SSL requests, thus enabling e-commerce on the Web. SSL became the standard method to encrypt web traffic. Navigator 1.0 also introduced cookies, but Netscape did not publicize this feature. Netscape followed up with Navigator 2 in 1995, introducing frames, Java applets and JavaScript. In 1998, Netscape made Navigator open source and launched Mozilla. Microsoft licensed Mosaic from Spyglass and released Internet Explorer 1.0 in 1995, with IE2 following later the same year. IE2 added features pioneered at Netscape such as cookies, SSL, and JavaScript. The browser wars became a competition for dominance when Explorer was bundled with Windows. This led to the United States v. Microsoft Corporation antitrust lawsuit. IE3, released in 1996, added support for Java applets, ActiveX, and CSS. At this point, Microsoft began bundling IE with Windows. IE3 managed to increase Microsoft's share of the browser market from under 10% to over 20%. IE4, released the following year, introduced Dynamic HTML, setting the stage for the Web 2.0 revolution. By 1998, IE was able to capture the majority of the desktop browser market. It would be the dominant browser for the next fourteen years. Google released its Chrome browser in 2008 with the first JIT JavaScript engine, V8. Chrome overtook IE to become the dominant desktop browser in four years, and overtook Safari to become the dominant mobile browser in two. At the same time, Google open sourced Chrome's codebase as Chromium. Ryan Dahl used Chromium's V8 engine in 2009 to power an event-driven runtime system, Node.js, which allowed JavaScript code to be used on servers as well as browsers. This led to the development of new software stacks such as MEAN. Thanks to frameworks such as Electron, developers can bundle up Node applications as standalone desktop applications such as Slack. Acer and Samsung began selling Chromebooks, cheap laptops running ChromeOS and capable of running web apps, in 2011. Over the next decade, more companies offered Chromebooks. Chromebooks outsold macOS devices in 2020, making ChromeOS the second most popular OS in the world. Other notable web browsers emerged, including Mozilla's Firefox, Opera's Opera browser and Apple's Safari. 
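To make the server-side JavaScript model concrete, the following is a minimal sketch of the kind of event-driven HTTP server that Node.js made possible, written here in TypeScript against Node's standard http module; the port number and response text are illustrative only and are not drawn from any project mentioned above.

import { createServer, IncomingMessage, ServerResponse } from "http";

// Every incoming request is dispatched to this callback on a single event loop,
// rather than to a dedicated thread or process per connection.
const server = createServer((req: IncomingMessage, res: ServerResponse) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end(`You requested ${req.url}\n`);
});

// Hypothetical port chosen for the example.
server.listen(8080);

Run under Node, the same language that had previously lived only in the browser now answers network requests; that shift is what stacks such as MEAN and tools such as Electron built on.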
Web 1.0 Web 1.0 is a retronym referring to the first stage of the World Wide Web's evolution, from roughly 1989 to 2004. According to Graham Cormode and Balachander Krishnamurthy, "content creators were few in Web 1.0 with the vast majority of users simply acting as consumers of content". Personal web pages were common, consisting mainly of static pages hosted on ISP-run web servers, or on free web hosting services such as Tripod and the now-defunct GeoCities. Some common design elements of a Web 1.0 site include: Static pages rather than dynamic HTML. Content provided from the server's filesystem rather than a relational database management system (RDBMS). Pages built using Server Side Includes or Common Gateway Interface (CGI) instead of a web application written in a dynamic programming language such as Perl, PHP, Python or Ruby. The use of HTML 3.2-era elements such as frames and tables to position and align elements on a page. These were often used in combination with spacer GIFs. Proprietary HTML extensions, such as the <blink> and <marquee> tags, introduced during the first browser war. Online guestbooks. GIF buttons, graphics (typically 88×31 pixels in size) promoting web browsers, operating systems, text editors and various other products. HTML forms sent via email. Support for server-side scripting was rare on shared servers during this period. To provide a feedback mechanism for website visitors, mailto forms were used. A user would fill in a form, and upon clicking the form's submit button, their email client would launch and attempt to send an email containing the form's details. The popularity and complications of the mailto protocol led browser developers to incorporate email clients into their browsers. Terry Flew, in his third edition of New Media, described the differences between Web 1.0 and Web 2.0; Flew believed these factors formed the trends that resulted in the onset of the Web 2.0 "craze". 2004–present: The Web as platform, ubiquity Web 2.0 Web pages were initially conceived as structured documents based upon HTML. They could include images, video, and other content, although the use of media was initially relatively limited and the content was mainly static. By the mid-2000s, new approaches to sharing and exchanging content, such as blogs and RSS, rapidly gained acceptance on the Web. The video-sharing website YouTube launched the concept of user-generated content. As new technologies made it easier to create websites that behaved dynamically, the Web attained greater ease of use and gained a sense of interactivity, which ushered in a period of rapid popularization. This new era also brought into existence social networking websites, such as Friendster, MySpace, Facebook, and Twitter, and photo- and video-sharing websites such as Flickr and, later, Instagram, which gained users rapidly and became a central part of youth culture. Wikipedia's user-edited content quickly displaced the professionally written Microsoft Encarta. The popularity of these sites, combined with developments in the technology that enabled them, and the increasing availability and affordability of high-speed connections, made video content far more common on all kinds of websites. This new media-rich model for information exchange, featuring user-generated and user-edited websites, was dubbed Web 2.0, a term coined in 1999 by Darcy DiNucci and popularized in 2004 at the Web 2.0 Conference. 
The Web 2.0 boom drew investment from companies worldwide and saw many new service-oriented startups catering to a newly "democratized" Web. JavaScript made the development of interactive web applications possible. Web pages could run JavaScript and respond to user input, but they could not interact with the network. Browsers could submit data to servers via forms and receive new pages, but this was slow compared to traditional desktop applications. Developers who wanted to offer sophisticated applications over the Web used Java or nonstandard solutions such as Adobe Flash or Microsoft's ActiveX. Microsoft added a little-noticed feature called XMLHttpRequest to Internet Explorer in 1999, which enabled a web page to communicate with the server while remaining visible. Developers at Oddpost used this feature in 2002 to create the first Ajax application, a webmail client that performed as well as a desktop application. Ajax apps were revolutionary. Web pages evolved beyond static documents to full-blown applications. Websites began offering APIs in addition to webpages. Developers created a plethora of Ajax apps including widgets, mashups and new types of social apps. Analysts called it Web 2.0. Browser vendors improved the performance of their JavaScript engines and dropped support for Flash and Java. Traditional client-server applications were replaced by cloud apps. Amazon reinvented itself as a cloud service provider. The use of social media on the Web has become ubiquitous in everyday life. The 2010s also saw the rise of streaming services, such as Netflix. In spite of the success of Web 2.0 applications, the W3C forged ahead with their plan to replace HTML with XHTML and represent all data in XML. In 2004, representatives from Mozilla, Opera, and Apple formed an opposing group, the Web Hypertext Application Technology Working Group (WHATWG), dedicated to improving HTML while maintaining backward compatibility. For the next several years, websites did not transition their content to XHTML; browser vendors did not adopt XHTML2; and developers eschewed XML in favor of JSON. By 2007, the W3C conceded and announced they were restarting work on HTML, and in 2009 they officially abandoned XHTML. In 2019, the W3C ceded control of the HTML specification, now called the HTML Living Standard, to WHATWG. Microsoft rewrote their Edge browser in 2021 to use Chromium as its code base in order to be more compatible with Chrome. Security, censorship and cybercrime The increasing use of encrypted connections (HTTPS) enabled e-commerce and online banking. Nonetheless, the 2010s saw the emergence of various controversial trends, such as internet censorship and the growth of cybercrime, including web-based cyberattacks and ransomware. Mobile Early attempts to allow wireless devices to access the Web used simplified formats such as i-mode and WAP. Apple introduced the iPhone, the first smartphone with a full-featured browser, in 2007. Other companies followed suit, and in 2011 smartphone sales overtook those of PCs. Since 2016, most visitors have accessed websites with mobile devices, which has led to the widespread adoption of responsive web design. Apple, Mozilla, and Google have taken different approaches to integrating smartphones with modern web apps. Apple initially promoted web apps for the iPhone, but then encouraged developers to make native apps. Mozilla announced Web APIs in 2011 to allow web apps to access hardware features such as audio, camera or GPS. Frameworks such as Cordova and Ionic allow developers to build hybrid apps. 
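Referring back to the XMLHttpRequest and Ajax discussion above: the browser half of an Ajax exchange is JavaScript, but the other half is a server endpoint that returns raw data (usually JSON) rather than a whole page, which is what "websites began offering APIs" refers to. The sketch below shows a minimal, hypothetical endpoint of that kind using only Python's standard library; the path and payload are invented for illustration.

```python
# Minimal JSON endpoint of the kind an Ajax (XMLHttpRequest) call might hit.
# Standard library only; run it and request http://localhost:8000/api/messages
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/messages":  # hypothetical API path
            body = json.dumps({"messages": ["hello", "world"]}).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)        # data only, no full page and no reload
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ApiHandler).serve_forever()
```

A page that has already loaded can request such an endpoint in the background and update part of itself with the returned JSON instead of navigating to a new page, which is the behavior that set Ajax applications apart from the page-per-request model.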
Mozilla released a mobile OS designed to run web apps in 2012, but discontinued it in 2015. Google announced specifications for Accelerated Mobile Pages (AMP) and progressive web applications (PWA) in 2015. AMPs use a combination of HTML, JavaScript, and Web Components to optimize web pages for mobile devices; and PWAs are web pages that, with a combination of service workers and manifest files, can be saved to a mobile device and opened like a native app. Web 3.0 and Web3 The extension of the Web to facilitate data exchange was explored as an approach to create a Semantic Web (sometimes called Web 3.0). This involved using machine-readable information and interoperability standards to enable context-understanding programs to intelligently select information for users. Continued extension of the Web has focused on connecting devices to the Internet, an area sometimes termed Intelligent Device Management. As Internet connectivity becomes ubiquitous, manufacturers have started to leverage the expanded computing power of their devices to enhance their usability and capability. Through Internet connectivity, manufacturers are now able to interact with the devices they have sold and shipped to their customers, and customers are able to interact with the manufacturer (and other providers) to access new content. This phenomenon has led to the rise of the Internet of Things (IoT), where modern devices are connected through sensors, software, and other technologies that exchange information with other devices and systems on the Internet. This creates an environment where data can be collected and analyzed instantly, providing better insights and improving the decision-making process. Additionally, the integration of AI with IoT devices continues to improve their capabilities, allowing them to predict customer needs and perform tasks, increasing efficiency and user satisfaction. Web3 (sometimes also referred to as Web 3.0) is an idea for a decentralized Web based on public blockchains, smart contracts, digital tokens and digital wallets. Beyond Web 3.0 The next generation of the Web is often termed Web 4.0, but its definition is not clear. According to some sources, it is a Web that involves artificial intelligence, the internet of things, pervasive computing, ubiquitous computing and the Web of Things among other concepts. According to the European Union, Web 4.0 is "the expected fourth generation of the World Wide Web. Using advanced artificial and ambient intelligence, the internet of things, trusted blockchain transactions, virtual worlds and XR capabilities, digital and real objects and environments are fully integrated and communicate with each other, enabling truly intuitive, immersive experiences, seamlessly blending the physical and digital worlds". Historiography Historiography of the Web poses specific challenges, including disposable data, missing links, lost content and archived websites, which have consequences for web historians. Sites such as the Internet Archive aim to preserve content. 
See also Fediverse History of email History of hypertext History of the Internet History of telecommunication History of web syndication technology List of websites founded before 1995 Webring Online services before the World Wide Web Minitel NABU Network Quantum Link / AOL CompuServe GEnie Usenet Bulletin board system Prestel Scrapbook :Category:Pre–World Wide Web online services References Further reading External links Web History: first 30 years "A Little History of the World Wide Web: from 1945 to 1995", Dan Connolly, W3C, 2000 "The World Wide Web: Past, Present and Future", Tim Berners-Lee, August 1996 The History of the Web Web Development History A Brief(ish) History of the Web Universe, Brian Kardell Web History Community Group, W3C The history of the Web, W3C info.cern.ch, the first website World Wide Web World Wide Web World Wide Web
History of the World Wide Web
[ "Technology" ]
7,962
[ "Computers", "History of computing" ]
4,193,364
https://en.wikipedia.org/wiki/Cylinder%20head%20porting
Cylinder head porting refers to the process of modifying the intake and exhaust ports of an internal combustion engine to improve their air flow. Cylinder heads, as manufactured, are usually suboptimal for racing applications due to being designed for maximum durability. Ports can be modified for maximum power, minimum fuel consumption, or a combination of the two, and the power delivery characteristics can be changed to suit a particular application. Port modifications When a modification is decided upon through testing with an air flow bench, the original port wall material can be reshaped by hand with die grinders or by numerically controlled milling machines. For major modifications the ports must be welded up or similarly built up to add material where none existed. The Ford two-liter F2000 engine in stock trim, equipped with its standard cylinder head, was capable of delivering 115 horsepower at 5500 rpm for a BMEP of 136 psi. By comparison, an aftermarket Pro Stock racing head was used in an engine capable of 1300 horsepower at 9500 rpm with a BMEP of 238 psi. A BMEP of 238 psi puts it close to the limit for a naturally aspirated gas-burning engine. Naturally aspirated Formula One engines typically achieved BMEP values of 220 psi. Cam profiles, engine RPM, engine height constraints and other limitations contribute to the difference in engine power with the Ford unit as well, but the difference in port design is a major factor. Port components Wave dynamics When the valve opens, the air does not simply flow in; it decompresses into the low-pressure region below it. All the air on the upstream side of the moving disturbance boundary is completely isolated and unaffected by what happens on the downstream side. The air at the runner entrance does not move until the wave reaches all the way to the end. It is only then that the entire runner can begin to flow. Up until that point all that can happen is that the higher-pressure gas filling the volume of the runner decompresses or expands into the low-pressure region advancing up the runner. (Once the low-pressure wave reaches the open end of the runner it reverses sign, and the onrushing air forces a high-pressure wave down the runner.) Conversely, the closing of the valve does not immediately stop flow at the runner entrance, which continues completely unaffected until the signal that the valve closed reaches it. The closing valve causes a buildup of pressure that travels up the runner as a positive wave. The runner entrance continues to flow at full speed, forcing the pressure to rise until the signal reaches the entrance. This very considerable pressure rise can be seen on the pressure graph described below; it rises far above atmospheric pressure. It is this phenomenon that enables the so-called “ram tuning” to occur, and it is what is being “tuned” by tuned intake and exhaust systems. The principle is the same as in the water hammer effect so well known to plumbers. The speed at which the signal can travel is the speed of sound within the runner. This is why port/runner volumes are so important; the volumes of successive parts of the port/runner control the flow during all transition periods. That is, any time a change occurs in the cylinder – whether positive or negative – such as when the piston reaches maximum speed. This point occurs at different crank angles depending on the length of the connecting rod and the throw of the crank, and varies with the connecting rod ratio (rod/stroke). 
For normal automotive design this point is almost always between 69 and 79 degrees ATDC, with higher rod ratios favoring the later position. It only occurs at 1/2 stroke (90 degrees) with a connecting rod of infinite length. The wave/flow activity in a real engine is vastly more complex than this, but the principle is the same. At first glance this wave travel might seem to be blindingly fast and not very significant, but a few calculations show the opposite is true. In an intake runner at room temperature the sonic speed is about 340 m/s (roughly 1,100 ft/s), so a pressure signal traverses a 30 cm (12 in) port/runner in about 0.9 milliseconds. An engine using such a runner, running at 8500 rpm, turns through a very considerable 46 crank degrees before any signal from the cylinder can reach the runner end (assuming no movement of the air in the runner): 46 degrees during which nothing but the volume of the port/runner supplies the demands of the cylinder. This not only applies to the initial signal but to any and every change in the pressure or vacuum developed in the cylinder. Simply using a shorter runner to reduce the delay is not the answer because, at the end of the cycle, a long runner continues to flow at full speed, disregarding the rising pressure in the cylinder and providing pressure to the cylinder when it is needed most. The runner length also controls the timing of the returning waves, so it cannot be altered without upsetting that timing. A shorter runner would flow earlier but would also die earlier, while returning the positive waves much too quickly (tuned to a higher RPM), and those waves would be weaker. The key is to find the optimum balance of all the factors for the engine requirements. Further complicating the system is the fact that the piston dome, the signal source, continually moves. It first moves down the cylinder, increasing the distance the signal must travel, then moves back up at the end of the intake cycle while the valve is still open past BDC. The signals coming from the piston dome, after the initial runner flow has been established, must fight upstream against whatever velocity has been developed at that instant, delaying them further. The signals developed by the piston do not have a clean path up the runner either. Large portions of them bounce off the rest of the combustion chamber and resonate inside the cylinder until an average pressure is reached. Also, temperature variations due to the changing pressures and absorption from hot engine parts cause changes in the local sonic velocity. When the valve closes, it causes a pile-up of gas, giving rise to a strong positive wave that must travel up the runner. The wave activity in the port/runner does not stop but continues to reverberate for some time. When the valve next opens, the remaining waves influence the next cycle. The graph described here shows the intake runner pressure over 720 crank degrees for an engine and its intake port/runner running at 4500 rpm, which is its torque peak (close to maximum cylinder filling and BMEP for this engine). The two pressure traces are taken from the valve end (blue) and the runner entrance (red). The blue line rises sharply as the intake valve closes. This causes a pile-up of air, which becomes a positive wave reflected back up the runner, and the red line shows that wave arriving at the runner entrance later. Note how the suction wave during cylinder filling is delayed even more by having to fight upstream against the inrushing air, and by the fact that the piston is further down the bore, increasing the distance. 
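The timing figures quoted above are easy to check with simple arithmetic. The short Python sketch below assumes a sound speed of about 340 m/s in the runner and a runner length of roughly 30 cm; those assumptions, together with the 8500 rpm engine speed from the text, reproduce the quoted travel time and crank-angle delay.

```python
# Rough check of the wave-travel timing described above.
sonic_speed_m_s = 340.0    # assumed speed of sound in the intake runner (room temperature)
runner_length_m = 0.305    # assumed port/runner length, about 12 inches
engine_speed_rpm = 8500.0  # engine speed quoted in the text

travel_time_s = runner_length_m / sonic_speed_m_s
crank_deg_per_second = engine_speed_rpm / 60.0 * 360.0
delay_crank_deg = travel_time_s * crank_deg_per_second

print(f"signal travel time: {travel_time_s * 1000:.2f} ms")               # about 0.9 ms
print(f"crank rotation during that delay: {delay_crank_deg:.0f} degrees")  # about 46 degrees
```

A hotter intake charge raises the sound speed and shortens the delay, while a longer runner lengthens it, which is part of what runner-length tuning trades off.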
The goal of tuning is to arrange the runners and valve timing so that there is a high-pressure wave in the port during the opening of the intake valve to get flow going quickly, and then to have a second high-pressure wave arrive just before valve closing so the cylinder fills as much as possible. The first wave is what is left in the runner from the previous cycle, while the second is primarily created during the current cycle by the suction wave changing sign at the runner entrance and arriving back at the valve in time for valve closing. The factors involved are often contradictory, and making them work together requires a careful balancing act. When it does work, it is possible to see volumetric efficiencies of 140%, similar to that of a decent supercharger, but it only occurs over a limited RPM range. Porting and polishing It is popularly held that enlarging the ports to the maximum possible size and applying a mirror finish is what porting entails. However, that is not so. Some ports may be enlarged to their maximum possible size (in keeping with the highest level of aerodynamic efficiency), but those engines are highly developed, very-high-speed units where the actual size of the ports has become a restriction. Larger ports flow more fuel/air at higher RPMs but sacrifice torque at lower RPMs due to lower fuel/air velocity. A mirror finish of the port does not provide the increase that intuition suggests. In fact, within intake systems, the surface is usually deliberately textured to a degree of uniform roughness to encourage fuel deposited on the port walls to evaporate quickly. A rough surface on selected areas of the port may also alter flow by energizing the boundary layer, which can alter the flow path noticeably, possibly increasing flow. This is similar to what the dimples on a golf ball do. Flow bench testing shows that the difference between a mirror-finished intake port and a rough-textured port is typically less than 1%. The difference between a smooth-to-the-touch port and an optically mirrored surface is not measurable by ordinary means. Exhaust ports may be smooth-finished because of the dry gas flow and in the interest of minimizing exhaust by-product build-up. A 300- to 400-grit finish followed by a light buff is generally accepted to be representative of a near-optimal finish for exhaust gas ports. The reason that polished ports are not advantageous from a flow standpoint is that at the interface between the metal wall and the air, the air speed is zero (see boundary layer and laminar flow). This is due to the wetting action of the air and indeed all fluids. The first layer of molecules adheres to the wall and does not move significantly. The rest of the flow field must shear past, which develops a velocity profile (or gradient) across the duct. For surface roughness to impact flow appreciably, the high spots must be high enough to protrude into the faster-moving air toward the center. Only a very rough surface does this. Two-stroke porting In addition to all the considerations given to a four-stroke engine port, two-stroke engine ports have additional ones: Scavenging quality/purity: The ports are responsible for sweeping as much exhaust out of the cylinder as possible and refilling it with as much fresh mixture as possible without a large amount of the fresh mixture also going out the exhaust. This takes careful and subtle timing and aiming of all the transfer ports. Power band width: Since two-strokes are very dependent on wave dynamics, their power bands tend to be narrow. 
While struggling to get maximum power, the tuner must always take care to ensure that the power profile does not become too sharp and hard to control. Time area: Two-stroke port duration is often expressed as a function of time/area. This integrates the continually changing open port area with the duration. Wider ports increase time/area without increasing duration, while higher ports increase both (see the numerical sketch below). Timing: In addition to time area, the relationship between all the port timings strongly determines the power characteristics of the engine. Wave dynamic considerations: Although four-strokes are also affected by wave dynamics, two-strokes rely much more heavily on wave action in the intake and exhaust systems. The two-stroke port design has strong effects on the wave timing and strength. Heat flow: The flow of heat in the engine is heavily dependent on the porting layout. Cooling passages must be routed around ports. Every effort must be made to keep the incoming charge from heating up, but at the same time many parts are cooled primarily by that incoming fuel/air mixture. When ports take up too much space on the cylinder wall, the ability of the piston to transfer its heat through the walls to the coolant is hampered. As ports get more radical, some areas of the cylinder get thinner, which can then overheat. Piston ring durability: A piston ring must ride on the cylinder wall smoothly with good contact to avoid mechanical stress and assist in piston cooling. In radical port designs, the ring has minimal contact in the lower stroke area, which can suffer extra wear. The mechanical shocks induced during the transition from partial to full cylinder contact can shorten the life of the ring considerably. Very wide ports allow the ring to bulge out into the port, exacerbating the problem. Piston skirt durability: The piston must also contact the wall for cooling purposes but also must transfer the side thrust of the power stroke. Ports must be designed so that the piston can transfer these forces and heat to the cylinder wall while minimizing flex and shock to the piston. Engine configuration: Engine configuration can be influenced by port design. This is primarily a factor in multi-cylinder engines. Engine width can be excessive for even two-cylinder engines of certain designs. Rotary disk valve engines with wide sweeping transfers can be so wide as to be impractical as a parallel twin. The V-twin and fore-and-aft engine designs are used to control overall width. Cylinder distortion: Engine sealing ability and cylinder, piston and piston ring life all depend on reliable contact between cylinder and piston/piston ring, so any cylinder distortion reduces power and engine life. This distortion can be caused by uneven heating, local cylinder weakness, or mechanical stresses. Exhaust ports that have long passages in the cylinder casting conduct large amounts of heat to one side of the cylinder, while the cool intake may be chilling the opposite side. The thermal distortion resulting from the uneven expansion reduces both power and durability, although careful design can minimize the problem. Combustion turbulence: The turbulence remaining in the cylinder after transfer persists into the combustion phase to help burning speed. Unfortunately, good scavenging flow is slower and less turbulent. Methods The die grinder is the stock-in-trade of the head porter and is used with a variety of carbide cutters, grinding wheels and abrasive cartridges. 
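The numerical sketch below illustrates the time-area idea from the two-stroke list above: the exposed area of an exhaust port is integrated over the time the piston leaves it uncovered. Every number in it (crank geometry, port height, width and position, engine speed) is an assumption chosen purely for illustration, not data for any real engine.

```python
# Illustrative time-area calculation for a two-stroke exhaust port (all values assumed).
import math

stroke_mm, rod_mm = 54.5, 110.0            # assumed crank geometry
port_top_mm, port_height_mm = 26.0, 16.0   # assumed: port roof 26 mm below TDC, port 16 mm tall
port_width_mm = 30.0                       # assumed effective port width
rpm = 9000.0                               # assumed engine speed

r, l = stroke_mm / 2.0, rod_mm

def piston_drop_mm(theta_deg: float) -> float:
    """Piston distance below TDC at a given crank angle (simple crank-slider geometry)."""
    t = math.radians(theta_deg)
    return r + l - (r * math.cos(t) + math.sqrt(l * l - (r * math.sin(t)) ** 2))

dt_per_deg_s = 60.0 / rpm / 360.0          # seconds the crank spends in one degree
time_area_mm2_s = 0.0
for deg in range(360):                     # one full revolution
    exposed_mm = min(max(piston_drop_mm(deg) - port_top_mm, 0.0), port_height_mm)
    time_area_mm2_s += port_width_mm * exposed_mm * dt_per_deg_s

print(f"exhaust port time-area: {time_area_mm2_s * 1000:.2f} mm^2*ms per revolution")
```

Making the port wider raises this integral without changing when the port opens, while raising the port roof increases both the open duration and the integral, which is the trade-off the list above describes.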
The complex and sensitive shapes required in porting necessitate a good degree of artistic skill with a hand tool. Until recently, CNC machining was used only to provide the basic shape of the port, but hand finishing was usually still required because some areas of the port were not accessible to a CNC tool. New developments in CNC machining now allow this process to be fully automated with the assistance of CAD/CAM software. 5-axis CNC controls using specialized fixtures like tilting rotary tables allow the cutting tool full access to the entire port. The combination of CNC and CAM software gives the porter full control over the port shape and surface finish. Measurement of the interior of the ports is difficult but must be done accurately. Sheet metal templates are made up, taking the shape from an experimental port, for both cross-sectional and lengthwise shape. Inserted in the port, these templates are then used as a guide for shaping the final port. Even a slight error might cause a loss in flow, so measurement must be as accurate as possible. Confirmation of the final port shape and automated replication of the port is now done using digitizing. Digitizing is where a probe scans the entire shape of the port, collecting data that can then be used by CNC machine tools and CAD/CAM software programs to model and cut the desired port shape. This replication process usually produces ports that flow within 1% of each other. This level of accuracy, repeatability and time saving has never before been possible: what used to take eighteen hours or more now takes less than three. Summary The internal aerodynamics involved in porting is counter-intuitive and complex. Successfully optimizing ports requires an air flow bench, a thorough knowledge of the principles involved, and engine simulation software. Although a large portion of porting knowledge has been accumulated by individuals using "cut and try" methods over time, the tools and knowledge now exist to develop a porting design with a measure of certainty. References External links Free demo engine simulator used to generate graph above Cylinder head porting techniques The Brzezinski "UnderCover" Cast Iron Cylinder Head Porting Technique A 5-axis CNC cylinder head porting machine in action. A number of articles about porting. Kinematic Models for Design Digital Library (KMODDL) - Movies and photos of hundreds of working mechanical-systems models at Cornell University. Also includes an e-book library of classic texts on mechanical design and engineering. Engine technology Vehicle modifications
Cylinder head porting
[ "Technology" ]
3,299
[ "Engine technology", "Engines" ]
4,193,972
https://en.wikipedia.org/wiki/Advanced%20Light%20Source
The Advanced Light Source (ALS) is a research facility at Lawrence Berkeley National Laboratory in Berkeley, California. One of the world's brightest sources of ultraviolet and soft x-ray light, the ALS is the first "third-generation" synchrotron light source in its energy range, providing multiple extremely bright sources of intense and coherent short-wavelength light for use in scientific experiments by researchers from around the world. It is funded by the US Department of Energy (DOE) and operated by the University of California. The current director is Dimitri Argyriou. Users The ALS serves about 2,000 researchers ("users") every year from academic, industrial, and government laboratories worldwide. Experiments at the ALS are performed at nearly 40 beamlines that can operate simultaneously over 5,000 hours per year, resulting in nearly 1,000 scientific publications annually in a wide variety of fields. Any qualified researcher can propose to use an ALS beamline. Peer review is used to select from among the most important proposals received from researchers who apply for beam time at the ALS. No charge is made for beam time if a user's research is nonproprietary (i.e., the user plans to publish the results in the open literature). About 16% of users come from outside the US. How it works Electron bunches traveling near the speed of light are forced into a nearly circular path by magnets in the ALS storage ring. Between these magnets there are straight sections where the electrons are forced into a slalom-like path by dozens of magnets of alternating polarity in devices called "undulators." Under the influence of these magnets, electrons emit beams of electromagnetic radiation, from the infrared through the visible, ultraviolet, and x-ray regimes. The resulting beams, collimated along the direction of the electrons' path, shine down beamlines to instruments at experiment endstations. Research areas Lower-energy soft x-ray light is the ALS' specialty, filling an important niche and complementing other DOE light source facilities. Higher-energy x-rays are also available from locations where superconducting magnets create "superbends" in the electrons' path. Soft x-rays are used to characterize the electronic structure of matter and to reveal microscopic structures with elemental and chemical specificity. Research in materials science, biology, chemistry, physics, and the environmental sciences make use of these capabilities. 
Ongoing research topics and techniques Probing the electronic structure of matter Testing optics and photoresists for next generation photolithography Understanding magnetic materials 3D biological imaging Protein crystallography Ozone photochemistry X-ray microscopy of cells Chemical reaction dynamics Atomic and molecular physics Extreme ultraviolet lithography Synchrotron infrared nano-spectroscopy (SINS) Scientific and technological innovations and advancements Longer-lasting lithium-ion batteries for electric vehicles and mobile electronics Nanoscale magnetic imaging for compact data storage Plastic solar cells that are flexible and easy to produce Harnessing "artificial photosynthesis" for clean, renewable energy Fine-tuning combustion for cleaner-burning fuels More effective chemical reactions for fuel cells, pollution control, or fuel refinement Using microbes to clean up toxins in the environment Cheaper biofuels from abundant, renewable plants Solving protein structures for rational drug design Producing ever-smaller transistors for more powerful computers History When the ALS was first proposed in the early 1980s by former LBNL director David Shirley, skeptics doubted the use of a synchrotron optimized for soft x-rays and ultraviolet light. According to former ALS director Daniel Chemla, "The scientific case for a third-generation soft x-ray facility such as the ALS had always been fundamentally sound. However, getting the larger scientific community to believe it was an uphill battle." The 1987 Reagan administration budget allocated $1.5 million for the construction of the ALS. The planning and design process began in 1987, ground was broken in 1988, and construction was completed in 1993. The new building incorporated a 1930s-era domed structure designed by Arthur Brown, Jr. (designer of the Coit Tower in San Francisco) to house E. O. Lawrence's 184-inch cyclotron, an advanced version of his first cyclotron for which he received the 1939 Nobel Prize in Physics. The ALS was commissioned in March 1993, and the official dedication took place on the morning of October 22, 1993. ALS-U A new project called ALS-U is working to upgrade the ALS. Recent accelerator physics breakthroughs now enable the production of highly focused beams of soft x-ray light that are at least 100 times brighter than those of the existing ALS. The storage ring will receive a number of new upgrades, as well as a new accumulator ring. The new ring will use powerful, compact magnets arranged in a dense, circular array called a multibend achromat (MBA) lattice. In combination with other improvements to the accelerator complex, the upgraded machine will produce bright, steady beams of high-energy light to probe matter with unprecedented detail. References External links Lawrence Berkeley National Laboratory Synchrotron radiation facilities Laboratories in California Berkeley Hills Buildings and structures in Berkeley, California University and college laboratories in the United States 1993 establishments in California Buildings and structures completed in 1993
Advanced Light Source
[ "Materials_science" ]
1,077
[ "Materials testing", "Synchrotron radiation facilities" ]
4,194,127
https://en.wikipedia.org/wiki/Count%20room
A count room or counting room is a room that is designed and equipped for the purpose of counting large volumes of currency. Count rooms are operated by central banks and casinos, as well as some large banks and armored car companies that transport currency. A count room may be divided into two separate areas, one for counting banknotes (sometimes referred to as soft count) and one for counting coins (sometimes referred to as hard count). Some high-volume cash businesses, especially casinos, will operate two distinct rooms. Surveillance Most count rooms are equipped with closed-circuit television cameras and sometimes sound recording equipment to assist in detecting theft, fraud, or collusion among the count room personnel. References Banking terms Rooms Casinos
Count room
[ "Engineering" ]
143
[ "Rooms", "Architecture" ]
4,194,183
https://en.wikipedia.org/wiki/International%20Centre%20for%20Theoretical%20Physics
The Abdus Salam International Centre for Theoretical Physics (ICTP) is a research center for physical and mathematical sciences, located in Trieste, Friuli-Venezia Giulia, Italy. The center operates under a tripartite agreement between the Italian Government, UNESCO, and the International Atomic Energy Agency. It is located near the Miramare Park, about 10 kilometres from the downtown of Trieste city, Italy. The centre was founded in 1964 by Pakistani Nobel Laureate Abdus Salam. ICTP is part of the Trieste System, a network of national and international scientific institutes in Trieste, promoted by the Italian physicist Paolo Budinich. Mission Foster the growth of advanced studies and research in physical and mathematical sciences, especially in support of excellence in developing countries; Develop high-level scientific programmes keeping in mind the needs of developing countries, and provide an international forum of scientific contact for scientists from all countries; Conduct research at the highest international standards and maintain a conducive environment of scientific inquiry for the entire ICTP community. Research Research at ICTP is carried out by seven scientific sections: High Energy, Cosmology and Astroparticle Physics Condensed Matter and Statistical Physics Mathematics Earth System Physics Science, Technology and Innovation Quantitative Life Sciences New Research Areas (which includes studies related to Energy and Sustainability and Computing Sciences) The scientific community at ICTP includes staff research scientists, postdoctoral fellows and long- and short-term visitors engaged in independent or collaborative research. Throughout the year, the sections organize conferences, workshops, seminars and colloquiums in their respective fields. ICTP also has visitor programmes specifically for scientific visitors from developing countries, including programmes under federation and associateship schemes. Postgraduate programmes ICTP offers educational training through its pre-PhD programmes and degree programmes (conducted in collaboration with other institutes). Pre-PhD programmes Postgraduate diploma programmes in Condensed Matter Physics, High Energy Physics, Mathematics, Earth System Physics, and Quantitative Life Sciences for students from developing countries. The Sandwich Training Educational Programme (STEP) for students from developing countries already enrolled in PhD programmes in the fields of physics and mathematics. In collaboration with other institutes, ICTP offers masters and doctoral degrees in physics and mathematics. Joint ICTP/SISSA PhD Programme in Physics and Mathematics Joint PhD Programme in Earth Science and Fluid Mechanics Joint Laurea Magistralis in Physics Joint ICTP/Collegio Carlo Alberto Program in Economics International Master, Physics of Complex Systems Master of Advanced Studies in Medical Physics Masters in High Performance Computing In addition, ICTP collaborates with local laboratories, including Elettra Synchrotron Light Laboratory, to provide fellowships and laboratory opportunities. Prizes and awards ICTP has instituted awards to honour and encourage high-level research in the fields of physics and mathematics. The Dirac Medal – For scientists who have made significant contributions to theoretical physics. The ICTP Prize – For young scientists from and working in developing countries. ICO/ICTP Gallieno Denardo Award – For significant contributions to the field of optics. 
The Ramanujan Prize – For young mathematicians from developing countries. The Walter Kohn Prize – Given jointly by ICTP and the Quantum ESPRESSO foundation, for work in quantum mechanical materials or molecular modelling, performed by a young scientist working in a developing country. Partner institutes One of ICTP's goals is to set up regional centres of excellence around the globe. The idea is to bring ICTP's unique blend of high-quality physics and mathematics education and high-level science meetings closer to scientists everywhere. On February 6, 2012, ICTP opened a partner institute (ICTP South American Institute for Fundamental Research) in São Paulo, Brazil. Its activities are modelled on those of the ICTP and include schools and workshops, as well as a visiting scientists programme. On October 18, 2018, a partner institute (ICTP-EAIFR, the East African Institute for Fundamental Research), was inaugurated in Kigali, Rwanda. In November 2018, ICTP opened the International Centre for Theoretical Physics Asia-Pacific (ICTP-AP) in Beijing, China, in collaboration with the University of the Chinese Academy of Sciences. Journal In 2007 ICTP created the peer-reviewed open-access Journal "African Review of Physics" under the then name "African Physical Review". See also International School for Advanced Studies University of Trieste Joint Institute for Nuclear Research Institute for Theoretical Physics (disambiguation) Center for Theoretical Physics (disambiguation) References External links International Atomic Energy Agency International research institutes for mathematics International research institutes Physics research institutes Physics organizations Research institutes established in 1964 Trieste UNESCO Abdus Salam Research institutes in Italy Italy and the United Nations Theoretical physics institutes
International Centre for Theoretical Physics
[ "Physics" ]
947
[ "Theoretical physics", "Theoretical physics institutes" ]
4,194,311
https://en.wikipedia.org/wiki/Stealth%20wallpaper
For computer network security, stealth wallpaper is a material designed to prevent an indoor Wi-Fi network from extending or "leaking" to the outside of a building, where malicious persons may attempt to eavesdrop on or attack a network. While it is simple to prevent all electronic signals from passing through a building by covering the interior with metal, stealth wallpaper accomplishes the more difficult task of blocking Wi-Fi signals while still allowing cellphone signals to pass through. The first stealth wallpaper was originally designed by UK defense contractor BAE Systems. In 2012, The Register reported that a commercial wallpaper had been developed by Grenoble Institute of Technology and the Centre Technique du Papier, with planned sale in 2013. This wallpaper blocks three selected Wi-Fi frequencies. Nevertheless, it allows GSM and 4G signals to pass through, so cell phone use remains unaffected by the wallpaper. See also Electromagnetic shielding Faraday cage TEMPEST Wallpaper Wireless security References External links Azcom: Stealth Wallpaper Prevents Wi-Fi Signals Escaping without Blocking Mobile Phone Signals The Register: Wifi Blocking Wallpaper BAE Systems research and development Computer network security Wi-Fi
Stealth wallpaper
[ "Technology", "Engineering" ]
236
[ "Cybersecurity engineering", "Wireless networking", "Wi-Fi", "Computer networks engineering", "Computer network security" ]
4,194,692
https://en.wikipedia.org/wiki/Microspore
Microspores are land plant spores that develop into male gametophytes, whereas megaspores develop into female gametophytes. The male gametophyte gives rise to sperm cells, which are used for fertilization of an egg cell to form a zygote. Megaspores are structures that are part of the alternation of generations in many seedless vascular cryptogams, all gymnosperms and all angiosperms. Plants with heterosporous life cycles using microspores and megaspores arose independently in several plant groups during the Devonian period. Microspores are haploid, and are produced from diploid microsporocytes by meiosis. Morphology The microspore has three different types of wall layers. The outer layer is called the perispore, the next is the exospore, and the inner layer is the endospore. The perispore is the thickest of the three layers while the exospore and endospore are relatively equal in width. Seedless vascular plants In heterosporous seedless vascular plants, modified leaves called microsporophylls bear microsporangia containing many microsporocytes that undergo meiosis, each producing four microspores. Each microspore may develop into a male gametophyte consisting of a somewhat spherical antheridium within the microspore wall. Either 128 or 256 sperm cells with flagella are produced in each antheridium. The only heterosporous ferns are aquatic or semi-aquatic, including the genera Marsilea, Regnellidium, Pilularia, Salvinia, and Azolla. Heterospory also occurs in the lycopods in the spikemoss genus Selaginella and in the quillwort genus Isoëtes. Types of seedless vascular plants: Water ferns Spikemosses Quillworts Gymnosperms In seed plants the microspores develop into pollen grains each containing a reduced, multicellular male gametophyte. The megaspores, in turn, develop into reduced female gametophytes that produce egg cells that, once fertilized, develop into seeds. Pollen cones or microstrobili usually develop toward the tips of the lower branches in clusters up to 50 or more. The microsporangia of gymnosperms develop in pairs toward the bases of the scales, which are therefore called microsporophylls. Each of the microsporocytes in the microsporangia undergoes meiosis, producing four haploid microspores. These develop into pollen grains, each consisting of four cells and, in conifers, a pair of external air sacs. The air sacs give the pollen grains added buoyancy that helps with wind dispersal. Types of Gymnosperms: Conifers Pines Ginkgos Cycads Gnetophytes Angiosperms As the anther of a flowering plant develops, four patches of tissue differentiate from the main mass of cells. These patches of tissue contain many diploid microsporocyte cells, each of which undergoes meiosis producing a quartet of microspores. Four chambers (pollen sacs) lined with nutritive tapetal cells are visible by the time the microspores are produced. After meiosis, the haploid microspores undergo several changes: The microspore divides by mitosis producing two cells. The first of the cells (the generative cell) is small and is formed inside the second larger cell (the tube cell). The members of each quartet of microspores separate from each other. A double-layered wall then develops around each microspore. These steps occur in sequence and when complete, the microspores have become pollen grains. Embryogenesis Although it is not the usual route of a microspore, this process is the most effective way of yielding haploid and double haploid plants through the use of male sex hormones. 
Under certain stressors such as heat or starvation, plants select for microspore embryogenesis. It was found that over 250 different species of angiosperms responded this way. In the anther, after a microspore undergoes microsporogenesis, it can deviate towards embryogenesis and become a star-like microspore. The microspore can then go one of four ways: become an embryogenic microspore; undergo callogenesis leading to organogenesis (a haploid/double haploid plant); become a pollen-like structure; or die. Microspore embryogenesis is used in biotechnology to produce double haploid plants, which are immediately fixed as homozygous for each locus in only one generation. The haploid microspore is stressed to trigger the embryogenesis pathway, and the resulting haploid embryo either doubles its genome spontaneously or with the help of chromosome doubling agents. Without this double haploid technology, conventional breeding methods would take several generations of selection to produce a homozygous line. See also Microsporangium Spore Megaspore References Plant reproduction
Microspore
[ "Biology" ]
1,050
[ "Behavior", "Plant reproduction", "Plants", "Reproduction" ]
4,194,905
https://en.wikipedia.org/wiki/M94%20Group
The M94 Group (Canes I Group or Canes Venatici I Group) is a loose, extended group of galaxies located about 13 million light-years away in the constellations Canes Venatici and Coma Berenices. The group is one of many groups that lie within the Virgo Supercluster (i.e. the Local Supercluster) and one of the closest groups to the Local Group. Although the galaxies in this group appear to belong to a single large cloud-like structure, many of the galaxies within the group are only weakly gravitationally bound, and some have not yet formed stable orbits around the center of this group. Instead, most of the galaxies in this group appear to be moving with the expansion of the universe. Members A number of galaxies have been consistently identified as group members in the Nearby Galaxies Catalog, the Lyons Groups of Galaxies (LGG) Catalog, and the three group lists created from the Nearby Optical Galaxy sample of Giuricin et al. Additionally, NGC 4105 and DDO 169 are frequently but not consistently identified as members of this group in the references cited above. The brightest member in this galaxy group is questionable and partly depends on the analysis used to determine group members. The LGG Catalog identifies M106 as part of this group, which would make it the brightest galaxy within the group. However, the other catalogs cited above do not identify M106 as a group member, in which case M94 would be the brightest galaxy within the group. Canes Venatici Cloud This galaxy group is sometimes erroneously called the Canes Venatici Cloud, a larger structure of which it is a member. A galaxy cloud is a supercluster substructure. The CVn Cloud used in this manner is identified by Tully and de Vaucouleurs. See also M96 Group Sculptor Group Canes II Group (CVn II Group) References External links Galaxy clusters Virgo Supercluster Canes Venatici Coma Berenices
M94 Group
[ "Astronomy" ]
417
[ "Galaxy clusters", "Constellations", "Coma Berenices", "Canes Venatici", "Astronomical objects" ]
4,194,949
https://en.wikipedia.org/wiki/Hydrochloric%20acid%20%28data%20page%29
This page provides supplementary chemical data on hydrochloric acid. Material Safety Data Sheet The handling of this chemical may require notable safety precautions. It is highly recommended that you seek the Material Safety Data Sheet (MSDS) for this chemical from a reliable source and follow its directions. Structure and properties References Chemical data pages Chemical data pages cleanup
Hydrochloric acid (data page)
[ "Chemistry" ]
71
[ "Chemical data pages", "nan" ]
4,195,502
https://en.wikipedia.org/wiki/Pentamethylcyclopentadiene
1,2,3,4,5-Pentamethylcyclopentadiene is a cyclic diene with the formula C10H16, often written C5Me5H, where Me is CH3. It is a colorless liquid. 1,2,3,4,5-Pentamethylcyclopentadiene is the precursor to the ligand 1,2,3,4,5-pentamethylcyclopentadienyl, which is often denoted Cp* (C5Me5) and read as "C P star", the "star" signifying the five methyl groups radiating from the core of the ligand. Thus, 1,2,3,4,5-pentamethylcyclopentadiene's formula is also written Cp*H. In contrast to less-substituted cyclopentadiene derivatives, Cp*H is not prone to dimerization. Synthesis Pentamethylcyclopentadiene is commercially available. It was first prepared from tiglaldehyde and 2-butenyllithium, via 2,3,4,5-tetramethylcyclopent-2-enone, with a Nazarov cyclization reaction as a key step. Alternatively, 2-butenyllithium adds to ethyl acetate, followed by acid-catalyzed dehydrocyclization. Organometallic derivatives Cp*H is a precursor to organometallic compounds containing the C5Me5− ligand, commonly called Cp*−. Some representative reactions leading to such Cp*–metal complexes follow: Deprotonation with n-butyllithium: Cp*H + C4H9Li → Cp*Li + C4H10 Synthesis of (pentamethylcyclopentadienyl)titanium trichloride: Cp*Li + TiCl4 → Cp*TiCl3 + LiCl Synthesis of (pentamethylcyclopentadienyl)iron dicarbonyl dimer from iron pentacarbonyl: 2 Cp*H + 2 Fe(CO)5 → [η5-Cp*Fe(CO)2]2 + H2 + 6 CO This method is analogous to the route to the related Cp complex, see cyclopentadienyliron dicarbonyl dimer. Some Cp* complexes are prepared using silyl transfer: Cp*Li + Me3SiCl → Cp*SiMe3 + LiCl Cp*SiMe3 + TiCl4 → Cp*TiCl3 + Me3SiCl A now-obsolete route to Cp* complexes involves the use of hexamethyl Dewar benzene. This method was traditionally used for preparation of the chloro-bridged dimers [Cp*IrCl2]2 and [Cp*RhCl2]2, but has been discontinued with the increased commercial availability of Cp*H. Such syntheses rely on a hydrohalic acid-induced rearrangement of hexamethyl Dewar benzene to a substituted pentamethylcyclopentadiene prior to reaction with the hydrate of either iridium(III) chloride or rhodium(III) chloride. Comparison to other Cp ligands Complexes of pentamethylcyclopentadienyl differ in several ways from the more common cyclopentadienyl (Cp) derivatives. Being more electron-rich, Cp*− is a stronger donor, and dissociation, like ring-slippage, is more difficult with Cp* than with Cp. The fluorinated ligand, (trifluoromethyl)tetramethylcyclopentadienyl, C5Me4CF3, combines the properties of Cp and Cp*: it possesses the steric bulk of Cp* but has electronic properties similar to Cp, the electron-donation from the methyl groups being "canceled out" by the electron-accepting nature of the trifluoromethyl substituent. Its steric bulk stabilizes complexes with fragile ligands. Its bulk also attenuates intermolecular interactions, decreasing the tendency to form polymeric structures. Its complexes also tend to be more soluble in non-polar solvents. The methyl groups in Cp* complexes can undergo C–H activation, leading to "tuck-in complexes". Bulky cyclopentadienyl ligands are known that are far more sterically encumbered than Cp*. See also Cyclopentadiene Methylcyclopentadiene References Cyclopentadienes Ligands
Pentamethylcyclopentadiene
[ "Chemistry" ]
944
[ "Ligands", "Coordination chemistry" ]
4,195,833
https://en.wikipedia.org/wiki/Levant%20Mine%20and%20Beam%20Engine
Levant Mine and Beam Engine is a National Trust property at Trewellard, Pendeen, near St Just, Cornwall, England, UK. Its main attraction is that it has the world's only Cornish beam engine still operated by steam on its original site. There is also a visitor centre and a short underground tour, and the South West Coast Path leads to Botallack Mine via a cliff-top footpath. In 1919 the engine used to transport men between the different levels of the mine failed, leading to the deaths of thirty-one men. Since 2006, the area has been part of the UNESCO World Heritage Site, Cornwall and West Devon Mining Landscape. Location The property is on the site of the former Levant Mine, established in 1820 and closed in 1930, where tin and copper ores were raised. The mine reached a depth of about 600 metres. It got the nickname "mine under the sea", because tunnels were driven up to 2.5 km from the cliffs under the sea. The surviving beam engine was built by Harvey's of Hayle. History The mine yielded both copper and tin and was opened in 1820 with twenty shares of £20 each. From first opening to circa 1883, the mine gave a profit of £171,000 from approximately £1,300,000 worth of ore. In 1882 the mine was taken over by new owners on a 21-year lease, who replaced machinery and improved the surface-works. In 1883 three shafts were open: one occupied by the man-engine, a second by a pumping-engine and the third used for hauling out the skips. Since the introduction of skips for bringing ore to the surface, two shafts had been abandoned. There were six engines on site: a pumping-engine, which pumps water from the mine; a stamping engine, which breaks up the ore; a winding-engine or whim, which raises the ore to the surface; a man-engine; a crushing-machine; and a second winding-engine. The working conditions of the mine were described in The Cornishman newspaper in 1883. Around 366 men, boys, and girls were employed, compared with about 600 prior to 1882. The mine was worked in three eight-hour shifts (except on Sunday), with fifty to sixty men working underground in each shift. Access to the underground levels (i.e. passages) was by ladder and the temperature was high. The men were all more or less working in a nude state and sweating profusely. They were provided with spring water which was stored in huge canteens. Few were able to work underground after the age of 35. The levels are high and wide, while the lode itself is narrower. Thus a quantity of hard rock on each side of the lode has to be cut away at great expense. The levels are expanded by explosives. First a hole is made by hand-drill, taking about two hours, and the hole is charged with gunpowder. Premature ignition causes many injuries and fatalities. A winding engine raised the ore to the surface in skips on two parallel inclines, one ascending as the other was lowered. 1919 disaster On 20 October 1919 an accident killed 31 miners, when a metal bracket at the top of a rod broke on the man engine. To use the man engine, the miners stepped on to a ladder, were transported up or down, climbed off onto a sollar, waited for the ladder to reset its position, then stepped back on to the ladder, repeating the process. The rod broke in several pieces and heavy timbers crashed down the shaft. A large scale rescue operation was able to save some of the miners. The engine was not replaced and the lower levels of the mine were abandoned. 
Minerals and ores silver bismuth calcspar aragonite vitreous copper ore or grey sulphuret of copper Mineral Statistics From Robert Hunt's Mineral Statistics of the United Kingdom. See also Man engine for an account of the accident in the mine on 20 October 1919. Geevor Tin Mine, just to the northeast of the Levant complex. South Crofty Wheal Jane Consolidated Mines Devon Great Consols Botallack Mine also known as Crown Mine References External links Levant Mine and Beam Engine information at the National Trust Cornwall Record Office Online Catalogue for Levant Levant Mine Self-guided trail Copper mines in Cornwall Grade II listed buildings in Cornwall Grade II listed industrial buildings Industrial archaeological sites in Cornwall Mining equipment Mining museums in Cornwall National Trust properties in Cornwall 1919 disasters in the United Kingdom Preserved beam engines St Just in Penwith Steam museums in England Tin mines in Cornwall
Levant Mine and Beam Engine
[ "Engineering" ]
939
[ "Mining equipment" ]
4,196,082
https://en.wikipedia.org/wiki/Arc%20converter
The arc converter, sometimes called the arc transmitter, or Poulsen arc after Danish engineer Valdemar Poulsen who invented it in 1903, was a variety of radio transmitter used in early wireless telegraphy. The arc converter used an electric arc to convert direct current electricity into radio frequency alternating current. It was used as a radio transmitter from 1903 until the 1920s, when it was replaced by vacuum tube transmitters. One of the first transmitters that could generate continuous sinusoidal waves, it was one of the first technologies used to transmit sound (amplitude modulation) by radio. It is on the list of IEEE Milestones as a historic achievement in electrical engineering. History Elihu Thomson discovered that a carbon arc shunted with a series tuned circuit would "sing". This "singing arc" was probably limited to audio frequencies. The Bureau of Standards credits William Duddell with the shunt resonant circuit around 1900. The English engineer William Duddell discovered how to make a resonant circuit using a carbon arc lamp. Duddell's "musical arc" operated at audio frequencies, and Duddell himself concluded that it was impossible to make the arc oscillate at radio frequencies. Valdemar Poulsen succeeded in raising the efficiency and frequency to the desired level. Poulsen's arc could generate frequencies of up to 200 kilohertz and was patented in 1903. After a few years of development the arc technology was transferred to Germany and Great Britain in 1906 by Poulsen, his collaborator Peder Oluf Pedersen and their financial backers. In 1909 the American patents, as well as a few arc converters, were bought by Cyril Frank Elwell. The subsequent development in Europe and the United States was rather different, since in Europe there were severe difficulties for many years implementing the Poulsen technology, whereas in the United States an extended commercial radiotelegraph system was soon established with the Federal Telegraph Company. Later the US Navy also adopted the Poulsen system. Only the arc converter with passive frequency conversion was suitable for portable and maritime use. This made it the most important mobile radio system for about a decade until it was superseded by vacuum tube systems. In 1922, the Bureau of Standards stated, "the arc is the most widely used transmitting apparatus for high-power, long-distance work. It is estimated that the arc is now responsible for 80 per cent of all the energy actually radiated into space for radio purposes during a given time, leaving amateur stations out of consideration." Description This new, more-refined method for generating continuous-wave radio signals was initially developed by Danish inventor Valdemar Poulsen. The spark-gap transmitters in use at that time produced damped waves, which wasted a large portion of their radiated power by transmitting strong harmonics on multiple frequencies that filled the RF spectrum with interference. Poulsen's arc converter produced undamped or continuous waves (CW) on a single frequency. There are three types of arc oscillator: Duddell arc (and other early types) In the first type of arc oscillator, the AC current in the condenser is much smaller than the DC supply current, and the arc is never extinguished during an output cycle. The Duddell arc is an example of the first type, but the first type is not practical for RF transmitters. 
Poulsen arc In the second type of arc oscillator, the condenser AC discharge current is large enough to extinguish the arc but not large enough to restart the arc in the opposite direction. This second type is the Poulsen arc. Quenched spark gap In the third type of arc oscillator, the arc extinguishes but may reignite when the condenser current reverses. The third case is a quenched spark gap and produces damped oscillations. Continuous or ‘undamped’ waves (CW) were an important feature, since the use of damped waves from spark-gap transmitters resulted in lower transmitter efficiency and communications effectiveness, while polluting the RF spectrum with interference. The Poulsen arc converter had a tuned circuit connected across the arc. The arc converter consisted of a chamber in which the arc burned in hydrogen gas between a carbon cathode and a water-cooled copper anode. Above and below this chamber there were two series field coils surrounding and energizing the two poles of the magnetic circuit. These poles projected into the chamber, one on each side of the arc to provide a magnetic field. It was most successful when operated in the frequency range of a few kilohertz to a few tens of kilohertz. The antenna tuning had to be selective enough to suppress the arc converter's harmonics. Keying Since the arc took some time to strike and operate in a stable fashion, normal on-off keying could not be used. Instead, a form of frequency-shift keying was employed. In this compensation-wave method, the arc operated continuously, and the key altered the frequency of the arc by one to five percent. The signal at the unwanted frequency was called the compensation-wave. In arc transmitters up to 70 kW, the key typically shorted out a few turns in the antenna coil. For larger arcs, the arc output would be transformer coupled to the antenna inductor, and the key would short out a few bottom turns of the grounded secondary. Therefore, the "mark" (key closed) was sent at one frequency, and the "space" (key open) at another frequency. If these frequencies were far enough apart, and the receiving station's receiver had adequate selectivity, the receiving station would hear standard CW when tuned to the "mark" frequency. The compensation wave method used a lot of spectrum bandwidth. It not only transmitted on the two intended frequencies, but also the harmonics of those frequencies. Arc converters are rich in harmonics. Sometime around 1921, the Preliminary International Communications Conference prohibited the compensation wave method because it caused too much interference. The need for the emission of signals at two different frequencies was eliminated by the development of uniwave methods. In one uniwave method, called the ignition method, keying would start and stop the arc. The arc chamber would have a striker rod that shorted out the two electrodes through a resistor and extinguished the arc. The key would energize an electromagnet that would move the striker and reignite the arc. For this method to work, the arc chamber had to be hot. The method was feasible for arc converters up to about 5 kW. The second uniwave method is the absorption method, and it involves two tuned circuits and a single-pole, double-throw, make-before-break key. When the key is down, the arc is connected to the tuned antenna coil and antenna. When the key is up, the arc is connected to a tuned dummy antenna called the back shunt. 
The back shunt was a second tuned circuit consisting of an inductor, a capacitor, and load resistor in series. This second circuit is tuned to roughly the same frequency as the transmitted frequency; it keeps the arc running, and it absorbs the transmitter power. The absorption method is apparently due to W. A. Eaton. The design of switching circuit for the absorption method is significant. It is switching a high voltage arc, so the switch's contacts must have some form of arc suppression. Eaton had the telegraph key drive electromagnets that operated a relay. That relay used four sets of switch contacts in series for each of the two paths (one to the antenna and one to the back shunt). Each relay contact was bridged by a resistor. Consequently, the switch was never completely open, but there was a lot of attenuation. See also History of radio Transmitter Mercury arc valve Tikker References . Revised to April 24, 1921. http://www.forgottenbooks.org . Elihu Thomson made singing arc before Duddell, p. 125. Further reading . History of radio in 1925. Page 25: "Professor Elihu Thomson, of America, applied for a patent on an arc method of producing high-frequency currents. His invention incorporated a magnetic blowout and other essential features of the arc of to-day, but the electrodes were of metal and not enclosed in a gas chamber." Cites to US Patent 500630. Pages 30–31 (1900): "William Du Bois Duddell, of London, applied for a patent on a static method of generating alternating currents from a direct-current supply, which method followed very closely upon the lines of that of Elihu Thomson of 1892. Duddell suggested electrodes of carbon, but he proposed no magnetic blow-out. He stated that his invention could be used for producing oscillations of high frequency and constant amplitude which could "be used with advantage in wireless telegraphy," especially where it was "required to tune the transmitter to syntony." Duddell's invention (Br. Pat. 21,629/00) became the basis for the Poulsen Arc, and also of an interesting transmitter evolved by Von Lepel." Page 31 (1903): "Valdemar Poulsen, of Copenhagen, successfully applied for a patent upon a generator, as disclosed by Duddell in 1900, plus magnetic blow-out proposed by Thomson in 1892, and a hydrogenous vapour in which to immerse the arc. (Br. Pate 15,599/03; U.S. Pat 789,449.)" Also Ch. IV, pp 75–77, "The Poulsen Arc". Refinements by C. F. Elwell. Cyril Frank Elwell - Pioneer of American and European Wireless Communications, Talking Pictures and founder of C.F. Elwell Limited, 1922-1925 by Ian L. Sanders. Published by Castle Ridge Press, 2013. (Details the development of the arc generator in the United States and Europe by Elwell.) External links http://oz6gh.byethost33.com/poulsenarc.htm, Modulation of the Poulsen arc, from the book Radio Telephony, 1918 by Alfred N. Goldsmith. https://web.archive.org/web/20120210081832/http://www.stenomuseet.dk/person/hb.ukref.htm, English summary of the Danish Ph.D. dissertation, The Arc Transmitter - a Comparative Study of the Invention, Development and Innovation of the Poulsen System in Denmark, England and the United States, by Hans Buhl, 1995 http://pe2bz.philpem.me.uk/Comm/-%20ELF-VLF/-%20Info/-%20History/PoulsenArcOscillator/poulsen1.htm https://www.gukit.ru/sites/default/files/ogpage_files/2017/09/Dugovoy_peredatchik.pdf - From the electric arc of Petrov to the radio broadcast of speech. 
History of radio technology Radio electronics Electric arcs Telecommunications-related introductions in 1902 Electric power conversion History of electronic engineering
Arc converter
[ "Physics", "Engineering" ]
2,343
[ "Electric arcs", "Physical phenomena", "Radio electronics", "Plasma phenomena", "Electronic engineering", "History of electronic engineering" ]
4,196,163
https://en.wikipedia.org/wiki/Fixation%20index
The fixation index (FST) is a measure of population differentiation due to genetic structure. It is frequently estimated from genetic polymorphism data, such as single-nucleotide polymorphisms (SNP) or microsatellites. Developed as a special case of Wright's F-statistics, it is one of the most commonly used statistics in population genetics. Its values range from 0 to 1, with 0 being no differentiation and 1 being complete differentiation. Interpretation This comparison of genetic variability within and between populations is frequently used in applied population genetics. The values range from 0 to 1. A zero value implies complete panmixia; that is, that the two populations are interbreeding freely. A value of one implies that all genetic variation is explained by the population structure, and that the two populations do not share any genetic diversity. For idealized models such as Wright's finite island model, FST can be used to estimate migration rates. Under that model, $F_{ST} \approx \frac{1}{1 + 4N(m + \mu)}$, where $N$ is the effective size of each subpopulation, $m$ is the migration rate per generation, and $\mu$ is the mutation rate per generation. The interpretation of FST can be difficult when the data analyzed are highly polymorphic. In this case, the probability of identity by descent is very low and FST can have an arbitrarily low upper bound, which might lead to misinterpretation of the data. Also, strictly speaking FST is not a distance in the mathematical sense, as it does not satisfy the triangle inequality. Definition Two of the most commonly used definitions for FST at a given locus are based on 1) the variance of allele frequencies among populations, and on 2) the probability of identity by descent. If $\bar{p}$ is the average frequency of an allele in the total population, $\sigma_S^2$ is the variance in the frequency of the allele among different subpopulations, weighted by the sizes of the subpopulations, and $\sigma_T^2$ is the variance of the allelic state in the total population, FST is defined as $F_{ST} = \frac{\sigma_S^2}{\sigma_T^2} = \frac{\sigma_S^2}{\bar{p}(1 - \bar{p})}$. Wright's definition illustrates that FST measures the amount of genetic variance that can be explained by population structure. This can also be thought of as the fraction of total diversity that is not a consequence of the average diversity within subpopulations, where diversity is measured by the probability that two randomly selected alleles are different, namely $2\bar{p}(1 - \bar{p})$. If the allele frequency in the $i$-th population is $p_i$ and the relative size of the $i$-th population is $c_i$, then $F_{ST} = \frac{\bar{p}(1 - \bar{p}) - \sum_i c_i\, p_i(1 - p_i)}{\bar{p}(1 - \bar{p})}$. Alternatively, $F_{ST} = \frac{f_0 - \bar{f}}{1 - \bar{f}}$, where $f_0$ is the probability of identity by descent of two individuals given that the two individuals are in the same subpopulation, and $\bar{f}$ is the probability that two individuals from the total population are identical by descent. Using this definition, FST can be interpreted as measuring how much closer two individuals from the same subpopulation are, compared to the total population. If the mutation rate is small, this interpretation can be made more explicit by linking the probability of identity by descent to coalescent times: let $T_0$ and $T$ denote the average time to coalescence for individuals from the same subpopulation and the total population, respectively. Then $F_{ST} \approx \frac{T - T_0}{T}$. This formulation has the advantage that the expected time to coalescence can easily be estimated from genetic data, which led to the development of various estimators for FST. Estimation In practice, none of the quantities used for the definitions can be easily measured. As a consequence, various estimators have been proposed. 
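Before turning to these estimators, the variance-based definition above can be made concrete with a small numerical sketch. The following Python snippet is illustrative only and is not taken from the article or from any particular software package; the allele frequencies and subpopulation sizes are invented example values.

```python
# Wright's variance-based F_ST for a single biallelic locus:
# F_ST = sigma_S^2 / (p_bar * (1 - p_bar)), with subpopulations weighted by size.

def wright_fst(freqs, sizes):
    """F_ST from subpopulation allele frequencies and (relative) sizes."""
    total = sum(sizes)
    weights = [s / total for s in sizes]                # c_i, relative sizes
    p_bar = sum(c * p for c, p in zip(weights, freqs))  # mean allele frequency
    sigma_s2 = sum(c * (p - p_bar) ** 2 for c, p in zip(weights, freqs))
    sigma_t2 = p_bar * (1 - p_bar)                      # variance of allelic state in the total population
    return sigma_s2 / sigma_t2

# Two equally sized subpopulations with allele frequencies 0.2 and 0.8:
print(wright_fst([0.2, 0.8], [100, 100]))   # 0.36 -> strong differentiation
# Identical frequencies imply no differentiation (panmixia):
print(wright_fst([0.5, 0.5], [100, 100]))   # 0.0
```

A locus fixed for different alleles in the two subpopulations (frequencies 0 and 1) would give a value of 1, matching the interpretation of complete differentiation.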
A particularly simple estimator applicable to DNA sequence data is $\hat{F}_{ST} = \frac{\pi_{\text{Between}} - \pi_{\text{Within}}}{\pi_{\text{Between}}}$, where $\pi_{\text{Between}}$ and $\pi_{\text{Within}}$ represent the average number of pairwise differences between two individuals sampled from different sub-populations ($\pi_{\text{Between}}$) or from the same sub-population ($\pi_{\text{Within}}$). The average pairwise difference within a population can be calculated as the sum of the pairwise differences divided by the number of pairs. However, this estimator is biased when sample sizes are small or if they vary between populations. Therefore, more elaborate methods are used to compute FST in practice. Two of the most widely used procedures are the estimator by Weir & Cockerham (1984), or performing an Analysis of molecular variance. A list of implementations is available at the end of this article. FST in humans FST values depend strongly on the choice of populations. Closely related ethnic groups, such as the Danes vs. the Dutch, or the Portuguese vs. the Spaniards, show values significantly below 1%, indistinguishable from panmixia. Within Europe, the most divergent ethnic groups have been found to have values of the order of 7% (Sámi vs. Sardinians). Larger values are found if highly divergent homogeneous groups are compared: the highest such value found was close to 46%, between Mbuti and Papuans. A genetic distance of 0.125 implies that kinship between unrelated individuals of the same ancestry relative to the world population is equivalent to kinship between half siblings in a randomly mating population. This also implies that if a human from a given ancestral population has a mixed half-sibling, that human is closer genetically to an unrelated individual of their ancestral population than to their mixed half-sibling. Genetic distances in human populations Autosomal genetic distances based on classical markers In their study The History and Geography of Human Genes (1994), Cavalli-Sforza, Menozzi and Piazza provide some of the most detailed and comprehensive estimates of genetic distances between human populations, within and across continents. Their initial database contains 76,676 gene frequencies (using 120 blood polymorphisms), corresponding to 6,633 samples in different locations. By culling and pooling such samples, they restrict their analysis to 491 populations. They focus on aboriginal populations that were at their present location at the end of the 15th century when the great European migrations began. When studying genetic difference at the world level, the number is reduced to 42 representative populations, aggregating subpopulations characterized by a high level of genetic similarity. For these 42 populations, Cavalli-Sforza and coauthors report bilateral distances computed from 120 alleles. Among this set of 42 world populations, the greatest genetic distance observed is between Mbuti Pygmies and Papua New Guineans, where the Fst distance is 0.4573, while the smallest genetic distance (0.0021) is between the Danish and the English. When considering more disaggregated data for 26 European populations, the smallest genetic distance (0.0009) is between the Dutch and the Danes, and the largest (0.0667) is between the Lapps and the Sardinians. The mean genetic distance among the 861 available pairings of the 42 selected populations was found to be 0.1338. 
The following table shows Fst calculated by Cavalli-Sforza (1994) for some populations: Autosomal genetic distances based on SNPs A 2012 study based on International HapMap Project data estimated FST between the three major "continental" populations of Europeans (combined from Utah residents of Northern and Western European ancestry from the CEPH collection and Italians from Tuscany), East Asians (combining Han Chinese from Beijing, Chinese from metropolitan Denver and Japanese from Tokyo, Japan) and Sub-Saharan Africans (combining Luhya of Webuye, Kenya, Maasai of Kinyawa, Kenya and Yoruba of Ibadan, Nigeria). It reported a value close to 12% between continental populations, and values close to panmixia (smaller than 1%) within continental populations. Autosomal genetic distances based on whole exome sequencing (WES) Pairwise Fst values among several populations based on whole exome sequencing (WES) in 2016: Programs for calculating FST Arlequin Fstat SMOGD diveRsity (R package) hierfstat (R package) FinePop (R package) Microsatellite Analyzer (MSA) VCFtools DnaSP Popoolation2 Modules for calculating FST BioPerl BioPython References Further reading Evolution and the Genetics of Populations Volume 2: the Theory of Gene Frequencies, pg 294–295, S. Wright, Univ. of Chicago Press, Chicago, 1969 A haplotype map of the human genome, The International HapMap Consortium, Nature 2005 See also Genetic distance F-statistics QST_(genetics) Coefficient of inbreeding Coefficient of relationship Hardy-Weinberg principle Wahlund effect External links BioPerl - Bio::PopGen::PopStats Population genetics Mathematical and theoretical biology
Fixation index
[ "Mathematics" ]
1,723
[ "Applied mathematics", "Mathematical and theoretical biology" ]
4,196,664
https://en.wikipedia.org/wiki/HackThisSite
HackThisSite.org (HTS) is an online hacking and security website founded by Jeremy Hammond. The site is maintained by members of the community after he left the organization. It aims to provide users with a way to learn and practice basic and advanced "hacking" skills through a series of challenges in a safe and legal environment. The organization has a user base of over a million, though the number of active members is believed to be much lower. The peak number of users online at the same time was 19,950, on February 5, 2018. HackThisSite involves a small, loose team of developers and moderators who maintain its website, IRC server, and related projects. It produces an e-zine which it releases at various hacker conventions and through its hackbloc portal. Hard copies of the magazine are published by Microcosm and Quimby's. It also has a short news/blog section run by developers. IRC and forums HackThisSite is known for its IRC network, where many users converse on a plethora of topics ranging from current events to technical issues with programming and Unix-based operating systems. Mostly, the HackThisSite IRC network serves as a social gathering of like-minded people to discuss anything. Although there are many channels on the IRC network, the main channel, #hackthissite, has a +R flag which requires users to register their nick (username) before they may join the channel. This requirement helps reduce botnets in the main channel, because they would have to register every nick. Following the split from its former sister site CriticalSecurity.Net, HackThisSite retained one main set of forums. The Hackbloc forums also had many HackThisSite users involved, but they were taken down. Before the split, the CriticalSecurity.net forums had most HTS discussion, specifically related to help with the challenges on the site as well as basic hacking questions. The Hackbloc forums were more for focused hacktivist discussion as well as a place for people to discuss news and plan future projects. Many people criticize the forums as being too beginner-focused compared to IRC, most likely because many new users visit the forums to ask for help with the challenges. HackThisSite is taking steps to try to attract more qualified users to its forums. Members contribute original texts to the articles area of the site. This area is broken down into different sections on a range of topics. Some of these sections include Ethics, HTS Challenge Tutorials, and Political Activism. The topics covered in these articles range widely in complexity. Topics range from walkthroughs for the missions provided by HackThisSite, to articles regarding advanced techniques in a plethora of programming languages. 
Successful authentication to the main challenge page will advance the user to the next challenge. These challenges are typically considered simple and are used as an introduction to hacking. There are sixteen Realistic Missions which attempt to mimic real, moderate to difficult hacking, in real life situations. Each mission is a complete web site featuring multiple pages and scripts. Users must successfully exploit one or more of the web sites pages to gain access to required data or to produce changes. Programming missions A Programming Challenges section also exists. This section currently consists of twelve challenges charging the user to write a program which will perform a specified function within a certain number of seconds after activation. These programming challenges range from simple missions such as parsing the contents, to reverse-engineering an encryption algorithm. These help users develop and practice on-the-go programming skills. Application missions The goal of application challenges is generally to extract a key from an application, which usually involves some form of reverse-engineering. Other challenges involve program manipulation. New missions More recently, HTS came out with logic challenges, which moo, HTS's official bot, proclaimed were "not meant as a challenge to overcome like the rest of HTS challenges." Instead, the logic challenges were meant to be overcome by the participant alone from solving. In April 2009, they were disabled and all points earned from logic challenges were removed. Reasons included concern that the answers could have been easily found elsewhere on the internet. Likewise, the "extended basic" missions are of recent creation. These are designed to be code review missions where partakers learn how to read code and search for flaws. A set of 10 easter eggs hidden around HTS were known as the "HTS missions." For example, one of these "missions" was the fake Admin Panel. Developers later decided to remove HTS easter eggs, as some allowed XSS and SQL exploits and many members submitted false bug reports as a result. Steganography missions Steganography missions are also available on the website. The goal in these missions is to extract the hidden message from the media file provided. There are 17 steganography missions available. Controversy There has been criticism that HackThisSite's self-description as a "hacker training ground" encourages people to break the law. Many people related to the site state that although some of the skills taught can be used for illegal activities, HackThisSite does not participate in or support such activities. Despite this, several individual members have been arrested and convicted for illegal activity (most notably Jeremy Hammond, founder of HackThisSite). phpBB/HowDark incident In November 2004 the (now defunct) HackThisSite-based HowDark Security Group notified the phpBB Group, makers of the phpBB bulletin software, of a serious vulnerability in the product. The vulnerability was kept under wraps while it was brought to the attention of the phpBB admins, who after reviewing, proceeded to downplay its risks. Unhappy with the Groups' failure to take action, HowDark then published the bug on the bugtraq mailing-list. Malicious users found and exploited the vulnerability which led to the takedown of several phpBB-based bulletin boards and websites. Only then did the admins take notice and release a fix. 
Slowness by end-users to patch the vulnerability led to an implementation of the exploit in the Perl/Santy worm, which defaced upwards of 40,000 websites and bulletin boards within a few hours of its release. Protest Warrior incident On March 17, 2005, Jeremy Hammond, the founder of HackThisSite, was arrested following an FBI investigation into an alleged hacking of the conservative political activist group Protest Warrior. His apartment was raided by the Chicago FBI, and all electronic equipment was seized. The federal government claimed that a select group of HackThisSite hackers gained access to the Protest Warrior user database, procured user credit-card information and conspired to run scripts that would automatically wire money to a slew of non-profit organizations. The plot was uncovered when a hacker, said to have been disgruntled with the progress of the activity, turned informant. Internal problems Administrators, developers, and moderators on HackThisSite are arranged in a democratic but highly anarchical fashion. This structure appears to work most of the time. When disputes arise, however, loyalties tend to become very confusing. HackThisSite has therefore had a long history of administrators, developers, and moderators turning darkside or severely impairing or completely taking down the site. In the last major attack to occur, several blackhat dissidents gained root-level access to the website and proceeded to "rm -rf" the entire site. As a result, HTS was down for months. See also Hacker (computer security) Hacktivism References External links Official Website Hacking (computer security) Computing websites
HackThisSite
[ "Technology" ]
1,715
[ "Computing websites" ]
4,196,982
https://en.wikipedia.org/wiki/Pyrylium
Pyrylium is a cation (positive ion) with formula C5H5O+, consisting of a six-membered ring of five carbon atoms, each with one hydrogen atom, and one positively charged oxygen atom. The bonds in the ring are conjugated as in benzene, giving it an aromatic character. In particular, because of the positive charge, the oxygen atom is trivalent. Pyrylium is a monocyclic, heterocyclic compound and one of the oxonium ions. Synthesis Pyrylium salts are easily produced from simple starting materials through a condensation reaction. Pyrylium salts with aromatic substituents, such as 2,4,6-triphenylpyrylium tetrafluoroborate, can be obtained from two moles of acetophenone, one mole of benzaldehyde, and excess tetrafluoroboric acid. For pyrylium salts with alkyl substituents, such as 2,4,6-trimethylpyrylium salts, the best method uses the Balaban-Nenitzescu-Praill synthesis from tertiary butanol and acetic anhydride in the presence of tetrafluoroboric, perchloric, or trifluoromethanesulfonic acids. Hydroxide bases open and hydrolyze pyridine to an enedione base that cyclizes in very strong acids to a pyrylium cation. Enolizing conditions (strong acid) force pyrones to their pyrylium tautomer. Chemical properties Pyrylium and its derivatives form stable salts with a variety of anions. Like other oxonium ions, pyrylium is unstable in neutral water. However, pyrylium is much less reactive than ordinary oxonium ions because of aromatic stabilization. The highly electronegative oxygen strongly perturbs the orbitals in the aromatic ring, and pyrylium derivatives are extremely resistant to electrophilic aromatic substitution. Pyrylium cations react with nucleophiles at the ortho and para positions, typically through ANRORC. 2,4,6-Triphenylpyrylium salts are converted by hydroxide bases into a stable 1,5-enedione (pseudobase), but 2,4,6-trimethylpyrylium salts on treatment with hot alkali hydroxides afford an unstable pseudobase that undergoes an intramolecular condensation yielding 3,5-dimethylphenol. In warm deuterium oxide, 2,4,6-trimethylpyrylium salts undergo isotopic exchange of the 4-methyl hydrogens faster than of the 2- and 6-methyl groups, allowing the synthesis of regioselectively deuterated compounds. Derivatives The electrophilicity of pyrylium salts makes them useful materials for producing other compounds with stronger aromatic character. Pyrylium salts afford pyridines with ammonia, pyridinium salts with primary amines, pyridine-N-oxides with hydroxylamine, phosphabenzenes with phosphine derivatives, thiopyrylium salts with hydrogen sulfide, and benzene derivatives with acetonitrile or nitromethane. Many important cations are formally derived from pyrylium by substitution of various functional groups for some or all of the hydrogens in the ring. 2,4,6-Triphenylpyrylium reacts with primary amines to give pyridinium derivatives called "Katritzky salts"; they are commonly used in metal-catalyzed nucleophilic displacement of the amine. Pyrones A pyrylium cation with a hydroxyl anion substituent in the 2-position is not the zwitterionic aromatic compound (1), but the neutral unsaturated lactone 2-pyrone or pyran-2-one (2). Important representatives of this class are the coumarins. Likewise a 4-hydroxyl pyrylium compound is a γ-pyrone or pyran-4-one (4), to which group belong compounds such as maltol. 
2-Pyrones are known to react with alkynes in a Diels–Alder reaction to form arene compounds with expulsion of carbon dioxide. Polycyclic oxonium arenes Chromenylium ion One bicyclic pyrylium ion is called the benzopyrylium ion (IUPAC: chromenylium ion; formula: C9H7O+, molar mass: 131.15 g/mol, exact mass: 131.04968983). It can be seen as a charged derivative of 2H-1-benzopyran (IUPAC: 2H-chromene, C9H8O), or a (charged) substituted heterocyclic derivative of naphthalene (C10H8). In biology, the 2-phenylbenzopyrylium (2-phenylchromenylium) ion is referred to as flavylium. A class of flavylium-derived compounds are the anthocyanidins and anthocyanins, pigments that are responsible for the colors of many flowers. Naphthoxanthenium cation Higher polycyclic derivatives of pyrylium also exist. One good example is naphthoxanthenium. This dye is highly stable, aromatic, and planar. It absorbs in the UV and blue region and presents exceptional photophysical properties. It can be synthesized by chemical or photochemical reactions. See also 6-membered aromatic rings with one carbon replaced by another group: borabenzene, silabenzene, germabenzene, stannabenzene, pyridine, phosphorine, arsabenzene, stibabenzene, bismabenzene, thiopyrylium, selenopyrylium, telluropyrylium Pyran (pyrones lacking the ketone group) References Oxygen heterocycles Six-membered rings Cations Oxonium compounds
Pyrylium
[ "Physics", "Chemistry" ]
1,316
[ "Cations", "Ions", "Matter" ]
4,197,478
https://en.wikipedia.org/wiki/Greenville%20Memorial%20Auditorium
Greenville Memorial Auditorium was a 7,500-seat multi-purpose arena built in 1958 that was located in Greenville, South Carolina. It hosted local sporting events, concerts and the Ringling Brothers Circus until the Bon Secours Wellness Arena opened in 1998. It hosted professional wrestling throughout its history, especially in the 1970s and 1980s, with NWA Jim Crockett Promotions cards held every Monday night. It hosted the Southern Conference men's basketball tournaments in 1972, 1975, and 1976. Lynyrd Skynyrd performed there on October 19, 1977, the last concert played by the original band prior to its fatal plane crash that took three of its members the next day en route to Baton Rouge, Louisiana. The arena was imploded on September 20, 1997. References Defunct college basketball venues in the United States Indoor arenas in South Carolina Furman Paladins basketball Monuments and memorials in South Carolina Sports venues completed in 1958 Sports venues demolished in 1997 Sports venues in Greenville, South Carolina Demolished sports venues in the United States 1958 establishments in South Carolina 1997 disestablishments in South Carolina Buildings and structures demolished by controlled implosion
Greenville Memorial Auditorium
[ "Engineering" ]
230
[ "Buildings and structures demolished by controlled implosion", "Architecture" ]
4,197,576
https://en.wikipedia.org/wiki/Mos%20Teutonicus
(Latin for "German custom") was a postmortem funerary custom used in Europe in the Middle Ages as a means of transporting, and solemnly disposing of, the bodies of high-status individuals. Nobles would often undergo Mos Teutonicus since their burial plots were often located far away from their place of death. The process involved the removal of the flesh from the body, so that the bones of the deceased could be transported hygienically from distant lands back home. Background During the Middle Ages, nobles often died far away from where they had wanted to be buried. They often wanted their hearts to be buried at their homes, thus their bodies had to travel far distances. King Charlemagne outlawed cremation, deeming destruction of the bones as destruction of the soul. Anyone who cremated a person's bones was subject to the death penalty. Thus, the practice of Mos Teutonicus came about as a way to preserve the bones over long distances without destroying them. Mos Teutonicus can even be seen being practiced in the 10th and 11th centuries during the rule of the Holy Roman Empire. Examples of this include rulers from the Ottone and Salian dynasties in which the rulers were transported to burial locations far from their place of death. During the Second Crusade for the Holy Land it was not thought fit for aristocrats who fell in battle, or died of natural causes, to be buried away from their homeland in Muslim territory. The transportation of the whole body over long distances was impractical and unhygienic due to decomposition. Mos Teutonicus was especially important in warmer climates, such as around the Mediterranean Sea, since the body was subject to faster decay. German aristocrats were particularly concerned that burial should not take place in the Holy Land, but rather on home soil. The Florentine chronicler Boncompagno was the first to connect the procedure specifically with German aristocrats, and coins the phrase , meaning 'the Germanic custom'. English and French aristocrats generally preferred embalming to , involving the burial of the entrails and heart in a separate location from the corpse. One of the advantages of was that it was relatively economical in comparison with embalming, and was more hygienic. Corpse preservation was very popular in medieval society. The decaying body was seen as representative of something sinful and evil. Embalming and , along with tomb effigies, were a way of giving the corpse an illusion of stasis and removed the uneasy image of putrefaction and decay. In 1270, the body of King Louis IX, who died in Tunis, which was Muslim territory, was subject to the process of for its transportation back to France. Process The process of Mos Teutonicus began with the cadaver being dismembered to facilitate the next stage in the process, in which the body parts were boiled in water, wine, milk, or vinegar for several hours. The boiling had the effect of separating the flesh from the bone. The heart and intestines needed to be removed in order to allow for proper transfer of the bones. Any residual was scraped from the bones, leaving a completely clean skeleton. Both the flesh and internal organs could be buried immediately, or preserved with salt in the same manner as animal meat. The bones could then be sprinkled with perfumes or fragrances. The bones, and any preserved flesh, would then be transported back to the deceased's home for ceremonial interment. 
Medieval society generally regarded entrails as ignoble and there was no great solemnity attached to their disposal, especially among German aristocrats. Prohibition of the practice Although the Church had a high regard for the practice, Pope Boniface VIII was known to have an especial repugnance for Mos Teutonicus because of his ideal of bodily integrity. In his bull of 1300, De Sepulturis, Boniface forbade the practice. The bull was often misinterpreted as a prohibition against human dissection. This may have hindered anatomical research, if anatomists feared repercussions and punishment as a result of medical autopsies, but De Sepulturis only prohibited the act of Mos Teutonicus, not dissection in general (medieval physicians were known to have widely practiced dissection and autopsy, though most had an assistant perform the actual incisions and manipulations of cadavers). The practice of Mos Teutonicus eventually stopped in the 15th century. Bio-archeological effects The process of Mos Teutonicus often did not produce clean cuts during de-fleshing. As a result, the bones of nobles whose bodies underwent Mos Teutonicus often show cut marks from the de-fleshing process. Mos Teutonicus also preserved the bones of higher-class individuals better than those of lower-class individuals. Because the bones were not exposed to the outside elements, there is little evidence of chew marks from animals. In addition, the bones were not subject to flesh decay and were boiled in either water or wine, preventing further degradation. Thus, the bones of higher-class individuals were better preserved than those of lower-class individuals. See also Excarnation Notes References Further reading Crusades Death customs Archaeology of death Ritual Traditions Commemoration Cultural aspects of death 15th-century disestablishments in Europe Medieval culture 12th-century quotations
Mos Teutonicus
[ "Biology" ]
1,135
[ "Behavior", "Human behavior", "Ritual" ]
4,197,595
https://en.wikipedia.org/wiki/Stannabenzene
Stannabenzene (C5H6Sn) is the parent representative of a group of organotin compounds that are related to benzene with a carbon atom replaced by a tin atom. Stannabenzene itself has been studied by computational chemistry, but has not been isolated. Stable derivatives of stannabenzene Stable derivatives of stannabenzene have been isolated. The 2-stannanaphthalene depicted below is stable in an inert atmosphere at temperatures below 140 °C. The tin to carbon bond in this compound is shielded from potential reactants by two very bulky groups, one tert-butyl group and the even larger 2,4,6-tris[bis(trimethylsilyl)methyl]phenyl or Tbt group. The two Sn-C bonds have bond lengths of 202.9 and 208.1 pm which are shorter than those for Sn-C single bonds (214 pm) and comparable to that of known Sn=C double bonds (201.6 pm). The C-C bonds show little variation with bond lengths between 135.6 and 144.3 pm signaling that this compound is aromatic. Tbt-substituted 9-stannaphenanthrene was reported in 2005. At room temperature it forms the [4+2] cycloadduct. Tbt-substituted stannabenzene was reported in 2010. At room-temperature it quantitatively forms the DA dimer. See also 6-membered aromatic rings with one carbon replaced by another group: borabenzene, silabenzene, germabenzene, stannabenzene, pyridine, phosphorine, arsabenzene, bismabenzene, pyrylium, thiopyrylium, selenopyrylium, telluropyrylium References Tin heterocycles Six-membered rings Hypothetical chemical compounds Tin(IV) compounds
Stannabenzene
[ "Chemistry" ]
412
[ "Theoretical chemistry", "Hypotheses in chemistry", "Hypothetical chemical compounds" ]
4,197,790
https://en.wikipedia.org/wiki/Germabenzene
Germabenzene (C5H6Ge) is the parent representative of a group of chemical compounds containing in their molecular structure a benzene ring with a carbon atom replaced by a germanium atom. Germabenzene itself has been studied theoretically, and synthesized with a bulky 2,4,6-tris[bis(trimethylsilyl)methyl]phenyl or Tbt group. Also, stable naphthalene derivatives do exist in the laboratory such as the 2-germanaphthalene-containing substance represented below. The germanium to carbon bond in this compound is shielded from potential reactants by a Tbt group. This compound is aromatic just as the other carbon group representatives silabenzene and stannabenzene. See also 6-membered aromatic rings with one carbon replaced by another group: borabenzene, silabenzene, germabenzene, stannabenzene, pyridine, phosphorine, arsabenzene, bismabenzene, pyrylium, thiopyrylium, selenopyrylium, telluropyrylium References Germanium heterocycles Germanium(IV) compounds Six-membered rings Hypothetical chemical compounds
Germabenzene
[ "Chemistry" ]
266
[ "Theoretical chemistry", "Hypotheses in chemistry", "Hypothetical chemical compounds" ]
16,180,789
https://en.wikipedia.org/wiki/Pinacidil
Pinacidil is a cyanoguanidine drug that opens ATP-sensitive potassium channels, producing peripheral vasodilatation of arterioles. It reduces blood pressure and peripheral resistance and produces fluid retention. Synthesis Condensation of 4-isothiocyanatopyridine [76105-84-5] (1) and 3,3-dimethyl-2-butanamine [3850-30-4] (2) gives the thiourea [67027-06-9] (3). Treatment of that intermediate with a mixture of triphenylphosphine, carbon tetrachloride, and triethylamine leads to the unsymmetrical carbodiimide, CID:20501933 (4). Addition of cyanamide affords pinacidil (5). References External links Potassium channel openers 4-Pyridyl compounds Cyanamides
Pinacidil
[ "Chemistry" ]
314
[ "Cyanamides", "Functional groups" ]
16,181,056
https://en.wikipedia.org/wiki/Parietin
Parietin is the predominant cortical pigment of lichens in the genus Caloplaca, a secondary product of the lichen Xanthoria parietina, and a pigment found in the roots of curled dock (Rumex crispus). It has an orange-yellow color and absorbs blue light. It is also known as physcion. It has also been shown to protect lichens against UV-B light, at high altitudes in alpine regions. The UV-B light stimulates production of parietin and the parietin protects the lichens from damage. Lichens in arctic regions such as Svalbard retain this capability though they do not encounter damaging levels of UV-B, a capability that could help protect the lichens in case of ozone layer thinning. It has also shown anti-fungal activity against barley powdery mildew and cucumber powdery mildew, more efficiently in the latter case than treatments with fenarimol and polyoxin B. It reacts with KOH to form a deep, reddish-magenta compound. Effect on human cancer cells Also found in rhubarb, the orange compound appears to have potential to suppress 6-phosphogluconate dehydrogenase, or 6PGD. 6PGD is the third enzyme of the pentose phosphate pathway, or PPP, an oxidative process fueling growth in a still-relatively-unknown way. But it appears that arresting the chemical machinery at its third step could be promising for oncology. The parietin, identified from an FDA database of 2,000 known suppressors of 6PGD, killed half the human leukemia cells over two days in the laboratory. The pigment also slowed the growth of other human cancer cells in mouse models, according to the study. A more-potent derivative of the parietin called S3 may even cut the growth of lung cancer cells implanted in mice by two-thirds, over the course of 11 days. The compound also appears to be non-toxic to healthy cells. References Further reading Caloplaca coralloides chemistry Anthraquinone dyes Antifungals Dihydroxyanthraquinones Phenol ethers Lichen products
Parietin
[ "Chemistry" ]
466
[ "Natural products", "Lichen products" ]
16,182,186
https://en.wikipedia.org/wiki/List%20of%20works%20designed%20with%20the%20golden%20ratio
Many works of art are claimed to have been designed using the golden ratio. However, many of these claims are disputed, or refuted by measurement. The golden ratio, an irrational number, is approximately 1.618; it is often denoted by the Greek letter φ (phi). Early history Various authors have claimed that early monuments have golden ratio proportions, often on conjectural interpretations, using approximate measurements, and only roughly corresponding to 1.618. For example, claims have been made about golden ratio proportions in Egyptian, Sumerian and Greek vases, Chinese pottery, Olmec sculptures, and Cretan and Mycenaean products from the late Bronze Age. These predate by some 1,000 years the Greek mathematicians first known to have studied the golden ratio. However, the historical sources are obscure, and the analyses are difficult to compare because they employ differing methods. It is claimed, for instance, that Stonehenge (3100 BC – 2200 BC) has golden ratio proportions between its concentric circles. Kimberly Elam proposes this relation as early evidence of human cognitive preference for the golden ratio. However, others point out that this interpretation of Stonehenge "may be doubtful" and that the geometric construction that generates it can only be surmised. As another example, Carlos Chanfón Olmos states that the sculpture of King Gudea (c. 2350 BC) has golden proportions between all of its secondary elements repeated many times at its base. The Great Pyramid of Giza (constructed c. 2570 BC by Hemiunu) exhibits the golden ratio according to various pyramidologists, including Charles Funck-Hellet. John F. Pile, interior design professor and historian, has claimed that Egyptian architects sought the golden proportions without mathematical techniques and that it is common to see the 1.618:1 ratio, along with many other simpler geometrical concepts, in their architectural details, art, and everyday objects found in tombs. In his opinion, "That the Egyptians knew of it and used it seems certain." From before the beginning of these theories, other historians and mathematicians have proposed alternative theories for the pyramid designs that are not related to any use of the golden ratio, and are instead based on purely rational slopes that only approximate the golden ratio. The Egyptians of those times apparently did not know the Pythagorean theorem; the only right triangle whose proportions they knew was the 3:4:5 triangle. Ancient and medieval architecture Greece The Acropolis of Athens (468–430 BC), including the Parthenon, according to some studies, has many proportions that approximate the golden ratio. Other scholars question whether the golden ratio was known to or used by Greek artists and architects as a principle of aesthetic proportion. Building the Acropolis is calculated to have been started around 600 BC, but the works said to exhibit the golden ratio proportions were created from 468 BC to 430 BC. The Parthenon (447–432 BC), was a temple of the Greek goddess Athena. The Parthenon's facade as well as elements of its facade and elsewhere are claimed to be circumscribed by a progression of golden rectangles. Some more recent studies dispute the view that the golden ratio was employed in the design. Hemenway claims that the Greek sculptor Phidias (c. 480–c. 430 BC) used the divine proportion in some of his sculptures. He created Athena Parthenos in Athens and Statue of Zeus (one of the Seven Wonders of the Ancient World) in the Temple of Zeus at Olympia. 
He is believed to have been in charge of other Parthenon sculptures, although they may have been executed by his disciple or peers. In the early 20th century, American mathematician Mark Barr proposed the Greek letter phi (φ), the first letter of Phidias's name, to denote the golden ratio. Lothar Haselberger claims that the Temple of Apollo in Didyma (c. 334 BC), designed by Daphnis of Mileto and Paionios of Ephesus, has golden proportions. It is claimed that the upper level of 21 rows and the lower level of 34 rows of the Ancient Theatre of Epidaurus form an approximation of the Golden number since 21 and 34 are successive Fibonacci numbers with their ratio at and a careful examination of the theatre's center reveals two back-to-back triangles balanced by the Golden number. Prehispanic Mesoamerican architecture Between 1950 and 1960, Manuel Amabilis applied some of the analysis methods of Frederik Macody Lund and Jay Hambidge in several designs of prehispanic buildings, such as El Toloc and La Iglesia de Las Monjas (the Nuns Church), a notable complex of Terminal Classic buildings constructed in the Puuc architectural style at Chichen Itza. According to his studies, their proportions are concretized from a series of polygons, circles and pentagrams inscribed, as Lund found in his studies of Gothic churches. Manuel Amabilis published his studies along with several self-explanatory images of other pre-columbian buildings made with golden ratio proportions in La Arquitectura Precolombina de Mexico. The work was awarded the gold medal and the title of Academico by the Real Academia de Bellas Artes de San Fernando (Spain) in the Fiesta de la Raza (Columbus Day) of 1929. The Castle of Chichen Itza was built by the Maya civilization between the 11th and 13th centuries AD as a temple to the god Kukulcan. John Pile claims that its interior layout has golden ratio proportions. He says that the interior walls are placed so that the outer spaces are related to the central chamber by the golden ratio. Islamic architecture The Great Mosque of Kairouan (built by Uqba ibn Nafi c. 670 C.E.) uses the golden ratio in the design including its plan, the prayer space, court, and minaret, but the ratio does not appear in the original parts of the mosque. Buddhist architecture The Stupa of Borobudur in Java, Indonesia (built eighth to ninth century AD), the largest known Buddhist stupa, has the dimension of the square base related to the diameter of the largest circular terrace as 1.618:1, according to Pile. Romanesque architecture The Romanesque style of architecture prevailed in Europe between 900 and 1200, a period which ends with the transition to Gothic architecture. The contrast between Romanesque and Gothic concepts in religious buildings can be understood in the epistolary between St. Bernard, Cistercian, and the Abbot Suger of the order of Cluny, the initiator of Gothic art in St. Denis. One of the most beautiful works of Romanesque Cistercian is the Sénanque Abbey in Provence. The Sénanque abbatial was founded in 1148 and consecrated in 1178. It was initiated in life of St Bernard of Clairvaux. "La Lumière à Sénanque" (The Light in Sénanque), a chapter of Cîteaux : commentarii cistercienses, a publication of the Cistercian Order. 
Its author, Kim Lloveras i Montserrat, made in 1992 a complete study of the abbatial, and argues that the abbatial church was designed using a system of measures founded in the golden ratio, and that the instruments used for its construction were the "Vescica" and the medieval squares used by the constructors, both designed with the golden ratio. The "Vescica" of Sénanque is located in the cloister of the monastery, in front of the Chapter, the site of the workshop. Gothic architecture In his 1919 book Ad Quadratum, Frederik Macody Lund, a historian who studied the geometry of several Gothic structures, claims that the Cathedral of Chartres (begun in the 12th century), the Notre-Dame of Laon (1157–1205), and the Notre-Dame de Paris (1160) are designed according to the golden ratio. Other scholars argue that until Luca Pacioli's 1509 De Divina Proportione (see next section), the golden ratio was unknown to artists and architects, although this is not likely the case since the ratio was explicitly defined by Euclid. A 2003 conference on medieval architecture resulted in the book Ad Quadratum: The Application of Geometry to Medieval Architecture. According to a summary by one reviewer: Most of the contributors consider that the setting out was done ad quadratum, using the sides of a square and its diagonal. This gave an incommensurate ratio of [square root of (2)] by striking a circular arc (which could easily be done with a rope rotating around a peg). Most also argued that setting out was done geometrically rather than arithmetically (with a measuring rod). Some considered that setting out also involved the use of equilateral or Pythagorean triangles, pentagons, and octagons. Two authors believe the Golden Section (or at least its approximation) was used, but its use in medieval times is not supported by most architectural historians. The Australian architectural historian John James made a detailed study of the Cathedral of Chartres. In his work The Master Masons of Chartres he says that Bronze, one of the master masons, used the golden ratio. It was the same relation as between the arms of their metal square: Bronze by comparison was an innovator, in practical rather than in philosophic things. Amongst other things Bronze was one of the few masters to use the fascinating ratio of the golden mean. For the builder, the most important function Fi, as we write the golden mean, is that if the uses is consistently he will find that every subdivision, no matter how accidentally it may have been derived, will fit somewhere into the series. Is not too difficult a ratio to reproduce, and Bronze could have had the two arms of his metal square cut to represent it. All he would than have had to do was to place the square on the stone and, using the string draw between the corners, relate any two lengths by Phi. Nothing like making life easy. Art Renaissance De divina proportione, written by Luca Pacioli in Milan in 1496–1498, published in Venice in 1509, features 60 drawings by Leonardo da Vinci, some of which illustrate the appearance of the golden ratio in geometric figures. Starting with part of the work of Leonardo da Vinci, this architectural treatise was a major influence on generations of artists and architects. Vitruvian Man, created by Leonardo da Vinci around the year 1492, is based on the theories of the man after which the drawing takes its name, Vitruvius, who in De Architectura: The Planning of Temples (c. 
I BC) pointed that the planning of temples depends on symmetry, which must be based on the perfect proportions of the human body. Some authors feel there is no actual evidence that Da Vinci used the golden ratio in Vitruvian Man; however, Olmos (1991) observes otherwise through geometrical analysis. He also proposes Leonardo da Vinci's self portrait, Michelangelo's David (1501–1504), Albrecht Dürer's Melencolia I and the classic violin design by the masters of Cremona (Guarneri, Stradivari and several members of the Amati family) as having similar regulator lines related to the golden ratio. Da Vinci's Mona Lisa (c. 1503–1506) "has been the subject of so many volumes of contradicting scholarly and popular speculations that it virtually impossible to reach any unambiguous conclusions" with respect to the golden ratio, according to Livio. The Tempietto chapel at the Monastery of Saint Peter in Montorio, Rome, built by Bramante, has relations to the golden ratio in its elevation and interior lines. Baroque José Villagrán García has claimed that the golden ratio is an important element in the design of the Mexico City Metropolitan Cathedral (circa 1667–1813). Olmos claims the same for the design of the cities of Coatepec (1579), Chicoaloapa (1579) and Huejutla (1580), as well as the Mérida Cathedral, the Acolman Temple, Christ Crucified by Diego Velázquez (1639) and The Immaculate Conception by Bartolomé Esteban Murillo. Neo-Impressionism Matila Ghyka and others contend that Georges Seurat used golden ratio proportions in paintings like Parade de cirque, Le Pont de Courbevoie, and Bathers at Asnières. However, there is no direct evidence to support these claims. While the golden ratio appears to govern the geometric structure of Seurat's Parade de cirque (Circus Sideshow), modern consensus among art historians is that Seurat never used this "divine proportion" in his work. The final study of Parade, executed prior to the oil on canvas, is divided horizontally into fourths and vertically into sixths (4 : 6 ratio) corresponding to the dimensions of the canvas, which is one and one-half times wider than its vertical dimension. These axes do not correspond precisely to the golden section, 1 : 1.6, as might have been expected. Rather, they correspond to basic mathematical divisions (simple ratios that appear to approximate the golden section), as noted by Seurat with citations from the mathematician, inventor, esthetician Charles Henry. Cubism The idea of the Section d'Or (or Groupe de Puteaux) originated in the course of conversations between Albert Gleizes, Jean Metzinger and Jacques Villon. The group's title was suggested by Villon, after reading a 1910 translation of Leonardo da Vinci's A Treatise on Painting by Joséphin Péladan. Péladan attached great mystical significance to the golden section (), and other similar geometric configurations. For Villon, this symbolized his belief in order and the significance of mathematical proportions, because it reflected patterns and relationships occurring in nature. Jean Metzinger and the Duchamp brothers were passionately interested in mathematics. Jean Metzinger, Juan Gris and possibly Marcel Duchamp at this time were associates of Maurice Princet, an amateur mathematician credited for introducing profound and rational scientific arguments into Cubist discussions. The name 'Section d'Or' represented simultaneously a continuity with past traditions and current trends in related fields, while leaving open future developments in the arts. 
Surrealism The Sacrament of the Last Supper (1955): The canvas of this surrealist masterpiece by Salvador Dalí is a golden rectangle. A huge dodecahedron, with edges in golden ratio to one another, is suspended above and behind Jesus and dominates the composition. De Stijl Some works in the Dutch artistic movement called De Stijl, or neoplasticism, exhibit golden ratio proportions. Piet Mondrian used the golden section extensively in his neoplasticist, geometrical paintings, created circa 1918–38. Mondrian sought proportion in his paintings by observation, knowledge and intuition, rather than geometrical or mathematical methods. Recent architecture Mies van der Rohe The Farnsworth House, designed by Ludwig Mies van der Rohe, has been described as "the proportions, within the glass walls, approach 1:2" and "with a width to length ratio of 1:1.75 (nearly the golden section)" and has been studied with his other works in relation to the golden ratio. Le Corbusier The Swiss architect Le Corbusier, famous for his contributions to the modern international style, centered his design philosophy on systems of harmony and proportion. Le Corbusier's faith in the mathematical order of the universe was closely bound to the golden ratio and the Fibonacci number, which he described as "rhythms apparent to the eye and clear in their relations with one another. And these rhythms are at the very root of human activities. They resound in man by an organic inevitability, the same fine inevitability which causes the tracing out of the Golden Section by children, old men, savages and the learned." Le Corbusier explicitly used the golden ratio in his system for the scale of architectural proportion. He saw this system as a continuation of the long tradition of Vitruvius, Leonardo da Vinci's Vitruvian Man, the work of Leon Battista Alberti, and others who used the proportions of the human body to improve the appearance and function of architecture. In addition to the golden ratio, Le Corbusier based the system on human measurements, Fibonacci numbers, and the double unit. He took Leonardo's suggestion of the golden ratio in human proportions to an extreme: he sectioned his model human body's height at the navel with the two sections in golden ratio, then subdivided those sections in golden ratio at the knees and throat; he used these golden ratio proportions in the Modulor system. In The Modulor: A Harmonious Measure to the Human Scale, Universally Applicable to Architecture and Mechanics Le Corbusier reveals he used his system in the Marseilles Unité d'habitation (in the general plan and section, the front elevation, plan and section of the apartment, in the woodwork, the wall, the roof and some prefabricated furniture), a small office in 35 rue de Sèvres, a factory in Saint-Die and the United Nations Headquarters building in New York City. Many authors claim that the shape of the facade of the second is the result of three golden rectangles; however, each of the three rectangles that can actually be appreciated have different heights. Josep Lluís Sert Catalan architect Josep Lluis Sert, a disciple of Le Corbusier, applied the measures of the Modulor in all his particular works, including the Sert's House in Cambridge and the Joan Miró Foundation in Barcelona. Neo-Gothic According to the official tourism page of Buenos Aires, Argentina, the ground floor of the Palacio Barolo (1923), designed by Italian architect Mario Palanti, is built according to the golden ratio. 
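The link between the golden ratio and the Fibonacci numbers that recurs in several of the claims above (the Epidaurus seating rows, Le Corbusier's Modulor) can be illustrated with a short numerical sketch. The Python snippet below is not drawn from any of the cited sources; the 183 cm stature used for the Modulor-style subdivision is an assumed example value.

```python
# The golden ratio phi and its relation to successive Fibonacci numbers.
phi = (1 + 5 ** 0.5) / 2             # ~1.6180339887

fib = [1, 1]
while len(fib) < 12:
    fib.append(fib[-1] + fib[-2])
for a, b in zip(fib, fib[1:]):
    print(f"{b}/{a} = {b / a:.6f}")  # e.g. 34/21 = 1.619048, already close to phi

# A golden-section subdivision of a height, in the spirit of the Modulor
# (183 cm is an assumed stature, not a figure from the article):
height = 183.0
navel = height / phi                 # ~113.1 cm
print(navel, height - navel, navel / (height - navel))   # the two parts are again in ratio ~1.618
```

The same subdivision can be repeated on each part, which is how a descending series of golden-ratio measures is generated.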
Post-modern Another Swiss architect, Mario Botta, bases many of his designs on geometric figures. Several private houses he designed in Switzerland are composed of squares and circles, cubes and cylinders. In a house he designed in Origlio, the golden ratio is the proportion between the central section and the side sections of the house. Music Ernő Lendvai analyzes Béla Bartók's works as being based on two opposing systems, that of the golden ratio and the acoustic scale, though other music scholars reject that analysis. The musicologist Roy Howat has observed that the formal boundaries of Debussy's La mer correspond exactly to the golden section. Trezise finds the intrinsic evidence "remarkable", but cautions that no written or reported evidence suggests that Debussy consciously sought such proportions. Leonid Sabaneyev hypothesizes that the separate time intervals of the musical pieces connected by the "culmination event", as a rule, are in the ratio of the golden section. However, the author attributes this incidence to the instinct of the musicians: "All such events are timed by author's instinct to such points of the whole length that they divide temporary durations into separate parts being in the ratio of the golden section." Ron Knott exposes how the golden ratio is unintentionally present in several pieces of classical music: An article of American Scientist ("Did Mozart use the Golden mean?", March/April 1996), reports that John Putz found that there was considerable deviation from ratio section division in many of Mozart's sonatas and claimed that any proximity to this number can be explained by constraints of the sonata form itself. Derek Haylock claims that the opening motif of Ludwig van Beethoven's Symphony No. 5 in C minor, Op. 67 (c. 1804–08), occurs exactly at the golden mean point 0.618 in bar 372 of 601 and again at bar 228 which is the other golden section point (0.618034 from the end of the piece) but he has to use 601 bars to get these figures. This he does by ignoring the final 20 bars that occur after the final appearance of the motif and also ignoring bar 387. According to author Leon Harkleroad, "Some of the most misguided attempts to link music and mathematics have involved Fibonacci numbers and the related golden ratio." With few exceptions, numerators for the meter signatures (over 100) in Karlheinz Stockhausen's Klavierstück IX are either Fibonacci or Lucas numbers. References Bibliography External links Nexux Network Journal – Architecture and Mathematics Online. Kim Williams Books Mathematical artworks Golden ratio Golden ratio
List of works designed with the golden ratio
[ "Mathematics" ]
4,317
[ "Golden ratio" ]
16,182,677
https://en.wikipedia.org/wiki/Side-stick
A side-stick or sidestick controller is an aircraft control stick that is located on the side console of the pilot, usually on the right-hand side, or outboard on a two-seat flight deck. Typically this is found in aircraft that are equipped with fly-by-wire control systems. The throttle controls are typically located to the left of a single pilot or centrally on a two-seat flight deck. Only one hand is required to operate them; two-handed operation is neither possible nor necessary. Prevalence The side-stick is used in many modern military fighter aircraft, such as the F-16 Fighting Falcon, Mitsubishi F-2, Dassault Rafale, F-22 Raptor, F-35 Lightning II, Chengdu J-20 and AIDC F-CK-1 Ching-kuo, and also on civil aircraft, such as the Sukhoi Superjet 100, Airbus A320 and all subsequent Airbus aircraft, including the largest passenger jet in service, the Airbus A380. It is also used in new helicopter models such as the Bell 525. Compared to centre sticks A side-stick arrangement contrasts with the more conventional design where the stick is located in the centre of the cockpit between the pilot's legs, called a "centre stick". A side-stick arrangement allows HOTAS and increases ejection seat safety for the pilot as there is less interference amongst flight controls. Handling of dual input situations In Airbus' implementation, input values of both side-sticks are normally added up, except when the "priority takeover button" is held down. In such a scenario, any inputs on the other side-stick will be ignored. Holding this button down for a minimum of 40 seconds will result in the other side-stick being disabled. This can be reversed by pressing the button on either side-stick again. A green light will activate on the side of the pilot currently in control. In contrast, on the side of the other pilot, a red light will turn on to indicate that their side-stick's inputs are being ignored. While the inputs are added up, the sum is clamped to the maximum possible deflection of a single side-stick; but this still means that when both side-sticks are deflected 50% in the same direction, the resulting effective input will be that of a fully deflected side-stick, despite neither one being deflected over 50%. In addition, because the inputs are added up, any deflection of the other side-stick in the opposite direction will in effect be subtracted, resulting in the inputs partially cancelling each other out. In fact, if two inputs have opposite directions but equal magnitudes, the sum will be zero, and thus the flight control surfaces would remain in their current positions. In addition to the visual indications, detection of both side-sticks being deflected more than 2° from neutral without the priority takeover button being held down results in an aural "DUAL INPUT" warning being played every five seconds. Because this aural warning has the lowest priority, it will not be played if there are warnings with a higher priority, such as those from the EGPWS, which take precedence, posing a potential risk for pilots. Examples of this occurring include the 2009 crash of Air France Flight 447 (an Airbus A330 flying from Rio de Janeiro to Paris), the 2010 crash of Afriqiyah Airways Flight 771 (an Airbus A330 flying from Johannesburg to Tripoli) and the 2014 crash of Indonesia AirAsia Flight 8501 (an Airbus A320 flying from Surabaya to Singapore). 
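As an illustration of the summing behaviour described above, the following short Python sketch shows how two side-stick deflections could be combined: algebraic summation, clamping to the range of a single stick, and a priority takeover that suppresses the other pilot's input. The function names, the -1.0 to 1.0 deflection scale and the degree conversion are assumptions made for the example; this is not Airbus's actual control law.

def combined_input(capt, fo, capt_priority=False, fo_priority=False, limit=1.0):
    # capt and fo are deflections on one axis, scaled so that -1.0 .. 1.0
    # represents full deflection of a single side-stick (assumed scale)
    if capt_priority:
        fo = 0.0          # the other side-stick's input is ignored
    elif fo_priority:
        capt = 0.0
    total = capt + fo     # inputs are summed algebraically
    return max(-limit, min(limit, total))   # clamped to one full deflection

def dual_input_warning(capt, fo, priority_held, threshold_deg=2.0, full_deflection_deg=20.0):
    # warning condition: both sticks deflected more than ~2 degrees from neutral
    # while no priority takeover button is held (scale factor is assumed)
    return (abs(capt) * full_deflection_deg > threshold_deg
            and abs(fo) * full_deflection_deg > threshold_deg
            and not priority_held)

print(combined_input(0.5, 0.5))                        # 1.0: two half inputs act as one full input
print(combined_input(0.6, -0.6))                       # 0.0: equal opposite inputs cancel out
print(combined_input(0.9, 0.4, capt_priority=True))    # 0.9: takeover suppresses the other stick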
Comparison of passive and active side-sticks Passive side-sticks In the centre-stick design, as with traditional airplane yokes, the controls of the pilot flying (PF) and the pilot not flying (PNF) are mechanically connected, so each pilot has a sense of the control inputs of the other. In aircraft with passive side-sticks, on the other hand, the two side-sticks move independently of each other and do not offer any haptic feedback on what the other pilot is inputting. This can lead to "dual input" situations, which should be avoided; how such situations are resolved is described above under Handling of dual input situations. Active side-sticks A later, significant development is the "active" side-stick, which is used in the new Gulfstream G500/G600 series business jet aircraft. In this system, movements in one side-stick produce the same movements in the other side-stick and therefore provide valuable feedback to the other pilot. This addresses the earlier criticisms of the "passive" side-stick. The "active" side-stick also provides tactile feedback to the pilot during manual flight. In fact, the three largest avionics manufacturers, Honeywell, Rockwell Collins and Thales, believe it will become the standard for all new fly-by-wire aircraft. In 2015 Ratier-Figeac, a subsidiary of UTC Aerospace Systems and supplier of "passive" side-sticks to Airbus since the 1980s, became the supplier of "active" side-sticks for the Irkut MC-21, the first airliner to use them. Such an active side-stick can also be used to increase adherence to a safe flight envelope by applying a force feedback when the pilot makes a control input that would bring the aircraft closer to (or beyond) the borders of the safe flight envelope. This reduces the risk of pilots entering dangerous flight states outside the operational limits while maintaining the pilots' final authority and increasing their situation awareness. A sketch of such an envelope-protection force law is given below, after the references. See also Centre stick Yoke (aeronautics) Fly-by-wire Dual control (aviation) Rudder pedals Accidents Air France Flight 447 Afriqiyah Airways Flight 771 Armavia Flight 967 Indonesia AirAsia Flight 8501 References External links Formation stick from Popular Science 1945. Design Aircraft controls
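The following minimal Python sketch illustrates one way the force-feedback envelope protection mentioned above could behave. The function, parameter values and linear ramp are assumptions made purely for illustration and do not represent any manufacturer's actual control law.

def resistive_force(value, soft_limit, hard_limit, max_force=40.0):
    # Opposing force (in newtons, assumed) for a monitored parameter such as
    # bank angle: no added force below soft_limit, then a linear ramp up to
    # max_force as the parameter approaches hard_limit
    if value <= soft_limit:
        return 0.0
    fraction = min((value - soft_limit) / (hard_limit - soft_limit), 1.0)
    return max_force * fraction

# Example: hypothetical bank-angle protection starting at 33 degrees,
# reaching full resistance at 67 degrees
for bank_angle in (20.0, 45.0, 67.0):
    print(bank_angle, resistive_force(bank_angle, soft_limit=33.0, hard_limit=67.0))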
Side-stick
[ "Engineering" ]
1,204
[ "Design" ]
16,183,619
https://en.wikipedia.org/wiki/Donkey%20sentence
In semantics, a donkey sentence is a sentence containing a pronoun which is semantically bound but syntactically free. They are a classic puzzle in formal semantics and philosophy of language because they are fully grammatical and yet defy straightforward attempts to generate their formal language equivalents. In order to explain how speakers are able to understand them, semanticists have proposed a variety of formalisms including systems of dynamic semantics such as Discourse representation theory. Their name comes from the example sentence "Every farmer who owns a donkey beats it", in which "it" acts as a donkey pronoun because it is semantically but not syntactically bound by the indefinite noun phrase "a donkey". The phenomenon is known as donkey anaphora. Examples The following sentences are examples of donkey sentences. ("Every man who owns a donkey sees it") — Walter Burley (1328). Every farmer who owns a donkey beats it. If a farmer owns a donkey, he beats it. Every police officer who arrested a murderer insulted him. Analysis of donkey sentences The goal of formal semantics is to show how sentences of a natural language such as English could be translated into a formal logical language, and so would then be amenable to mathematical analysis. Following Russell, it is typical to translate indefinite noun phrases using an existential quantifier, as in the following simple example from Burchardt et al.: "A woman smokes." is translated as ∃x(woman(x) ∧ smokes(x)). The prototypical donkey sentence, "Every farmer who owns a donkey beats it.", requires careful consideration for adequate description (though reading "each" in place of "every" does simplify the formal analysis). The donkey pronoun in this case is the word it. Correctly translating this sentence will require using a universal quantifier for the indefinite noun phrase "a donkey", rather than the expected existential quantifier. The naive first attempt at translation given below is not a well-formed sentence, since the variable y is left free in the predicate beats(x, y): ∀x((farmer(x) ∧ ∃y(donkey(y) ∧ owns(x, y))) → beats(x, y)). It may be attempted to extend the scope of the existential quantifier to bind the free instance of y, giving ∀x∃y((farmer(x) ∧ donkey(y) ∧ owns(x, y)) → beats(x, y)), but it still does not give a correct translation. This translation is incorrect since it is already true if there exists any object that is not a donkey: Given any object to be substituted for x, substituting any non-donkey object for y makes the material conditional true (since its antecedent is false), and so the existential clause is true for every choice of x. A correct translation into first-order logic for the donkey sentence seems to be ∀x∀y((farmer(x) ∧ donkey(y) ∧ owns(x, y)) → beats(x, y)), indicating that indefinites must sometimes be interpreted as existential quantifiers, and other times as universal quantifiers. There is nothing wrong with donkey sentences: they are grammatically correct, they are well-formed and meaningful, and their syntax is regular. However, it is difficult to explain how donkey sentences produce their semantic results, and how those results generalize consistently with all other language use. If such an analysis were successful, it might allow a computer program to accurately translate natural language forms into logical form. It is unknown how natural language users agree – apparently effortlessly – on the meaning of sentences such as the examples. There may be several equivalent ways of describing this process. 
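To make the argument above concrete, the following small Python sketch (an illustration assumed for this article, not part of the formal-semantics literature) evaluates the wide-scope existential translation and the double-universal translation over a toy model containing one farmer who owns a donkey but does not beat it, plus one non-donkey object. The existential version comes out true only because the non-donkey object vacuously satisfies the conditional, while the double-universal version correctly comes out false.

domain = {"farmer1", "donkey1", "rock"}
farmer = {"farmer1"}
donkey = {"donkey1"}
owns = {("farmer1", "donkey1")}
beats = set()   # the farmer does not beat the donkey he owns

def antecedent(x, y):
    return x in farmer and y in donkey and (x, y) in owns

def implies(p, q):
    return (not p) or q

# forall x exists y ((farmer(x) and donkey(y) and owns(x, y)) -> beats(x, y))
wide_scope_existential = all(
    any(implies(antecedent(x, y), (x, y) in beats) for y in domain)
    for x in domain)

# forall x forall y ((farmer(x) and donkey(y) and owns(x, y)) -> beats(x, y))
double_universal = all(
    implies(antecedent(x, y), (x, y) in beats)
    for x in domain for y in domain)

print(wide_scope_existential)   # True  (vacuously, via y = "rock")
print(double_universal)         # False (the intuitively correct verdict)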
In fact, Hans Kamp (1981) and Irene Heim (1982) independently proposed very similar accounts in different terminology, which they called discourse representation theory (DRT) and file change semantics (FCS), respectively. Theories of donkey anaphora It is usual to distinguish two main kinds of theories about the semantics of donkey pronouns. The most classical proposals fall within the so-called description-theoretic approach, a label that is meant to encompass all the theories that treat the semantics of these pronouns as akin to, or derivative from, the semantics of definite descriptions. The second main family of proposals goes by the name dynamic theories, and they model donkey anaphora – and anaphora in general – on the assumption that the meaning of a sentence lies in its potential to change the context (understood as the information shared by the participants in a conversation). Description-theoretic approaches Description-theoretic approaches are theories of donkey pronouns in which definite descriptions play an important role. They were pioneered by Gareth Evans's E-type approach, which holds that donkey pronouns can be understood as referring terms whose reference is fixed by description. For example, in "Every farmer who owns a donkey beats it.", the donkey pronoun "it" can be expanded as a definite description to yield "Every farmer who owns a donkey beats the donkey he/she owns." This expanded sentence can be interpreted along the lines of Russell's theory of descriptions. Later authors have attributed an even larger role to definite descriptions, to the point of arguing that donkey pronouns have the semantics, and even the syntax, of definite descriptions. Approaches of the latter kind are usually called D-type. Discourse representation theory Donkey sentences became a major force in advancing semantic research in the 1980s, with the introduction of discourse representation theory (DRT). During that time, an effort was made to settle the inconsistencies which arose from the attempts to translate donkey sentences into first-order logic. The solution that DRT provides for the donkey sentence problem can be roughly outlined as follows: The common semantic function of non-anaphoric noun phrases is the introduction of a new discourse referent, which is in turn available for the binding of anaphoric expressions. No quantifiers are introduced into the representation, thus overcoming the scope problem that the logical translations had. Dynamic Predicate Logic Dynamic Predicate Logic models pronouns as first-order logic variables, but allows quantifiers in a formula to bind variables in other formulae. History Walter Burley, a medieval scholastic philosopher, introduced donkey sentences in the context of the theory of supposition theory, the medieval equivalent of reference theory. Peter Geach reintroduced donkey sentences as a counterexample to Richard Montague's proposal for a generalized formal representation of quantification in natural language. His example was reused by David Lewis (1975), Gareth Evans (1977) and many others, and is still quoted in recent publications. See also References Further reading Abbott, Barbara. 'Donkey Demonstratives'. Natural Language Semantics 10 (2002): 285–298. Barker, Chris. 'Individuation and Quantification'. Linguistic Inquiry 30 (1999): 683–691. Barker, Chris. 'Presuppositions for Proportional Quantifiers'. Natural Language Semantics 4 (1996): 237–259. Brasoveanu, Adrian. Structured Nominal and Modal Reference. 
Rutgers University PhD dissertation, 2007. Brasoveanu, Adrian. 'Uniqueness Effects in Donkey Sentences and Correlatives'.Sinn und Bedeutung 12 (2007):1. Burgess, John P. ' E Pluribus Unum: Plural Logic and Set Theory', Philosophia Mathematica 12 (2004): 193–221. Cheng, Lisa LS and C.-T. James Huang. 'Two Types of Donkey Sentences'. Natural Language Semantics 4 (1996): 121–163. Cohen, Ariel. Think Generic! Stanford, California: CSLI Publications, 1999. Conway, L. and S. Crain. 'Donkey Anaphora in Child Grammar'. In Proceedings of the North East Linguistics Society (NELS) 25. University of Massachusetts Amherst, 1995. Evans, Gareth. 'Pronouns'. Linguistic Inquiry 11 (1980): 337–362. Geurts, Bart. Presuppositions and Pronouns. Oxford: Elsevier, 1999. Harman, Gilbert. 'Anaphoric Pronouns as Bound Variables: Syntax or Semantics?' Language 52 (1976): 78–81. Heim, Irene. 'E-Type Pronouns and Donkey Anaphora'. Linguistics and Philosophy 13 (1990): 137–177. Just, MA. 'Comprehending Quantified Sentences: The Relation between Sentencepicture and Semantic Memory Verification'. Cognitive Psychology 6 (1974): 216–236. Just, MA and PA Carpenter. 'Comprehension of Negation with Quantification'. Journal of Verbal Learning and Verbal Behavior 10 (1971): 244–253. Kadmon, N. Formal Pragmatics: Semantics, Pragmatics, Presupposition, and Focus. Oxford: Blackwell Publishers, 2001. Kamp, Hans and Reyle, U. From Discourse to Logic. Dordrecht: Kluwer, 1993. Kanazawa, Makoto. 'Singular Donkey Pronouns Are Semantically Singular'. Linguistics and Philosophy 24 (2001): 383–403. Kanazawa, Makoto. 'Weak vs. Strong Readings of Donkey Sentences and Monotonicity Inference in a Dynamic Setting'. Linguistics and Philosophy 17 (1994): 109–158. Krifka, Manfred. 'Pragmatic Strengthening in Plural Predications and Donkey Sentences'. In Proceedings from Semantics and Linguistic Theory (SALT) 6. Ithaca, New York: Cornell University, 1996. Pages 136–153. Lappin, Shalom. 'An Intensional Parametric Semantics for Vague Quantifiers'. Linguistics and Philosophy 23 (2000): 599–620. Lappin, Shalom and Nissim Francez. 'E-type Pronouns, i-Sums, and Donkey Anaphora'. Linguistics and Philosophy 17 (1994): 391–428. Lappin, Shalom. 'Donkey Pronouns Unbound'. Theoretical Linguistics 15 (1989): 263–286. Lewis, David. Parts of Classes, Oxford: Blackwell Publishing, 1991. Lewis, David. 'General Semantics'. Synthese 22 (1970): 18–27. Moltmann, Friederike. 'Unbound Anaphoric Pronouns: E-Type, Dynamic and Structured Propositions Approaches'. Synthese 153 (2006): 199–260. Moltmann, Friederike. 'Presuppositions and Quantifier Domains'. Synthese 149 (2006): 179–224. Montague, Richard. 'Universal Grammar'. Theoria 26 (1970): 373–398. Neale, Stephen. Descriptions. Cambridge: MIT Press, 1990. Neale, Stephen. 'Descriptive Pronouns and Donkey Anaphora'. Journal of Philosophy 87 (1990): 113–150. Partee, Barbara H. 'Opacity, Coreference, and Pronouns'. Synthese 21 (1970): 359–385. Quine, Willard Van Orman. Word and Object. Cambridge, Massachusetts: MIT Press, 1970. Rooij, Robert van. 'Free Choice Counterfactual Donkeys'. Journal of Semantics 23 (2006): 383–402. Yoon, Y-E. Weak and Strong Interpretations of Quantifiers and Definite NPs in English and Korean. University of Texas at Austin PhD dissertation, 1994. Notes External links The Handbook of Philosophical Logic Discourse Representation Theory Introduction to Discourse Representation Theory SEP Entry Archive of CSI 5386 Donkey Sentence Discussion Barker, Chris. 
'A Presuppositional Account of Proportional Ambiguity'. In Proceedings of Semantic and Linguistic Theory (SALT) 3. Ithaca, New York: Cornell University, 1993. Pages 1–18. Brasoveanu, Adrian. 'Donkey Pluralities: Plural Information States vs. Non-Atomic Individuals'. In Proceedings of Sinn und Bedeutung 11. Edited by E. Puig-Waldmüller. Barcelona: Pompeu Fabra University, 2007. Pages 106–120. Evans, Gareth. 'Pronouns, Quantifiers, and Relative Clauses (I)'. Canadian Journal of Philosophy 7 (1977): 467–536. Geurts, Bart. 'Donkey Business'. Linguistics and Philosophy 25 (2002): 129–156. Huang, C-T James. 'Logical Form'. Chapter 3 in Government and Binding Theory and the Minimalist Program: Principles and Parameters in Syntactic Theory edited by Gert Webelhuth. Oxford and Cambridge: Blackwell Publishing, 1995. Pages 127–177. Kamp, Hans. 'A Theory of Truth and Semantic Representation'. In J. Groenendijk and others (eds.). Formal Methods in the Study of Language. Amsterdam: Mathematics Center, 1981. Kitagawa, Yoshihisa. 'Copying Variables'. Chapter 2 in Functional Structure(s), Form and Interpretation: Perspectives from East Asian Languages. Edited by Yen-hui Audrey Li and others. Routledge, 2003. Pages 28–64. Lewis, David. 'Adverbs of Quantification'. In Formal Semantics of Natural Language. Edited by Edward L Keenan. Cambridge: Cambridge University Press, 1975. Pages 3–15. Montague, Richard. 'The Proper Treatment of Quantification in Ordinary English'. In KJJ Hintikka and others (eds). Proceedings of the 1970 Stanford Workshop on Grammar and Semantics. Dordrecht: Reidel, 1973. Pages 212–242. Pronouns Quantifier (logic) Semantics Formal semantics (natural language)
Donkey sentence
[ "Mathematics" ]
2,729
[ "Basic concepts in set theory", "Predicate logic", "Quantifier (logic)", "Mathematical logic" ]
16,184,041
https://en.wikipedia.org/wiki/Howard%20Wilson%20Emmons
Howard Wilson Emmons (1912–1998) was an American professor in the department of Mechanical Engineering at Harvard University. During his career he conducted original research on fluid mechanics, combustion and fire safety. Today he is most widely known for his pioneering work in the field of fire safety engineering. He has been called "the father of modern fire science" for his contribution to the understanding of flame propagation and fire dynamics. He also helped design the first supersonic wind tunnel, identified a signature of the transition to turbulence in boundary layer flows (now known as "Emmons spots"), and was the first to observe compressor stall in a gas turbine compressor (still a major item of research today). He initiated studies on diffusion flames inside a boundary layer, and Emmons problem is named after him. He was eventually awarded the Timoshenko Medal by the American Society of Mechanical Engineers and the 1968 Sir Alfred Egerton Gold Medal from The Combustion Institute. Upon Professor Emmons' death, Professor Patrick Pagni wrote, "It is not possible to properly summarize the magnitude of Professor Emmons' unique contributions to the establishment of fire safety science as a discipline, other than to call him "Mr. Fire Research". He continues to be remembered through the Emmons Lecture at International Symposium of The International Association for Fire Safety Science and the Howard W. Emmons Distinguished Scholar Endowment at Worcester Polytechnic Institute. Biography Born in Morristown, New Jersey on August 30, 1912. Bachelor of Engineering in mechanical engineering from Stevens Institute of Technology in 1933. Master of Engineering in mechanical engineering from Stevens Institute of Technology in 1935. Doctor of Science in mechanical engineering for Harvard University in 1938. Advisors were John Finnie Downie Smith and Charles Harold Berry. Worked briefly for Westinghouse and the University of Pennsylvania. Professor at Harvard from 1940 onwards. Notable student was Richard Ernest Kronauer, who later became an expert on human circadian rhythms. US National Academy of Engineering member in 1965. US National Academy of Sciences member in 1966. Wife Dorothy Children Beverly, Scott, and Keith Died November 20, 1998 Awards and honors American Physical Society Fellow, elected 1946 Honorary ScD from Stevens Institute of Technology, 1963 US National Academy of Engineering member, 1965 US National Academy of Sciences member, 1966 Egerton Gold Medal from the Combustion Institute, 1968 100th Anniversary Medal from Stevens Institute of Technology, 1970 Timoshenko Medal from ASME, 1971 Stevens Honor Award Medallion from Stevens Institute of Technology, 1970 Named Fire Protection Man of the Year by the Society of Fire Protection Engineers, 1982 Office of Naval Research Prize from the American Physical Society, 1982 Fluid Dynamics Prize (APS), 1982 Arthur B. Guise Medal by the Society of Fire Protection Engineers, 1986 Selected publications Sole Author The Drop Condensation of Vapors Harvard University Thesis (S.D.), 1938. Gas dynamics tables for air Dover: New York, NY, 1947. Fundamentals of Gas Dynamics Princeton University Press: Princeton NJ, 1958. Fluid mechanics and combustion Proceedings of the 13th International Symposium on Combustion, p. 1-18 Pittsburgh, Pa., Combustion Institute, 1971. 
“The Further History of Fire Science” Combustion Science and Technology, 40, 1984 (reprinted in Fire Technology, 21(3), 1985 ) Joint Thermodynamic properties of helium to 50.000K by Wilbert James Lick, Howard Wilson Emmons Harvard University Press: Cambridge, MA, 1962. Transport properties of helium from 200 to 50.000K by Wilbert James Lick, Howard Wilson Emmons Harvard University Press: Cambridge, MA, 1965. The fire whirl by Howard W. Emmons and Shuh-Jing Ying Proceedings of the 11th International Symposium on Combustion, p. 475-486 Pittsburgh, Pa., Combustion Institute, 1967. See also Howard W Emmons, Memorial Tributes: National Academy of Engineering, Volume 10 (2002) National Academy of Engineering TV show where Howard Emmons speaks of the 1980 MGM Las Vegas fire and of the fire code Harvard The Web of Mechanicians Howard W. Emmons Papers at WPI Notes References Howard W. Emmons, Authority on Fire Safety, Dies at 86, Harvard University Gazette (Dec 3 1998). Kronauer, Land, Stone, and Abernathy, Howard Wilson Emmons, Faculty of Arts and Sciences - Memorial Minute, Harvard University Gazette (March 1, 2007). Bryner, S.L., ed. "Symposium in Memory of Professor Howard Emmons", Fifteenth Meeting of the UJNR Panel on Fire Safety, Volume 2, March 2000. Land, R.I. and Trefethen, L.M. "A Tribute To Howard Wilson Emmons, 1912–1998", Journal of Fluids Engineering 121(2), p. 234-235 (June 1999). Beyler, Craig. "Guest Editorial: Professor Howard Emmons 1912-1998", Fire Technology 35(1), p. 1 (Feb 1999). External links 1912 births 1998 deaths People from Morristown, New Jersey Harvard John A. Paulson School of Engineering and Applied Sciences alumni Stevens Institute of Technology alumni University of Pennsylvania faculty 20th-century American physicists Members of the United States National Academy of Sciences Harvard University faculty Thermodynamicists Members of the United States National Academy of Engineering Fellows of the American Physical Society
Howard Wilson Emmons
[ "Physics", "Chemistry" ]
1,082
[ "Thermodynamics", "Thermodynamicists" ]
16,184,365
https://en.wikipedia.org/wiki/Kurt%20Sch%C3%BCtte
Kurt Schütte (14 October 1909 – 18 August 1998) was a German mathematician who worked on proof theory and ordinal analysis. The Feferman–Schütte ordinal, which he showed to be the precise ordinal bound for predicativity, is named after him. He was the doctoral advisor of 16 students, including Wolfgang Bibel, Wolfgang Maaß, Wolfram Pohlers, and Martin Wirsing. Publications Beweistheorie, Springer, Grundlehren der mathematischen Wissenschaften, 1960; new edition trans. into English as Proof Theory, Springer-Verlag 1977 Vollständige Systeme modaler und intuitionistischer Logik, Springer 1968 with Wilfried Buchholz: Proof Theory of Impredicative Subsystems of Analysis, Bibliopolis, Naples 1988 with Helmut Schwichtenberg: Mathematische Logik, in Fischer, Hirzebruch et al. (eds.) Ein Jahrhundert Mathematik 1890-1990, Vieweg 1990 References External links Kurt Schütte at the Mathematics Genealogy Project 1909 births 1998 deaths People from Salzwedel People from the Province of Saxony Mathematical logicians 20th-century German mathematicians
Kurt Schütte
[ "Mathematics" ]
264
[ "Proof theorists", "Mathematical logic", "Proof theory", "Mathematical logicians" ]
16,184,995
https://en.wikipedia.org/wiki/Nokia%201112
The Nokia 1112 is a low-end GSM mobile phone sold by Nokia. The 1112 was released in 2006. With graphical icons and large font sizes, the Nokia 1112 is an easy-to-use mobile phone aimed at first-time mobile phone users. As a dual-band device it operates on GSM-900/1800 or GSM-850/1900 networks. It has a 96 × 68 pixel monochrome display with white backlighting and an integrated handsfree speaker. The cell phone has built-in utilities, such as a calculator and a stopwatch, and it supports polyphonic ringtones. Besides other basic features like SMS and picture messaging, it has a speaking clock and alarm. Its internal memory is 4 MB in size, most of it reserved for the 30+ low-bitrate polyphonic melodies, and it can also hold up to 200 phonebook entries. The battery powers the phone for up to 5 hours of talk time, or up to 15 days if left in stand-by mode. See also List of Nokia products References External links Nokia 1112 official product page Nokia 1112 user guide 1112 Mobile phones introduced in 2006 Mobile phones with user-replaceable battery
Nokia 1112
[ "Technology" ]
244
[ "Mobile technology stubs", "Mobile phone stubs" ]
16,185,838
https://en.wikipedia.org/wiki/Opium%20Law
The Opium Law (Opiumwet in Dutch) is the section of Dutch law which covers nearly all psychotropic drugs. Origin and history In 1912, the First International Opium Conference took place in The Hague, where agreements were made about the trade in opium; this initiated the introduction of the Opium Law, which took place 7 years later. In 1919, the first Opium Law (later known as List I of the Opium Law) was introduced, and on 12 May 1928 the second Opium Law (later known as List II of the Opium Law) was introduced. The first Opium Law was created to regulate drugs with a high addiction or abuse factor, or that are physically harmful. As the name indicates, the main reason for its introduction was to regulate the opium trade and later to control various other addictive drugs like morphine, cocaine, heroin, barbiturates, amphetamines and, several decades later, benzodiazepines, which were used both medically and recreationally. Except for the addition of new drugs to List I and II of the Opium Law, the Opium Law remained unchanged until 1976. After the rise of a new youth culture which revolved much around the use of drugs like cannabis and LSD, and with hashish being openly used, a change of law was needed by the government to properly control all drugs, but with a clear distinction between drugs with an unacceptable degree of addictiveness or physical harm (known as hard drugs), and drugs with an acceptable degree of addictiveness or physical harm (known as soft drugs). In 1976 these changes officially took effect, and the Opium Law was edited to include the changes in the law. In the same year, a decision was made by the Dutch government to discontinue prosecuting cannabis and hashish offenses, provided the person did not sell hard drugs, did not advertise and carried less than a specified maximum amount of cannabis or hashish. In 1980, the decision to not prosecute cannabis and hashish dealers, under certain conditions, was publicly announced by the Dutch government. Many people thereby concluded that this decision would also allow the sale in coffee shops, and coffee shops began selling cannabis and hashish. This led to an enormous rise in the number of coffee shops in the 1980s and 1990s, and because of this, new regulations were demanded by the government to regulate the sale of cannabis products by coffee shops. In 1996 the laws were changed again to include new regulations for coffee shops. The terms coffee shops had to follow were: No advertisement No hard drugs No entrance to coffee shops by persons under the age of 18 No sale of more than 5 grams of cannabis products per person, per day Coffee shops are not allowed to have more than 500 grams of cannabis in stock at any time Since 01-01-2020, no new changes have been made to the Opium Law. Most of the changes in law since 1996 have been additions of new psychoactive substances; ADB-FUBINACA has been one of the latest on this list, being added on 01-01-2020. New guidelines for coffee shops have been made, but they are not covered by the Opium Law. 
List I drugs The following drugs and intermediates are classified as List I drugs of the Opium Law: acetorphine acetyl-alpha-methylfentanyl acetyldihydrocodeine acetylmethadol alphacetylmethadol alphameprodine alphamethadol alphamethylfentanyl alphamethylthiofentanyl alphaprodine alfentanil allylprodine amphetamine amineptine anileridine benzethidine benzylmorphine betacetylmethadol beta-hydroxy-3-methylfentanyl beta-hydroxyfentanyl betameprodine betamethadol betaprodine bezitramide bolkaf (all parts of the papaver somniferum plant, after harvesting, excluding seeds) brolamphetamine cathinone 2C-B (2,5-dimethoxy-4-bromophenethylamine) 2C-I (2,5-dimethoxy-4-iodophenethylamine) 2C-T-2 (2,5-dimethoxy-4-ethylthiophenethylamine) 2C-T-7 (2,5-dimethoxy-4-(n)-propylthiophenethylamine) clonitazene coca leaf (leaves of the plants of the species Erythroxylon) cocaine codeine codoxime concentrate of bolkaf (the material obtained by subjecting bolkaf to a treatment for the concentration of its alkaloids) desomorphine dexamphetamine dextromoramide dextropropoxyphene diampromide diethylthiambutene DET (N,N-diethyltryptamine) diphenoxide diphenoxylate dihydrocodeine dihydroetorphine dihydromorphine dimepheptanol dimenoxadol DMA (2,5-dimethoxyamphetamine) DOET (2,5-dimethoxy-4-ethylamphetamine) DOM (2,5-dimethoxy-4-methylamphetamine) dimethylthiambutene DMT (N,N-dimethyltryptamine) dioxaphetylbutyrate dipipanone DMHP (1,2-dimethylheptyl-delta-3-THC) drotebanol ecgonine (3-hydroxy-2-tropanecarbonic acid) MDEA (N-ethyl-3,4-methylenedioxyamphetamine) ethylmethylthiambutene ethylmorphine eticyclidine etonitazene etorphine etoxeridine etryptamine fentanyl fenethylline furethidine GHB (4-hydroxybutyric acid) hemp oil (concentrate of plants from the Cannabis species (hemp) obtained by extraction of hemp or hashish, if not mixed with oil) heroin (diamorphine) hydrocodone hydromorphinol hydromorphone MDOH (N-hydroxy-methylenedioxyamphetamine) hydroxypethidine isomethadone ketobemidone levamphetamine levophenacylmorphan levomethamphetamine levomethorphan levomoramide levorphanol lysergide mecloqualone mescaline (3,4,5-trimethoxyphenethylamine) methamphetamine methamphetamine racemate metazocine methadone methadone intermediate (4-cyano-2-dimethylamino-4,4-diphenylbutane) methaqualone methcathinone MMDA (2-methoxy-4,5-methylenedioxyamphetamine) 4-methylaminorex methyldesorphine methyldihydromorphine MDMA (3,4-methylenedioxymethamphetamine) methylphenidate 3-methylfentanyl MPPP (1-methyl-4-phenyl-4-piperidinol propionate ester) 4-MTA (4-methylthioamphetamine) 3-methylthiofentanyl metopon moramide intermediate (2-methyl-3-morpholino-1,1-diphenylpropane-carboxylic acid) morpheridine morphine morphine-methobromide morphine-N-oxide myrophine nicocodeine nicodicodine nicomorphine noracymethadol norcodeine norlevorphanol normethadone normorphine norpipanone opium (the harvested milk, obtained from the plant Papaver somniferum) oxycodone oxymorphone para-fluorofentanyl parahexyl PMA (para-methoxyamphetamine) PMMA (para-methoxymethamphetamine) PEPAP (1-fenethyl-4-fenyl-4-piperidinolacetate ester) pethidine pethidine Intermediate A (4-cyano-1-methyl-4-phenylpiperidine) pethidine Intermediate B (4-phenylpiperidine-4-carbonic acid ethylester) pethidine Intermediate C (1-methyl-4-phenylpiperidine-4-carbonic acid) phenadoxone phenampromide phenazocine phencyclidine phenmetrazine phenomorphan phenoperidine pholcodine piminodine piritramide proheptazine properidine propiram psilocine psilocybine racemethorphan racemoramide 
racemorphan remifentanil rolicyclidine secobarbital sufentanil temazepam† tenamphetamine tenocyclidine tetrahydrocannabinol thebacon thebaïne thiofentanyl tilidine TMA-2 (2,4,5-trimethoxyamphetamine) trimeperidine TMA (3,4,5-trimethoxyamfetamine) zipeprol The esters and derivatives of ecgonine, which can be turned into ecgonine and cocaine; The mono- and di-alkylamide, pyrrolidine and morpholine derivatives of lysergic acid, and the substances obtained from these by the introduction of methyl, acetyl or halogen groups; Pentavalent nitrogen-substituted morphine derivatives, including morphine-N-oxide derivatives such as codeine-N-oxide; The isomers and stereoisomers of tetrahydrocannabinol; The ethers, esters and enantiomers of the above mentioned substances, with the exception of dextromethorphan (INN) as enantiomer of levomethorphan and racemethorphan, and with the exception of dextrorphanol (INN) as enantiomer of levorphanol and racemorphan; Formulations which contain one or more of the above mentioned substances. †Formulations of 20 mg or more of temazepam are classed under List I. List II drugs The following drugs are classified as List II drugs of the Opium Law: allobarbital alprazolam amobarbital amfepramone aminorex barbital benzphetamine bromazepam brotizolam buprenorphine butalbital butobarbital camazepam cathine chlordiazepoxide clobazam clonazepam clorazepate clotiazepam cloxazolam cyclobarbital delorazepam diazepam estazolam ethchlorvynol ethinamate ethylloflazepate ethylamphetamine fencamfamine fenproporex fludiazepam flunitrazepam flurazepam glutethimide halazepam haloxazolam hashish (a usually solid mixture of the excreted resin obtained from plants of the Cannabis species (hemp), with plant materials of these plants) hemp (all parts of the plant from the Cannabis species (hemp), of which the resin has not been extracted, with the exception of the seeds) ketazolam lefetamine loprazolam lorazepam lormetazepam mazindole medazepam mefenorex meprobamate mesocarb methylphenobarbital methyprylon midazolam nimetazepam nitrazepam nordiazepam oxazepam oxazolam pemoline pentazocine pentobarbital phendimetrazine phenobarbital phentermine pinazepam pipradrol prazepam pyrovalerone secbutabarbital temazepam (formulations containing less than 20 mg) tetrazepam triazolam vinylbital zolpidem Formulations which contain one or more of the above mentioned substances, with the exception of hemp oil. Medical use Even though List I substances are officially classified as hard drugs, several of them are often prescribed by licensed doctors. For example, nearly all opioids are List I drugs, but they are commonly prescribed to cancer and HIV patients, as well as sufferers of chronic pain. Two stimulants which are prescribed for ADD/ADHD and narcolepsy, dexamphetamine and methylphenidate, are also List I drugs of the Opium Law. On the other hand, all barbiturates except for secobarbital are List II drugs, while none of them, except for phenobarbital, is prescribed today. In theory, a licensed doctor could prescribe any substance they think is needed for the correct treatment of their patient, both List I and List II substances of the Opium Law, though substances which aren't available as commercial pharmaceutical preparations have to be custom prepared by the designated pharmacy. 
All prescriptions for List I and some List II substances (amobarbital, buprenorphine, butalbital, cathine, cyclobarbital, flunitrazepam, temazepam, glutethimide, hemp, pentazocine and pentobarbital) of the Opium Law have to be written in full in letters, and have to contain the name and initials, address, city and telephone number of the licensed prescriber issuing the prescriptions, as well as the name and initials, address and city of the person the prescription is issued to. If the prescription is issued for an animal, the data of the owner should be used instead, and a description of the animal has to be included on the prescription. References External links Opiumwet on the official website of the Dutch government Dutch legislation Drug control law Drug policy of the Netherlands Health law in the Netherlands
Opium Law
[ "Chemistry" ]
3,045
[ "Drug control law", "Regulation of chemicals" ]
16,185,953
https://en.wikipedia.org/wiki/Promoter%20bashing
Promoter bashing is a technique used in molecular biology to identify how certain regions of a DNA strand, commonly promoters, affect the transcription of downstream genes. Under normal circumstances, proteins bind to the promoter and activate or repress transcription. In a promoter bashing assay, specific point mutations or deletions are made in specific regions of the promoter and the transcription of the gene is then measured. The contribution of a region of the promoter can be observed from the level of transcription. If a mutation or deletion changes the level of transcription, then that region of the promoter may be a binding site or other regulatory element. Promoter bashing is often done with deletions from either the 5' or 3' end of the DNA strand; this form of the assay is easier to perform because it is based on repeated restriction digestion and gel purification of fragments of specific sizes. It is often easiest to ligate the promoter into the reporter, generate a large amount of the reporter construct using PCR or growth in bacteria, and then perform serial restriction digests on this sample. The contribution of upstream promoter elements can easily be assayed by removing segments from the 5' end, and likewise that of downstream elements by removing segments from the 3' end of the strand. As the promoter commonly contains binding sequences for proteins affecting transcription, those proteins are also necessary when testing the effects of the promoter. Proteins which associate with the promoter can be identified using an electrophoretic mobility shift assay (EMSA), and the effects of inclusion or exclusion of the proteins with the mutagenized promoters can be assessed in the assay. This allows the use of promoter bashing to not only discover the location on the DNA strand which affects transcription, but also the proteins which affect that strand. The effects of protein interactions with each other as well as with the binding sites can also be assayed in this way; in that case, candidate proteins must be identified by protein-protein interaction assays rather than by an EMSA. Procedure This is an example procedure for a promoter bashing assay, adapted from Boulin et al.: Clone the region of DNA thought to act as a promoter. Cloning is necessary for the assay because it ensures that the promoter is the only factor affecting expression. This step often involves extraction of the DNA from the organism it resides in and PCR amplification. Sequence the region. DNA sequencing is necessary to identify differences in mutated promoters from the wild-type promoter, and to correlate those differences with differences in gene expression. Additionally, it helps with the restriction digest of the region. Digest with appropriate restriction endonucleases. The region can be digested to remove elements which are thought to not be part of the promoter. Additionally, the reporter gene must be inserted a set distance from the promoter for most promoters. In some methods of promoter bashing, multiple restriction digests are used to systematically remove elements of the promoter; this method makes it possible to determine whether the removed regions contribute to reporter expression. Mutagenize the promoter. Mutating the promoter is necessary if the method of removing part of the promoter with restriction digestion is not used. Many mutated strands can be generated, and the strands sequenced and the activities of the promoters assayed. This is often necessary because one mutation cannot be guaranteed to inactivate a binding site. 
Non-directed PCR-based mutagenesis can also be used; the parameters of the mutagenic PCR reaction can be adjusted to introduce a reasonable number of mutations. However, the random nature of PCR requires that more strands are assayed downstream of this step. Ligate to reporter gene. The promoters to be assayed must be ligated to a reporter gene so that gene expression levels can be measured. The reporter gene must be a sufficient distance from the promoter that the promoter affects it as a wild-type promoter would affect a gene. This can be verified with the positive control (full promoter). Transform cells of interest with the various promoter:reporter constructs. The promoter and reporter constructs must be ligated into a plasmid and transformed into cells in which that plasmid can be expressed to measure the activity of each promoter sequence. Proteins which affect the promoter must also be added to those cells—often those proteins are placed on the same or different plasmid under the regulation of a constitutively active promoter. Measure reporter-gene transcription rates. The gene products are assayed and the rates of reporter transcription are measured. From the data received from assaying the different promoters, the effects of various parts of the promoter can be ascertained. However, it is possible that there may not be enough data present and the assay must be re-run with a different promoter region and/or different mutations. See also Site-directed mutagenesis Restriction digest DNA footprinting References Molecular biology techniques Molecular genetics
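As an illustration of the final analysis step in the procedure above, the following short Python sketch normalises the reporter read-out of each deletion construct against the full-length promoter. The construct names and counts are entirely hypothetical; a construct whose relative activity drops sharply points to a deleted region that contributes to promoter activity, such as a protein binding site.

raw_counts = {
    "full_promoter": 15200.0,      # e.g. raw luciferase units (hypothetical)
    "deletion_-250_-200": 14800.0, # deletion with little effect on expression
    "deletion_-150_-100": 2100.0,  # deletion removing a putative binding site
    "empty_vector": 300.0,         # negative control
}

baseline = raw_counts["empty_vector"]
reference = raw_counts["full_promoter"] - baseline

for construct, counts in raw_counts.items():
    relative_activity = (counts - baseline) / reference
    print(construct, round(relative_activity, 2))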
Promoter bashing
[ "Chemistry", "Biology" ]
1,005
[ "Molecular genetics", "Molecular biology techniques", "Molecular biology" ]
16,187,387
https://en.wikipedia.org/wiki/Derivation%20of%20the%20Routh%20array
The Routh array is a tabular method permitting one to establish the stability of a system using only the coefficients of the characteristic polynomial. Central to the field of control systems design, the Routh–Hurwitz theorem and Routh array emerge by using the Euclidean algorithm and Sturm's theorem in evaluating Cauchy indices. The Cauchy index Given the system: Assuming no roots of lie on the imaginary axis, and letting = The number of roots of with negative real parts, and = The number of roots of with positive real parts then we have Expressing in polar form, we have where and from (2) note that where Now if the ith root of has a positive real part, then (using the notation y=(RE[y],IM[y])) and and Similarly, if the ith root of has a negative real part, and and From (9) to (11) we find that when the ith root of has a positive real part, and from (12) to (14) we find that when the ith root of has a negative real part. Thus, So, if we define then we have the relationship and combining (3) and (17) gives us and Therefore, given an equation of of degree we need only evaluate this function to determine , the number of roots with negative real parts and , the number of roots with positive real parts. In accordance with (6) and Figure 1, the graph of vs , varying over an interval (a,b) where and are integer multiples of , this variation causing the function to have increased by , indicates that in the course of travelling from point a to point b, has "jumped" from to one more time than it has jumped from to . Similarly, if we vary over an interval (a,b) this variation causing to have decreased by , where again is a multiple of at both and , implies that has jumped from to one more time than it has jumped from to as was varied over the said interval. Thus, is times the difference between the number of points at which jumps from to and the number of points at which jumps from to as ranges over the interval provided that at , is defined. In the case where the starting point is on an incongruity (i.e. , i = 0, 1, 2, ...) the ending point will be on an incongruity as well, by equation (17) (since is an integer and is an integer, will be an integer). In this case, we can achieve this same index (difference in positive and negative jumps) by shifting the axes of the tangent function by , through adding to . Thus, our index is now fully defined for any combination of coefficients in by evaluating over the interval (a,b) = when our starting (and thus ending) point is not an incongruity, and by evaluating over said interval when our starting point is at an incongruity. This difference, , of negative and positive jumping incongruities encountered while traversing from to is called the Cauchy Index of the tangent of the phase angle, the phase angle being or , depending as is an integer multiple of or not. The Routh criterion To derive Routh's criterion, first we'll use a different notation to differentiate between the even and odd terms of : Now we have: Therefore, if is even, and if is odd: Now observe that if is an odd integer, then by (3) is odd. If is an odd integer, then is odd as well. Similarly, this same argument shows that when is even, will be even. Equation (15) shows that if is even, is an integer multiple of . Therefore, is defined for even, and is thus the proper index to use when n is even, and similarly is defined for odd, making it the proper index in this latter case. 
Thus, from (6) and (23), for even: and from (19) and (24), for odd: Lo and behold we are evaluating the same Cauchy index for both: Sturm's theorem Sturm gives us a method for evaluating . His theorem states as follows: Given a sequence of polynomials where: 1) If then , , and 2) for and we define as the number of changes of sign in the sequence for a fixed value of , then: A sequence satisfying these requirements is obtained using the Euclidean algorithm, which is as follows: Starting with and , and denoting the remainder of by and similarly denoting the remainder of by , and so on, we obtain the relationships: or in general where the last non-zero remainder, will therefore be the highest common factor of . It can be observed that the sequence so constructed will satisfy the conditions of Sturm's theorem, and thus an algorithm for determining the stated index has been developed. It is in applying Sturm's theorem (28) to (29), through the use of the Euclidean algorithm above that the Routh matrix is formed. We get and identifying the coefficients of this remainder by , , , , and so forth, makes our formed remainder where Continuing with the Euclidean algorithm on these new coefficients gives us where we again denote the coefficients of the remainder by , , , , making our formed remainder and giving us The rows of the Routh array are determined exactly by this algorithm when applied to the coefficients of (20). An observation worthy of note is that in the regular case the polynomials and have as the highest common factor and thus there will be polynomials in the chain . Note now, that in determining the signs of the members of the sequence of polynomials that at the dominating power of will be the first term of each of these polynomials, and thus only these coefficients corresponding to the highest powers of in , and , which are , , , , ... determine the signs of , , ..., at . So we get that is, is the number of changes of sign in the sequence , , , ... which is the number of sign changes in the sequence , , , , ... and ; that is is the number of changes of sign in the sequence , , , ... which is the number of sign changes in the sequence , , , , ... Since our chain , , , , ... will have members it is clear that since within if going from to a sign change has not occurred, within going from to one has, and likewise for all transitions (there will be no terms equal to zero) giving us total sign changes. As and , and from (18) , we have that and have derived Routh's theorem - The number of roots of a real polynomial which lie in the right half plane is equal to the number of changes of sign in the first column of the Routh scheme. And for the stable case where then by which we have Routh's famous criterion: In order for all the roots of the polynomial to have negative real parts, it is necessary and sufficient that all of the elements in the first column of the Routh scheme be different from zero and of the same sign. References Hurwitz, A., "On the Conditions under which an Equation has only Roots with Negative Real Parts", Rpt. in Selected Papers on Mathematical Trends in Control Theory, Ed. R. T. Ballman et al. New York: Dover 1964 Routh, E. J., A Treatise on the Stability of a Given State of Motion. London: Macmillan, 1877. Rpt. in Stability of Motion, Ed. A. T. Fuller. London: Taylor & Francis, 1975 Felix Gantmacher (J.L. Brenner translator) (1959) Applications of the Theory of Matrices, pp 177–80, New York: Interscience. Article proofs Control theory Signal processing Polynomials
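The criterion just derived can be applied mechanically. The following short Python sketch (an illustrative implementation assumed here, not part of the derivation above) builds the Routh array for a polynomial given by its coefficients and counts the sign changes in the first column; it handles only the regular case, in which no first-column entry vanishes.

def routh_array(coeffs):
    # coeffs = [a0, a1, ..., an] for p(s) = a0*s^n + a1*s^(n-1) + ... + an
    n = len(coeffs) - 1
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]
    width = len(rows[0])
    rows[1] += [0.0] * (width - len(rows[1]))
    for _ in range(n - 1):
        above, current = rows[-2], rows[-1]
        new_row = [(current[0] * above[j + 1] - above[0] * current[j + 1]) / current[0]
                   for j in range(width - 1)] + [0.0]
        rows.append(new_row)
    return rows

def right_half_plane_roots(coeffs):
    # Routh's criterion: the number of roots with positive real part equals the
    # number of sign changes in the first column of the Routh scheme
    first_column = [row[0] for row in routh_array(coeffs)]
    return sum(1 for a, b in zip(first_column, first_column[1:]) if a * b < 0)

# s^3 + s^2 + 2s + 24 = (s + 3)(s^2 - 2s + 8) has two roots in the right half plane
print(right_half_plane_roots([1.0, 1.0, 2.0, 24.0]))   # prints 2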
Derivation of the Routh array
[ "Mathematics", "Technology", "Engineering" ]
1,610
[ "Telecommunications engineering", "Computer engineering", "Signal processing", "Applied mathematics", "Polynomials", "Control theory", "Article proofs", "Algebra", "Dynamical systems" ]
16,187,793
https://en.wikipedia.org/wiki/Gary%20J.%20Van%20Berkel
Gary J Van Berkel, born in 1959, is the research scientist who led the Organic and Biological Mass Spectrometry Group at Oak Ridge National Laboratory until his retirement from there in 2018. He is currently owner and CSO of Van Berkel Ventures, LLC, an analytical measurement science, innovation, research, consulting and writing firm in Oak Ridge, TN. He is best known for his research on electrochemistry of electrosprays. Early life and education 1982 B.A. Lawrence University 1987 Ph.D. Washington State University Research interests Electrochemistry of electrospray Atmospheric pressure surface sampling and ionization Mass spectrometry imaging Awards 2016 R&D 100 Award "Open Port Sampling Interfaces for Mass Spectrometry" 2014 Battelle "Inventor of the Year" 2013 Oak Ridge National Laboratory "Scientist of the Year" 2013 Oak Ridge National Laboratory "Inventor of the Year" 2013 Rapid Communications in Mass Spectrometry Beynon Prize 2010 R&D 100 Award "Liquid Microjunction Surface Sampling Probe for Mass Spectrometry" 2005 Biemann Medal Select publications References 21st-century American chemists Mass spectrometrists Living people Lawrence University alumni Year of birth missing (living people)
Gary J. Van Berkel
[ "Physics", "Chemistry" ]
251
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
16,188,434
https://en.wikipedia.org/wiki/Fission%20Product%20Pilot%20Plant
The Fission Product Pilot Plant, building 3515 at Oak Ridge National Laboratory (ORNL), was built in 1948 to extract radioactive isotopes from liquid radioactive waste. It was formerly known as the 'ruthenium-106 tank arrangement'. It is a relatively small facility; the task of extracting radioactive isotopes later took place at a number of specialised buildings nearby. References differ as to when the plant was built; 'radioactive waste management at ORNL' says that it was completed in 1957, the 1955 Annual Report has engineering drawings indicating that the building was fully designed in 1955, but other references suggest that there was a building on the site in 1948. Contamination issues The plant was extensively contaminated during operation, particularly by waste produced while flushing out the tanks inside for maintenance. Traces of human feces were found in the tanks. End of life Operations at FPPP ended in the early 1960s, and the plant was entombed in concrete up to 1.5 metres (5') thick; there was a proposal made in 1993 for dismantling the plant by robot from the inside, but it's not clear whether this was carried out. References http://www.osti.gov/bridge/purl.cover.jsp?purl=/392043-Fisb6G/webviewable/ A proposal for disposing of FPPP Radioactive waste management at ORNL 1955 Annual Report on the radioisotope production programs at ORNL, pages 10 through 14 describe the 'F3P' lab. Purposes of various buildings on the ORNL site Buildings and structures in Roane County, Tennessee Oak Ridge National Laboratory
Fission Product Pilot Plant
[ "Physics" ]
340
[ "Nuclear and atomic physics stubs", "Nuclear physics" ]
16,189,288
https://en.wikipedia.org/wiki/Creative%20Technology
Creative Technology Ltd., or Creative Labs Pte Ltd., is a Singaporean multinational electronics company mainly dealing with audio technologies and products such as speakers, headphones, sound cards and other digital media. Founded by Sim Wong Hoo, Creative was highly influential in the advancement of PC audio in the 1990s following the introduction of its Sound Blaster card and technologies; the company continues to develop Sound Blaster products including embedding them within partnered mainboard manufacturers and laptops. The company also has overseas offices in Shanghai, Tokyo, Dublin and the Silicon Valley. Creative Technology has been listed on the Singapore Exchange (SGX) since 1994. History 1981–1996 Creative Technology was founded in 1981 by childhood friends and Ngee Ann Polytechnic schoolmates Sim Wong Hoo and Ng Kai Wa. Originally a computer repair shop in Pearl's Centre in Chinatown, the company eventually developed an add-on memory board for the Apple II computer. Later, Creative spent $500,000 developing the Cubic CT, an IBM-compatible PC adapted for the Chinese language and featuring multimedia features like enhanced color graphics and a built-in audio board capable of producing speech and melodies. With lack of demand for multilingual computers and few multimedia software applications available, the Cubic was a commercial failure. Shifting focus from language to music, Creative developed the Creative Music System, a PC add-on card. Sim established Creative Labs, Inc. in the United States' Silicon Valley and convinced software developers to support the sound card, renamed Game Blaster and marketed by RadioShack's Tandy division. The success of this audio interface led to the development of the standalone Sound Blaster sound card, introduced at the 1989 COMDEX show just as the multimedia PC market, fueled by Intel's 386 CPU and Microsoft Windows 3.0, took off. The success of Sound Blaster helped grow Creative's revenue from US$5.4 million in 1989 to US$658 million in 1994. In 1993, the year after Creative's initial public offering, in 1992, former Ashton-Tate CEO Ed Esber joined Creative Labs as CEO to assemble a management team to support the company's rapid growth. Esber brought in a team of US executives, including Rich Buchanan (graphics), Gail Pomerantz (marketing), and Rich Sorkin (sound products, and later communications, OEM and business development). This group played key roles in reversing a brutal market share decline caused by intense competition from Media Vision at the high end and Aztech at the low end. Sorkin, in particular, dramatically strengthened the company's brand position through crisp licensing and an aggressive defense of Creative's intellectual property positions while working to shorten product development cycles. At the same time, Esber and the original founders of the company had differences of opinion on the strategy and positioning of the company. Esber exited in 1995, followed quickly by Buchanan and Pomerantz. Following Esber's departure, Sorkin was promoted to General Manager of Audio and Communication Products and later Executive Vice-president of Business Development and Corporate Investments, before leaving Creative in 1996 to run Elon Musk's first startup and Internet pioneer Zip2. By 1996, Creative's revenues had peaked at US$1.6 billion. 
With pioneering investments in VoIP and media streaming, Creative was well-positioned to take advantage of the Internet era, but it ventured into the CD-ROM market and was eventually forced to write off nearly US$100 million in inventory when the market collapsed due to a flood of cheaper alternatives. 1997–2011 The firm had maintained a strong foothold in the ISA PC audio market until 14 July 1997, when Aureal Semiconductor entered the soundcard market with its very competitive PCI AU8820 Vortex 3D sound technology. The firm at the time was developing its own in-house PCI audio cards but was finding little success adopting the PCI standard. In January 1998, in order to quickly obtain a working PCI audio technology, the firm acquired Ensoniq for US$77 million. On 5 March 1998 the firm sued Aureal with patent infringement claims over a MIDI caching technology held by E-mu Systems. Aureal filed a counterclaim stating that the firm was intentionally interfering with its business prospects, had defamed and commercially disparaged it, engaged in unfair competition with intent to slow down Aureal's sales, and acted fraudulently. The suit had come only days after Aureal gained a fair market share with the AU8820 Vortex1. In August 1998, the firm released the Sound Blaster Live!, its first sound card developed for the PCI bus, in order to compete with the upcoming Aureal AU8830 Vortex2 sound chip. Aureal at this time was making fliers comparing its new AU8830 chips to the now-shipping Sound Blaster Live!. The specifications within these fliers comparing the AU8830 to the Sound Blaster Live! EMU10K1 chip sparked another flurry of lawsuits against Aureal, this time claiming Aureal had falsely advertised the Sound Blaster Live!'s capabilities. In December 1999, after numerous lawsuits, Aureal won a favourable ruling but went bankrupt as a result of legal costs and its investors pulling out. Its assets were acquired by Creative through the bankruptcy court in September 2000 for US$32 million. The firm had in effect removed its only major direct competitor in the 3D gaming audio market, excluding its later acquisition of Sensaura. In April 1999, the firm launched the NOMAD line of digital audio players that would later introduce the MuVo and ZEN series of portable media players. In November 2004, the firm announced a $100 million marketing campaign to promote its digital audio products, including the ZEN range of MP3 players. The firm applied for the patent on 5 January 2001 and was awarded it on 9 August 2005. The Zen patent was awarded to the firm for the invention of a user interface for portable media players. This opened the way for potential legal action against Apple's iPod and other competing players. The firm took legal action against Apple in May 2006. In August 2006, Creative and Apple entered into a broad settlement, with Apple paying Creative $100 million for the licence to use the Zen patent. The firm then joined the "Made for iPod" program. On 22 March 2005, The Inquirer reported that Creative Labs had agreed to settle a class action lawsuit about the way its Audigy and Extigy soundcards were marketed. The firm offered customers who purchased the cards up to a $62.50 reduction on the cost of their next purchase of its products, while the lawyers involved in filing the dispute against Creative received a payment of approximately $470,000. In 2007, Creative voluntarily delisted itself from NASDAQ, where it had traded under the symbol CREAF. 
Its stock now trades solely on the Singapore Exchange (SGX-ST). In early 2008, Creative Labs' technical support centre in Stillwater, Oklahoma, US, laid off several technical support staff, deepening ongoing concerns about Creative's financial situation. Later that year, the company faced a public-relations backlash when it demanded that a user named "Daniel_K" cease distributing modified versions of its Windows Vista drivers which restored functionality that had been available in the drivers for Windows XP. The company deleted his account from its online forums but reinstated it a week later.

In January 2009, the firm generated Internet buzz with a mysterious website promising a "stem cell-like" processor that would give a 100-fold increase in supercomputing power over current technology, as well as advances in consumer 3D graphics. At CES 2009, it was revealed to be the ZMS-05 processor from ZiiLABS, a subsidiary formed by combining 3DLabs and Creative's Personal Digital Entertainment division.

2012–present

In November 2012, the firm announced that it had entered into an agreement with Intel Corporation under which Intel would license technology and patents from ZiiLABS Inc. Ltd, a wholly owned subsidiary of Creative, and acquire engineering resources and assets related to its UK branch as part of a $50 million deal. ZiiLABS (still wholly owned by Creative) retains full ownership of its StemCell media processor technologies and patents, and continues to supply and support its ZMS series of chips for its customers.

From 2014 to 2017, Creative's revenue from audio products contracted at an average of 15% annually, due to increased competition in the audio space. At the Consumer Electronics Show (CES) in Las Vegas in January 2018, its Super X-Fi dongle won AVS Forum's Best of CES 2018 award. The product was launched after more than $100 million in investment and garnered positive analyst reports. This new technology renewed interest in the company and likely helped raise its share price from S$1.25 to S$8.75 within a two-week period. The company still produces Chinese-language and bilingual software for the Singapore market, but nearly half of the company's income is generated in the United States and South America; the European Union represents 32% of revenues, with Asia making up the remainder.

On January 4, 2023, Sim died at age 67, and Song Siow Hui, president of the Creative Labs Business Unit, was appointed interim CEO.

Products

Sound Blaster

Creative's Sound Blaster sound card was among the first dedicated audio processing cards to be made widely available to the general consumer. As the first card to bundle what is now considered a complete sound card system (digital audio, an on-board music synthesizer, a MIDI interface and a joystick port), the Sound Blaster became a de facto standard for sound cards in PCs for many years. Creative Technology also created its own audio file format, Creative Voice, which uses the .voc file extension; a brief header-reading sketch is shown below. In 1987, Creative Technology released the Creative Music System (C/MS), a 12-voice sound card for the IBM PC architecture. When the C/MS struggled to acquire market share, Sim traveled from Singapore to Silicon Valley and negotiated a deal with RadioShack's Tandy division to market the product as the Game Blaster.
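The .voc container begins with a small fixed header that can be inspected in a few lines of code. The sketch below is illustrative only, based on the commonly documented Creative Voice header layout (a magic string, the offset of the first data block, a version word, and a validity word derived from the version); it assumes a well-formed file, ignores the typed data blocks that follow the header, and is not Creative's reference implementation. The file name in the usage comment is hypothetical.

```python
import struct

def read_voc_header(path):
    """Read and sanity-check the 26-byte header of a Creative Voice (.voc) file.

    Illustrative sketch only: assumes a well-formed file and ignores the
    typed data blocks (sound data, silence, markers, etc.) that follow.
    """
    with open(path, "rb") as f:
        header = f.read(26)
    if len(header) < 26 or header[:19] != b"Creative Voice File" or header[19] != 0x1A:
        raise ValueError("not a Creative Voice (.voc) file")
    # Three little-endian 16-bit fields: offset of the first data block,
    # version word (major in the high byte, minor in the low byte), and
    # a validity word conventionally equal to (~version + 0x1234) & 0xFFFF.
    data_offset, version, validity = struct.unpack("<HHH", header[20:26])
    if validity != ((~version + 0x1234) & 0xFFFF):
        raise ValueError("header validity check failed")
    major, minor = version >> 8, version & 0xFF
    return {"data_offset": data_offset, "version": f"{major}.{minor:02d}"}

# Usage (hypothetical file name):
# print(read_voc_header("example.voc"))  # e.g. {'data_offset': 26, 'version': '1.10'}
```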
While the Game Blaster did not overcome AdLib's dominance of the sound card market, Creative used the platform to create the first Sound Blaster, which retained the C/MS hardware and added the Yamaha YM3812 chip found on the AdLib card, as well as a component for playing and recording digital samples. Creative's marketing was aggressive, ranging from promoting the "stereo" aspect of the Sound Blaster (only the C/MS chips were capable of stereo, not the complete product) to calling the sound-producing microcontroller a "DSP", hoping to associate the product with a digital signal processor (the chip could encode and decode ADPCM in real time, but otherwise had no DSP-like qualities). Monaural Sound Blaster cards were introduced in 1989, and Sound Blaster Pro stereo cards followed in 1992. The 16-bit Sound Blaster AWE32 added wavetable MIDI, and the AWE64 offered 32 and 64 voices. The Sound Blaster achieved competitive control of the PC audio market by 1992, the same year that its main competitor, Ad Lib, Inc., went bankrupt. In the mid-1990s, following the launch of the Sound Blaster 16 and related products, Creative Technology's audio revenue grew from US$40 million to nearly US$1 billion annually.

The sixth generation of Sound Blaster sound cards introduced SBX Pro Studio, a feature that restores the highs and lows of compressed audio files, enhancing detail and clarity. SBX Pro Studio also offers user settings for controlling bass and virtual surround.

Creative X-Fi Sonic Carrier

The Creative X-Fi Sonic Carrier, launched in January 2016, consists of a long main unit and a subwoofer, housing 17 drivers in an 11.2.4 speaker configuration. It incorporates Dolby Atmos surround processing and also features Creative's EAX 15.2 Dimensional Audio to extract, enhance and upscale sound from legacy material. The audio and video engines of the X-Fi Sonic Carrier are powered by 7 processors with a total of 14 cores. It supports both local and streaming video content at up to 4K 60 fps, as well as 15.2 channels of high-resolution audio playback. It also includes 3 distinct wireless technologies: multi-room Wi-Fi, Bluetooth, and a zero-latency speaker-to-speaker link to up to 4 subwoofer units.

Other products
Headphones
Gaming headsets
Portable Bluetooth speakers
Creative GigaWorks ProGamer G500 speakers

Discontinued products
CD and DVD players, drives, and controller cards
Graphics cards
Prodikeys, a computer keyboard/musical keyboard combination
Optical mice and keyboards
Vado HD
Creative Zen and Creative MuVo portable media players

See also
AdLib
Aureal Semiconductor
Ensoniq
Environmental audio extensions
Sensaura
Yamaha

Divisions and brands
Cambridge SoundWorks
Creative MuVo
Creative NOMAD
Creative ZEN
E-mu Systems/Ensoniq
Sound Blaster
Sensaura
SoundFont
ZiiLABS, formerly 3Dlabs

References

Companies formerly listed on the Nasdaq
Companies listed on the Singapore Exchange
Computer companies established in 1981
Computer hardware companies
Computer peripheral companies
Design companies established in 1981
Electronics companies established in 1981
Headphones manufacturers
Loudspeaker manufacturers
Manufacturing companies established in 1981
Multinational companies headquartered in Singapore
Portable audio player manufacturers
Singaporean brands
Singaporean companies established in 1981
Creative Technology
[ "Technology" ]
2,753
[ "Computer hardware companies", "Computers" ]
16,189,349
https://en.wikipedia.org/wiki/Dystrophic%20lake
Dystrophic lakes, also known as humic lakes, are lakes that contain high amounts of humic substances and organic acids. The presence of these substances causes the water to be brown in colour and to have a generally low pH of around 4.0–6.0. The humic substances derive mainly from certain plants in the watersheds of the lakes, such as peat mosses and conifers. Due to these acidic conditions, few taxa are able to survive; those that do consist mostly of aquatic plants, algae, phytoplankton, picoplankton, and bacteria. Dystrophic lakes can be found in many areas of the world, especially in the northern boreal regions.

Classification of dystrophic lakes

Dystrophia can be categorized as a condition affecting trophic state rather than as a trophic state in itself. Lakes are typically categorized, in order of increasing productivity, as oligotrophic, mesotrophic, eutrophic, and hypereutrophic. Dystrophic lakes used to be classified as oligotrophic because of their low productivity, but more recent research shows that dystrophia can be associated with any of the trophic types. This is due to a wider possible pH range (from an acidic 4.0 to, on occasion, a more neutral 8.0) and other fluctuating properties such as nutrient availability and chemical composition. The Hydrochemical Dystrophy Index is a scale used to evaluate the degree of dystrophy of lakes. In 2017, Gorniak proposed a new set of rules for evaluating this index, using properties such as surface water pH, electrical conductivity, and the concentrations of dissolved inorganic carbon and dissolved organic carbon.

Chemical properties

Dystrophic lakes have a high level of dissolved organic carbon. This consists of organic carboxylic and phenolic acids, which keep water pH levels relatively stable, possibly by acting as a natural buffer; as a result, the lake's naturally acidic pH is largely unaffected by industrial emissions. Dissolved organic carbon also reduces the amount of ultraviolet radiation that enters the lake and can reduce the bioavailability of heavy metals by binding them. There is a significantly lower calcium content in the water and sediment of a dystrophic lake than in a non-dystrophic lake. Essential fatty acids, such as eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA), are still present in the organisms of humic lakes, but their nutritional quality is degraded by the acidic environment, resulting in the low nutritional quality of a dystrophic lake's producers, such as phytoplankton. Because of differing trophic status, some dystrophic lakes may differ strongly in their chemical composition from other dystrophic lakes. Studies of the chemical composition of different types of dystrophic lakes have shown differing levels of dissolved inorganic nitrogen, lipase and glucosidase depending on water colour.

Life in dystrophic lakes

The catchment area of a dystrophic lake is usually a coniferous forest or an area rich in peat mosses. Despite the presence of ample nutrients, dystrophic lakes can be considered nutrient-poor, because their nutrients are trapped in organic matter and are therefore unavailable to primary producers. A considerable amount of the organic matter in dystrophic lakes is allochthonous, meaning it is produced externally to the lake. Due to the high amounts of organic matter and the lack of light, it is bacterioplankton that control the rate of nutrient flux between the aquatic and terrestrial environments. The bacteria are found in high numbers.
These bacteria drive the food web of humic lakes by providing energy and supplying usable forms of organic and inorganic carbon to other organisms, primarily to phagotrophic and mixotrophic flagellates. Decomposition of organic matter by bacteria also converts organic nitrogen and phosphorus into their inorganic forms, which are then available for uptake by primary producers, including both large and small phytoplankton (algae and cyanobacteria). The biological activity of humic lakes is, however, dominated by bacterial metabolism. The chemistry of humic lakes makes it difficult for higher trophic levels such as planktivorous fish to establish themselves, leaving a simplified food web consisting mostly of plants, plankton, and bacteria. The dominance of the bacteria means that dystrophic lakes generally have a higher respiration rate than primary production rate.

Impacts of dystrophication on a lake ecosystem

The formation of a humic lake via organic runoff has a dramatic effect on the lake ecosystem. Increases in the lake's acidity make it difficult for fish and other organisms to proliferate. The quality of the lake for use as drinking water also decreases as the carbon concentration and acidity increase. Fish that do adapt to the increased acidity may also not be fit for human consumption, due to the organic pollutants.

Dystrophic lakes and climate change

Lakes are commonly known to be important sinks in the carbon cycle. Dystrophic lakes, however, are typically net heterotrophic, because bacterial respiration outweighs phytoplankton photosynthesis; as a result, dystrophic lakes are larger carbon sources than clear lakes, emitting carbon into the atmosphere. The elevated levels of allochthonous carbon in humic lakes are due to vegetation in the lake and catchment area, the runoff from which is the main source of organic material. Changes in these levels can also be attributed to shifts in precipitation, changing forestry practices, reduced sulphate deposition, and changes in temperature. Contemporary climate change is increasing temperature and precipitation in some parts of the world, thus increasing the supply of humic substances to lakes and making them more dystrophic; this process is referred to as "brownification".

Examples of dystrophic lakes

Examples of dystrophic lakes that have been studied by scientists include Lake Suchar II in Poland, lakes Allgjuttern, Fiolen, and Brunnsjön in Sweden, and Lake Matheson in New Zealand.

References

Lakes by type
Aquatic ecology
Limnology
Dystrophic lake
[ "Biology" ]
1,276
[ "Aquatic ecology", "Ecosystems" ]
16,189,453
https://en.wikipedia.org/wiki/Greg%20Kuperberg
Greg Kuperberg (born July 4, 1967) is a Polish-born American mathematician known for his contributions to geometric topology, quantum algebra, and combinatorics. Kuperberg is a professor of mathematics at the University of California, Davis.

Biography

Kuperberg is the son of two mathematicians, Krystyna Kuperberg and Włodzimierz Kuperberg. He was born in Poland in 1967, but his family emigrated to Sweden in 1969 due to the 1968 Polish political crisis. In 1972, Kuperberg's family moved to the United States, eventually settling in Auburn, Alabama. In 1982 and 1983, Kuperberg wrote three computer games for the IBM Personal Computer, published by Orion Software: Paratrooper, PC-Man and J-Bird (video game clones of Sabotage, Pac-Man and Q*bert, respectively). He enrolled at Harvard University in 1983 and received a bachelor's degree in 1987. He ranked in the top 10 in the 1986 William Lowell Putnam Mathematical Competition. Upon leaving Harvard, Kuperberg studied at the University of California, Berkeley under Andrew Casson, receiving a Ph.D. in geometric topology and quantum algebra in 1991. From 1991 until 1992, Kuperberg was an NSF postdoctoral fellow and adjunct assistant professor at Berkeley, and from 1992 to 1995 he held a Dickson Instructorship at the University of Chicago. From 1995 through 1996, Kuperberg was a Gibbs Assistant Professor at Yale University, after which he joined the mathematics faculty at the University of California, Davis. In 2012 he became a fellow of the American Mathematical Society. Kuperberg is married to physicist Rena Zieve, who is a professor of physics at UC Davis.

Selected publications

Kuperberg has over fifty publications, including two in the Annals of Mathematics.

with Krystyna Kuperberg:

References

External links

Greg Kuperberg, faculty page at UC-Davis
Bits from my personal collection - the original IBM PC and Orion Software

1967 births
Polish mathematicians
20th-century American mathematicians
21st-century American mathematicians
Auburn High School (Alabama) alumni
Harvard University alumni
Living people
People from Auburn, Alabama
Polish emigrants to the United States
Topologists
University of California, Berkeley alumni
University of California, Davis faculty
Fellows of the American Mathematical Society
American video game programmers
Greg Kuperberg
[ "Mathematics" ]
465
[ "Topologists", "Topology" ]