5,331,853
https://en.wikipedia.org/wiki/List%20of%20films%20about%20computers
This is a list of films about computers, featuring fictional films in which activities involving computers play a central role in the development of the plot. Artificial intelligence plots Motion picture 2001: A Space Odyssey (1968) HAL 9000 The Computer Wore Tennis Shoes (1969) Colossus: The Forbin Project (1970) The Questor Tapes (1974) Demon Seed (1977) Blade Runner (1982) Tron (1982) WarGames (1983) Brainstorm (1983) 2010 (1984) HAL 9000 SAL 9000 Hide and Seek (1984, TV movie) Electric Dreams (1984) The Terminator (1984) Terminator Skynet D.A.R.Y.L. (1985) Flight of the Navigator (1986) Short Circuit (1986) Not Quite Human (1987) Short Circuit 2 (1988) Not Quite Human II (1989) Still Not Quite Human (1992) Arcade (1993) Star Trek Generations (1994) Hackers (1995) Johnny Mnemonic (1995) The Net (1995) Star Trek: First Contact (1996) Enemy of the State (1998) Lost in Space (1998) Star Trek: Insurrection (1998) Bicentennial Man (1999) The Matrix (1999) The Thirteenth Floor (1999) Universal Soldier: The Return (1999) Virus (1999) A.I. Artificial Intelligence (2001) How to Make a Monster (2001) Swordfish (2001) S1M0NE (2002) Star Trek: Nemesis (2002) The Matrix Reloaded (2003) The Matrix Revolutions (2003) I Robot (2004) The Hitchhiker's Guide to the Galaxy (2005) Live Free or Die Hard (2007) Eagle Eye (2008) Iron Man (2008) Moon (2009) GERTY 3000 Iron Man 2 (2010) Wreck-It Ralph (2012) Computer Chess (2013) Her (2013) Iron Man 3 (2013) The Machine (2013) Automata (2014) Transcendence (2014) Interstellar (2014) Vice (2015) Ex Machina (2015) Avengers: Age of Ultron (2015) Morgan (2016) Share? (2023) Television series Person Of Interest (2011-2016) Next (2020) Mrs. 
Davis (2023) Computers as plot devices Motion picture Desk Set (1957) The Honeymoon Machine (1961) Alphaville (1965) Billion Dollar Brain (1967) The Andromeda Strain (1971) Cloak & Dagger (1984) Revenge of the Nerds (1984) The Machine That Changed the World (1992, TV miniseries) The First $20 Million Is Always the Hardest (2002) Micro Men (2009) The Social Network (2010) Jobs (2013) The Imitation Game (2014) Steve Jobs (2015) Everything Everywhere All at Once (2022) Television series Computer Chronicles (1983 - 2002) Triumph of the Nerds: The Rise of Accidental Empires (1996) Nerds 2.0.1: A Brief History of the Internet (1998) Halt and Catch Fire (2014 - 2017) Commodore 64 Macintosh 128K NeXT Computer Silicon Valley (2014 - 2019) Valley of the Boom (2019) The IT Crowd (2006-2013) Documentaries All Watched Over by Machines of Loving Grace (2011) Web Junkie (2013) Silicon Cowboys (2016) Hacking as a plot narrative Motion picture The Italian Job (1969) Tron (1982) WarGames (1983) IMSAI 8080 Cloak & Dagger (1984) Prime Risk (1985) Ferris Bueller's Day Off (1986) Sneakers (1992) Blank Check (1994) Hackers (1995) The Net (1995) Under Siege 2: Dark Territory (1995) Masterminds (1997) 23 (1998) Entrapment (1999) The Thirteenth Floor (1999) Pirates of Silicon Valley (1999) Altair 8800 Track Down (2000) Swordfish (2001) The Score (2001) What's the Worst That Could Happen? 
(2001) Code Hunter (2002) Bedwin Hacker (2003) The Italian Job (2003) Foolproof (2003) The Incredibles (2004) Sky Captain and the World of Tomorrow (2004) Firewall (2006) The Net 2.0 (2006) Live Free or Die Hard (2007) WALL-E (2008) WarGames: The Dead Code (2008) Untraceable (2008) The Social Network (2010) Robot & Frank (2012) The Fifth Estate (2013) Disconnect (2013) Open Windows (2014) Who Am I – No System Is Safe (2014) Blackhat (2015) Chappie (2015) The Throwaways (2015) Snowden (2016) Hacker (2016) Anon (2018) Dark Web: Cicada 3301 (2021) Documentaries Hacking Democracy (HBO, Emmy nominated for Outstanding Investigative Journalism) Hackers: Wizards of the Electronic Age (1984) Hackers in Wonderland (2000) Revolution OS (2001) The Code (2001) Freedom Downtime (2001) The Secret History of Hacking (2001) In the Realm of the Hackers (2002) BBS: The Documentary (2004) The Code-Breakers (2006) Steal This Film (2006) Hackers Are People Too (2008) Hackers Wanted (not officially released, but leaked in 2010) The Virtual Revolution (2010) We Are Legion (2012) The Internet's Own Boy: The Story of Aaron Swartz (2014) Citizenfour (2014) Zero Days (2016) Lo and Behold, Reveries of the Connected World (2016) Cyberbunker: The Criminal Underworld (2023) Television series Person of Interest (2011 - 2016) CSI: Cyber (2015 - 2016) Scorpion (2014 - 2018) Mr. 
Robot (2015 - 2019) (2020 - ) Virtual reality World on a Wire (1973) Welcome to Blood City (1977) Tron (1982) Brainstorm (1983) The Lawnmower Man (1992) Disclosure (1994) Brainscan (1994) Kôkaku kidôtai (Ghost in the Shell) (1995) Strange Days (1995) Virtuosity (1995) VR.5 (1995) Johnny Mnemonic (1995) Lawnmower Man 2: Beyond Cyberspace (1996) Nirvana (1997) eXistenZ (1999) The Matrix (1999) The Thirteenth Floor (1999) Avalon (2001) Storm Watch (aka Code Hunter) (2002) Code Lyoko (2003) The Matrix Reloaded (2003) The Matrix Revolutions (2003) Avatar (2004) Inosensu: Kôkaku kidôtai (Ghost in the Shell 2: Innocence) (2004) Cargo (2009) Gamer (2009) Tron: Legacy (2010) Transcendence (2014) Ready Player One (2018) Viruses Superman III (1983) Office Space (1999) Pulse (2006) Programming Disclosure (1994) Pirates of Silicon Valley (1999) Altair 8800 Code Rush (2000) The Code (2001) Antitrust (2001) How to Make a Monster (2001) Revolution OS (2001) Dopamine (2003) One Point O (2004) Control Alt Delete (2008) The Social Network (2010) Hidden Figures (2016) IBM 7090 Websites Motion picture FeardotCom (2002) On Line (2002) I-See-You.Com (2006) Untraceable (2008) Catfish (2010) The Social Network (2010) The Internship (2013) Documentaries Home Page (1998) e-Dreams (2001) Startup.com (2001) Google: Behind the Screen (2006) Steal This Film (2006) Steal This Film II (2006) Google: The Thinking Factory (2007) Download: The True Story of the Internet (2008) The Truth According to Wikipedia (2008) The Pirate Bay Away From Keyboard (2013) Communications Electric Dreams (1984) Blank Check (1994) You've Got Mail (1998) Chatroom (2010) Supernatural Evilspeak (1981) Weird Science (1985) Ghost in the Machine (1993) Ghost Machine (2009) Unfriended (2014) (alternative title: Cybernatural) War Dr. 
Strangelove (1964) Doomsday device Colossus: The Forbin Project (1970) Firefox (1982) WarGames (1983) Brainstorm (1983) Stealth (2005) Space Apollo 13 (1995) Apollo Guidance Computer RocketMan (1997) From the Earth to the Moon (1998, TV miniseries) Apollo Guidance Computer Anime Ghost in the Shell (1995) Serial Experiments Lain (1998) Chobits (2002) Ghost in the Shell: Stand Alone Complex (2002-2003) Ghost in the Shell 2: Innocence (2004) Ghost in the Shell: S.A.C. 2nd GIG (2004) Ergo Proxy (2006) Ghost in the Shell: Stand Alone Complex: Solid State Society (2006) See also List of fictional computers List of fictional robots and androids List of cyberpunk films Computer screen film References External links Computer-related Computing and society Computing-related lists Works about computer hacking
List of films about computers
Technology
1,851
17,571,813
https://en.wikipedia.org/wiki/Flora%20and%20fauna%20of%20Greenland
Although the bulk of its area is covered by ice caps inhospitable to most forms of life, Greenland's terrain and waters support a wide variety of plant and animal species. The northeastern part of the island is the world's largest national park. The flora and fauna of Greenland are strongly susceptible to changes associated with climate change. The image galleries below link to information related to the flora and fauna of Greenland, including Latin taxonomy, Danish translations, and links to articles in the Danish Wikipedia, which can be helpful when searching for more information. Flora As of 2019, 310 species of vascular plants were reported in Greenland, including 15 endemic species. Although individual plants can be profuse in favourable situations, relatively few plant species tend to be represented in a given place. In northern Greenland, the ground is covered with a carpet of mosses and low-lying shrubs such as dwarf willows and crowberries. Flowering plants in the north include yellow poppy, Pedicularis, and Pyrola. Plant life in southern Greenland is more abundant, and certain plants, such as the dwarf birch and willow, may grow several feet high. The only natural forest in Greenland is found in the Qinngua Valley. The forest consists mainly of downy birch (Betula pubescens) and grey-leaf willow (Salix glauca), growing to tree height, although nine stands of conifers had been cultivated elsewhere by 2007. Horticulture has met with a certain degree of success. Plants such as broccoli, radishes, spinach, leeks, lettuce, turnips, chervil, potatoes and parsley are grown at considerable latitudes, while the very south of the country also supports asters, Nemophila, mignonette, rhubarb, sorrel and carrots. Over the decade to 2007, the growing season lengthened by as much as three weeks. In the 13th-century Konungs skuggsjá (King's mirror), it is stated that the old Norsemen tried in vain to raise barley.
Recent archaeological research in Greenland by the National Museum in Copenhagen discovered barley grains and concluded that the Vikings were able to grow barley. Fauna Land mammals Among the large land mammals are the musk ox, the reindeer, the polar bear and the white Arctic wolf. Other familiar mammals in Greenland include the Arctic hare, collared lemming, Beringian ermine and Arctic fox. Reindeer hunting is of considerable cultural importance to the people of Greenland. Domesticated land mammals include dogs, which were introduced by the Inuit, as well as such European-introduced species as goats, Greenlandic sheep, oxen and pigs, which are raised in modest numbers in the south. Marine mammals As many as two million seals are estimated to inhabit Greenland's coasts; species include the hooded seal (Cystophora cristata) as well as the grey seal (Halichoerus grypus). Whales frequently pass very close to Greenlandic shores in the late summer and early autumn. Species represented include the beluga whale, blue whale, Greenland whale, fin whale, humpback whale, minke whale, narwhal, pilot whale, and sperm whale. Whaling was formerly a major industry in Greenland; by the turn of the 20th century, however, the right whale population was so depleted that the industry was in deep decline. Walruses are to be found primarily in the north and east of the country; like the narwhal, they have at times suffered from overhunting for their tusks. Birds As of 1911, 61 species of birds were known to breed in Greenland. Certain birds such as the eider duck, guillemot and ptarmigan are often hunted for food in the winter. Fish Of the many species of fish inhabiting Greenland's waters, several have been of economic importance, including cod, capelin, halibut, rockfish, nipisak (Cyclopterus lumpus) and sea trout. The Greenland shark is used for the oil in its liver, as well as fermented and eaten as hákarl, a local delicacy.
See also Fauna List of mammals of Nunavut List of Nunavut birds Flora List of Canadian plants by family References 01 01 Greenland Biota of archipelagoes
Flora and fauna of Greenland
Biology
891
29,836,013
https://en.wikipedia.org/wiki/The%20Library%20Corporation
The Library Corporation (TLC) creates and distributes automation and cataloging software to public, school, academic, and special library systems worldwide. Based in Inwood, West Virginia, with additional offices in Denver, Singapore, and Ontario, the company is owned and operated by the same family who established it in 1974. In 1985, it became the first organization in the world to successfully use CD-ROM technology for data storage when it released its BiblioFile Cataloging software. The CD-ROM drive used to read those first commercially produced discs, as well as the original BiblioFile Cataloging CD-ROMs, are now in the Smithsonian Institution. TLC, a GSA-certified company, earned a 2009 Best in Tech Award from Scholastic Administrator magazine. Also in 2009, its senior product developer, Matt Moran, was named by Library Journal magazine as one of the library industry's top 51 "Movers and Shakers." Library automation systems The company offers three integrated library systems: Library•Solution for public, academic, and special libraries; Library•Solution for Schools for public and private school libraries; and CARL•X, the next-generation version of the legacy CARL•Solution automation system. Each system automates the standard operations of a library, including the check-in/check-out process, cataloging, inventory, authority control, reports, and management of floating collections. Facilities that utilize a TLC ILS include the Los Angeles Public Library in California, Dallas Independent School District in Texas, Ministry of Home Affairs in Singapore, Anchorage School District in Alaska, and Chicago Public Schools in Illinois. Online public access catalog products TLC adds Web-based, touchscreen-optimized functionality to its ILS products with a series of software patches referred to as the LS2 suite of OPACs: LS2 PAC, LS2 Kids, and LS2 Staff. 
LS2 PAC works with all three of TLC's automation systems to give patrons online access to library catalogs, including downloadable e-books, audiobooks, and other digital resources. It includes a customizable display of library titles, RSS news and information feeds, Google Analytics™ integration, federated searching of in-house and online content, integrated searching of subscription databases, list creation and sharing capabilities, and patron ratings, reviews, and search tags. LS2 Kids is the children's version of LS2 PAC, designed to enable young readers to independently explore a library's online catalog. It includes quick links to popular book series, interactive title displays with enlarged book jackets, a search box that provides spelling suggestions and corrections, and a category wheel with icons that link to appropriate titles. LS2 Staff allows libraries to perform basic circulation functions from any computer with Internet access. Library automation enhancements The company also created standalone cataloging and acquisitions products that work with any ILS. eBiblioFile is a cataloging service that offers complete MARC records for e-books and other digital materials. RDAExpress is a catalog conversion service for eBiblioFile users that upgrades a library's MARC records to the new Resource Description and Access (RDA) standard. BiblioFile is a cataloging program from the 1970s that accesses and processes MARC records for printed library materials from online databases. ITS•MARC offers Z39.50 and Internet access to MARC records. The metadata is processed by cataloging programs like BiblioFile. Additionally, TLC is the exclusive distributor of SocialFlow to the library marketplace. SocialFlow is a social media optimization tool that uses algorithms and key metrics to determine the best time to publish content for the widest possible audience. 
See also Libraries that have implemented TLC's automation products have been featured in media reports including: CBC Radio, "Sault Public Library launches new computer service" (Jan. 9, 2012) WALA Fox 10 TV, "Libraries going mobile" (Dec. 13, 2011) Government Technology, "Smartphones Replacing Old-Fashioned Library Cards" (Aug. 3, 2011) The Wright County Monitor, "TLC system grows as more libraries join BEACON consortium" (Jan. 6, 2011) Daily Mountain Eagle, "Library gets smart-phone application" (Nov. 28, 2010) The Des Moines Register, "Council approves purchase of library data management system" (Nov. 1, 2010) Independent Tribune, "Concord library to get major renovation" (Oct. 1, 2010) Contra Costa Times, "Library streamlines online access" (March 17, 2010) Elbert County News, "Library announces software update" (Nov. 3, 2009) The Winsted Journal, "$35,000 grant helps modernize Beardsley and Memorial Library" (Aug. 21, 2009) References External links eBiblioFile RDAExpress ITS MARC and BiblioFile Library Technology Guides' profile of The Library Corporation Marshall Breeding's history of mergers and acquisitions in the library automation industry Library automation Library-related organizations Library 2.0 Library cataloging and classification Library and information science software Privately held companies based in West Virginia Companies based in West Virginia Textbook business
The Library Corporation
Engineering
1,061
4,944
https://en.wikipedia.org/wiki/Naive%20set%20theory
Naive set theory is any of several theories of sets used in the discussion of the foundations of mathematics. Unlike axiomatic set theories, which are defined using formal logic, naive set theory is defined informally, in natural language. It describes the aspects of mathematical sets familiar in discrete mathematics (for example Venn diagrams and symbolic reasoning about their Boolean algebra), and suffices for the everyday use of set theory concepts in contemporary mathematics. Sets are of great importance in mathematics; in modern formal treatments, most mathematical objects (numbers, relations, functions, etc.) are defined in terms of sets. Naive set theory suffices for many purposes, while also serving as a stepping stone towards more formal treatments. Method A naive theory in the sense of "naive set theory" is a non-formalized theory, that is, a theory that uses natural language to describe sets and operations on sets. Such a theory treats sets as platonic absolute objects. The words and, or, if ... then, not, for some, for every are treated as in ordinary mathematics. As a matter of convenience, use of naive set theory and its formalism prevails even in higher mathematics – including in more formal settings of set theory itself. Naive set theory may refer to several very distinct notions. It may refer to: an informal presentation of an axiomatic set theory, e.g. as in Naive Set Theory by Paul Halmos; early or later versions of Georg Cantor's theory and other informal systems; or decidedly inconsistent theories (whether axiomatic or not), such as a theory of Gottlob Frege that yielded Russell's paradox, and theories of Giuseppe Peano and Richard Dedekind. The first development of set theory was a naive set theory. It was created at the end of the 19th century by Georg Cantor as part of his study of infinite sets and developed by Gottlob Frege in his Grundgesetze der Arithmetik.
Paradoxes The assumption that any property may be used to form a set, without restriction, leads to paradoxes. One common example is Russell's paradox: there is no set consisting of "all sets that do not contain themselves". Thus consistent systems of naive set theory must include some limitations on the principles which can be used to form sets. Cantor's theory Some believe that Georg Cantor's set theory was not actually implicated in the set-theoretic paradoxes (see Frápolli 1991). One difficulty in determining this with certainty is that Cantor did not provide an axiomatization of his system. By 1899, Cantor was aware of some of the paradoxes following from unrestricted interpretation of his theory, for instance Cantor's paradox and the Burali-Forti paradox, and did not believe that they discredited his theory. Cantor's paradox can actually be derived from the above (false) assumption (that any property may be used to form a set) by using "x is a cardinal number" for the defining property. Frege explicitly axiomatized a theory in which a formalized version of naive set theory can be interpreted, and it is this formal theory which Bertrand Russell actually addressed when he presented his paradox, not necessarily a theory Cantor (who, as mentioned, was aware of several paradoxes) presumably had in mind. Axiomatic theories Axiomatic set theory was developed in response to these early attempts to understand sets, with the goal of determining precisely what operations were allowed and when. Consistency A naive set theory is not necessarily inconsistent, if it correctly specifies the sets allowed to be considered. This can be done by means of definitions, which are implicit axioms. It is possible to state all the axioms explicitly, as in the case of Halmos' Naive Set Theory, which is actually an informal presentation of the usual axiomatic Zermelo–Fraenkel set theory.
It is "naive" in that the language and notations are those of ordinary informal mathematics, and in that it does not deal with consistency or completeness of the axiom system. Likewise, an axiomatic set theory is not necessarily consistent: not necessarily free of paradoxes. It follows from Gödel's incompleteness theorems that a sufficiently complicated first order logic system (which includes most common axiomatic set theories) cannot be proved consistent from within the theory itself – even if it actually is consistent. However, the common axiomatic systems are generally believed to be consistent; by their axioms they do exclude some paradoxes, like Russell's paradox. Based on Gödel's theorem, it is just not known – and never can be – if there are no paradoxes at all in these theories or in any first-order set theory. The term naive set theory is still today also used in some literature to refer to the set theories studied by Frege and Cantor, rather than to the informal counterparts of modern axiomatic set theory. Utility The choice between an axiomatic approach and other approaches is largely a matter of convenience. In everyday mathematics the best choice may be informal use of axiomatic set theory. References to particular axioms typically then occur only when demanded by tradition, e.g. the axiom of choice is often mentioned when used. Likewise, formal proofs occur only when warranted by exceptional circumstances. This informal usage of axiomatic set theory can have (depending on notation) precisely the appearance of naive set theory as outlined below. It is considerably easier to read and write (in the formulation of most statements, proofs, and lines of discussion) and is less error-prone than a strictly formal approach. Sets, membership and equality In naive set theory, a set is described as a well-defined collection of objects. These objects are called the elements or members of the set. Objects can be anything: numbers, people, other sets, etc. 
For instance, 4 is a member of the set of all even integers. Clearly, the set of even numbers is infinitely large; there is no requirement that a set be finite. The definition of sets goes back to Georg Cantor. He wrote in his 1915 article Beiträge zur Begründung der transfiniten Mengenlehre: "A set is a gathering together into a whole of definite, distinct objects of our perception or of our thought, which are called elements of the set." Note on consistency It does not follow from this definition how sets can be formed, and what operations on sets again will produce a set. The term "well-defined" in "well-defined collection of objects" cannot, by itself, guarantee the consistency and unambiguity of what exactly constitutes and what does not constitute a set. Attempting to achieve this would be the realm of axiomatic set theory or of axiomatic class theory. The problem, in this context, with informally formulated set theories, not derived from (and implying) any particular axiomatic theory, is that there may be several widely differing formalized versions, that have both different sets and different rules for how new sets may be formed, that all conform to the original informal definition. For example, Cantor's verbatim definition allows for considerable freedom in what constitutes a set. On the other hand, it is unlikely that Cantor was particularly interested in sets containing cats and dogs, but rather only in sets containing purely mathematical objects. An example of such a class of sets could be the von Neumann universe. But even when fixing the class of sets under consideration, it is not always clear which rules for set formation are allowed without introducing paradoxes. For the purpose of fixing the discussion below, the term "well-defined" should instead be interpreted as an intention, with either implicit or explicit rules (axioms or definitions), to rule out inconsistencies. The purpose is to keep the often deep and difficult issues of consistency away from the usually simpler context at hand.
An explicit ruling out of all conceivable inconsistencies (paradoxes) cannot be achieved for an axiomatic set theory anyway, due to Gödel's second incompleteness theorem, so this does not at all hamper the utility of naive set theory as compared to axiomatic set theory in the simple contexts considered below. It merely simplifies the discussion. Consistency is henceforth taken for granted unless explicitly mentioned. Membership If x is a member of a set A, then it is also said that x belongs to A, or that x is in A. This is denoted by x ∈ A. The symbol ∈ is a derivation from the lowercase Greek letter epsilon, "ε", introduced by Giuseppe Peano in 1889, and is the first letter of the word ἐστί (meaning "is"). The symbol ∉ is often used to write x ∉ A, meaning "x is not in A". Equality Two sets A and B are defined to be equal when they have precisely the same elements, that is, if every element of A is an element of B and every element of B is an element of A. (See axiom of extensionality.) Thus a set is completely determined by its elements; the description is immaterial. For example, the set with elements 2, 3, and 5 is equal to the set of all prime numbers less than 6. If the sets A and B are equal, this is denoted symbolically as A = B (as usual). Empty set The empty set, denoted ∅ and sometimes { }, is a set with no members at all. Because a set is determined completely by its elements, there can be only one empty set. (See axiom of empty set.) Although the empty set has no members, it can be a member of other sets. Thus ∅ ≠ {∅}, because the former has no members and the latter has one member. Specifying sets The simplest way to describe a set is to list its elements between curly braces (known as defining a set extensionally). Thus {a, b} denotes the set whose only elements are a and b. (See axiom of pairing.) Note the following points: The order of elements is immaterial; for example, {a, b} = {b, a}. Repetition (multiplicity) of elements is irrelevant; for example, {a, a, b} = {a, b}.
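These definitions of membership, equality, and the empty set map directly onto Python's built-in set type. The following is a minimal illustrative sketch (the variable names and the choice of finite examples are ours, not the article's):

```python
# Membership: 4 is a member of the set of even integers (here, those below 10).
evens = {0, 2, 4, 6, 8}
assert 4 in evens        # x ∈ A
assert 5 not in evens    # x ∉ A

# Equality: a set is completely determined by its elements, so the description
# is immaterial, and listing order and repetition do not matter.
primes_below_6 = {n for n in range(2, 6) if all(n % d for d in range(2, n))}
assert {2, 3, 5} == {5, 3, 2} == {2, 2, 3, 5} == primes_below_6

# The empty set: unique, has no members, but can be a member of other sets.
empty = frozenset()      # a frozenset, so it can sit inside another set
assert len(empty) == 0
assert {empty} != set()  # {∅} has one member; ∅ has none
```

The `frozenset` is needed in the last step because Python set members must be hashable; it plays the role of a set that is itself an element of another set.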
(These are consequences of the definition of equality in the previous section.) This notation can be informally abused by saying something like {dogs} to indicate the set of all dogs, but this example would usually be read by mathematicians as "the set containing the single element dogs". An extreme (but correct) example of this notation is { }, which denotes the empty set. The notation {x : P(x)}, or sometimes {x | P(x)}, is used to denote the set containing all objects x for which the condition P(x) holds (known as defining a set intensionally). For example, {x : x is a real number} denotes the set of real numbers, and {x : x has blonde hair} denotes the set of everything with blonde hair. This notation is called set-builder notation (or "set comprehension", particularly in the context of Functional programming). Some variants of set builder notation are: {x ∈ A : P(x)} denotes the set of all x that are already members of A such that the condition P(x) holds for x. For example, if Z is the set of integers, then {x ∈ Z : x is even} is the set of all even integers. (See axiom of specification.) {F(x) : x ∈ A} denotes the set of all objects obtained by putting members of the set A into the formula F(x). For example, {2x : x ∈ Z} is again the set of all even integers. (See axiom of replacement.) {F(x) : P(x)} is the most general form of set builder notation. For example, {x's owner : x is a dog} is the set of all dog owners. Subsets Given two sets A and B, A is a subset of B if every element of A is also an element of B. In particular, each set B is a subset of itself; a subset of B that is not equal to B is called a proper subset. If A is a subset of B, then one can also say that B is a superset of A, that A is contained in B, or that B contains A. In symbols, A ⊆ B means that A is a subset of B, and B ⊇ A means that B is a superset of A. Some authors use the symbols ⊂ and ⊃ for subsets, and others use these symbols only for proper subsets. For clarity, one can explicitly use the symbols ⊊ and ⊋ to indicate non-equality.
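Set-builder notation corresponds closely to Python's set comprehensions, and the subset relations have built-in operators. A sketch over a finite stand-in for the integers (the range bound is an arbitrary choice for illustration):

```python
Z = set(range(-10, 11))                 # finite stand-in for the integers

# Specification, {x ∈ Z : x is even}: filter members of an existing set.
evens = {x for x in Z if x % 2 == 0}

# Replacement, {2x : x ∈ Z}: apply a formula to members of an existing set.
# (Again the even integers; here we clip back to Z to compare.)
doubled = {2 * x for x in Z}
assert {d for d in doubled if d in Z} == evens

# Subsets: with O the odd integers in Z, O ⊆ Z holds, and properly so.
O = {x for x in Z if x % 2 != 0}
assert O <= Z and O < Z                 # subset, and proper subset
assert not (Z <= O)
```

Python's `<=` and `<` on sets are exactly the subset and proper-subset relations described above.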
As an illustration, let R be the set of real numbers, let Z be the set of integers, let O be the set of odd integers, and let P be the set of current or former U.S. Presidents. Then O is a subset of Z, Z is a subset of R, and (hence) O is a subset of R, where in all cases subset may even be read as proper subset. Not all sets are comparable in this way. For example, it is not the case that R is a subset of P, nor that P is a subset of R. It follows immediately from the definition of equality of sets above that, given two sets A and B, A = B if and only if A ⊆ B and B ⊆ A. In fact this is often given as the definition of equality. Usually when trying to prove that two sets are equal, one aims to show these two inclusions. The empty set is a subset of every set (the statement that all elements of the empty set are also members of any set A is vacuously true). The set of all subsets of a given set A is called the power set of A and is denoted by P(A) or 2^A; the "P" is sometimes in a script font: 𝒫(A). If the set A has n elements, then P(A) will have 2^n elements. Universal sets and absolute complements In certain contexts, one may consider all sets under consideration as being subsets of some given universal set. For instance, when investigating properties of the real numbers R (and subsets of R), R may be taken as the universal set. A true universal set is not included in standard set theory (see Paradoxes below), but is included in some non-standard set theories. Given a universal set U and a subset A of U, the complement of A (in U) is defined as AC = {x ∈ U : x ∉ A}. In other words, AC ("A-complement"; sometimes simply A′, "A-prime") is the set of all members of U which are not members of A. Thus with R, Z and O defined as in the section on subsets, if Z is the universal set, then OC is the set of even integers, while if R is the universal set, then OC is the set of all real numbers that are either even integers or not integers at all.
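The power-set claim, that an n-element set has 2^n subsets, can be checked directly for small sets. A sketch using the standard itertools approach (the helper name `power_set` is ours):

```python
from itertools import chain, combinations

def power_set(A):
    """Return P(A): every subset of A, as frozensets so they can be set members."""
    items = list(A)
    subsets = chain.from_iterable(combinations(items, r)
                                  for r in range(len(items) + 1))
    return {frozenset(s) for s in subsets}

A = {1, 2, 3}
P = power_set(A)
assert len(P) == 2 ** len(A)   # |P(A)| = 2^3 = 8
assert frozenset() in P        # the empty set is a subset of every set
assert frozenset(A) in P       # every set is a subset of itself
assert all(s <= A for s in P)  # each member of P(A) is a subset of A
```

Enumerating subsets by size r = 0, 1, ..., n and summing the binomial counts is one way to see where the 2^n comes from.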
Unions, intersections, and relative complements Given two sets A and B, their union is the set consisting of all objects which are elements of A or of B or of both (see axiom of union). It is denoted by A ∪ B. The intersection of A and B is the set of all objects which are both in A and in B. It is denoted by A ∩ B. Finally, the relative complement of B relative to A, also known as the set theoretic difference of A and B, is the set of all objects that belong to A but not to B. It is written as A \ B or A − B. Symbolically, these are respectively A ∪ B = {x : (x ∈ A) or (x ∈ B)}; A ∩ B = {x : (x ∈ A) and (x ∈ B)}; A \ B = {x : (x ∈ A) and not (x ∈ B)}. The set B doesn't have to be a subset of A for A \ B to make sense; this is the difference between the relative complement and the absolute complement (AC = U \ A) from the previous section. To illustrate these ideas, let A be the set of left-handed people, and let B be the set of people with blond hair. Then A ∩ B is the set of all left-handed blond-haired people, while A ∪ B is the set of all people who are left-handed or blond-haired or both. A \ B, on the other hand, is the set of all people that are left-handed but not blond-haired, while B \ A is the set of all people who have blond hair but aren't left-handed. Now let E be the set of all human beings, and let F be the set of all living things over 1000 years old. What is E ∩ F in this case? No living human being is over 1000 years old, so E ∩ F must be the empty set {}. For any set A, the power set P(A) is a Boolean algebra under the operations of union and intersection. Ordered pairs and Cartesian products Intuitively, an ordered pair is simply a collection of two objects such that one can be distinguished as the first element and the other as the second element, and having the fundamental property that two ordered pairs are equal if and only if their first elements are equal and their second elements are equal. Formally, an ordered pair with first coordinate a, and second coordinate b, usually denoted by (a, b), can be defined as the set {{a}, {a, b}}. It follows that two ordered pairs (a, b) and (c, d) are equal if and only if a = c and b = d.
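The left-handed/blond-haired example above can be run literally with Python's set operators (the names are invented data for illustration):

```python
A = {"Ann", "Bob", "Carl"}    # left-handed people (invented data)
B = {"Bob", "Dee"}            # blond-haired people (invented data)

assert A & B == {"Bob"}                        # A ∩ B: left-handed AND blond
assert A | B == {"Ann", "Bob", "Carl", "Dee"}  # A ∪ B: either, or both
assert A - B == {"Ann", "Carl"}                # A \ B: left-handed, not blond
assert B - A == {"Dee"}                        # B \ A: blond, not left-handed

# B need not be a subset of A for A \ B to make sense:
assert not (B <= A)

# Disjoint sets intersect in the empty set, as with E and F in the text:
assert A & {"Zoe"} == set()
```

The operators `|`, `&`, and `-` are Python's spellings of ∪, ∩, and the set-theoretic difference \.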
Alternatively, an ordered pair can be formally thought of as a set {a, b} with a total order. (The notation (a, b) is also used to denote an open interval on the real number line, but the context should make it clear which meaning is intended. Otherwise, the notation ]a, b[ may be used to denote the open interval whereas (a, b) is used for the ordered pair.) If A and B are sets, then the Cartesian product (or simply product) is defined to be:

A × B = {(a, b) : a ∈ A and b ∈ B}.

That is, A × B is the set of all ordered pairs whose first coordinate is an element of A and whose second coordinate is an element of B. This definition may be extended to a set A × B × C of ordered triples, and more generally to sets of ordered n-tuples for any positive integer n. It is even possible to define infinite Cartesian products, but this requires a more recondite definition of the product. Cartesian products were first developed by René Descartes in the context of analytic geometry. If R denotes the set of all real numbers, then R × R represents the Euclidean plane and R × R × R represents three-dimensional Euclidean space.

Some important sets

There are some ubiquitous sets for which the notation is almost universal. Some of these are listed below. In the list, a, b, and c refer to natural numbers, and r and s are real numbers.

Natural numbers are used for counting. A blackboard bold capital N (ℕ) often represents this set.
Integers appear as solutions for x in equations like x + a = b. A blackboard bold capital Z (ℤ) often represents this set (from the German Zahlen, meaning numbers).
Rational numbers appear as solutions to equations like a + bx = c. A blackboard bold capital Q (ℚ) often represents this set (for quotient, because R is used for the set of real numbers).
Algebraic numbers appear as solutions to polynomial equations (with integer coefficients) and may involve radicals (including square roots) and certain other irrational numbers. A Q with an overline (Q̄) often represents this set. The overline denotes the operation of algebraic closure.
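Finite Cartesian products, and the coordinate-wise equality of ordered pairs, can be checked directly with `itertools.product`; the example sets below are arbitrary choices for illustration:

```python
from itertools import product

A = {1, 2}
B = {"x", "y", "z"}

AxB = set(product(A, B))   # all ordered pairs (a, b) with a in A, b in B
print(len(AxB))            # |A x B| = |A| * |B| = 6

# Ordered pairs compare coordinate-wise, as the definition requires:
print((1, "x") == (1, "x"))   # True
print((1, "x") == ("x", 1))   # False: order matters
```

The same call extends to triples and general n-tuples, e.g. `product(A, B, C)`, mirroring the extension to A × B × C described above.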
Real numbers represent the "real line" and include all numbers that can be approximated by rationals. These numbers may be rational or algebraic but may also be transcendental numbers, which cannot appear as solutions to polynomial equations with rational coefficients. A blackboard bold capital R (ℝ) often represents this set.
Complex numbers are sums of a real and an imaginary number: r + si. Here either r or s (or both) can be zero; thus, the set of real numbers and the set of strictly imaginary numbers are subsets of the set of complex numbers, which form an algebraic closure for the set of real numbers, meaning that every polynomial with coefficients in R has at least one root in this set. A blackboard bold capital C (ℂ) often represents this set. Note that since a number r + si can be identified with a point (r, s) in the plane, C is basically "the same" as the Cartesian product R × R ("the same" meaning that any point in one determines a unique point in the other and, for the result of calculations, it doesn't matter which one is used for the calculation, as long as the multiplication rule is appropriate for R × R).

Paradoxes in early set theory

The unrestricted formation principle of sets (for any property P, the set {x : P(x)} exists), referred to as the axiom schema of unrestricted comprehension, is the source of several early appearing paradoxes:

The set of all ordinal numbers led, in the year 1897, to the Burali-Forti paradox, the first published antinomy.
The set of all cardinal numbers produced Cantor's paradox in 1897.
The set {x : x = x} yielded Cantor's second antinomy in the year 1899. Here the property x = x is true for all x, whatever x may be, so {x : x = x} would be a universal set, containing everything.
The set {x : x ∉ x}, i.e. the set of all sets that do not contain themselves as elements, gave Russell's paradox in 1902.

If the axiom schema of unrestricted comprehension is weakened to the axiom schema of specification or axiom schema of separation (given a set A and a property P, the set {x ∈ A : P(x)} exists), then all the above paradoxes disappear. There is a corollary.
With the axiom schema of separation as an axiom of the theory, it follows, as a theorem of the theory: the set R = {x : x ∉ x} does not exist. Or, more spectacularly (Halmos' phrasing): There is no universe. Proof: Suppose that it exists and call it U. Now apply the axiom schema of separation with A = U and, for P(x), use x ∉ x. This leads to Russell's paradox again. Hence U cannot exist in this theory.

Related to the above constructions is formation of the set B = {x : (x ∈ x) → (0 ≠ 0)}, where the statement following the implication certainly is false. It follows, from the definition of B, using the usual inference rules (and some afterthought when reading the proof in the linked article below), both that B ∈ B → (0 ≠ 0) and B ∈ B hold, hence 0 ≠ 0. This is Curry's paradox. It is (perhaps surprisingly) not the possibility of x ∈ x that is problematic. It is again the axiom schema of unrestricted comprehension allowing (x ∈ x) → (0 ≠ 0) as P(x). With the axiom schema of specification instead of unrestricted comprehension, the conclusion B ∈ B does not hold and hence 0 ≠ 0 is not a logical consequence. Nonetheless, the possibility of x ∈ x is often removed explicitly or, e.g. in ZFC, implicitly, by demanding the axiom of regularity to hold. One consequence of it is x ∉ x for every set x or, in other words, no set is an element of itself.

The axiom schema of separation is simply too weak (while unrestricted comprehension is a very strong axiom, too strong for set theory) to develop set theory with its usual operations and constructions outlined above. The axiom of regularity is of a restrictive nature as well. Therefore, one is led to the formulation of other axioms to guarantee the existence of enough sets to form a set theory. Some of these have been described informally above and many others are possible. Not all conceivable axioms can be combined freely into consistent theories. For example, the axiom of choice of ZFC is incompatible with the conceivable "every set of reals is Lebesgue measurable". The former implies the latter is false.
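The contrast between unrestricted comprehension and separation has a loose programming analogue: a Python set comprehension always filters an existing set, so applying Russell's predicate "x is not a member of itself" via separation harmlessly yields an ordinary set rather than a paradox. A toy sketch (the sample set A is an arbitrary assumption; Python frozensets can never contain themselves):

```python
# Separation: build {x in A : P(x)}, always relative to an existing set A.
def separation(A, P):
    return {x for x in A if P(x)}

# Russell's predicate. Since no frozenset contains itself, it holds for all
# elements of A, and separation just returns A back -- no contradiction.
A = {frozenset(), frozenset({1}), frozenset({1, 2})}
R = separation(A, lambda x: x not in x)
print(R == A)   # True: relative to A, the "Russell set" is unremarkable
```

The paradox only arises when one tries to form {x : x ∉ x} over "everything" at once, which separation, like the comprehension above, simply does not permit.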
See also

Algebra of sets
Axiomatic set theory
Internal set theory
List of set identities and relations
Set theory
Set (mathematics)
Partially ordered set

Notes

References

Bourbaki, N., Elements of the History of Mathematics, John Meldrum (trans.), Springer-Verlag, Berlin, Germany, 1994.
Devlin, K.J., The Joy of Sets: Fundamentals of Contemporary Set Theory, 2nd edition, Springer-Verlag, New York, NY, 1993.
Frápolli, María J., 1991, "Is Cantorian set theory an iterative conception of set?", Modern Logic, v. 1 n. 4, 302–318.
Kelley, J.L., General Topology, Van Nostrand Reinhold, New York, NY, 1955.
van Heijenoort, J., From Frege to Gödel, A Source Book in Mathematical Logic, 1879–1931, Harvard University Press, Cambridge, MA, 1967. Reprinted with corrections, 1977.

External links

Beginnings of set theory page at St. Andrews
Earliest Known Uses of Some of the Words of Mathematics (S)

Set theory
Systems of set theory
https://en.wikipedia.org/wiki/Tsentsak
Tsentsak are invisible pathogenic projectiles or magical darts utilized in indigenous and mestizo shamanic practices for the purposes of sorcery and healing throughout much of the Amazon Basin. Anthropologists identify them as objects referenced in emic accounts that represent indigenous beliefs. Tsentsak are not recognized in scientific medicine.

Etymology

The term tsentsak is derived from the Shuar language, which belongs to the Jivaroan language family. The Shuar are members of the Jivaroan peoples who reside in the Amazon rainforest of Peru and Ecuador. The term is also used interchangeably with virote (primarily by mestizo shamans), a Spanish term for a crossbow bolt, which was applied to the blow darts made by the Jivaroans from the spines of the Bactris and Astrocaryum palms.

Use

Tsentsak are stored by the shaman in his or her yachay, or phlegm, located in the chest and stomach. The tsentsak are embedded within this phlegm, and either the tsentsak or the yachay may be projected out of the shaman into a victim to cause illness and death. This phlegm is the materialization of the shaman's power; it is used to remove tsentsak from the bodies of victims as well as to protect the shaman from being harmed by the tsentsak of others. Tsentsak are only visible under the influence of a psychoactive substance called natemä, the Jívaro word for ayahuasca. When the shaman imbibes natemä, the world of spirits becomes visible. It is at this time that sorcerers and bewitching shamans can send tsentsak to their victims, while conversely, healers and curing shamans can remove tsentsak from their afflicted patients. Tsentsak are believed to possess their own agency and volition as living spirits that constantly desire to kill and consume human flesh. A shaman must learn to control their darts lest they escape and cause unintended harm.
To facilitate control of tsentsak they must be nourished by the consumption of mapacho (Nicotiana rustica), which can be smoked or imbibed as an infusion. A shaman who does not possess the necessary restraint to swallow their tsentsak when they rise to the back of their throat will become a sorcerer or bewitching shaman, while a shaman who can learn to control these urges will become a healer or curing shaman. An apprentice shaman who receives their first tsentsak from a predominantly bewitching shaman is likely to become a sorcerer, while an apprentice who receives their first tsentsak from a curing shaman will most likely become a healer.

Sorcery

Throughout much of the Amazon, tsentsak are believed to be the primary cause of illness and nonviolent death. These magical darts are utilized by brujos (shamans specializing in attack sorcery) to bring suffering and death to their victims. The darts can be regurgitated at will by the sorcerer and projected from the mouth into the body of the victim. If the dart passes entirely through the victim, the victim will die in three to seven days; however, if the dart becomes lodged in the victim's body, it may be removed by a curing shaman. The shaman can collect plants, insects, and many other objects small enough to be swallowed, which he may then convert into tsentsak. Each variety of tsentsak possesses its own specific attributes and degree of ability to cause illness. The amount and variety of tsentsak collected by a shaman is directly proportional to his power and ability to kill and heal.

Healing

While sorcerers may use tsentsak for malevolent purposes, healing shamans use these magic darts to create a barrier of protection around their body. They also possess the ability to suck tsentsak from the body of a victim, which can then be sent back to the sorcerer from whom it originated. The healing shaman must imbibe ayahuasca to make the darts visible in the victim's body in order to remove them.
To remove the malevolent tsentsak the curing shaman must suck it out of the victim's body. In preparation for this act the shaman must first regurgitate two of his own tsentsak into the back of his throat. The first is used to block the shaman from accidentally ingesting the malevolent tsentsak, which would most likely lead to his death. The second is used to absorb or dissolve the malevolent tsentsak which the shaman then spits out or sends back to the sorcerer. Commodification A shaman will typically receive his first tsentsak from a practicing shaman to whom he has apprenticed. The practicing shaman will regurgitate some of his yachay containing the tsentsak, which the apprentice must then swallow. The apprentice can then keep the darts in his stomach indefinitely. Many shamans will travel great distances to trade tsentsak with other powerful shamans from distant regions. They may also buy and sell tsentsak to increase the potency of their power. To transfer tsentsak to another individual, both the buyer and seller must consume ayahuasca to make the tsentsak visible. The seller must then drink an infusion of Nicotiana rustica to regurgitate the tsentsak, which is then displayed to the buyer. The buyer then swallows the tsentsak, thereby adding it to his collection. Tsentsak may also be sold in the more tangible forms of tree thorns, insects, small stones, and even pieces of razor blade. Dietary and sexual restrictions In the context of Jivaroan shamanism, an apprentice shaman must abstain from sexual intercourse and follow a special diet for a period of at least three months after receiving their first tsentsak. If these restrictions are broken, the darts will leave the body of the shaman and the process must be started from the beginning. To gain the power to kill and cure, a shaman must observe these restrictions for a period of five months. 
Mestizo shamans adhere to a similar restrictive diet in preparation for the consumption of ayahuasca before shamanic rituals. These restrictions help to instill self-control and emotional mastery in preparation for a career as a healer. A shaman who breaks the restrictions cannot control the urges of their tsentsak and will become a sorcerer. See also Ayahuasca Jivaroan peoples Michael Harner Shamanism Shuar people Tsunki Tutelary deity Yachay Notes References External links Singing to the Plants Website of Stephan Beyer Shamanism.org Interview with Michael Harner Anthropology Magic (supernatural) American witchcraft Shamanism of the Americas
https://en.wikipedia.org/wiki/Hans%20Kramers
Hendrik Anthony "Hans" Kramers (17 December 1894 – 24 April 1952) was a Dutch physicist who worked with Niels Bohr to understand how electromagnetic waves interact with matter and made important contributions to quantum mechanics and statistical physics.

Background and education

Hans Kramers was born on 17 December 1894 in Rotterdam, the son of Hendrik Kramers, a physician, and Jeanne Susanne Breukelman. In 1912 Hans finished secondary education (HBS) in Rotterdam, and studied mathematics and physics at the University of Leiden, where he obtained a master's degree in 1916. Kramers wanted to gain foreign experience during his doctoral research, but his first choice of supervisor, Max Born in Göttingen, was unreachable because of the First World War. Because Denmark was neutral in the war, as was the Netherlands, he travelled (by ship, as overland travel was impossible) to Copenhagen, where he paid an unannounced visit to the then still relatively unknown Niels Bohr. Bohr took him on as a Ph.D. candidate, and Kramers prepared his dissertation under Bohr's direction. Although Kramers did most of his doctoral research (on the intensities of atomic transitions) in Copenhagen, he obtained his formal Ph.D. under Ehrenfest in Leiden, on 8 May 1919. Kramers enjoyed music and played the cello and the piano.

Academic career

He worked for almost ten years in Bohr's group, becoming an associate professor at the University of Copenhagen. He played a role in the ill-fated BKS theory of 1924–25. Kramers left Denmark in 1926 and returned to the Netherlands. He became a full professor in theoretical physics at Utrecht University, where he supervised Tjalling Koopmans. In 1925, together with Werner Heisenberg, he developed the Kramers–Heisenberg dispersion formula, and in 1926 he was one of the authors of the WKB method. He is also credited with introducing, in 1948, the concept of renormalization into quantum field theory, although his approach was nonrelativistic.
He is also credited, together with Ralph Kronig, with the Kramers–Kronig relations, mathematical equations relating the real and imaginary parts of complex functions constrained by causality. One further refers to a Kramers turnover when the rate of thermally activated barrier crossing, as a function of the damping, goes through a maximum, thereby undergoing a transition between the energy-diffusion and spatial-diffusion regimes. He is also known for Kramers' degeneracy theorem. In 1934 he left Utrecht and succeeded Paul Ehrenfest in Leiden. From 1931 until his death he also held a cross appointment at Delft University of Technology. Kramers was one of the founders of the Mathematisch Centrum in Amsterdam.

Family

On 25 October 1920 he married Anna Petersen. They had three daughters and one son.

Recognition

Kramers became a member of the Royal Netherlands Academy of Arts and Sciences in 1929; he was forced to resign in 1942 and rejoined the Academy in 1945. He was an international member of the American Philosophical Society. Kramers won the Lorentz Medal in 1947 and the Hughes Medal in 1951. Notes See also Spin (physics) Stark effect References External links H.B.G. Casimir, Kramers, Hendrik Anthony (1894–1952), in Biografisch Woordenboek van Nederland. (in Dutch) J.M. Romein, Hendrik Anthony Kramers, in: Jaarboek van de Maatschappij der Nederlandse Letterkunde te Leiden, 1951–1953, pp. 83–91. (in Dutch) Ph.D. candidates of H.A. Kramers: 1929-1952 Publications of H.A. Kramers 1894 births 1952 deaths 20th-century Dutch physicists Quantum physicists Academic staff of the Delft University of Technology Leiden University alumni Members of the Royal Netherlands Academy of Arts and Sciences Probability theorists Lorentz Medal winners Scientists from Rotterdam Presidents of the International Union of Pure and Applied Physics Members of the Royal Swedish Academy of Sciences Members of the American Philosophical Society
https://en.wikipedia.org/wiki/Sphingosyl%20phosphatide
Sphingosyl phosphatide refers to a lipid containing phosphorus and a long-chain base. References Cyberlipid Center – sphingosylphosphatides Phospholipids
https://en.wikipedia.org/wiki/Matricin
Matricin is a sesquiterpene. It can be extracted from the flowers of chamomile (Matricaria chamomilla). Matricin is colorless. Chamazulene, a blue-violet derivative of azulene found in a variety of plants, including chamomile (Matricaria chamomilla), wormwood (Artemisia absinthium) and yarrow (Achillea millefolium), is biosynthesized from matricin. References Sesquiterpenes Dienes Acetate esters Tertiary alcohols Anthemideae Heterocyclic compounds with 3 rings Oxygen heterocycles
https://en.wikipedia.org/wiki/Dempwolff%20group
In mathematical finite group theory, the Dempwolff group is a finite group of order 319979520 = 2^15·3^2·5·7·31, that is the unique nonsplit extension of SL5(F2) by its natural module of order 2^5. The uniqueness of such a nonsplit extension was shown by Dempwolff, and the existence by Thompson, who showed using some computer calculations that the Dempwolff group is contained in the compact Lie group E8 as the subgroup fixing a certain lattice in the Lie algebra of E8, and is also contained in the Thompson sporadic group (the full automorphism group of this lattice) as a maximal subgroup. It was shown that any extension of SLn(Fq) by its natural module splits if q > 2. Note that this does not necessarily apply in every related case; for example, there is a non-split extension 5^3·SL3(5), which is a maximal subgroup of the Lyons group. It was later shown that, for q = 2, the extension also splits if n is not 3, 4, or 5, and in each of these three cases there is just one non-split extension. These three nonsplit extensions can be constructed as follows: The nonsplit extension 2^3·SL3(2) is a maximal subgroup of the Chevalley group G2(3). The nonsplit extension 2^4·SL4(2) is a maximal subgroup of the sporadic Conway group Co3. The nonsplit extension 2^5·SL5(2) is a maximal subgroup of the Thompson sporadic group Th. References External links Dempwolff group at the atlas of groups. Finite groups
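The order stated above is easy to verify numerically. A quick sketch using the standard order formula |GL(n, q)| = ∏_{k=0}^{n−1} (q^n − q^k), noting that SL(5, 2) = GL(5, 2) because the determinant takes values in a group of order q − 1 = 1:

```python
from math import prod

def gl_order(n, q):
    """Order of GL(n, q); for q = 2 this equals the order of SL(n, 2)."""
    return prod(q**n - q**k for k in range(n))

sl5_2 = gl_order(5, 2)
print(sl5_2)                       # 9999360
print(2**5 * sl5_2)                # 319979520: |module| * |SL5(2)|
print(2**15 * 3**2 * 5 * 7 * 31)   # 319979520: the stated factorization
```

Both products agree with the order 319979520 quoted for the group.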
https://en.wikipedia.org/wiki/Radical.fm
Radical.FM was a digital music streaming service available on iOS devices, Android devices, and desktop web browsers. The service allowed users to create their own custom online radio stations based on musical genres. Unlike other streaming services such as Spotify and Pandora Radio, Radical.FM was completely free and based on donations through a "pay-what-you-can" model. The mobile application did not show or play audio ads, or charge subscription fees. Radical.FM was founded by Thomas McAlevey, who previously launched the radio station Bandit Rock and the website Tomsradio.com. Radical.FM was headquartered in Venice, California. The service was released on iTunes in August 2013, and on Android in June 2014, with a catalog of over 25 million tracks. Radical.FM claimed the service relied on human curation for its genres, which were used to build "the best personalized radio stations in the world." The company launched a desktop service in 2014. In October 2015, the company introduced RadCasting, personal broadcasting with synchronous music sharing and discovery, where users could broadcast and share their stations with any other user, on mobile or desktop. The company shut down in 2018. See also List of Internet radio stations References External links American music websites Defunct digital music services or companies Internet radio Internet properties disestablished in 2018
https://en.wikipedia.org/wiki/NinjaTel%20Van
The NinjaTel Van is a 2001 Ford Econoline E250 van, designed and converted by Bob "saberfire" Bristow and Colleen "Phar" Campbell into the base of operations for NinjaTel. From July 26 to July 29, 2012, the Ninja Networks team created and operated a mobile cell phone network from the van, placed in the vendor area of DEF CON 20 at the Rio Hotel/Casino in Las Vegas and at the Ninja Party at the Rumor Boutique Hotel in Las Vegas. NinjaTel served a small network of 650 GSM phones using custom SIM cards. Work on the van began in September 2011 and was completed on July 26, 2012, just in time for DEF CON 20. The van is equipped with a mobile GSM cellular network, featuring all necessary equipment and a roof-mounted antenna.

Network

The network uses OpenBTS, Asterisk, and an Ettus Research Universal Software Radio Peripheral to provide voice and SMS service to connected devices. During DEF CON 20 it did not have data capability.

Other deployments

In 2013 the van was used to provide wireless network connectivity to a remote wilderness area for the production of Capture, an American reality competition television series on The CW. Gallery References External links NinjaTel, the hacker cellphone network - RobotSkirts Mobile telecommunications networks
https://en.wikipedia.org/wiki/PS210%20experiment
The PS210 experiment was the first experiment that led to the observation of antihydrogen atoms produced at the Low Energy Antiproton Ring (LEAR) at CERN in 1995. The antihydrogen atoms were produced in flight and moved at nearly the speed of light. They produced unique electrical signals in the detectors, in which they were destroyed by matter–antimatter annihilation almost immediately after they formed. Eleven signals were observed, of which two were attributed to other processes. In 1997 similar observations were announced at Fermilab from the E862 experiment. The first measurement demonstrated the existence of antihydrogen; the second (with improved setup and intensity monitoring) measured the production rate. Both experiments, one at each of the only two facilities with suitable antiprotons, were stimulated by calculations which suggested the possibility of making very fast antihydrogen within existing circular accelerators. References Further reading Particle experiments CERN experiments External links PS210 experiment record on INSPIRE-HEP
https://en.wikipedia.org/wiki/Oxidative%20addition
Oxidative addition and reductive elimination are two important and related classes of reactions in organometallic chemistry. Oxidative addition is a process that increases both the oxidation state and coordination number of a metal centre. Oxidative addition is often a step in catalytic cycles, in conjunction with its reverse reaction, reductive elimination.

Role in transition metal chemistry

For transition metals, oxidative addition results in a decrease in the d-electron count, from a dn configuration to one with fewer d electrons, often two fewer. Oxidative addition is favored for metals that are (i) basic and/or (ii) easily oxidized. Metals with a relatively low oxidation state often satisfy one of these requirements, but even high oxidation state metals undergo oxidative addition, as illustrated by the oxidation of Pt(II) with chlorine:

[PtCl4]2− + Cl2 → [PtCl6]2−

In classical organometallic chemistry, the formal oxidation state of the metal and the electron count of the complex both increase by two. One-electron changes are also possible, and in fact some oxidative addition reactions proceed via a series of 1e changes. Although oxidative additions can occur with the insertion of a metal into many different substrates, oxidative additions are most commonly seen with H–H, H–X, and C–X bonds because these substrates are most relevant to commercial applications. Oxidative addition requires that the metal complex have a vacant coordination site. For this reason, oxidative additions are common for four- and five-coordinate complexes. Reductive elimination is the reverse of oxidative addition. Reductive elimination is favored when the newly formed X–Y bond is strong. For reductive elimination to occur the two groups (X and Y) should be mutually adjacent in the metal's coordination sphere. Reductive elimination is the key product-releasing step of several reactions that form C–H and C–C bonds.
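The bookkeeping described here (formal oxidation state, coordination number, and valence electron count each changing by two) can be sketched as a toy tally. This is an illustrative accounting device, not a chemistry library; the sample numbers correspond to the H2 addition to Vaska's complex discussed under Mechanisms (Ir(I), four-coordinate, 16 e− going to Ir(III), six-coordinate, 18 e−):

```python
# State is a tuple: (oxidation state, coordination number, valence electrons).
def oxidative_addition(state):
    ox, coord, electrons = state
    return (ox + 2, coord + 2, electrons + 2)

def reductive_elimination(state):
    ox, coord, electrons = state
    return (ox - 2, coord - 2, electrons - 2)

vaska = (1, 4, 16)                       # Ir(I), 4-coordinate, 16 e-
adduct = oxidative_addition(vaska)
print(adduct)                            # (3, 6, 18): Ir(III), 6-coordinate, 18 e-
print(reductive_elimination(adduct))     # (1, 4, 16): the reverse reaction
```

The two functions being exact inverses mirrors the statement that reductive elimination is the reverse of oxidative addition.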
Mechanisms

Oxidative additions proceed by diverse pathways that depend on the metal center and the substrates.

Concerted pathway

Oxidative additions of nonpolar substrates such as hydrogen and hydrocarbons appear to proceed via concerted pathways. Such substrates lack π-bonds; consequently a three-centered σ complex is invoked, followed by intramolecular cleavage of the ligand bond (probably by donation of an electron pair into the σ* orbital of the interligand bond) to form the oxidized complex. The resulting ligands will be mutually cis, although subsequent isomerization may occur. This mechanism applies to the addition of homonuclear diatomic molecules such as H2. Many C–H activation reactions also follow a concerted mechanism through the formation of an M–(C–H) agostic complex. A representative example is the reaction of hydrogen with Vaska's complex, trans-IrCl(CO)[P(C6H5)3]2. In this transformation, iridium changes its formal oxidation state from +1 to +3. The product is formally bound to three anions: one chloride and two hydride ligands. As shown below, the initial metal complex has 16 valence electrons and a coordination number of four, whereas the product is a six-coordinate 18-electron complex. Formation of a trigonal bipyramidal dihydrogen intermediate is followed by cleavage of the H–H bond, due to electron back-donation into the H–H σ*-orbital, i.e. a sigma complex. This system is also in chemical equilibrium, with the reverse reaction proceeding by the elimination of hydrogen gas with simultaneous reduction of the metal center. The electron back-donation into the H–H σ*-orbital that cleaves the H–H bond causes electron-rich metals to favor this reaction. The concerted mechanism produces a cis dihydride, while the other oxidative addition pathways do not usually produce cis adducts.

SN2-type

Some oxidative additions proceed analogously to the well-known bimolecular nucleophilic substitution reactions in organic chemistry.
Nucleophilic attack by the metal center at the less electronegative atom in the substrate leads to cleavage of the R–X bond, to form an [M–R]+ species. This step is followed by rapid coordination of the anion to the cationic metal center, for example in the reaction of a square planar complex with methyl iodide. This mechanism is often assumed in the addition of polar and electrophilic substrates, such as alkyl halides and halogens.

Ionic

The ionic mechanism of oxidative addition is similar to the SN2 type in that it involves the stepwise addition of two distinct ligand fragments. The key difference is that ionic mechanisms involve substrates which are dissociated in solution prior to any interactions with the metal center. An example of ionic oxidative addition is the addition of hydrogen chloride.

Radical

In addition to undergoing SN2-type reactions, alkyl halides and similar substrates can add to a metal center via a radical mechanism, although some details remain controversial. However, reactions that are generally accepted to proceed by a radical mechanism are known. One example was proposed by Lednor and co-workers.

Initiation
[(CH3)2C(CN)N]2 → 2 (CH3)2(CN)C• + N2
(CH3)2(CN)C• + PhBr → (CH3)2(CN)CBr + Ph•

Propagation
Ph• + [Pt(PPh3)2] → [Pt(PPh3)2Ph]•
[Pt(PPh3)2Ph]• + PhBr → [Pt(PPh3)2PhBr] + Ph•

Applications

Oxidative addition and reductive elimination are invoked in many catalytic processes in homogeneous catalysis, e.g., hydrogenations, hydroformylations, hydrosilylations, etc. Cross-coupling reactions like the Suzuki coupling, Negishi coupling, and the Sonogashira coupling also proceed by oxidative addition. References Further reading External links Chemical reactions Coordination chemistry Organometallic chemistry Reaction mechanisms Redox
https://en.wikipedia.org/wiki/Anaerolinea%20thermophila
Anaerolinea thermophila is a species of filamentous thermophilic bacteria, the type and only species of its genus. It is Gram-negative, non-spore-forming, with type strain UNI-1T (=JCM 11387T =DSM 14523T). References Further reading Satyanarayana, Tulasi, Jennifer Littlechild, and Yutaka Kawarabayasi. "Thermophilic Microbes in Environmental and Industrial Biotechnology." Stroo, Hans F., Andrea Leeson, and C. Herb Ward, eds. Bioaugmentation for Groundwater Remediation. Vol. 5. Springer, 2012. External links LPSN Type strain of Anaerolinea thermophila at BacDive - the Bacterial Diversity Metadatabase Chloroflexota Thermophiles Gram-negative bacteria Bacteria described in 2003
https://en.wikipedia.org/wiki/Nocturnal%20sleep-related%20eating%20disorder
Nocturnal sleep-related eating disorder (NSRED) is a combination of a parasomnia and an eating disorder. It is a non-rapid eye movement sleep (NREM) parasomnia. It is described as belonging to a specific category within somnambulism, a state of sleepwalking that includes behaviors connected to a person's conscious wishes or wants. Thus NSRED often involves a person's acting out, during sleep, of conscious wants that they suppress; however, this disorder is difficult to distinguish from other similar types of disorders.

Signs and symptoms

Over the past 30 years, several studies have found that those affected by NSRED all have different symptoms and behaviors specific to them, yet they also all share similar characteristics that doctors and psychologists have identified to distinguish NSRED from other combinations of sleep and eating disorders, such as night eating syndrome. Winkelman says that typical behaviors for patients with NSRED include: "Partial arousals from sleep, usually within 2 to 3 hours of sleep onset, and subsequent ingestion of food in a rapid or 'out of control' manner." They will also attempt to eat bizarre amalgamations of foods and even potentially harmful substances such as glue, wood, or other toxic materials. In addition, Schenck and Mahowald noted that their patients mainly ate sweets, pastas, both hot and cold meals, and improper substances such as "raw, frozen, or spoiled foods; salt or sugar sandwiches; buttered cigarettes; and odd mixtures prepared in a blender." During the handling of this food, patients with NSRED distinguish themselves, as they are usually messy or harmful to themselves. Some eat their food with their bare hands while others attempt to eat it with utensils. This occasionally results in injuries to the person as well as other accidents.
After completing their studies, Schenck and Mahowald said, "Injuries resulted from the careless cutting of food or opening of cans; consumption of scalding fluids (coffee) or solids (hot oatmeal); and frenzied running into walls, kitchen counters, and furniture." A few of the more notable symptoms of this disorder include large amounts of weight gain over short periods of time, particularly in women; irritability during the day, due to lack of restful sleep; and vivid dreams at night. It is easily distinguished from regular sleepwalking by the typical behavioral sequence consisting of "rapid, 'automatic' arising from bed, and immediate entry into the kitchen." In addition, throughout all of the studies done, doctors and psychiatrists discovered that these symptoms are invariant across weekdays, weekends, and vacations, and that the eating excursions are erratically spread throughout a sleep cycle. Most people with this disease retain no control over when they arise and consume food in their sleep. Although some have been able to restrain themselves from indulging in their unconscious appetites, some have not and must turn to alternative methods of stopping this disorder. It is important for trained physicians to recognize these symptoms in their patients as quickly as possible, so those with NSRED may be treated before they injure themselves.

Diagnosis

Criteria

The diagnostic criteria utilized by the International Classification of Sleep Disorders, Third Edition (ICSD-3) include dysfunctional eating after waking during the main sleep period, eating of unusual or toxic substances, and negative health consequences. The patient may be injured during these episodes, and might not be conscious of them or remember them. This criterion differentiates SRED from night eating syndrome (NES): patients with NES are conscious during the episode.
Differential diagnosis NSRED is closely related to night eating syndrome (NES) except that those with NES are completely awake and aware of their eating and bingeing at night, while those with NSRED are sleeping and unaware of what they are doing. NES is primarily considered an eating disorder while NSRED is primarily considered a parasomnia; however, both are a combination of parasomnia and eating disorders, since those with NES usually have insomnia or difficulty sleeping and those with NSRED experience symptoms similar to binge eating. Some even argue over whether NES and NSRED are the same or distinct disorders. Even though there have been debates over these two disorders, specialists have examined them to try to determine the differences. Dr. J. Winkelman noted several features of the two disorders that were similar, but he gave one important factor that makes these disorders different. In his article "Sleep-Related Eating Disorder and Night Eating Syndrome: Sleep Disorders, Eating Disorders, Or both", Winkelman said, "Both [disorders] involve nearly nightly binging at multiple nocturnal awakenings, defined as excess calorie intake or loss of control over consumption." He also reported that both disorders have a common occurrence of approximately one to five percent of adults, have been predominantly found in women, with a young adult onset, have a chronic course, have a primary morbidity of weight gain, sleep disruption, and shame over loss of control over food intake, have familial bases, and have been observed to have comorbid depression and daytime eating disorders. However, Winkelman said, "The most prominent cited distinction between NES and SRED is the level of consciousness during nighttime eating episodes." Therefore, these two disorders are extremely similar, with only one distinction between them.
Doctors and psychologists have difficulty differentiating between NES and NSRED, but the distinction of a person's level of consciousness is what doctors chiefly rely on to make a diagnosis. One mistake that is often made is the misdiagnosis of NSRED as NES. However, even though NSRED is not a commonly known and diagnosed disease, many people experience it in differing ways while doctors work to find a treatment that works for everyone; several studies have been done on NSRED, such as the one conducted by Schenck and Mahowald. These studies, in turn, provide the basic information on this disorder, including the symptoms, behaviors, and possible treatments that doctors are using today. Treatment For those patients who have not been able to stop this disorder on their own, doctors have been working to discover a treatment that will work for everyone. One treatment that Schenck and Mahowald studied consisted of psychotherapy combined with "environmental manipulation". This was usually done separately from weight-reducing diets. However, during this study only 10 percent of the patients were able to lose more than one third of their initial excess weight, which was not a satisfactory result. In addition, they reported that many of the patients experienced "major depression" and "severe anxiety" during the attempted treatments. This was not one of the most successful attempts to help those with NSRED. However, Dr. R. Auger reported on another trial treatment in which patients were treated with pramipexole. Those conducting the treatment noted that nocturnal median motor activity was decreased, as assessed by actigraphy, and individual patients reported improved sleep quality. Nevertheless, Auger also said, "27 percent of subjects had RLS (restless legs syndrome, a condition known to respond to this medication), and number and duration of waking episodes related to eating behaviors were unchanged."
Encouraged by the positive response verified in the above-mentioned trial treatment, doctors and psychiatrists conducted a more recent study, described by Auger as "efficacy of topiramate [an antiepileptic drug associated with weight loss] in 17 consecutive patients with NSRED." Of the 65 percent of patients who continued to take the medication on a regular basis, all confirmed either considerable improvement or absolute remission of "night-eating", in addition to "significant weight loss" being achieved. This has been one of the most effective treatments discovered so far, but many patients still had NSRED. Therefore, other treatments were sought. Such treatments include those targeted at associated sleep disorders, with the hope that treating them would play an essential part in the treatment of NSRED. In Schenck and Mahowald's series, combinations of carbidopa/L-dopa, codeine, and clonazepam were used to treat five patients with RLS and one patient with somnambulism and PLMS (periodic limb movements in sleep). All of these patients had NSRED as well as these other disorders, and they all experienced a remission of their NSRED as a result of taking these drugs. Two patients with OSA (obstructive sleep apnea) and NSRED were also reported as having a "resolution of their symptoms with nasal continuous positive airway pressure (nCPAP) therapy." Clonazepam monotherapy was also found to be successful in 50 percent of patients with simultaneous somnambulism. Dopaminergic agents given as monotherapy were effective in 25 percent of the NSRED subgroup. Success with combinations of dopaminergic and opioid drugs, with the occasional addition of sedatives, was also found in seven patients without associated sleep disorders. In those for whom opioids and sedatives are relatively contraindicated (e.g., those with histories of substance abuse), two case reports described success with a combination of bupropion, levodopa, and trazodone.
Notably, hypnotherapy, psychotherapy, and various behavioral techniques, including environmental manipulation, were not effective for the majority of the patients studied. Nevertheless, Auger argues that behavioral strategies should complement the overall treatment plan and should include deliberate placement of food to avoid indiscriminate wandering, maintenance of a safe sleep environment, and education regarding proper sleep hygiene and stress management. Even with their extensive studies, Schenck and Mahowald did not find the same success that Auger found in treating his patients with topiramate. History The first case of NSRED was reported in 1955, but over the next 36 years, only nine more reports were made of this syndrome. Seven of these reports were single-case studies and the other two instances were seen during objective sleep studies, all done by psychiatrists and doctors. Schenck and Mahowald were the first to conduct a major study on this disorder. They started their study of NSRED in 1985 and continued until 1993, collecting several cases among a total of 38 cases of various sleep-related disorders. Many of the cases they observed had symptoms that overlapped with those of NES, but this study was the first to discover that NSRED was different from NES in that those with NSRED were either partially or completely unaware of their actions at night while those with NES were aware. Schenck and Mahowald also discovered that none of the patients had any eating instability before their problems at night while sleeping. In their 1993 report, they summarized the major findings with the idea that women encompass at least two thirds of the patients and that the majority of these patients had become overweight. They also discovered that while the patients' night-eating normally started during early adulthood, this was not always the case; onset ranged from as early as childhood to as late as middle adulthood.
These patients not only had NSRED, but many had also exhibited other nighttime behaviors, such as sleep terrors, for several years. This revolutionized the way people saw NSRED. With the technological age growing and more people becoming obese, Schenck and Mahowald's discovery that NSRED causes a large weight increase helped doctors more easily identify this disorder. As seen in Table 1 below, almost half of Schenck and Mahowald's patients were significantly obese. According to body mass index criteria, no patient was emaciated. Schenck and Mahowald said, "virtually all patients had accurate non-distorted appraisals of their body size, shape, and weight. Furthermore, unlike the patients in Stunkard's series, none of our patients had problematic eating in the evening between dinner and bedtime; sleep onset insomnia was not present; and sleep latency was usually brief, apart from several patients with RLS." After realizing what was wrong with them, many of Schenck and Mahowald's patients with NSRED restricted their daytime eating and over-exercised. This table summary identifies the initial findings concerning NSRED, and it shows how NSRED is a random malady that affects many different types of people in individual ways. See also Night eating syndrome References External links Eating disorders Sleep disorders Sleepwalking
Nocturnal sleep-related eating disorder
Biology
2,549
6,004,027
https://en.wikipedia.org/wiki/Walther%20Mayer
Walther Mayer (11 March 1887 – 10 September 1948) was an Austrian mathematician, born in Graz, Austria-Hungary. With Leopold Vietoris he is the namesake of the Mayer–Vietoris sequence in topology. He served as an assistant to Albert Einstein, and was nicknamed "Einstein's calculator". Biography Mayer studied at the Federal Institute of Technology in Zürich and the University of Paris before receiving his doctorate in 1912 from the University of Vienna; his thesis concerned the Fredholm integral equation. He served in the military between 1914 and 1919, during which he found time to complete a habilitation on differential geometry. Because he was Jewish, he had little opportunity for an academic career in Austria, and left the country; however, in 1926, with help from Einstein, he returned to a position at the University of Vienna as Privatdozent (lecturer). He made a name for himself in topology with the Mayer–Vietoris sequence, and with an axiomatic treatment of homology predating the Eilenberg–Steenrod axioms. He also published a book on Riemannian geometry in 1930, the second volume of a textbook on differential geometry that had been started by Adalbert Duschek with a volume on curves and surfaces. In 1929, on the recommendation of Richard von Mises, he became Albert Einstein's assistant with the explicit understanding that he work with him on distant parallelism, and from 1931 to 1936, he collaborated with Albert Einstein on the theory of relativity. In 1933, after Hitler's assumption of power, he followed Einstein to the United States and became an associate in mathematics at the Institute for Advanced Study in Princeton, New Jersey. He continued working on mathematics at the Institute, and died in Princeton in 1948. Selected publications with Adalbert Duschek: Lehrbuch der Differentialgeometrie. 2 vols., Teubner 1930. vol. 1 vol. 2 Über abstrakte Topologie. In: Monatshefte für Mathematik. vol. 36, 1929, pp. 1–42 (Mayer-Vietoris-Sequenzen) with T. Y. 
Thomas: Foundations of the theory of Lie groups. In: Annals of Mathematics. 36, 1935, 770–822. Die Differentialgeometrie der Untermannigfaltigkeiten des Rn konstanter Krümmung. Transactions of the American Mathematical Society 38 no. 2, 1935: 267–309. with T. Y. Thomas: Fields of parallel vectors in non-analytic manifolds in the large. Compositio Mathematica, vol. 5, 1938: pp. 198-207. with Herbert Busemann: "On the foundations of calculus of variations." Transactions of the American Mathematical Society 49, no. 2, 1941: 173-198 A new homology theory. In: Annals of Mathematics. vol. 43, 1942, pp. 370–380, 594–605. The Duality Theory and the Basic Isomorphisms of Group Systems and Nets and Co-Nets of Group Systems. In: Annals of Mathematics. vol. 46, 1945, pp. 1–28 On Products in Topology. In: Annals of Mathematics. vol. 46, 1945, pp. 29–57. Duality Theorems. In: Fundamenta Mathematicae 35, 1948, 188–202. References External links Portrait of Walther Mayer (1940), United States Holocaust Memorial Museum 1887 births 1948 deaths Austrian mathematicians Austrian Jews Mathematicians from Austria-Hungary University of Vienna alumni Topologists Institute for Advanced Study people
Walther Mayer
Mathematics
741
28,335,503
https://en.wikipedia.org/wiki/Norvaline
Norvaline (abbreviated as Nva) is an amino acid with the formula CH3(CH2)2CH(NH2)CO2H. The compound is a structural analog of valeric acid and also an isomer of the more common amino acid valine. Like most other α-amino acids, norvaline is chiral. It is a white, water-soluble solid. Occurrence Norvaline is a non-proteinogenic unbranched-chain amino acid. It has previously been reported to be a natural component of an antifungal peptide of Bacillus subtilis. Norvaline and other modified unbranched-chain amino acids have received attention because they appear to be incorporated in some recombinant proteins found in E. coli. Its biosynthesis has been examined. The incorporation of Nva into peptides reflects the imperfect selectivity of the associated aminoacyl-tRNA synthetase. In Miller–Urey experiments probing prebiotic synthesis of amino acids, norvaline, but also norleucine, are produced. Nomenclature Norvaline and norleucine (one hydrocarbon group longer) both possess the nor- prefix for historical reasons, despite current conventional usage of the prefix to denote a missing hydrocarbon group (under which they would theoretically be called "dihomoalanine" and "trihomoalanine"). The name is not systematic, and the IUPAC/IUB Joint Commission on Nomenclature recommends that this name should be abandoned and the systematic name should be used. Potential uses Norvaline is used as a dietary supplement for bodybuilding. Recently, it has been suggested as a treatment for Alzheimer's disease. References Alpha-Amino acids Biochemistry Non-proteinogenic amino acids
Norvaline
Chemistry,Biology
362
20,127,258
https://en.wikipedia.org/wiki/Video%20aggregator
A video aggregator is a website that collects and organizes online videos from other sources. Video aggregation is done for different purposes, and websites take different approaches to achieve their purpose. Some sites try to collect videos of high quality or interest for visitors to view; the collection may be made by editors or may be based on community votes. Another method is to base the collection on those videos most viewed, either at the aggregator site or at various popular video hosting sites. Some other sites exist to allow users to collect their own sets of videos, for personal use as well as for browsing and viewing by others; these sites can develop online communities around video sharing. Still other sites allow users to create a personalized video playlist, for personal use as well as for browsing and viewing by others. References Aggregation websites Video
Video aggregator
Technology
164
4,971,502
https://en.wikipedia.org/wiki/Ethernet%20over%20SDH
Ethernet Over SDH (EoS or EoSDH) or Ethernet over SONET refers to a set of protocols which allow Ethernet traffic to be carried over synchronous digital hierarchy networks in an efficient and flexible way. The same functions are available using SONET. Ethernet frames which are to be sent on the SDH link are sent through an "encapsulation" block (typically Generic Framing Procedure or GFP) to create a synchronous stream of data from the asynchronous Ethernet packets. The synchronous stream of encapsulated data is then passed through a mapping block which typically uses virtual concatenation (VCAT) to route the stream of bits over one or more SDH paths. As this is byte interleaved, it provides a better level of security compared to other mechanisms for Ethernet transport. After traversing the SDH paths, the traffic is processed in the reverse fashion: virtual concatenation path processing to recreate the original synchronous byte stream, followed by decapsulation to convert the synchronous data stream to an asynchronous stream of Ethernet frames. The SDH paths may be VC-4, VC-3, VC-12 or VC-11 paths. Up to 64 VC-11 or VC-12 paths can be concatenated together to form a single larger virtually concatenated group. Up to 256 VC-3 or VC-4 paths can be concatenated together to form a single larger virtually concatenated group. The paths within a group are referred to as "members". A virtually concatenated group is typically referred to by the notation VC-n-Xv, where VC-n is VC-4, VC-3, VC-12 or VC-11 and X is the number of members in the group. A 10-Mbit/s Ethernet link is often transported over a VC-12-5v, which allows the full bandwidth to be carried for all packet sizes. A 100-Mbit/s Ethernet link is often transported over a VC-3-2v, which allows the full bandwidth to be carried when smaller packets are used (< 250 bytes) and Ethernet flow control restricts the rate of traffic for larger packets. However, this gives only about 97 Mbit/s, not the full 100 Mbit/s.
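The group capacities quoted above follow from a simple multiplication: the payload capacity of a virtually concatenated group is the per-member container payload rate times the number of members. A minimal sketch (the payload rates below are the standard approximate SDH container payload capacities, included here for illustration):

```python
# Approximate payload capacities of SDH virtual containers, in Mbit/s
# (standard textbook values, not vendor-exact figures).
VC_PAYLOAD_MBPS = {
    "VC-11": 1.600,
    "VC-12": 2.176,
    "VC-3": 48.384,
    "VC-4": 149.760,
}

def vcat_capacity(vc_type: str, members: int) -> float:
    """Payload capacity of a virtually concatenated group VC-n-Xv."""
    return VC_PAYLOAD_MBPS[vc_type] * members

# The groups discussed above:
print(f"VC-12-5v: {vcat_capacity('VC-12', 5):.2f} Mbit/s")  # just over 10 Mbit/s
print(f"VC-3-2v : {vcat_capacity('VC-3', 2):.2f} Mbit/s")   # the ~97 Mbit/s figure
```

VC-3-2v works out to 2 × 48.384 ≈ 96.8 Mbit/s, which is why a 100-Mbit/s Ethernet link mapped onto it cannot carry the full line rate.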
A 1000-Mbit/s (or 1 GigE) Ethernet link is often transported over a VC-3-21v or a VC-4-7v, which allows the full bandwidth to be carried for all packets. EoS also drops the "idle" characters between Ethernet frames before encapsulation into GFP; these are recreated at the other end during the decapsulation process. Hence EoS provides better throughput compared to native Ethernet transport. An additional protocol, called link capacity adjustment scheme (LCAS), allows the two endpoints of the SDH paths to negotiate which paths are working and can carry traffic versus which paths should not be used to carry traffic. See also Packet over SONET SDH Synchronous optical networking Network protocols
Ethernet over SDH
Technology
633
32,584,509
https://en.wikipedia.org/wiki/List%20of%20Android%20launchers
This is a list of Android launchers, which present the main view of the device and are responsible for starting other apps and hosting live widgets. References Launchers Google lists Lists of mobile apps Lists of software Mobile application launchers
List of Android launchers
Technology
48
2,325,953
https://en.wikipedia.org/wiki/Quantum%20network
Quantum networks form an important element of quantum computing and quantum communication systems. Quantum networks facilitate the transmission of information in the form of quantum bits, also called qubits, between physically separated quantum processors. A quantum processor is a machine able to perform quantum circuits on a certain number of qubits. Quantum networks work in a similar way to classical networks. The main difference is that quantum networking, like quantum computing, is better at solving certain problems, such as modeling quantum systems. Basics Quantum networks for computation Networked quantum computing or distributed quantum computing works by linking multiple quantum processors through a quantum network by sending qubits in between them. Doing this creates a quantum computing cluster and therefore creates more computing potential. Less powerful computers can be linked in this way to create one more powerful processor. This is analogous to connecting several classical computers to form a computer cluster in classical computing. Like classical computing, this system is scalable by adding more and more quantum computers to the network. Currently, quantum processors are separated by only short distances. Quantum networks for communication In the realm of quantum communication, one wants to send qubits from one quantum processor to another over long distances. This way, local quantum networks can be interconnected into a quantum internet. A quantum internet supports many applications, which derive their power from the fact that by creating quantum entangled qubits, information can be transmitted between the remote quantum processors. Most applications of a quantum internet require only very modest quantum processors. For most quantum internet protocols, such as quantum key distribution in quantum cryptography, it is sufficient if these processors are capable of preparing and measuring only a single qubit at a time.
This is in contrast to quantum computing where interesting applications can be realized only if the (combined) quantum processors can easily simulate more qubits than a classical computer (around 60). Quantum internet applications require only small quantum processors, often just a single qubit, because quantum entanglement can already be realized between just two qubits. A simulation of an entangled quantum system on a classical computer cannot simultaneously provide the same security and speed. Overview of the elements of a quantum network The basic structure of a quantum network and more generally a quantum internet is analogous to a classical network. First, we have end nodes on which applications are ultimately run. These end nodes are quantum processors of at least one qubit. Some applications of a quantum internet require quantum processors of several qubits as well as a quantum memory at the end nodes. Second, to transport qubits from one node to another, we need communication lines. For the purpose of quantum communication, standard telecom fibers can be used. For networked quantum computing, in which quantum processors are linked at short distances, different wavelengths are chosen depending on the exact hardware platform of the quantum processor. Third, to make maximum use of communication infrastructure, one requires optical switches capable of delivering qubits to the intended quantum processor. These switches need to preserve quantum coherence, which makes them more challenging to realize than standard optical switches. Finally, one requires a quantum repeater to transport qubits over long distances. Repeaters appear in between end nodes. Since qubits cannot be copied (No-cloning theorem), classical signal amplification is not possible. By necessity, a quantum repeater works in a fundamentally different way than a classical repeater. Elements of a quantum network End nodes: quantum processors End nodes can both receive and emit information. 
Telecommunication lasers and parametric down-conversion combined with photodetectors can be used for quantum key distribution. In this case, the end nodes can in many cases be very simple devices consisting only of beamsplitters and photodetectors. However, for many protocols more sophisticated end nodes are desirable. These systems provide advanced processing capabilities and can also be used as quantum repeaters. Their chief advantage is that they can store and retransmit quantum information without disrupting the underlying quantum state. The quantum state being stored can either be the relative spin of an electron in a magnetic field or the energy state of an electron. They can also perform quantum logic gates. One way of realizing such end nodes is by using color centers in diamond, such as the nitrogen-vacancy center. This system forms a small quantum processor featuring several qubits. NV centers can be utilized at room temperatures. Small scale quantum algorithms and quantum error correction has already been demonstrated in this system, as well as the ability to entangle two and three quantum processors, and perform deterministic quantum teleportation. Another possible platform are quantum processors based on ion traps, which utilize radio-frequency magnetic fields and lasers. In a multispecies trapped-ion node network, photons entangled with a parent atom are used to entangle different nodes. Also, cavity quantum electrodynamics (Cavity QED) is one possible method of doing this. In Cavity QED, photonic quantum states can be transferred to and from atomic quantum states stored in single atoms contained in optical cavities. This allows for the transfer of quantum states between single atoms using optical fiber in addition to the creation of remote entanglement between distant atoms. Communication lines: physical layer Over long distances, the primary method of operating quantum networks is to use optical networks and photon-based qubits. 
This is due to optical networks having a reduced chance of decoherence. Optical networks have the advantage of being able to re-use existing optical fiber. Alternatively, free space networks can be implemented that transmit quantum information through the atmosphere or through a vacuum. Fiber optic networks Optical networks using existing telecommunication fiber can be implemented using hardware similar to existing telecommunication equipment. This fiber can be either single-mode or multi-mode, with single-mode allowing for more precise communication. At the sender, a single photon source can be created by heavily attenuating a standard telecommunication laser such that the mean number of photons per pulse is less than 1. For receiving, an avalanche photodetector can be used. Various methods of phase or polarization control can be used, such as interferometers and beam splitters. In the case of entanglement-based protocols, entangled photons can be generated through spontaneous parametric down-conversion. In both cases, the telecom fiber can be multiplexed to send non-quantum timing and control signals. In 2020, a team of researchers affiliated with several institutions in China succeeded in entangling quantum memories over a 50-kilometer coiled fiber cable. Free space networks Free space quantum networks operate similarly to fiber optic networks but rely on line of sight between the communicating parties instead of using a fiber optic connection. Free space networks can typically support higher transmission rates than fiber optic networks and do not have to account for polarization scrambling caused by optical fiber. However, over long distances, free space communication is subject to an increased chance of environmental disturbance on the photons. Free space communication is also possible from a satellite to the ground. A quantum satellite capable of entanglement distribution over a distance of 1,203 km has been demonstrated.
The experimental exchange of single photons from a global navigation satellite system at a slant distance of 20,000 km has also been reported. These satellites can play an important role in linking smaller ground-based networks over larger distances. In free-space networks, atmospheric conditions such as turbulence, scattering, and absorption present challenges that affect the fidelity of transmitted quantum states. To mitigate these effects, researchers employ adaptive optics, advanced modulation schemes, and error correction techniques. The resilience of QKD protocols against eavesdropping plays a crucial role in ensuring the security of the transmitted data. Specifically, protocols like BB84 and decoy-state schemes have been adapted for free-space environments to improve robustness against potential security vulnerabilities. Repeaters Long-distance communication is hindered by the effects of signal loss and decoherence inherent to most transport mediums such as optical fiber. In classical communication, amplifiers can be used to boost the signal during transmission, but in a quantum network amplifiers cannot be used since qubits cannot be copied – known as the no-cloning theorem. That is, to implement an amplifier, the complete state of the flying qubit would need to be determined, something which is both unwanted and impossible. Trusted repeaters An intermediary step which allows the testing of communication infrastructure is the trusted repeater. Importantly, a trusted repeater cannot be used to transmit qubits over long distances. Instead, a trusted repeater can only be used to perform quantum key distribution with the additional assumption that the repeater is trusted. Consider two end nodes A and B, and a trusted repeater R in the middle. A and R now perform quantum key distribution to generate a key k1. Similarly, R and B run quantum key distribution to generate a key k2.
A and B can now obtain a key k between themselves as follows: A sends k to R encrypted with the key k1. R decrypts to obtain k. R then re-encrypts k using the key k2 and sends it to B. B decrypts to obtain k. A and B now share the key k. The key k is secure from an outside eavesdropper, but clearly the repeater R also knows k. This means that any subsequent communication between A and B does not provide end to end security, but is only secure as long as A and B trust the repeater R. Quantum repeaters A true quantum repeater allows the end to end generation of quantum entanglement, and thus, by using quantum teleportation, the end to end transmission of qubits. In quantum key distribution protocols one can test for such entanglement. This means that when making encryption keys, the sender and receiver are secure even if they do not trust the quantum repeater. Any other application of a quantum internet also requires the end to end transmission of qubits, and thus a quantum repeater. Quantum repeaters allow entanglement to be established at distant nodes without physically sending an entangled qubit the entire distance. In this case, the quantum network consists of many short-distance links of perhaps tens or hundreds of kilometers. In the simplest case of a single repeater, two pairs of entangled qubits are established: q1 and q2 located at the sender and the repeater, and a second pair, q3 and q4, located at the repeater and the receiver. These initial entangled qubits can be easily created, for example through parametric down-conversion, with one qubit physically transmitted to an adjacent node. At this point, the repeater can perform a Bell measurement on the qubits q2 and q3, thus teleporting the quantum state of q2 onto q4. This has the effect of "swapping" the entanglement such that q1 and q4 are now entangled at a distance twice that of the initial entangled pairs.
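The swapping step can be illustrated with a small statevector calculation in plain Python. This is a toy sketch, not a hardware model: states are reduced to maps from basis labels to amplitudes, and only the |Φ+⟩ outcome of the Bell measurement is shown.

```python
import math

SQ2 = 1 / math.sqrt(2)

# |Phi+> = (|00> + |11>)/sqrt(2), written as a map from basis states to amplitudes
bell = {(0, 0): SQ2, (1, 1): SQ2}

# Four-qubit state |Phi+> on (q1,q2) tensored with |Phi+> on (q3,q4):
# the only nonzero amplitudes are on |a a b b>
amp = {}
for a in (0, 1):
    for b in (0, 1):
        amp[(a, a, b, b)] = SQ2 * SQ2

# Bell measurement on (q2, q3) with outcome |Phi+>: project onto that outcome,
# leaving an (unnormalized) joint state on (q1, q4)
post = {}
for (q1, q2, q3, q4), v in amp.items():
    if (q2, q3) in bell:
        post[(q1, q4)] = post.get((q1, q4), 0.0) + bell[(q2, q3)] * v

prob = sum(v * v for v in post.values())   # probability of this outcome
norm = math.sqrt(prob)
post = {k: v / norm for k, v in post.items()}

print("P(|Phi+> outcome) =", round(prob, 3))                        # 0.25
print("(q1, q4) state:", {k: round(v, 3) for k, v in sorted(post.items())})
# (q1, q4) is left in |Phi+> itself: entangled, though the two never interacted
```

The other three Bell outcomes each occur with probability 1/4 as well and leave q1 and q4 in one of the other Bell states, which a local correction converts back to |Φ+⟩, exactly as in teleportation.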
It can be seen that a network of such repeaters can be used linearly or in a hierarchical fashion to establish entanglement over great distances. Hardware platforms suitable as end nodes above can also function as quantum repeaters. However, there are also hardware platforms specific only to the task of acting as a repeater, without the capabilities of performing quantum gates. Error correction Error correction can be used in quantum repeaters. Due to technological limitations, however, the applicability is limited to very short distances as quantum error correction schemes capable of protecting qubits over long distances would require an extremely large amount of qubits and hence extremely large quantum computers. Errors in communication can be broadly classified into two types: Loss errors (due to optical fiber/environment) and operation errors (such as depolarization, dephasing etc.). While redundancy can be used to detect and correct classical errors, redundant qubits cannot be created due to the no-cloning theorem. As a result, other types of error correction must be introduced such as the Shor code or one of a number of more general and efficient codes. All of these codes work by distributing the quantum information across multiple entangled qubits so that operation errors as well as loss errors can be corrected. In addition to quantum error correction, classical error correction can be employed by quantum networks in special cases such as quantum key distribution. In these cases, the goal of the quantum communication is to securely transmit a string of classical bits. Traditional error correction codes such as Hamming codes can be applied to the bit string before encoding and transmission on the quantum network. Entanglement purification Quantum decoherence can occur when one qubit from a maximally entangled bell state is transmitted across a quantum network. 
Entanglement purification allows for the creation of nearly maximally entangled qubits from a large number of arbitrary weakly entangled qubits, and thus provides additional protection against errors. Entanglement purification (also known as entanglement distillation) has already been demonstrated in nitrogen-vacancy centers in diamond. Applications A quantum internet supports numerous applications, enabled by quantum entanglement. In general, quantum entanglement is well suited for tasks that require coordination, synchronization or privacy. Examples of such applications include quantum key distribution, clock stabilization, protocols for distributed system problems such as leader election or Byzantine agreement, extending the baseline of telescopes, as well as position verification, secure identification and two-party cryptography in the noisy-storage model. A quantum internet also enables secure access to a quantum computer in the cloud. Specifically, a quantum internet enables very simple quantum devices to connect to a remote quantum computer in such a way that computations can be performed there without the quantum computer finding out what this computation actually is (the input and output quantum states can not be measured without destroying the computation, but the circuit composition used for the calculation will be known). Secure communications When it comes to communicating in any form, the largest issue has always been keeping the communication private. Quantum networks would allow for information to be created, stored and transmitted, potentially achieving "a level of privacy, security and computational clout that is impossible to achieve with today’s Internet." By applying a quantum operator selected by the user to a system of information, the information can be sent to the receiver without an eavesdropper being able to accurately record it without either the sender or receiver knowing.
Unlike classical information that is transmitted in bits and assigned either a 0 or 1 value, the quantum information used in quantum networks uses quantum bits (qubits), which can have both 0 and 1 values at the same time, being in a state of superposition. This works because if a listener tries to listen in, they will change the information in an unintended way by listening, thereby tipping their hand to the people they are attacking. Secondly, without the proper quantum operator to decode the information, they will corrupt the sent information without being able to use it themselves. Furthermore, qubits can be encoded in a variety of materials, including in the polarization of photons or the spin states of electrons. Current status Quantum internet One example of a prototype quantum communication network is the eight-user city-scale quantum network described in a paper published in September 2020. The network, located in Bristol, used already-deployed fibre infrastructure and worked without active switching or trusted nodes. In 2022, researchers at the University of Science and Technology of China and the Jinan Institute of Quantum Technology demonstrated quantum entanglement between two memory devices located 12.5 km apart within an urban environment. In the same year, physicists at Delft University of Technology in the Netherlands took a significant step toward the network of the future by using a technique called quantum teleportation to send data between three physical locations, which was previously only possible between two. In 2024, researchers in the U.K. and Germany achieved a first by producing, storing, and retrieving quantum information. This milestone involved interfacing a quantum dot light source and a quantum memory system, paving the way for practical applications despite challenges like quantum information loss over long distances.
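The eavesdropper detectability described under Secure communications above can be illustrated with a toy Monte Carlo sketch of an intercept-resend attack in the style of the BB84 protocol. This is a classical simulation for illustration only, not a real quantum experiment, and all names are illustrative: measuring in the wrong basis randomizes the outcome, so an eavesdropper raises the error rate on the bits the legitimate parties keep.

```python
import random

def transmit(n, eavesdrop, rng):
    """Return the error rate Alice and Bob see on matched-basis bits."""
    errors = matched = 0
    for _ in range(n):
        bit = rng.randint(0, 1)          # Alice's bit
        basis_a = rng.randint(0, 1)      # Alice's encoding basis
        value, basis = bit, basis_a
        if eavesdrop:                    # Eve: intercept-resend attack
            basis_e = rng.randint(0, 1)
            if basis_e != basis:         # wrong basis randomizes the outcome
                value = rng.randint(0, 1)
            basis = basis_e              # Eve resends in her own basis
        basis_b = rng.randint(0, 1)      # Bob's measurement basis
        if basis_b != basis:
            value = rng.randint(0, 1)
        if basis_b == basis_a:           # sifting: keep matched bases only
            matched += 1
            errors += value != bit
    return errors / matched

quiet = transmit(20000, eavesdrop=False, rng=random.Random(7))  # exactly 0.0
noisy = transmit(20000, eavesdrop=True, rng=random.Random(7))   # close to 0.25
```

Without the eavesdropper the kept bits always agree; with her, roughly a quarter of them disagree, which is exactly the statistical signature the sender and receiver check for.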
Quantum networks for computation In 2021, researchers at the Max Planck Institute of Quantum Optics in Germany reported a first prototype of quantum logic gates for distributed quantum computers. Experimental quantum modems A research team at the Max Planck Institute of Quantum Optics in Garching, Germany is finding success in transporting quantum data from flying and stable qubits via infrared spectrum matching. This requires a sophisticated, super-cooled yttrium silicate crystal to sandwich erbium in a mirrored environment to achieve resonance matching of infrared wavelengths found in fiber optic networks. The team successfully demonstrated that the device works without data loss. Mobile quantum networks In 2021, researchers in China reported the successful transmission of entangled photons between drones, used as nodes for the development of mobile quantum networks or flexible network extensions. This could be the first work in which entangled particles were sent between two moving devices. The application of quantum communications to improving 6G mobile networks has also been researched, with joint detection and data transfer based on quantum entanglement offering possible advantages such as security and energy efficiency. Quantum key distribution networks Several test networks have been deployed that are tailored to the task of quantum key distribution, either at short distances (but connecting many users) or over larger distances by relying on trusted repeaters. These networks do not yet allow for the end-to-end transmission of qubits or the end-to-end creation of entanglement between far-away nodes. DARPA Quantum Network Starting in the early 2000s, DARPA began sponsorship of a quantum network development project with the aim of implementing secure communication. The DARPA Quantum Network became operational within the BBN Technologies laboratory in late 2003 and was expanded further in 2004 to include nodes at Harvard and Boston Universities.
The network consists of multiple physical layers including fiber optics supporting phase-modulated lasers and entangled photons as well as free-space links. SECOQC Vienna QKD network From 2003 to 2008 the Secure Communication based on Quantum Cryptography (SECOQC) project developed a collaborative network between a number of European institutions. The architecture chosen for the SECOQC project is a trusted repeater architecture which consists of point-to-point quantum links between devices, with long-distance communication accomplished through the use of repeaters. Chinese hierarchical network In May 2009, a hierarchical quantum network was demonstrated in Wuhu, China. The hierarchical network consists of a backbone network of four nodes connecting a number of subnets. The backbone nodes are connected through an optical switching quantum router. Nodes within each subnet are also connected through an optical switch and are connected to the backbone network through a trusted relay. Geneva area network (SwissQuantum) The SwissQuantum network, developed and tested between 2009 and 2011, linked facilities at CERN with the University of Geneva and hepia in Geneva. The SwissQuantum program focused on transitioning the technologies developed in the SECOQC and other research quantum networks into a production environment, in particular on integration with existing telecommunication networks and on reliability and robustness. Tokyo QKD network In 2010, a number of organizations from Japan and the European Union set up and tested the Tokyo QKD network. The Tokyo network built upon existing QKD technologies and adopted a SECOQC-like network architecture. For the first time, one-time-pad encryption was implemented at high enough data rates to support popular end-user applications such as secure voice and video conferencing.
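The one-time-pad encryption mentioned above is simple to sketch: with a shared random key at least as long as the message, a bitwise XOR gives information-theoretic secrecy. In this illustration the key bytes are generated classically as a stand-in for QKD-derived key material.

```python
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    """XOR the message with an equal-length one-time key."""
    assert len(key) >= len(data), "pad must be at least message length"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"secure voice frame"
key = secrets.token_bytes(len(message))   # stand-in for QKD-derived key bits
ciphertext = otp(message, key)
assert otp(ciphertext, key) == message    # XOR is its own inverse
```

The catch, and the reason QKD matters here, is key consumption: every byte of traffic burns a byte of key, which is why one-time-pad use at voice and video rates was notable.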
Previous large-scale QKD networks typically used classical encryption algorithms such as AES for high-rate data transfer and used the quantum-derived keys for low-rate data or for regularly re-keying the classical encryption algorithms. Beijing-Shanghai Trunk Line In September 2017, a 2000-km quantum key distribution network between Beijing and Shanghai, China, was officially opened. This trunk line will serve as a backbone connecting quantum networks in Beijing, Shanghai, Jinan in Shandong province and Hefei in Anhui province. During the opening ceremony, two employees from the Bank of Communications completed a transaction from Shanghai to Beijing using the network. The State Grid Corporation of China is also developing a management application for the link. The line uses 32 trusted nodes as repeaters. A quantum telecommunication network has also been put into service in Wuhan, capital of central China's Hubei Province, which will be connected to the trunk. Other similar city quantum networks along the Yangtze River are planned to follow. In 2021, researchers working on this network of networks reported that they combined over 700 optical fibers with two ground-to-satellite QKD links using a trusted relay structure for a total distance between nodes of up to ~4,600 km, which makes it Earth's largest integrated quantum communication network. IQNET IQNET (Intelligent Quantum Networks and Technologies) was founded in 2017 by Caltech and AT&T. Together, they are collaborating with the Fermi National Accelerator Laboratory, and the Jet Propulsion Laboratory. In December 2020, IQNET published a work in PRX Quantum that reported successful teleportation of time-bin qubits across 44 km of fiber. For the first time, the published work included theoretical modelling of the experimental setup. The two test beds for performed measurements were the Caltech Quantum Network and the Fermilab Quantum Network.
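The trusted-node relaying used on lines like the Beijing-Shanghai trunk can be sketched as key forwarding by XOR: each intermediate node combines the keys of its two adjacent links and announces the result classically, so the end parties come to share a key even though no single QKD link spans the whole route. This is a generic sketch of one common relaying scheme, not the specific protocol of that network; node counts and key sizes are illustrative.

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

hop_keys = [secrets.token_bytes(16) for _ in range(4)]  # per-link QKD keys

# Each trusted relay announces the XOR of its two adjacent link keys.
announcements = [xor(hop_keys[i], hop_keys[i + 1]) for i in range(3)]

# Bob holds hop_keys[-1]; Alice holds hop_keys[0] and, combining it with
# the public announcements, recovers the same final key.
k = hop_keys[0]
for a in announcements:
    k = xor(k, a)
assert k == hop_keys[-1]   # Alice and Bob now share the last hop's key
```

The announcements leak nothing on their own, but every relay briefly knows the key material, which is exactly why such nodes must be trusted.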
This research represents an important step in establishing a quantum internet of the future, which would revolutionise the fields of secure communication, data storage, precision sensing, and computing. See also Quantum mechanics Quantum computer Quantum bus References Further reading External links https://web.archive.org/web/20090716121402/http://itvibe.com/news/2583/ http://www.vnunet.com/vnunet/news/2125164/first-quantum-computr-network-goes-online http://www.cse.wustl.edu/~jain/cse571-07/ftp/quantum/ https://web.archive.org/web/20141229113448/http://www.ipod.org.uk/reality/reality_quantum_entanglement.asp Quantum mechanics Quantum cryptography
Quantum network
Physics
https://en.wikipedia.org/wiki/Sobrr
Sobrr was a mobile application for iOS and Android. It was released in July 2014. It has been described by critics as an "anti-Facebook" social media app. See also Snapchat References External links iOS software Android (operating system) software 2014 software
Sobrr
Technology
https://en.wikipedia.org/wiki/Local%20trace%20formula
In mathematics, the local trace formula is a local analogue of the Arthur–Selberg trace formula that describes the character of the representation of G(F) on the discrete part of L2(G(F)), for G a reductive algebraic group over a local field F. References Automorphic forms Theorems in number theory
Local trace formula
Mathematics
https://en.wikipedia.org/wiki/Patricia%20Hersh
Patricia Lynn Hersh (born 1973) is an American mathematician who works as a professor of mathematics at the University of Oregon. Her research concerns algebraic combinatorics, topological combinatorics, and the connections between combinatorics and other fields of mathematics. Education and career Hersh graduated magna cum laude with an A.B. in mathematics and computer science from Harvard University in 1995, with a senior thesis supervised by Persi Diaconis. She completed her Ph.D. in 1999 at the Massachusetts Institute of Technology, under the supervision of Richard P. Stanley; her dissertation was Decomposition and Enumeration in Partially Ordered Sets. After postdoctoral positions at the University of Washington, University of Michigan, and Mathematical Sciences Research Institute in Berkeley, California, she joined the faculty at Indiana University Bloomington in 2004, moved to North Carolina State University in 2008, and then to the University of Oregon in 2019. She served as an American Mathematical Society Council member at large from 2011 to 2013. Recognition In 2010, Hersh won the Ruth I. Michler Memorial Prize of the Association for Women in Mathematics, funding a visiting position for her at Cornell University. In 2015 she was elected as a fellow of the American Mathematical Society "for contributions to algebraic and topological combinatorics, and for service to the mathematical community". References External links Home page Patricia Hersh's Author Profile Page on MathSciNet. 1973 births Living people 21st-century American mathematicians Combinatorialists Harvard University alumni Massachusetts Institute of Technology alumni Indiana University faculty North Carolina State University faculty Fellows of the American Mathematical Society 21st-century American women mathematicians University of Oregon faculty
Patricia Hersh
Mathematics
https://en.wikipedia.org/wiki/Yorktown%20Refinery
Yorktown Refinery was an oil refinery in Virginia located alongside the York River, built in 1956. It is now used by Plains All American Pipeline LP as a rail and water oil terminal. The refinery was previously operated by Giant Industries and, earlier, by BP/Amoco. Giant Industries was acquired by Western Refining in 2007. The refinery could run high-TAN crude oil (crude oil with a high content of naphthenic acids). The refining operations were shut down in the fall of 2010 and the refinery was later demolished. References External links Giant Industries CNN Money profile Energy infrastructure completed in 1956 Oil refineries in the United States Energy infrastructure in Virginia
Yorktown Refinery
Chemistry
https://en.wikipedia.org/wiki/Potassium%20hexafluoronickelate%28IV%29
Potassium hexafluoronickelate(IV) is an inorganic compound with the chemical formula K2NiF6. It can be produced through the reaction of potassium fluoride, nickel dichloride, and fluorine. It reacts violently with water, releasing oxygen. It dissolves in anhydrous hydrogen fluoride to produce a light-red solution. Potassium hexafluoronickelate(IV) decomposes at 350 °C, forming potassium hexafluoronickelate(III), nickel(II) fluoride, and fluorine: 3 K2NiF6 → 2 K3NiF6 + NiF2 + 2 F2 Potassium hexafluoronickelate is a strong oxidant. It can turn chlorine pentafluoride and bromine pentafluoride into and , respectively: (X = Cl or Br, −60 °C, aHF = anhydrous hydrogen fluoride). Potassium hexafluoronickelate decomposes at high temperatures to release fluorine gas; like terbium(IV) fluoride, the emitted fluorine is primarily monatomic rather than the typical diatomic. It adopts the structure seen for K2PtCl6 and Mg2FeH6. References Potassium compounds Nickel complexes Fluoro complexes
Potassium hexafluoronickelate(IV)
Chemistry
https://en.wikipedia.org/wiki/Mnemonics%20%28keyboard%29
A mnemonic is an underlined alphanumeric character, typically appearing in a menu title, menu item, or the text of a button or component of the user interface. A mnemonic indicates to the user which key to press (in conjunction with the Alt key) to activate a command or navigate to a component. In Microsoft Windows, mnemonics are called "access keys". In web browsers, access keys may or may not be engaged by the Alt key. Using mnemonics is limited to entering the underlined character with a single keystroke; for this reason, localized versions of software omit letters with diacritics that need to be input via an extra dead-key stroke. See also Keyboard shortcut References External links SUN's definition of mnemonic term Keyboard shortcuts and mnemonics or accelerators are not the same thing Mnemonics (keyboard) (book) User interfaces User interface techniques Computer keyboards
Mnemonics (keyboard)
Technology
https://en.wikipedia.org/wiki/Kiddie%20ride
A kiddie ride is a child-sized, themed, mildly interactive coin-operated ride that can be ridden by young children for amusement. Kiddie rides are commonly available in amusement parks, arcades, malls, hotel game rooms, outside supermarkets, and large department stores. Less commonly, they may also appear in other venues such as restaurants, food courts, grocery shops, and auto dealerships. When activated by a coin, a kiddie ride entertains the rider for a short time with a mild motion that replicates the theme of the ride. Most rides also include sounds and music. Some even feature flashing lights, pedals, and/or buttons. Commercial kiddie rides are often colorful with an animal, vehicle, or popular cartoon character theme, which appeals to young children. They are usually driven by a heavy-duty electric motor, which is usually disguised inside or underneath the metal, fiberglass, or vacuum-formed plastic body of the ride. History The kiddie ride was first invented in 1930 by James Otto Hahs of Sikeston, Missouri. Originally called the Hahs Gaited Mechanical Horse, the ride was first conceived as a Christmas present for his children. However, Hahs soon set about commercializing it. Initially he used wooden horses commissioned from carousel makers, but found them too heavy and decided that aluminum would be a more suitable material. When told it couldn't be done, however, Hahs went ahead and invented a process to form horses out of metal. The rides would be manufactured at Hahs Machine Works in Sikeston, and they were recognized as the most original invention of the year in 1932. In 1933, Hahs struck a deal with Exhibit Supply Company to distribute his horses, with a 5% cut going to Hahs. When the patent on the ride eventually ran out, he retired from the wealth he had amassed from sales. In 1953, Billboard magazine called it "1953's fastest growing business".
Years later, aluminum horses would be replaced by fiberglass. Developed around the same time, the Link Trainer was initially intended for use as a coin-operated entertainment device as well as a tool for training pilots. Music Many very old rides do not feature music; also, some vehicle rides may favor engine sounds instead of music. However, on rides that feature music, early rides (and cheaper modern rides that imitate more well-known rides) are equipped with simple integrated circuits that continually playback one melody or repeat a set of melodies in sequence. These have evolved in the sense that the earliest musically-enabled rides played back only a single monophonic melody repetitively. In contrast, later ones played multiple polyphonic melodies, sometimes including short sound or speech samples. Later rides could also use a tape deck, while more recent rides may have a solid-state audio playback device akin to flash-based MP3 players. Usually, the music chosen is generic children's songs, while on licensed rides, the theme song for the licensed character would be used. However, in rare cases, some rides play traditional pop music, and for private rides, the owner may request a song that has personal relevance to be programmed into the ride. Many modern rides are programmed to play multiple melodies, with the music changing each time the ride is used, the logic being to prolong the interest of the child on the ride. However, some modern rides, in particular licensed character ones, are usually programmed to play a single melody or song, which is usually the theme song of the character's television show or film. 
There are also some exceptions where there are licensed rides playing totally unrelated pieces of music or non-licensed rides that play only one particular tune, for example, a song about cars on a car-themed ride, the Thomas theme tune on a Thomas the Tank Engine ride, the Postman Pat theme tune on a Postman Pat ride and the Fireman Sam theme tune on a Fireman Sam ride. Certain rides play a running narration or tell a story instead. Modern rides Newer, more advanced rides do not usually start as soon as coins are inserted; instead they prompt the rider, parent or guardian to press a start button, so as to allow the rider to seat him/herself comfortably before starting the ride. Often, these rides will also play a message before movement begins and may also play an ending message once the ride ends, to let the rider know that it is safe to disembark. Other safety precautions commonly found in more advanced rides include: allowing use of the start button to pause the ride, so the rider can reposition themselves or even disembark safely if desired; safety sensors that detect if anything is potentially obstructing the ride's movement and stop the ride accordingly until the obstruction is removed; overload sensors that stop the ride from moving if the weight limit on the ride is exceeded; a slow start/stop action so as not to shock or frighten younger riders. To attract attention, most rides occasionally flash their lights or play a sound, or both, at set intervals, although many older rides, as well as low-cost, or knockoff, rides do not have an attract mode. Some rides may narrate a story through sound or using a video monitor, the latter providing limited interaction with the video displayed. 
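The start/pause and attract-mode behaviour described above amounts to a small state machine, sketched below. The states, method names, and timings are illustrative, not taken from any real ride controller.

```python
# Minimal sketch of a modern ride's control flow: coins arm the ride, a
# start button begins motion, pressing it again pauses, and an idle
# timer triggers the attract mode.
class RideController:
    def __init__(self, attract_interval=120):
        self.state = "idle"
        self.attract_interval = attract_interval
        self.idle_seconds = 0

    def insert_coin(self):
        if self.state == "idle":
            self.state = "ready"       # prompt the rider to press start

    def press_start(self):
        if self.state == "ready":
            self.state = "running"     # play greeting, begin motion
        elif self.state == "running":
            self.state = "paused"      # let the rider reposition or exit
        elif self.state == "paused":
            self.state = "running"

    def tick(self, seconds=1):
        """Advance the idle clock; fire the attract mode when it is due."""
        if self.state == "idle":
            self.idle_seconds += seconds
            if self.idle_seconds >= self.attract_interval:
                self.idle_seconds = 0
                return "attract"       # flash lights / play a jingle
        return None

ride = RideController()
ride.insert_coin()
ride.press_start()
assert ride.state == "running"
```

A real controller would add the safety inputs mentioned above (obstruction and overload sensors) as transitions into a stopped state.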
Common themes Arthur Batmobile Boat Elephant Fire truck Helicopter Horse or pony Ice cream truck Jumbo jet or other airplanes Ladybug or caterpillar Mickey Mouse Miniature carousel Motorcycle Panda (usually in the form of a small ship with a panda sitting facing the rider) Peacock Police vehicle Postman Pat Roller coaster School bus Sesame Street Space Shuttle SpongeBob SquarePants Sports car Swan Taxi Thomas & Friends Tractor Train (usually stationary and not on a track, but train kiddie rides that move on a small track do exist) Types of rides Track rides Track rides are usually rides in the form of a train on a track; in most coin-operated train-type track rides, the coin mechanism is on the locomotive unit of the ride and it can seat two to three toddlers. In general, the ride is powered by a low-voltage current passing through the tracks, but sometimes the ride is powered by batteries. Most versions of these rides are specifically designed to carry young children due to the low-voltage used and the size of the ride, although it is possible to find bigger models designed for older children. Track rides are not necessarily restricted to train-form; animal track rides on the theme of horses or frogs have also been documented. In a similar fashion, another type of ride that would be classified as a track ride would be one with an elongated base where the figure paces the length of the base, then turns and moves in the opposite direction on reaching its limit. Carousel rides Another common type of kiddie ride is the miniature carousel. These rides are usually in the form of a small-sized carousel. The newer models have the coin box on the main pillar, whereas older units have the coin box on a pole sticking out of the side of the ride. Carousel rides featuring licensed characters do exist. A Thomas the Tank Engine carousel ride is known to exist, as is one from a British television show for children called Play School. 
Carousel rides featuring the characters from The Wiggles, Bob the Builder, Sesame Street and Hi-5 have also been documented. Hydraulic rides More commonly built by European kiddie ride manufacturers like Automatic Toys Modena (ATM) from Italy, hydraulic rides are kiddie rides situated on a hydraulic arm that raises and lowers the ride during activation. Usually, the rider is given limited interaction with the ride in the form of up/down buttons or levers so that the rider can instruct the ride to fly higher or lower, giving the user the impression of some control over their experience. Base rides This kind of ride is perhaps the most common type: an animal or vehicle situated on a vacuum-formed base that moves up-and-down, side-to-side, or both, when activated; some move in a slithering-like motion. Usually, rides of this configuration have the motor hidden in the base, although some larger rides have the motor hidden in the ride-on figure instead. One of the most popular rides is a horse ride. Recent developments have included the "Pony Express" ride, first manufactured by Italian company Cogan. These feature a complicated mechanism that alternates between galloping and trotting motions during the ride, mimicking the movements of a real-life horse. This type has been adapted by both the Spanish manufacturer Falgas for their own version of the "Pony Express" and Memo Park, another Italian-based company, for their own type of Western-style horse; Falgas adds horse sounds to the soundtrack, whilst the Memo Park version adds rider interactivity: if the rider pulls back on the reins, the horse stops for a few seconds before continuing to either gallop or trot, depending on what pace it is travelling at when the reins are pulled. Another one of the most popular rides is the Kiddie Coaster. The first edition was manufactured by Amutec and released in 2000.
Another edition was manufactured by Innovative Concepts in Entertainment and released in 2002. This ride simulates one of two different roller coasters. The Innovative Concepts in Entertainment edition simulates Blue Streak and Gemini while the Amutec edition simulates two different Six Flags coasters. Free movement (bumper car-like) rides These kinds of rides are usually in the form of animals or vehicles. These are most common in Asian countries, particularly China. Unlike a real bumper car ride commonly found at funfairs, the coin-operated variant uses batteries instead of drawing electricity off of an overhead mesh, and one can ride it anytime instead of having to wait for the operator to start the ride for them. Teeter-totter rides These rides are generally teeter-totters for one person. An inanimate figure typically sits at the opposite end of the ride. The ride moves in a gentle up-and-down motion, mimicking that of a standard teeter-totter. Jolly Roger Amusement Rides has made three of these: one featuring Mr. Bump from The Mr. Men Show, one featuring the Pink Panther and one featuring Mr. Blobby. Video game hybrids These rides are a hybrid of kiddie rides and arcade video games. The rides usually incorporate a video display, and while the motion is synchronized to the events happening on the screen, the ride will start and end following the events on the screen. The ride is usually interactive and there are push-buttons to allow the rider to interact with the on-screen actions. These rides should not be mistaken for simulators, which reproduce the action of a video game without offering further interactivity. Furthermore, the video-game hybrid is time-based and ends at a predetermined time, regardless of the actions of the user. An example of a hybrid ride would be the Waku Waku Sonic Patrol Car ride and other waku-waku and wanpaku series of rides manufactured by Sega. 
Character rides In many cases, kiddie rides in the likeness of well-known copyrighted characters or objects from films or television shows can also be found, usually at bigger shopping malls that can afford them due to the higher purchasing costs. A classic example would be the Batmobile rides. Jolly Roger Amusement Rides In 1994, R.G. Mitchell released a Thomas the Tank Engine kiddie ride with 4 push buttons which trigger "You're a really useful engine!" (Thomas: blue), "I need you to help the other engines" (Sir Topham Hatt: yellow), a whistle sound (James: red) and a steam sound (Percy: green). There is also a mini version of the aforementioned ride for places that don't have enough space. In 2006, Jolly Roger Amusement Rides of the United Kingdom released their version of the Thomas the Tank Engine kiddie ride in two options, standard and video. Jolly Roger Amusement Rides is also known for making other licensed kiddie rides, including a fire truck and an airplane featuring Woody Woodpecker and Chilly Willy, a seesaw, train and police van featuring the Pink Panther and Inspector Clouseau, and a sailboat featuring Popeye. Another example of a character kiddie ride would be a Clifford the Big Red Dog kiddie ride, manufactured by Jolly Roger Amusement Rides of the United Kingdom. This ride costs around $5000 in the United States when purchased new. It plays the theme song from the PBS Kids TV series when in motion. The push button on the ride triggers Clifford barking sounds. Another example would be a Bob the Builder ride, which features Bob climbing onto Scoop with 4 sounds and Pilchard in his shovel. This ride was also manufactured by Jolly Roger Amusement Rides and released around January 2000. Another example would be a Superman kiddie ride featuring the Man of Steel "stopping" the train you're in (meant to look like it's emerging from a tunnel into a rockfall).
When in motion, it plays the Superman: The Animated Series theme (in a lower pitch, the PAL version) and has four buttons: "Look! Up in the sky! It's a bird... it's a plane... it's Superman!!" "Superman! Faster than a speeding bullet!" "Superman! More powerful than a locomotive!" and "Superman! Able to leap tall buildings in a single bound!" (these are all taken from the Superman radio show, but voiced by Don Kennedy). Like some of the above examples, it was manufactured by Jolly Roger Amusement Rides. This ride was released in January 2000, although it was copyrighted in 1999. Others Kiddie's Manufacturing made three kiddie rides based on The Flintstones: the Flintmobile, Dino and Loggin Continental. All three play "Meet The Flintstones" when the ride is in motion, and were released in 1994. In 2015, Northern Leisure released a SpongeBob SquarePants kiddie ride based on the Krabby Patty Wagon from The SpongeBob SquarePants Movie. This ride has SpongeBob seated next to the "rider's seat", and his pet snail Gary rides on the back of the ride. This SpongeBob-themed ride also includes a screen displaying the lyrics to the SpongeBob SquarePants theme song. The lyrics to the theme song have to be sung out by whoever rides it (known as sing-along). The attract mode is also the SpongeBob theme song, and each of the 3 buttons on the ride plays a sound effect: a horn, bubble noises (commonly used in transitions from a scene to another in SpongeBob), and a dolphin noise. Knockoff rides that feature figures that look like those of famous cartoon characters exist. They are cheaper than real licensed rides and are found at smaller establishments. They are not licensed, and in certain areas with high intellectual property rights recognition, purchasers of knockoff rides can get themselves entangled in legal complications. 
The ride figure might not be designed to look as close to a licensed character as a genuinely licensed ride does, possibly resulting in diminished recognition. In some countries, knockoff characters are also found on fairground rides. These feature paint jobs of popular characters that are featured on the rides without a license from their respective owners. Personal uses While kiddie rides are primarily used to garner extra income for commercial areas like shopping malls, supermarkets and amusement centers, they are also common in homes in many developed countries. This is being led by Denver-based Kiddie Rides USA and has received coverage in many magazines, including Time, Fortune, United Airlines' Hemispheres, and CNBC. Many of the rides are ex-location units which have been written off by the original owner, usually to make way for newer games or rides, and bought for a fraction of what they would cost brand new, either directly from the previous owner or on online auction sites like eBay. In popular culture In the Netflix original series Stranger Things, a horseback riding kiddie ride at the fictional Starcourt Mall proves vital to the discovery of an underground Soviet base beneath the shopping mall. In the SpongeBob SquarePants "My Pretty Seahorse" episode, Scooter mistakes Mystery outside the Krusty Krab for a kiddie ride and tries to put a coin in her, at which point he is bucked across the parking lot. In Lilo & Stitch, Stitch mistakes a rocket-ship-themed kiddie ride for an actual rocket ship and is disappointed it does not let him leave Earth. In Pee-wee's Playhouse, two kiddie rides are introduced in season 2.
The first one is Bally's Ride The Champion kiddie ride from 1952 (as seen in the season 2-4 and 5 intros, where Pee-wee Herman is seen wearing boots and riding it, and in the season 5 episode "Something To Do", where Miss Yvonne rides it), and the second one, also made by Bally in 1952, is called "Ride The Space Ship", which is modified with orange fibreglass material and a green 71 number on it (as seen in the episode "School", where Rapunzel rides it while Pee-wee and the Playhouse Gang scream and Pee-wee yells "AHHHH!!! METEOR STORM!!!"). References External links "Rise and Fall of the American Kiddie Ride" - Jake Swearingen, The Atlantic, Dec. 2014. "Remember vintage coin-operated rides?" Click Americana, Jan. 1953. Amusement rides Amusement rides by type
Kiddie ride
Physics,Technology
https://en.wikipedia.org/wiki/C7H6O2S
The molecular formula C7H6O2S may refer to: 4-Mercaptobenzoic acid Thiosalicylic acid
C7H6O2S
Chemistry
https://en.wikipedia.org/wiki/Interactive%20Scenario%20Builder
Interactive Scenario Builder (Builder) is a three-dimensional modeling and simulation application developed by the Advanced Tactical Environmental Simulation Team (ATEST) at the Naval Research Laboratory (NRL) that aids in understanding radio frequency (RF) and electro-optical/infrared (EO/IR) propagation. Uses RF and EO/IR tactical decision aid Creation/generation of complex electronic warfare (EW) synthetic environments (scenarios) Simulation of both hardware and/or modeling of existing and future EW systems Visualization of the RF capabilities of platforms Modeling the communication of radar systems by calculating one-way and two-way RF propagation loss Pre-mission planning Near-realtime, geospatial and temporal situational awareness After-action debriefing Acquisition Support to operations (Ops) Surface EW test and evaluation (T&E) Training Options development Targeting support Operational use The Effectiveness of Navy Electronic Warfare Systems (ENEWS) group used Builder to support the design, specification, and evaluation of EA-6B and AN/SLY-2 (AIEWS) EW systems from the conceptual through the design stages. The Fleet Information Warfare Center (FIWC) used Builder to assist in EW asset scheduling and allocation during Operation Desert Fox and the Kosovo campaign. The U.S. Army's 160th Special Operations Aviation Regiment uses Builder for mission planning and mission rehearsal.
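As a simple illustration of the one-way RF propagation loss calculations mentioned above, the free-space path loss formula can be computed directly: FSPL(dB) = 20 log10(d_km) + 20 log10(f_MHz) + 32.45. This generic formula and function are only a sketch; Builder's actual propagation models account for terrain, refraction, and other environmental effects.

```python
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB for a one-way link."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.45

# A two-way (radar-style) loss is the one-way loss applied in both
# directions, i.e. doubled in dB.
one_way = fspl_db(100, 3000)   # 100 km link at 3 GHz, about 142 dB
two_way = 2 * one_way
```

Doubling distance or frequency adds about 6 dB of loss, which is why such tools visualize coverage as strongly range- and band-dependent.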
Developer information Builder is developed by the: Advanced Tactical Environmental Simulation Team (ATEST) (Code 5774) Electronic Warfare Modeling & Simulation (EW M&S) Branch (Code 5770) Tactical Electronic Warfare Division (TEWD) (Code 5700) Systems Directorate (Code 5000) Naval Research Laboratory (NRL) Office of Naval Research (ONR) A listing in the Department of Defense (DoD) Modeling and Simulation Resource Registry (MSRR) states that "The primary objective of the Electronic Warfare Modeling and Simulation Branch is to develop and utilize tools for effectiveness evaluations of present, proposed, and future electronic warfare (EW) concepts, systems, and configurations for U.S. Naval Units." The EW M&S Branch used to be known as the Effectiveness of Navy Electronic Warfare Systems (ENEWS) Group (Code 5707) circa 2005. At that time, the Builder Team was under Code 5707.4. In an NRL "Solicitation, Offer and Award" document, the "Statement of Work" section states that "Code 5707 has historically developed simulations of naval EW systems, anti-ship threats, and military communication systems to support the development, fielding and testing of electronic and weapons systems." See also Office of Naval Research (ONR), a sponsor of Builder development Naval Research Laboratory (NRL) SIMDIS, another application developed by the EW M&S Branch References General references Further reading Papers used as references Other papers External links Interactive Scenario Builder website Tactical Electronic Warfare Division website 3D graphics software Cross-platform software Electromagnetic simulation software Electronic warfare GIS software Government software Information operations and warfare Infrared Java (programming language) software Military simulation Radio frequency propagation Virtual globes
Interactive Scenario Builder
Physics
629
2,691,899
https://en.wikipedia.org/wiki/Red%20clump
The red clump is a clustering of red giants in the Hertzsprung–Russell diagram at around 5,000 K and absolute magnitude (MV) +0.5, slightly hotter than most red-giant-branch stars of the same luminosity. It is visible as a denser region of the red-giant branch or a bulge towards hotter temperatures. It is prominent in many galactic open clusters, and it is also noticeable in many intermediate-age globular clusters and in nearby field stars (e.g. the Hipparcos stars). The red clump giants are cool horizontal branch stars, stars originally similar to the Sun which have undergone a helium flash and are now fusing helium in their cores. Properties Red clump stellar properties vary depending on their origin, most notably on the metallicity of the stars, but typically they have early K spectral types and effective temperatures around 5,000 K. The absolute visual magnitude of red clump giants near the sun has been measured at an average of +0.81 with metallicities between −0.6 and +0.4 dex. There is a considerable spread in the properties of red clump stars even within a single population of similar stars such as an open cluster. This is partly due to the natural variation in temperatures and luminosities of horizontal branch stars when they form and as they evolve, and partly due to the presence of other stars with similar properties. Although red clump stars are generally hotter than red-giant-branch stars, the two regions overlap and the status of individual stars can only be assigned with a detailed chemical abundance study. Evolution Modelling of the horizontal branch has shown that stars have a strong tendency to cluster at the cool end of the zero age horizontal branch (ZAHB). This tendency is weaker in low metallicity stars, so the red clump is usually more prominent in metal-rich clusters. However, there are other effects, and there are well-populated red clumps in some metal-poor globular clusters. 
Stars with a similar mass to the sun evolve towards the tip of the red-giant branch with a degenerate helium core. More massive stars leave the red-giant branch early and perform a blue loop, but all stars with a degenerate core reach the tip with very similar core masses, temperatures, and luminosities. After the helium flash they lie along the ZAHB, all with helium cores just under and their properties determined mostly by the size of the hydrogen envelope outside the core. Lower envelope masses result in weaker hydrogen shell fusion and give hotter and slightly less luminous stars strung along the horizontal branch. Different initial masses and natural variations in mass loss rates on the red-giant branch cause the variations in the envelope masses even though the helium cores are all the same size. Low-metallicity stars are more sensitive to the size of the hydrogen envelope, so with the same envelope masses they are spread further along the horizontal branch and fewer fall in the red clump. Although red clump stars lie consistently to the hot side of the red-giant branch that they evolved from, red clump and red-giant-branch stars from different populations can overlap. This occurs in ω Centauri where metal-poor red-giant-branch stars have the same or hotter temperatures as more metal-rich red clump giants. Other stars, not strictly horizontal branch stars, can lie in the same region of the H-R diagram. Stars too massive to develop a degenerate helium core on the red-giant branch will ignite helium before the tip of the red-giant branch and perform a blue loop. For stars only a little more massive than the sun, around , the blue loop is very short and at a luminosity similar to the red clump giants. These stars are an order of magnitude less common than sun-like stars, even rarer compared to the sub-solar stars that can form red clump giants, and the duration of the blue loop is far less than the time spent by a red clump giant on the horizontal branch. 
This means that these imposters are much less common in the H–R diagram, but still detectable. Stars with will also pass through the red clump as they evolve along the subgiant branch. This is again a very rapid phase of evolution, but stars such as OU Andromedae are found in the red clump region (5,500 K and ) even though it is thought to be a subgiant crossing the Hertzsprung gap. Standard candles In theory, the absolute luminosities of stars in the red clump are fairly independent of stellar composition or age so that consequently they make good standard candles for estimating astronomical distances both within our galaxy and to nearby galaxies and clusters. Variations due to metallicity, mass, age, and extinctions affect visual observations too much for them to be useful, but the effects are much smaller in the infrared. Near infrared I band observations in particular have been used to establish red clump distances. Absolute magnitudes for the red clump at solar metallicity have been measured at −0.22 in the I band and −1.54 in the K band. The distance to the Galactic Center has been measured in this way, giving a result of 7.52 kpc in agreement with other methods. Red bump The red clump should not be confused with the "red bump" or red-giant-branch bump, which is a less noticeable clustering of giants partway along the red-giant branch, caused as stars ascending the red-giant branch temporarily decrease in luminosity because of internal convection. Examples Many of the bright "red giants" visible in the sky are actually G or early K class red-clump stars. Pollux, the closest red giant to the Sun, is believed to be a red-clump star. Other well-known examples include: Capella Aa ε Tauri β Ceti Alpha Cassiopeiae Delta Andromedae Arcturus has sometimes been thought to be a clump giant, but is now more commonly considered to be on the red-giant branch, somewhat cooler and more luminous than a red-clump star. 
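Using the I-band absolute magnitude quoted above (−0.22 at solar metallicity), a red clump distance estimate reduces to the standard distance-modulus relation m − M = 5 log10(d/10 pc). The apparent magnitude in this sketch is hypothetical, and extinction is assumed to have already been corrected for.

```python
import math

M_I_RED_CLUMP = -0.22  # mean I-band absolute magnitude of the red clump (solar metallicity)

def red_clump_distance_pc(dereddened_i_mag: float) -> float:
    """Distance in parsecs from the distance modulus mu = m - M = 5*log10(d/10 pc)."""
    mu = dereddened_i_mag - M_I_RED_CLUMP
    return 10.0 ** (mu / 5.0 + 1.0)

# Hypothetical dereddened mean clump magnitude of I = 14.2 for some population:
d_pc = red_clump_distance_pc(14.2)
```

In practice the mean clump magnitude is measured by fitting the luminosity function of many giants, which averages down the intrinsic spread described above.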
References External links Stanek's page on red clumps used for distance measurement Red giants Stellar evolution Standard candles Concepts in stellar astronomy
Red clump
Physics
1,299
38,558,148
https://en.wikipedia.org/wiki/How%20They%20Got%20Game
How They Got Game is a project that aims to explore the historical and cultural impact of new media through interactive simulation and video gaming. Its research spans several defined areas of gaming, such as storytelling, strategy, simulation, sports, and shooters. Project The preservation of software and documentation is an important aspect of the project: How They Got Game constructed a digital archive of the source material, preserving both the digital code of the software itself and, to the extent possible, the experience and cultural context surrounding it. The project also introduced a Stanford class offered in the Science, Technology, and Society program called the "History of Computer Game Design". Personnel Henry Lowood and Tim Lenoir headed the project; other contributors included Casey Alt, Georgios Panzaris, Rene Patnode, Doug Wilson, Waynn Lue, David Lui, and Sarah Wilson. Technical developers involved were Casey Alt and Zachary Pogue. Henry Lowood is the Harold C. Hohbach Curator of History of Science and Technology, and Film and Media Collections at Stanford University. He has been an employee of the university for 32 years, beginning his career immediately after graduating from the University of California. Lowood started at Stanford as an ordinary librarian in the 1980s and was soon promoted to his current role. Since 2000, Lowood has headed the project, whose main focus is the history and preservation of digital games, virtual worlds, and interactive simulations as emerging new media forms. The research was conducted in five main areas of computer games: storytelling, strategy, simulation, sports, and shooters. He spoke to Robert Ashley in an episode of A Life Well Wasted about the project.
Impact One outcome of the project is the Machinima Archive, which was exhibited in two museums in 2003 and 2004 and features the worlds of computer games, art, and military simulation. The Machinima Archive is a joint effort of the Internet Archive, the How They Got Game project, the Academy of Machinima Arts and Sciences, and Machinima.com. The archive is a collection of machinima films, which can be found on the Internet Archive, and accepts machinima productions from various internet publishers and other producers. References History of video games
How They Got Game
Technology
488
12,960,532
https://en.wikipedia.org/wiki/Skopje%20Statistical%20Region
The Skopje Statistical Region (; Albanian: Rajoni i Shkupit) is one of eight statistical regions of North Macedonia. The region is located in the north of the country, bordering Kosovo. Internally, it borders the Vardar, Polog, Northeastern, Eastern, and Southwestern statistical regions. Municipalities The region consists of the City of Skopje and the following municipalities: Aračinovo Čučer-Sandevo Ilinden Petrovec Sopište Studeničani Zelenikovo Demographics Population The current population of the Skopje Statistical Region is 607,007 citizens, according to the last population census in 2021, accounting for 33.0% of the total national population. The region is the largest by population in North Macedonia. Ethnicities Religions Religious affiliation according to the 2002 and 2021 Macedonian censuses: References Statistical regions of North Macedonia Geography of Skopje
Skopje Statistical Region
Mathematics
175
47,085,706
https://en.wikipedia.org/wiki/IQ%20imbalance
IQ imbalance is a performance-limiting issue in the design of a class of radio receivers known as direct conversion receivers. These translate the received radio frequency (RF, or pass-band) signal directly from the carrier frequency to baseband using a single mixing stage. Direct conversion receivers contain a local oscillator (LO) which generates both a sine wave at the carrier frequency and a copy delayed by 90°. These are individually mixed with the RF signal, producing what are known respectively as the in-phase and quadrature signals, labelled I and Q. However, in the analog domain, the phase difference is never exactly 90°. Neither is the gain perfectly matched between the parallel sections of circuitry dealing with the two signal paths. IQ imbalance results from these two imperfections, and is one of the two major drawbacks of direct-conversion receivers compared to traditional superheterodyne receivers. (The other is DC offset.) Their design must include measures to control IQ imbalance, so as to limit errors in the demodulated signal. Definition A direct-conversion receiver uses two quadrature sinusoidal signals to perform the so-called quadrature down-conversion. This process requires shifting the LO signal by 90° to produce a quadrature sinusoidal component, and a matched pair of mixers converting the same input signal with the two versions of the LO. Mismatches between the two LO signals and/or along the two branches of down-conversion mixers, and any following amplifiers, and low-pass filters, cause the quadrature baseband signals to be corrupted, either due to amplitude or phase differences. Suppose the received pass-band signal is identical to the transmitted signal and is given by:where is the transmitted base-band signal. Assume that the gain error is dB and the phase error is degrees.
Then we can model such imbalance using mismatched local oscillator output signals:Multiplying the pass-band signal by the two LO signals and passing through a pair of low-pass filters, one obtains the demodulated base-band signals as:The above equations clearly indicate that IQ imbalance causes interference between the and base-band signals. To analyze IQ imbalance in the frequency domain, the above equation can be rewritten as:where denotes the complex conjugate of . In an OFDM system, the base-band signal consists of several sub-carriers. Complex-conjugating the base-band signal of the kth sub-carrier carrying data is identical to carrying on the th sub-carrier:where is the sub-carrier spacing. Equivalently, the received base-band OFDM signal under the IQ imbalance effect is given by:In conclusion, besides a complex gain imposed on the current sub-carrier data , IQ imbalance also introduces Inter Carrier Interference (ICI) from the adjacent carrier or sub-carrier. The ICI term makes OFDM receivers very sensitive to IQ imbalances. To solve this problem, the designer can request a stringent specification of the matching of the two branches in the front-end or compensate for the imbalance in the base-band receiver. On the other hand, a digital Odd-Order I/Q-demodulator with only one input can be used, but such a design has a bandwidth limitation. Simulation IQ imbalance can be simulated by computing the gain and phase imbalance and applying them to the base-band signal by means of several real multipliers and adders. Synchronization errors The time domain base-band signals with IQ imbalance can be represented by Note that and can be assumed to be time-invariant and frequency-invariant, meaning that they are constant over several sub carriers and symbols. With this property, multiple OFDM sub-carriers and symbols can be used to jointly estimate and to increase the accuracy.
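The multiplier-and-adder simulation described above can be sketched with a common complex-baseband convention, y = α·x + β·x*, where α = (1 + g·e^(jθ))/2 and β = (1 − g·e^(−jθ))/2 for linear gain ratio g and phase error θ. This is one standard parameterization, not necessarily the exact convention used in the equations above; the default error values are hypothetical.

```python
import cmath
import math

def iq_imbalance(x, gain_db=0.5, phase_deg=2.0):
    """Apply IQ imbalance to a sequence of complex baseband samples x.

    Model: y = alpha*x + beta*conj(x), with
      alpha = (1 + g*exp(j*theta))/2,  beta = (1 - g*exp(-j*theta))/2,
    where g is the linear gain ratio and theta the phase error in radians.
    """
    g = 10.0 ** (gain_db / 20.0)       # dB gain error -> linear ratio
    theta = math.radians(phase_deg)
    alpha = (1 + g * cmath.exp(1j * theta)) / 2
    beta = (1 - g * cmath.exp(-1j * theta)) / 2
    return [alpha * s + beta * s.conjugate() for s in x], alpha, beta

# The image-rejection ratio (IRR) |alpha|^2/|beta|^2 quantifies the mirror interference:
_, alpha, beta = iq_imbalance([1 + 0j])
irr_db = 10 * math.log10(abs(alpha) ** 2 / abs(beta) ** 2)
```

With zero gain and phase error, β vanishes and the samples pass through unchanged, which is a quick sanity check on the model.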
Transforming to the frequency domain, we have the frequency domain OFDM signals under the influence of IQ imbalance given by:Note that the second term represents interference coming from the mirrored sub-carrier IQ imbalance estimation in MIMO-OFDM systems In MIMO-OFDM systems, each RF channel has its own down-converting circuit. Therefore, the IQ imbalance for each RF channel is independent of those for the other RF channels. Considering a MIMO system as an example, the received frequency domain signal is given by:where and are the IQ imbalance coefficients of the qth receive RF channel. Estimation of and is the same for each RF channel. Therefore, we take the first RF channel as an example. The received signals at the pilot sub-carriers of the first RF channel are stacked into a vector , where is the matrix defined by: Clearly, the above formula is similar to that of the SISO case and can be solved using the LS method. Moreover, the estimation complexity can be reduced by using fewer pilot sub-carriers in the estimation. IQ imbalance compensation The IQ imbalance can be compensated in either the time domain or the frequency domain. In the time domain, the compensated signal in the current mth sample point is given by:We can see that, by using the ratio to mitigate the IQ imbalance, there is a loss factor . When the noise is added before the IQ imbalance, the SNR remains the same, because both noise and signal suffer this loss. However, if the noise is added after IQ imbalance, the effective SNR degrades. In this case, and , respectively, should be computed. Compared with the time domain approach, compensating in the frequency domain is more complicated because the mirrored sub-carrier is needed. The frequency domain compensated signal at the ith symbol and the kth sub-carrier:Nevertheless, in reality, the time domain compensation is less preferred because it introduces larger latency between IQ imbalance estimation and compensation. 
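The time-domain compensation discussed above can be sketched as follows, assuming the common model y = α·x + β·x* and a known (or previously estimated) ratio r = β/α*: then y − r·y* equals x scaled by (α − |β|²/α*), so the mirror-image interference is removed up to a complex loss factor. The imbalance values below are hypothetical.

```python
import cmath
import math

# Hypothetical imbalance coefficients for y = alpha*x + beta*conj(x):
theta = math.radians(3.0)          # 3 degrees phase error
g = 10.0 ** (1.0 / 20.0)           # 1 dB gain error as a linear ratio
alpha = (1 + g * cmath.exp(1j * theta)) / 2
beta = (1 - g * cmath.exp(-1j * theta)) / 2

def compensate(y, ratio):
    """Cancel the conjugate (image) term: y - ratio*conj(y) is proportional to x."""
    return [s - ratio * s.conjugate() for s in y]

x = [0.7 - 0.3j, -1.0 + 0.2j]                      # transmitted baseband samples
y = [alpha * s + beta * s.conjugate() for s in x]  # received with IQ imbalance
x_hat = compensate(y, beta / alpha.conjugate())

# x_hat equals x times the residual loss factor (alpha - |beta|^2/conj(alpha)):
scale = alpha - abs(beta) ** 2 / alpha.conjugate()
```

The loss factor matches the SNR discussion above: the interference term is gone, but the signal is rescaled, which costs SNR only when noise enters after the imbalance.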
IQ imbalance estimation Frequency domain OFDM signals under the influence of IQ imbalance are given by:The IQ imbalance coefficients and are mixed with the channel frequency responses, making both the IQ imbalance estimation and channel estimation difficult. In the first half of the training sequence, only sub-carriers ranging from 1 to N/2 − 1 transmit pilot symbols; the remaining sub-carriers are not used. In the second half, the sub-carriers from −1 to −N/2 are used for pilot transmission. Such a training scheme easily decouples the IQ imbalance and the channel frequency response. Assuming the value of the pilot symbols is +1, the received signals at sub-carriers from 1 to N/2 − 1 are given by , while the received signals at the mirrored sub-carriers take the form . From the two sets of received signals, the ratio can be easily estimated by . The second half of the training sequence can be used in a similar way. Furthermore, the accuracy of this ratio estimation can be improved by averaging over several training symbols and several sub-carriers. Although the IQ imbalance estimation using this training symbol is simple, this method suffers from low spectrum efficiency, as quite a few OFDM symbols must be reserved for training. Note that, when the thermal noise is added before the IQ imbalance, the ratio is sufficient to compensate the IQ imbalance. However, when the noise is added after the IQ imbalance, compensation using only can degrade the ensuing demodulation performance.
T.-D. Chiueh, P.-Y. Tsai, and I.-W. Lai, Baseband Receiver Design for Wireless MIMO-OFDM Communications, 2nd ed. Slyusar, V. I., Soloshchev, O. N., Titov, I. V., "A method for correction of quadrature disbalance of reception channels in a digital antenna array," Radioelectronics and Communications Systems, 2004, Vol. 47, Part 2, pp. 30–35. Radio electronics
IQ imbalance
Engineering
1,733
243,343
https://en.wikipedia.org/wiki/George%20Green%20%28mathematician%29
George Green (14 July 1793 – 31 May 1841) was a British mathematical physicist who wrote An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism in 1828. The essay introduced several important concepts, among them a theorem similar to the modern Green's theorem, the idea of potential functions as currently used in physics, and the concept of what are now called Green's functions. Green was the first person to create a mathematical theory of electricity and magnetism and his theory formed the foundation for the work of other scientists such as James Clerk Maxwell, William Thomson, and others. His work on potential theory ran parallel to that of Carl Friedrich Gauss. Green's life story is remarkable in that he was almost entirely self-taught. He received only about one year of formal schooling as a child, between the ages of 8 and 9. Early life Green was born and lived for most of his life in the English town of Sneinton, Nottinghamshire, now part of the city of Nottingham. His father, also named George, was a baker who had built and owned a brick windmill used to grind grain. In his youth, Green was described as having a frail constitution and a dislike for doing work in his father's bakery. He had no choice in the matter, however, and as was common for the time he likely began working daily to earn his living at the age of five. Robert Goodacre's Academy During this era it was common for only 25–50% of children in Nottingham to receive any schooling. The majority of schools were Sunday schools, run by the Church, and children would typically attend for one or two years only. Recognizing the young Green's above average intellect, and being in a strong financial situation due to his successful bakery, his father enrolled him in March 1801 at Robert Goodacre's Academy in Upper Parliament Street. Robert Goodacre was a well-known science populariser and educator of the time. 
He published Essay on the Education of Youth, in which he wrote that he did not "study the interest of the boy but the embryo Man". To a non-specialist, he would have seemed deeply knowledgeable in science and mathematics, but a close inspection of his essay and curriculum revealed that the extent of his mathematical teachings was limited to algebra, trigonometry and logarithms. Thus, Green's later mathematical contributions, which exhibited knowledge of very modern developments in mathematics, could not have resulted from his tenure at the Robert Goodacre Academy. He stayed for only four terms (one school year), and it was speculated by his contemporaries that he had exhausted all they had to teach him. Move from Nottingham to Sneinton In 1773 George's father moved to Nottingham, which at the time had a reputation for being a pleasant town with open spaces and wide roads. By 1831, however, the population had increased nearly five times, in part due to the budding Industrial Revolution, and the city became known as one of the worst slums in England. There were frequent riots by starving workers, often associated with special hostility towards bakers and millers on the suspicion that they were hiding grain to drive up food prices. For these reasons, in 1807, George Green senior bought a plot of land in Sneinton. On this plot of land he built a "brick wind corn mill", now referred to as Green's Windmill. It was technologically impressive for its time, but required nearly twenty-four-hour maintenance, which was to become Green's burden for the next twenty years. Adult life Miller Just as with baking, Green found the responsibilities of operating the mill annoying and tedious. Grain from the fields was arriving continuously at the mill's doorstep, and the sails of the windmill had to be constantly adjusted to the windspeed, both to prevent damage in high winds, and to maximise rotational speed in low winds. 
The millstones, which continuously ground against each other, could wear down or cause a fire if they ran out of grain to grind. Every month the stones, which weighed over a ton, would have to be replaced or repaired. Family life In 1823 Green formed a relationship with Jane Smith, the daughter of William Smith, who had been hired by Green senior as mill manager. Although Green and Jane Smith never married, Jane eventually became known as Jane Green and the couple had seven children together; all but the first had Green as a baptismal name. The youngest child was born 13 months before Green's death. Green provided for his common-law wife and children in his will. Nottingham Subscription Library When Green was thirty, he became a member of the Nottingham Subscription Library. This library exists today, and was likely the main source of Green's advanced mathematical knowledge. Unlike more conventional libraries, the subscription library was exclusive to a hundred or so subscribers, and the first on the list of subscribers was the Duke of Newcastle. This library catered to requests for specialised books and journals that satisfied the particular interests of their subscribers. 1828 essay In 1828, Green published An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism, which is the essay he is most famous for today. It was published privately at the author's expense, because he thought it would be presumptuous for a person like himself, with no formal education in mathematics, to submit the paper to an established journal. When Green published his Essay, it was sold on a subscription basis to 51 people, most of whom were friends who probably could not understand it. The wealthy landowner and mathematician Sir Edward Bromhead bought a copy and encouraged Green to do further work in mathematics. Not believing the offer was sincere, Green did not contact Bromhead for two years.
Mathematician By the time Green's father died in 1829, the senior Green had become one of the gentry due to his considerable accumulated wealth and land owned, roughly half of which he left to his son and the other half to his daughter. The young Green, now thirty-six years old, consequently was able to use this wealth to abandon his miller duties and pursue mathematical studies. Cambridge Members of the Nottingham Subscription Library who knew Green repeatedly insisted that he obtain a proper University education. In particular, one of the library's most prestigious subscribers was Sir Edward Bromhead, with whom Green shared many correspondences; he insisted that Green go to Cambridge. In 1832, aged nearly forty, Green was admitted as an undergraduate at Gonville and Caius College, Cambridge. He was particularly insecure about his lack of knowledge of Greek and Latin, which were prerequisites, but it turned out not to be as hard for him to learn these as he had envisaged, as the degree of mastery required was not as high as he had expected. In the mathematics examinations, he won the first-year mathematical prize. He graduated with a BA in 1838 as a 4th Wrangler (the 4th highest scoring student in his graduating class, coming after James Joseph Sylvester who scored 2nd). College fellow Following his graduation, Green was elected a fellow of the Cambridge Philosophical Society. Even without his stellar academic standing, the Society had already read and made note of his Essay and three other publications, so Green was welcomed. The next two years provided an unparalleled opportunity for Green to read, write, and discuss his scientific ideas. In this short time he published an additional six publications with applications to hydrodynamics, sound, and optics. Final years and posthumous fame In his final years at Cambridge, Green became rather ill, and in 1840 he returned to Sneinton, only to die a year later. 
There are rumours that at Cambridge, Green had "succumbed to alcohol", and some of his earlier supporters, such as Sir Edward Bromhead, tried to distance themselves from him. Green's work was not well known in the mathematical community during his lifetime. Besides Green himself, the first mathematician to quote his 1828 work was the Briton Robert Murphy (1806–1843) in his 1833 work. In 1845, four years after Green's death, Green's work was rediscovered by the young William Thomson (then aged 21), later known as Lord Kelvin, who popularised it for future mathematicians. According to the book "George Green" by D.M. Cannell, William Thomson noticed Murphy's citation of Green's 1828 essay but found it difficult to locate Green's 1828 work; he finally got some copies of Green's 1828 work from William Hopkins in 1845. In 1871 N. M. Ferrers assembled The Mathematical Papers of the late George Green for publication. Green's work on the motion of waves in a canal (resulting in what is known as Green's law) anticipates the WKB approximation of quantum mechanics, while his research on light-waves and the properties of the Aether produced what is now known as the Cauchy–Green tensor. Green's theorem and functions were important tools in classical mechanics, and were revived by Schwinger's 1948 work on electrodynamics that led to his 1965 Nobel prize (shared with Feynman and Tomonaga). Green's functions later also proved useful in analysing superconductivity. On a visit to Nottingham in 1930, Albert Einstein commented that Green had been 20 years ahead of his time. The theoretical physicist Julian Schwinger, who used Green's functions in his ground-breaking works, published a tribute entitled "The Greening of Quantum Field Theory: George and I" in 1993. The George Green Library at the University of Nottingham is named after him, and houses the majority of the university's science and engineering collection.
The George Green Institute for Electromagnetics Research, a research group in the University of Nottingham engineering department, is also named after him. In 1986, Green's Mill, Sneinton in Nottingham was restored to working order. It now serves both as a working example of a 19th-century windmill and as a museum and science centre dedicated to Green. Westminster Abbey has a memorial stone for Green in the nave adjoining the graves of Sir Isaac Newton and Lord Kelvin. His work and influence on 19th-century applied physics had been largely forgotten until the publication of his biography by Mary Cannell in 1993. Source of knowledge Recent historical research suggests that the pivotal figure in Green's mathematical education was John Toplis (c1774-1857), who graduated in mathematics from Cambridge as 11th Wrangler before becoming headmaster of the forerunner of Nottingham High School 1806–1819, and lived in the same neighbourhood as Green and his family. Toplis was an advocate of the continental school of mathematics, and fluent in French, having translated Laplace's celebrated work on celestial mechanics. The possibility that Toplis played a role in Green's mathematical education would resolve several long-standing questions about the sources of Green's mathematical knowledge. For example, Green made use of "the Mathematical Analysis", a form of calculus derived from Leibniz which was virtually unheard of, or even actively discouraged, in England at the time (due to Leibniz being a contemporary of Newton, who had his own methods that were championed in England). This form of calculus, and the developments of mathematicians such as the French mathematicians Laplace, Lacroix and Poisson, were not taught even at Cambridge, let alone Nottingham, and yet Green not only had heard of these developments, but improved upon them. List of publications An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism. 
By George Green, Nottingham. Printed for the Author by T. Wheelhouse, Nottingham. 1828. (Quarto, vii + 72 pages.) Presented 12 November 1832. Presented 6 May 1833. Presented 16 December 1833. Presented 11 December 1837. Presented 15 May 1837. Presented 11 December 1837. Presented 18 February 1839. Presented 6 May 1839. Presented 20 May 1839. Notes References Ivor Grattan-Guinness, 'Green, George (1793–1841)', Oxford Dictionary of National Biography, Oxford University Press, 2004 accessed 26 May 2009 D. M. Cannell, "George Green mathematician and physicist 1793–1841", The Athlone Press, London, 1993. (Note: This was the first quotation of Green's 1828 work by somebody other than Green himself.) – An excellent on-line source of George Green information External links List of References for George Green 19th-century English mathematicians English physicists Mathematical analysts Alumni of Gonville and Caius College, Cambridge Fellows of Gonville and Caius College, Cambridge People from Sneinton 1793 births 1841 deaths Tourist attractions in Nottinghamshire Nottingham
George Green (mathematician)
https://en.wikipedia.org/wiki/Cloxestradiol%20acetate
Cloxestradiol acetate (brand name Genovul), also known as 17-(2,2,2-trichloroethoxy)estradiol O,O-diacetate, is a synthetic steroidal estrogen derived from estradiol. It is the O,O-diacetate ester of cloxestradiol, which, in contrast to cloxestradiol acetate, was never marketed. See also List of estrogen esters § Estradiol esters Cloxotestosterone acetate References
Cloxestradiol acetate
https://en.wikipedia.org/wiki/Van%20%27t%20Hoff%20equation
The Van 't Hoff equation relates the change in the equilibrium constant, $K_{\mathrm{eq}}$, of a chemical reaction to the change in temperature, $T$, given the standard enthalpy change, $\Delta_{\mathrm{r}} H^{\ominus}$, for the process. The subscript $\mathrm{r}$ means "reaction" and the superscript $\ominus$ means "standard". It was proposed by Dutch chemist Jacobus Henricus van 't Hoff in 1884 in his book Études de Dynamique chimique (Studies in Dynamic Chemistry). The Van 't Hoff equation has been widely utilized to explore the changes in state functions in a thermodynamic system. The Van 't Hoff plot, which is derived from this equation, is especially effective in estimating the change in enthalpy and entropy of a chemical reaction. Equation Summary and uses The standard pressure, $P^{\ominus}$, is used to define the reference state for the Van 't Hoff equation, which is $\frac{\mathrm{d} \ln K_{\mathrm{eq}}}{\mathrm{d} T} = \frac{\Delta_{\mathrm{r}} H^{\ominus}}{R T^{2}}$, where $\ln$ denotes the natural logarithm, $K_{\mathrm{eq}}$ is the thermodynamic equilibrium constant, and $R$ is the ideal gas constant. This equation is exact at any one temperature and all pressures, derived from the requirement that the Gibbs free energy of reaction be stationary in a state of chemical equilibrium. In practice, the equation is often integrated between two temperatures under the assumption that the standard reaction enthalpy $\Delta_{\mathrm{r}} H^{\ominus}$ is constant (and, furthermore, often assumed equal to its value at standard temperature). Since in reality $\Delta_{\mathrm{r}} H^{\ominus}$ and the standard reaction entropy $\Delta_{\mathrm{r}} S^{\ominus}$ do vary with temperature for most processes, the integrated equation is only approximate. Approximations are also made in practice to the activity coefficients within the equilibrium constant. A major use of the integrated equation is to estimate a new equilibrium constant at a new absolute temperature assuming a constant standard enthalpy change over the temperature range.
To obtain the integrated equation, it is convenient to first rewrite the Van 't Hoff equation as $\frac{\mathrm{d} \ln K_{\mathrm{eq}}}{\mathrm{d}(1/T)} = -\frac{\Delta_{\mathrm{r}} H^{\ominus}}{R}$. The definite integral between temperatures $T_1$ and $T_2$ is then $\ln \frac{K_2}{K_1} = -\frac{\Delta_{\mathrm{r}} H^{\ominus}}{R} \left( \frac{1}{T_2} - \frac{1}{T_1} \right)$. In this equation $K_1$ is the equilibrium constant at absolute temperature $T_1$, and $K_2$ is the equilibrium constant at absolute temperature $T_2$. Development from thermodynamics Combining the well-known formula for the Gibbs free energy of reaction $\Delta_{\mathrm{r}} G^{\ominus} = \Delta_{\mathrm{r}} H^{\ominus} - T \Delta_{\mathrm{r}} S^{\ominus}$, where $\Delta_{\mathrm{r}} S^{\ominus}$ is the standard entropy of reaction, with the Gibbs free energy isotherm equation $\Delta_{\mathrm{r}} G^{\ominus} = -RT \ln K_{\mathrm{eq}}$, we obtain $\ln K_{\mathrm{eq}} = -\frac{\Delta_{\mathrm{r}} H^{\ominus}}{RT} + \frac{\Delta_{\mathrm{r}} S^{\ominus}}{R}$. Differentiation of this expression with respect to $T$ while assuming that both $\Delta_{\mathrm{r}} H^{\ominus}$ and $\Delta_{\mathrm{r}} S^{\ominus}$ are independent of $T$ yields the Van 't Hoff equation. These assumptions are expected to break down somewhat for large temperature variations. Provided that $\Delta_{\mathrm{r}} H^{\ominus}$ and $\Delta_{\mathrm{r}} S^{\ominus}$ are constant, the preceding equation gives $\ln K_{\mathrm{eq}}$ as a linear function of $1/T$ and hence is known as the linear form of the Van 't Hoff equation. Therefore, when the range in temperature is small enough that the standard reaction enthalpy and reaction entropy are essentially constant, a plot of the natural logarithm of the equilibrium constant versus the reciprocal temperature gives a straight line. The slope of the line may be multiplied by $-R$ to obtain the standard enthalpy change of the reaction, and the intercept may be multiplied by $R$ to obtain the standard entropy change. Van 't Hoff isotherm The Van 't Hoff isotherm can be used to determine the Gibbs free energy of reaction for non-standard-state reactions at a constant temperature: $\Delta_{\mathrm{r}} G = \Delta_{\mathrm{r}} G^{\ominus} + RT \ln Q_{\mathrm{r}}$, where $\Delta_{\mathrm{r}} G = (\partial G / \partial \xi)_{T,p}$ is the Gibbs free energy of reaction under non-standard states at temperature $T$, $\Delta_{\mathrm{r}} G^{\ominus}$ is the Gibbs free energy for the reaction at standard state, $\xi$ is the extent of reaction, and $Q_{\mathrm{r}}$ is the thermodynamic reaction quotient. Since $\Delta_{\mathrm{r}} G = \Delta_{\mathrm{r}} H - T \Delta_{\mathrm{r}} S$, the temperature dependence of both terms can be described by Van 't Hoff equations as a function of $T$. This finds applications in the field of electrochemistry, particularly in the study of the temperature dependence of voltaic cells.
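As a quick numerical illustration of the integrated equation, the sketch below (plain Python; the function name and example values are illustrative, not from the article) estimates the equilibrium constant at a new temperature under the constant-enthalpy assumption:

```python
import math

R = 8.314  # ideal gas constant, J/(mol K)

def vant_hoff_k2(k1, t1, t2, delta_h):
    """Estimate the equilibrium constant at T2 from its value K1 at T1,
    assuming the standard reaction enthalpy delta_h (J/mol) is constant
    over the range: ln(K2/K1) = -(delta_h/R) * (1/T2 - 1/T1)."""
    ln_ratio = -(delta_h / R) * (1.0 / t2 - 1.0 / t1)
    return k1 * math.exp(ln_ratio)

# Example: an endothermic reaction (delta_h > 0), so K grows with T.
k2 = vant_hoff_k2(k1=1.0e-3, t1=298.15, t2=350.0, delta_h=50_000.0)
```

Consistent with the sign convention above, raising the temperature increases the equilibrium constant of an endothermic reaction and decreases that of an exothermic one.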
The isotherm can also be used at fixed temperature to describe the law of mass action. When a reaction is at equilibrium, $Q_{\mathrm{r}} = K_{\mathrm{eq}}$ and $\Delta_{\mathrm{r}} G = 0$. Otherwise, the Van 't Hoff isotherm predicts the direction that the system must shift in order to achieve equilibrium; when $\Delta_{\mathrm{r}} G < 0$, the reaction moves in the forward direction, whereas when $\Delta_{\mathrm{r}} G > 0$, the reaction moves in the backward direction. See Chemical equilibrium. Van 't Hoff plot For a reversible reaction, the equilibrium constant can be measured at a variety of temperatures. This data can be plotted on a graph with $\ln K_{\mathrm{eq}}$ on the $y$-axis and $1/T$ on the $x$-axis. The data should have a linear relationship, the equation for which can be found by fitting the data using the linear form of the Van 't Hoff equation $\ln K_{\mathrm{eq}} = -\frac{\Delta_{\mathrm{r}} H^{\ominus}}{RT} + \frac{\Delta_{\mathrm{r}} S^{\ominus}}{R}$. This graph is called the "Van 't Hoff plot" and is widely used to estimate the enthalpy and entropy of a chemical reaction. From this plot, $-\Delta_{\mathrm{r}} H^{\ominus}/R$ is the slope, and $\Delta_{\mathrm{r}} S^{\ominus}/R$ is the intercept of the linear fit. By measuring the equilibrium constant, $K_{\mathrm{eq}}$, at different temperatures, the Van 't Hoff plot can be used to assess a reaction when temperature changes. Knowing the slope and intercept from the Van 't Hoff plot, the enthalpy and entropy of a reaction can be easily obtained using $\Delta_{\mathrm{r}} H^{\ominus} = -R \times \text{slope}$ and $\Delta_{\mathrm{r}} S^{\ominus} = R \times \text{intercept}$. The Van 't Hoff plot can be used to quickly determine the enthalpy of a chemical reaction both qualitatively and quantitatively. This change in enthalpy can be positive or negative, leading to two major forms of the Van 't Hoff plot. Endothermic reactions For an endothermic reaction, heat is absorbed, making the net enthalpy change positive. Thus, according to the definition of the slope, $\text{slope} = -\Delta_{\mathrm{r}} H^{\ominus}/R$: when the reaction is endothermic, $\Delta_{\mathrm{r}} H^{\ominus} > 0$ (and the gas constant $R > 0$), so $\text{slope} < 0$. Thus, for an endothermic reaction, the Van 't Hoff plot should always have a negative slope. Exothermic reactions For an exothermic reaction, heat is released, making the net enthalpy change negative.
Thus, according to the definition of the slope, for an exothermic reaction $\Delta_{\mathrm{r}} H^{\ominus} < 0$, so $\text{slope} > 0$. Thus, for an exothermic reaction, the Van 't Hoff plot should always have a positive slope. Error propagation At first glance, using the fact that $\Delta_{\mathrm{r}} G^{\ominus} = -RT \ln K_{\mathrm{eq}} = \Delta_{\mathrm{r}} H^{\ominus} - T \Delta_{\mathrm{r}} S^{\ominus}$, it would appear that two measurements of $K_{\mathrm{eq}}$ would suffice to obtain an accurate value of $\Delta_{\mathrm{r}} H^{\ominus}$: $\Delta_{\mathrm{r}} H^{\ominus} = R \, \frac{\ln K_1 - \ln K_2}{\frac{1}{T_2} - \frac{1}{T_1}}$, where $K_1$ and $K_2$ are the equilibrium constant values obtained at temperatures $T_1$ and $T_2$ respectively. However, the precision of $\Delta_{\mathrm{r}} H^{\ominus}$ values obtained in this way is highly dependent on the precision of the measured equilibrium constant values. The use of error propagation shows that the error in $\Delta_{\mathrm{r}} H^{\ominus}$ will be about 76 kJ/mol times the experimental uncertainty in $(\ln K_1 - \ln K_2)$, or about 110 kJ/mol times the uncertainty in the individual $\ln K$ values. Similar considerations apply to the entropy of reaction obtained from $\Delta_{\mathrm{r}} S^{\ominus} = (\Delta_{\mathrm{r}} H^{\ominus} - \Delta_{\mathrm{r}} G^{\ominus})/T$. Notably, when equilibrium constants are measured at three or more temperatures, values of $\Delta_{\mathrm{r}} H^{\ominus}$ and $\Delta_{\mathrm{r}} S^{\ominus}$ are often obtained by straight-line fitting. The expectation is that the error will be reduced by this procedure, although the assumption that the enthalpy and entropy of reaction are constant may or may not prove to be correct. If there is significant temperature dependence in either or both quantities, it should manifest itself in nonlinear behavior in the Van 't Hoff plot; however, more than three data points would presumably be needed in order to observe this. Applications of the Van 't Hoff plot Van 't Hoff analysis In biological research, the Van 't Hoff plot is also called Van 't Hoff analysis. It is most effective in determining the favored product in a reaction. It may obtain results different from direct calorimetry such as differential scanning calorimetry or isothermal titration calorimetry due to various effects other than experimental error. Assume two products B and C form in a reaction: a A + d D → b B, a A + d D → c C. In this case, $K$ can be defined as the ratio of B to C rather than the equilibrium constant.
When > 1, B is the favored product, and the data on the Van 't Hoff plot will be in the positive region. When < 1, C is the favored product, and the data on the Van 't Hoff plot will be in the negative region. Using this information, a Van 't Hoff analysis can help determine the most suitable temperature for a favored product. In 2010, a Van 't Hoff analysis was used to determine whether water preferentially forms a hydrogen bond with the C-terminus or the N-terminus of the amino acid proline. The equilibrium constant for each reaction was found at a variety of temperatures, and a Van 't Hoff plot was created. This analysis showed that enthalpically, the water preferred to hydrogen bond to the C-terminus, but entropically it was more favorable to hydrogen bond with the N-terminus. Specifically, they found that C-terminus hydrogen bonding was favored by 4.2–6.4 kJ/mol. The N-terminus hydrogen bonding was favored by 31–43 J/(K mol). This data alone could not conclude which site water will preferentially hydrogen-bond to, so additional experiments were used. It was determined that at lower temperatures, the enthalpically favored species, the water hydrogen-bonded to the C-terminus, was preferred. At higher temperatures, the entropically favored species, the water hydrogen-bonded to the N-terminus, was preferred. Mechanistic studies A chemical reaction may undergo different reaction mechanisms at different temperatures. In this case, a Van 't Hoff plot with two or more linear fits may be exploited. Each linear fit has a different slope and intercept, which indicates different changes in enthalpy and entropy for each distinct mechanisms. The Van 't Hoff plot can be used to find the enthalpy and entropy change for each mechanism and the favored mechanism under different temperatures. In the example figure, the reaction undergoes mechanism 1 at high temperature and mechanism 2 at low temperature. 
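The straight-line fitting behind a Van 't Hoff plot can be sketched in a few lines of plain Python. The data here are synthetic, generated from assumed values ΔrH° = −40 kJ/mol and ΔrS° = −100 J/(K mol) rather than taken from any experiment, so the fit simply recovers those inputs (slope = −ΔrH°/R, intercept = ΔrS°/R):

```python
import math

R = 8.314  # ideal gas constant, J/(mol K)

def vant_hoff_fit(temps, ks):
    """Least-squares fit of ln K versus 1/T (the Van 't Hoff plot).
    Returns (delta_h, delta_s): delta_h = -R * slope, delta_s = R * intercept."""
    xs = [1.0 / t for t in temps]
    ys = [math.log(k) for k in ks]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    intercept = ybar - slope * xbar
    return -R * slope, R * intercept

# Synthetic equilibrium constants from ln K = -dh/(R*T) + ds/R.
dh, ds = -40_000.0, -100.0  # J/mol and J/(mol K), assumed for illustration
temps = [280.0, 300.0, 320.0, 340.0]
ks = [math.exp(-dh / (R * t) + ds / R) for t in temps]

dh_fit, ds_fit = vant_hoff_fit(temps, ks)  # recovers dh and ds
```

With real (noisy) measurements, the same fit gives the estimates of the standard enthalpy and entropy discussed in the error-propagation section above.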
Temperature dependence If the enthalpy and entropy are roughly constant as temperature varies over a certain range, then the Van 't Hoff plot is approximately linear when plotted over that range. However, in some cases the enthalpy and entropy do change dramatically with temperature. A first-order approximation is to assume that the two different reaction products have different heat capacities. Incorporating this assumption yields an additional term $c/T^{2}$ in the expression for the equilibrium constant as a function of temperature. A polynomial fit can then be used to analyze data that exhibits a non-constant standard enthalpy of reaction: $\ln K_{\mathrm{eq}} = a + \frac{b}{T} + \frac{c}{T^{2}}$, where $\Delta_{\mathrm{r}} H^{\ominus} = -R\left(b + \frac{2c}{T}\right)$ and $\Delta_{\mathrm{r}} S^{\ominus} = R\left(a - \frac{c}{T^{2}}\right)$. Thus, the enthalpy and entropy of a reaction can still be determined at specific temperatures even when a temperature dependence exists. Surfactant self-assembly The Van 't Hoff relation is particularly useful for the determination of the micellization enthalpy of surfactants from the temperature dependence of the critical micelle concentration (CMC): $\Delta H^{\ominus}_{\mathrm{mic}} = -RT^{2} \, \frac{\mathrm{d} \ln \mathrm{CMC}}{\mathrm{d} T}$. However, the relation loses its validity when the aggregation number is also temperature-dependent, and an extended relation involving the free energies of the surfactant in micelles of aggregation number $N$ and $N+1$ should be used instead. This effect is particularly relevant for nonionic ethoxylated surfactants or polyoxypropylene–polyoxyethylene block copolymers (Poloxamers, Pluronics, Synperonics). The extended equation can be exploited for the extraction of aggregation numbers of self-assembled micelles from differential scanning calorimetric thermograms. See also Clausius–Clapeyron relation Van 't Hoff factor Gibbs–Helmholtz equation Solubility equilibrium Arrhenius equation References
Van 't Hoff equation
https://en.wikipedia.org/wiki/Gardner%20transition
In condensed matter physics, the Gardner transition refers to a temperature-induced transition in which the free energy basin of a disordered system divides into many marginally stable sub-basins. It is named after Elizabeth Gardner, who first described it in 1985. See also Glass transition References
Gardner transition
https://en.wikipedia.org/wiki/Am%20star
An Am star or metallic-line star is a type of chemically peculiar star of spectral type A whose spectrum has strong and often variable absorption lines of metals such as zinc, strontium, zirconium, and barium, and deficiencies of others, such as calcium and scandium. The original definition of an Am star was one in which the star shows "an apparent surface underabundance of Ca (and/or Sc) and/or an apparent overabundance of the Fe group and heavier elements". The unusual relative abundances cause the spectral type assessed from the Calcium K lines to be systematically earlier than one assessed from other metallic lines. Typically, a spectral type judged solely from hydrogen lines is intermediate. This leads to two or three spectral types being given. For example, Sirius has been given a spectral type of kA0hA0VmA1, indicating that it is A0 when judged by the Calcium k line, A0V when judged by its hydrogen lines, and A1 when judged by the lines of heavy metals. There are other formats, such as A0mA1Va, again for Sirius. The chemical abnormalities are due to some elements which absorb more light being pushed towards the surface, while others sink under the force of gravity. This effect takes place only if the star has low rotational velocity. Normally, A-type stars rotate quickly. Most Am stars form part of a binary system in which the rotation of the stars has been slowed by tidal braking. The best-known metallic-line star is Sirius (α Canis Majoris). The following table lists some metallic-line stars in order of descending apparent visual magnitude. List δ Delphini and ρ Puppis A small number of Am stars show unusually late spectral types and particularly strong luminosity effects. Although Am stars in general show abnormal luminosity effects, stars such as ρ Puppis are believed to be more evolved and more luminous than most Am stars, lying above the main sequence. 
Am stars and δ Scuti variables lie in approximately the same location on the H–R diagram, but it is rare for a star to be both an Am star and a δ Scuti variable. ρ Puppis is one example and δ Delphini is another. Several authors have referred to a class of stars known as δ Delphini stars: Am stars with relatively little difference between the calcium and other metallic lines. They have also been compared to the δ Scuti stars. Later studies showed that the group was somewhat inhomogeneous, possibly coincidental, and recommended dropping use of the δ Delphini class in favour of a narrower class of ρ Puppis stars with relatively high luminosity and late spectral types. However, there is still sometimes confusion, for example when all ρ Puppis stars are assumed to be δ Scuti variables. Notes and references
Am star
https://en.wikipedia.org/wiki/CidA/LrgA%20holin
The CidA/LrgA Holin (CidA/LrgA Holin) Family (TC# 1.E.14) is a group of proteins named after CidA (TC# 1.E.14.1.2) and LrgA (TC# 1.E.14.1.1) of Staphylococcus aureus. CidA and LrgA are homologous holin and anti-holin proteins, each with 4 putative transmembrane segments (TMSs). Members of the CidA/LrgA holin family also include putative murein hydrolase exporters from a wide range of Gram-positive and Gram-negative bacteria as well as archaea. Most CidA/LrgA holin family proteins vary in size between 100 and 160 amino acyl residues (aas), although a few are larger. Function It has been proposed that CidA and CidB (23% and 32% identical to LrgA and LrgB, respectively) are involved in programmed cell death in a process that is analogous to apoptosis in eukaryotes. These proteins are known to regulate and influence biofilm formation by releasing DNA from lysed cells, which contributes to the biofilm matrix. CidA, a 131 aa protein with 4 putative TMSs, is believed to be the holin which exports the autolysin CidB, while LrgA may be an anti-holin, a protein that binds and inhibits holin activity. If this is a general mechanism for programmed cell death, this would explain their near ubiquity in the prokaryotic world. Expression The cidABC operon is activated by CidR in the presence of acetic acid. Both CidAB and LrgAB affect biofilm formation, oxidative stress, stationary-phase survival and antibiotic tolerance in a reciprocal fashion, and their genes are regulated by the LytSR two-component regulatory system. Microfluidic techniques have been used to follow gene expression temporally and spatially during biofilm formation, revealing that both cidA and lrgA are expressed mostly in the interior of tower structures in the biofilms, regulated by oxygen availability. Analogous proteins may be linked to competence in S. mutans.
See also Holin Lysin Transporter Classification Database Further reading References
CidA/LrgA holin
https://en.wikipedia.org/wiki/Conjunction%20elimination
In propositional logic, conjunction elimination (also called and elimination, ∧ elimination, or simplification) is a valid immediate inference, argument form and rule of inference which makes the inference that, if the conjunction A and B is true, then A is true, and B is true. The rule makes it possible to shorten longer proofs by deriving one of the conjuncts of a conjunction on a line by itself. An example in English: It's raining and it's pouring. Therefore it's raining. The rule consists of two separate sub-rules, which can be expressed in formal language as $\frac{P \land Q}{\therefore P}$ and $\frac{P \land Q}{\therefore Q}$. The two sub-rules together mean that, whenever an instance of "$P \land Q$" appears on a line of a proof, either "$P$" or "$Q$" can be placed on a subsequent line by itself. The above example in English is an application of the first sub-rule. Formal notation The conjunction elimination sub-rules may be written in sequent notation: $P \land Q \vdash P$ and $P \land Q \vdash Q$, where $\vdash$ is a metalogical symbol meaning that $P$ is a syntactic consequence of $P \land Q$ and $Q$ is also a syntactic consequence of $P \land Q$ in some logical system; and expressed as truth-functional tautologies or theorems of propositional logic: $(P \land Q) \to P$ and $(P \land Q) \to Q$, where $P$ and $Q$ are propositions expressed in some formal system. References
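Because the two sub-rules are truth-functional tautologies, they can be verified mechanically by checking every truth assignment. A small Python sketch (the helper name `implies` is ours):

```python
from itertools import product

def implies(p, q):
    """Material conditional: p -> q is false only when p is true and q is false."""
    return (not p) or q

assignments = list(product([True, False], repeat=2))

# (P AND Q) -> P and (P AND Q) -> Q hold under every assignment: tautologies.
elim_left = all(implies(p and q, p) for p, q in assignments)
elim_right = all(implies(p and q, q) for p, q in assignments)

# The converse, P -> (P AND Q), fails when P is true and Q is false,
# so the conjunction cannot be recovered from a single conjunct.
converse = all(implies(p, p and q) for p, q in assignments)
```

Here `elim_left` and `elim_right` both come out true, while `converse` is false, matching the one-way character of the rule.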
Conjunction elimination
https://en.wikipedia.org/wiki/Donald%20F.%20Hunt
Donald F. Hunt is the University Professor of Chemistry and Pathology at the University of Virginia. He is known for his research in the field of mass spectrometry, in which he developed electron capture negative ion mass spectrometry. He has received multiple awards for his work, including the Distinguished Contribution Award from the American Society for Mass Spectrometry and the Thomson Medal from the International Mass Spectrometry Society. Early life and education He received his B.S. and Ph.D. from the University of Massachusetts Amherst and was a National Institutes of Health Postdoctoral trainee under Klaus Biemann at MIT. The Hunt laboratory The Hunt laboratory develops new methodology and instrumentation centered on mass spectrometry-based proteomics for the characterization of proteins and their modifications. Research interests Among his many research interests, Hunt investigates how the immune system uses peptides to kill diseased cells, and how modifications to chromatin-associated proteins called histones create a "code" that may be involved in many gene regulation events. Awards Hunt has been awarded several honors, including the Distinguished Contribution Award from the American Society for Mass Spectrometry in 1994; the Christian B. Anfinsen Award from the Protein Society; the Chemical Instrumentation Award (1997) and the Field and Franklin Award from the American Chemical Society; the Thomson Medal from the International Mass Spectrometry Society; the Human Proteome Organization's Distinguished Achievement Award in Proteomics; and the Association of Biomolecular Resource Facilities 2007 Award. In addition, he received the Charles H. Stone Award (American Chemical Society) and the Pehr Edman Award for outstanding achievements in the application of mass spectrometry.
References
Donald F. Hunt
https://en.wikipedia.org/wiki/Australasian%20Language%20Technology%20Association
The Australasian Language Technology Association (ALTA) promotes language technology research and development in Australia and New Zealand. ALTA organises regular events for the exchange of research results and for academic and industrial training, and co-ordinates activities with other professional societies. ALTA is a founding regional organization of the Asian Federation of Natural Language Processing (AFNLP). Every year in early December, ALTA organises a research workshop (commonly known as ALTW) gathering together the growing language technology community in Australia and New Zealand, from both the academic and industrial worlds. The workshop welcomes original work on any aspect of natural language processing, including both speech and text. Accepted papers are published in the ALTA proceedings, which are also included as part of the ACL Anthology. Since 2008 ALTA has been involved in organising the Australian Computational and Linguistics Olympiad (OzCLO), which is a contest for high school students in the area of linguistics and computational linguistics. Conferences ALTW2003, 10 December 2003, Melbourne. ALTW2004, 8 December 2004, Sydney. ALTW2005, 10–11 December 2005, Sydney. ALTW2006, 30 November – 1 December 2006, Sydney, as part of the HCSNet SummerFest. ALTA2007, 10–11 December 2007, Melbourne, in conjunction with ADCS. ALTA2008, 8–10 December 2008, Hobart, in conjunction with ADCS. ALTA2009, 3–4 December 2009, Sydney, as part of the HCSNet SummerFest. ALTA2010, 9–10 December 2010, Melbourne, in conjunction with ADCS. ALTA2011, 1–2 December 2011, Canberra, in conjunction with ADCS and langfest 2011, which includes the 2nd combined conference of the Applied Linguistics Association of Australia (ALAA) and the Applied Linguistics Association of New Zealand (ALANZ), as well as the 42nd Annual Conference of the Australian Linguistics Society (ALS). ALTW2012, 5–6 December 2012, Dunedin, in conjunction with ADCS.
ALTW2013, 4–6 December 2013, Brisbane, in conjunction with ADCS. ALTW2014, 26–28 November 2014, Melbourne, in conjunction with ADCS. Notes External links ALTA home page AFNLP home page OzCLO home page
Australasian Language Technology Association
https://en.wikipedia.org/wiki/Taspoglutide
Taspoglutide is a former experimental drug, a glucagon-like peptide-1 agonist (GLP-1 agonist), that was under investigation for treatment of type 2 diabetes and was being codeveloped by Ipsen and Roche. Initially, phase II trials reported it was effective and well tolerated. Of the eight planned phase III clinical trials of weekly taspoglutide (four against exenatide, sitagliptin, insulin glargine, and pioglitazone), at least five were active in 2009. Preliminary results in early 2010 were favourable. (At least one of the eight planned phase III trials had not started recruiting by end 2009.) In September 2010, Roche halted phase III clinical trials due to instances of serious hypersensitivity reactions and gastrointestinal side effects. No new trials have been registered since 2010. Chemistry Taspoglutide is the peptide with the sequence H2N-His-2-methyl-Ala-Glu-Gly-Thr-Phe-Thr-Ser-Asp-Val-Ser-Ser-Tyr-Leu-Glu-Gly-Gln-Ala-Ala-Lys-Glu-Phe-Ile-Ala-Trp-Leu-Val-Lys-2-methyl-Ala-Arg-CONH2. In other words, it is the 8-(2-methylalanine)-35-(2-methylalanine)-36-L-argininamide derivative of the amino acid sequence 7–36 of human glucagon-like peptide I. See also Incretin References
Taspoglutide
https://en.wikipedia.org/wiki/QST%20%28genetics%29
In quantitative genetics, QST is a statistic intended to measure the degree of genetic differentiation among populations with regard to a quantitative trait. It was developed by Ken Spitze in 1993. Its name reflects that QST was intended to be analogous to the fixation index for a single genetic locus (FST). QST is often compared with FST of neutral loci to test if variation in a quantitative trait is a result of divergent selection or genetic drift, an analysis known as QST–FST comparisons. Calculation of QST Equations QST represents the proportion of genetic variance among subpopulations, and its calculation is analogous to that of FST, developed by Sewall Wright. However, instead of using genetic differentiation at marker loci, QST is calculated from the additive genetic variance of a quantitative trait among subpopulations ($\sigma^{2}_{GB}$) and within subpopulations ($\sigma^{2}_{GW}$), where the total genetic variance across all populations is $\sigma^{2}_{T} = \sigma^{2}_{GB} + \sigma^{2}_{GW}$. QST can then be calculated with the following equation: $Q_{ST} = \frac{\sigma^{2}_{GB}}{\sigma^{2}_{GB} + 2\sigma^{2}_{GW}}$. Assumptions Calculation of QST is subject to several assumptions: populations must be in Hardy–Weinberg equilibrium, observed variation is assumed to be due to additive genetic effects only, selection and linkage disequilibrium are not present, and the subpopulations exist within an island model. QST–FST comparisons QST–FST analyses often involve culturing organisms in consistent environmental conditions, known as common garden experiments, and comparing the phenotypic variance to genetic variance. If QST is found to exceed FST, this is interpreted as evidence of divergent selection, because it indicates more differentiation in the trait than could be produced solely by genetic drift. If QST is less than FST, balancing selection is expected to be present. If the values of QST and FST are equivalent, the observed trait differentiation could be due to genetic drift.
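QST is conventionally computed as the among-population additive genetic variance divided by the among-population variance plus twice the within-population variance. A few lines of Python (with illustrative variance values, not from any study) make the behaviour at the extremes concrete:

```python
def q_st(var_among, var_within):
    """Q_ST from additive genetic variance of a trait among subpopulations
    (var_among) and within them (var_within):
    Q_ST = var_among / (var_among + 2 * var_within)."""
    return var_among / (var_among + 2.0 * var_within)

# No among-population differentiation  -> Q_ST = 0.
# All genetic variance among populations -> Q_ST = 1.
q_none = q_st(0.0, 5.0)
q_full = q_st(5.0, 0.0)
q_mixed = q_st(2.0, 4.0)  # 2 / (2 + 8)
```

In a QST–FST comparison, such a value would then be contrasted with FST estimated from neutral marker loci.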
Suitable comparison of QST and FST is subject to multiple ecological and evolutionary assumptions, and since the development of QST, multiple studies have examined the limitations and constrictions of QST–FST analyses. Leinonen et al. note that FST must be calculated with neutral loci; however, over-filtering of non-neutral loci can artificially reduce FST values. Cubry et al. found that QST is reduced in the presence of dominance, resulting in conservative estimates of divergent selection when QST is high, and inconclusive results of balancing selection when QST is low. Additionally, population structure can significantly impact QST–FST ratios. Stepping-stone models, which can generate more evolutionary noise than island models, are more likely to experience Type I errors. If a subset of populations act as sources, such as during invasion, weighting the genetic contributions of each population can increase detection of adaptation. In order to improve the precision of QST analyses, more populations (>20) should be included in analyses. QST applications in literature Multiple studies have incorporated QST to separate the effects of natural selection and genetic drift, and QST is often observed to exceed FST, indicating local adaptation. In an ecological restoration study, Bower and Aitken used QST to evaluate suitable populations for seed transfer of whitebark pine. They found high QST values in many populations, suggesting local adaptation for cold-adapted characteristics. During an assessment of the invasive species Brachypodium sylvaticum, Marchini et al. found divergence between native and invasive populations during initial establishment in the invaded range, but minimal divergence during range expansion. In an examination of the common snapdragon (Antirrhinum majus) along an elevation gradient, QST–FST analyses revealed different adaptation trends between two subspecies (A. m. pseudomajus and A. m. striatum). While both subspecies occur at all elevations, A. m.
striatum had high QST values for traits associated with altitude adaptation: plant height, number of branches, and internode length. A. m. pseudomajus had lower QST than FST values for germination time. See also F-statistics Quantitative genetics Conservation genetics Divergent selection Genetic diversity References
QST (genetics)
https://en.wikipedia.org/wiki/Middle%20lamella
The middle lamella is a layer that cements together the primary cell walls of two adjoining plant cells. It is the first-formed layer, deposited at the time of cytokinesis; the cell plate that forms during cell division itself develops into the middle lamella, or lamellum. The middle lamella is made up of calcium and magnesium pectates. In a mature plant cell it is the outermost layer of the cell wall. In plants, the pectins form a unified and continuous layer between adjacent cells. Frequently, it is difficult to distinguish the middle lamella from the primary wall, especially in cells that develop thick secondary walls. In such cases, the two adjacent primary walls and the middle lamella, and perhaps the first layer of the secondary wall of each cell, may be called a compound middle lamella. When the middle lamella is degraded by enzymes, as happens during fruit ripening and abscission, the adjacent cells will separate. See also Cell wall Plasma membrane References Telugu Akademi Hyderabad, "Intermediate First Year Botany"
Middle lamella
https://en.wikipedia.org/wiki/International%20Energy%20Forum
The International Energy Forum (IEF) is an inter-governmental, non-profit international organisation which aims to foster greater mutual understanding and awareness of common energy interests among its members. The 73 Member Countries of the Forum are signatories to the IEF Charter, which outlines the framework of the global energy dialogue through this inter-governmental arrangement. The IEF is the neutral facilitator of informal, open, informed and continuing global energy dialogue. Recognising their interdependence in the field of energy, the member countries of the IEF co-operate under the neutral framework of the Forum to foster greater mutual understanding and awareness of common energy interests in order to ensure global energy security. The IEF is unique in that participants not only include IEA and OPEC countries, but also key international actors such as Brazil, China, India, Mexico, Russia, and South Africa. The IEF member countries account for more than 90 percent of global oil and gas supply and demand. The Forum's biennial Ministerial Meetings are the world's largest gathering of Energy Ministers. The magnitude and diversity of this engagement is a testament to the position of the IEF as a neutral facilitator and honest broker of solutions in the common interest. Through the Forum and its associated events, IEF Ministers, their officials, energy industry executives, and other experts engage in a dialogue of increasing importance to global energy security. The IEF is promoted by a permanent Secretariat based in the Diplomatic Quarter of Riyadh, Saudi Arabia. The International Energy Forum also coordinates the Joint Organisations Data Initiative (JODI) which is a concrete outcome of the global energy dialogue. Mission statement The International Energy Forum aims to provide a platform for member-states to have access to open discussion and dialogue between countries that make up the global energy market. 
The Forum aims to gather all aspects of the energy market: producer, consumer, and transit states. The goal of the forum is to create better understanding of the market on all sides, and to increase mutual awareness and understanding among existing member states. Objectives The fundamental aims of the Forum are: Fostering greater mutual understanding and awareness of common energy interests among its Members; Promoting a better understanding of the benefits of stable and transparent energy markets for the health of the world economy, the security of energy supply and demand, and the expansion of global trade and investment in energy resources and technology; Identifying and promoting principles and guidelines that enhance energy market transparency, stability and sustainability; Narrowing the differences among energy producing, consuming and transit Member States on global energy issues and promoting a fuller understanding of their interdependency and the benefits to be gained from cooperation through dialogue among them, as well as between them and energy related industries; Promoting the study and exchange of views on the inter-relationships among energy, technology, environmental issues, economic growth and development; Building confidence and trust through improved information sharing among States; and Facilitating the collection, compilation and dissemination of data, information and analyses that contribute to greater market transparency, stability and sustainability. History The concept of a systematic producer-consumer dialogue emerged in the 1970s as a part of the general reorganisation in the global political and economic order, with energy markets transforming the structure within individual countries as well as power balances and relations between countries.
In the wake of the first Gulf War in the early 1990s, consumers and producers recognised their joint interest in the stability of the oil market, creating greater awareness of, and sensitivity toward, each other's interests. The Gulf War revealed the importance of a concerted and coordinated global response to an adverse supply shock. On October 1, 1990, at the United Nations General Assembly, Venezuelan President Carlos Andrés Pérez called for an urgent meeting of producers and consumers under the auspices of the United Nations to help the world face the growing uncertainties and politics of the oil market. With the backing of French President François Mitterrand and Norwegian Prime Minister Gro Harlem Brundtland, political support was gained for the initiation of a Ministerial Seminar of producers and consumers. Established in Paris in July 1991, the International Energy Forum was created to stabilise the global energy market after the 1970s energy crisis and the 1980s oil glut. One of the main priorities of the Forum was to bring together member states and private corporations to increase awareness of national and international interests and of the workings of the market, in order to avoid the instabilities of the previous two decades. Organizations Headquartered in the Diplomatic Quarter of Riyadh, Saudi Arabia, the International Energy Forum operates through a permanent Secretariat. The Forum is governed by an executive board composed of 31 representatives of ministries of the respective member states. The body is led by Secretary General Joe McMonigle of the United States, who was appointed on August 1, 2020. The role of the International Energy Forum Secretariat is to ensure that the Forum provides a neutral platform for the exchange of information and views regarding conflicts and the future of the energy industry.
Another goal of the executive board is to include both public and private entities in the global energy market in order to bring multiple viewpoints to the Forum. Additional duties performed by the Executive Board include organizing all of the Forum's activities, such as its meetings and summits, and coordinating the Forum's Programme of Work. The Joint Organisations Data Initiative (JODI) IEF Energy Ministers recognized that the exchange and free dissemination of energy market data helps to mitigate uncertainty by improving market transparency and facilitating well-informed decision-making that instills investor confidence, supports market stability and strengthens energy security. The Joint Organisations Data Initiative, coordinated by the IEF since 2005, relies on the combined efforts of the eight JODI partner organisations (the IEF together with APEC, EUROSTAT, GECF, IEA, OLADE, OPEC, and UNSD), more than 100 national administrations, and industry stakeholders to gather, verify and transmit the official data that populates JODI's two public databases, JODI-Oil and JODI-Gas, with key monthly supply and demand indicators. References External links Official IEF website Official IEF Fact Book International energy organizations
International Energy Forum
Engineering
1,254
642,778
https://en.wikipedia.org/wiki/Dungeon%20crawl
A dungeon crawl is a type of scenario in fantasy role-playing games (RPGs) in which heroes navigate a labyrinth environment (a "dungeon"), battling various monsters, avoiding traps, solving puzzles, and looting any treasure they may find. Video games and board games which predominantly feature dungeon crawl elements are considered to be a genre. Board games Dungeon crawling in board games dates to 1975, when Gary Gygax introduced Solo Dungeon Adventures. That year also saw the release of Dungeon!. Over the years, many games built on that concept. One of the most acclaimed board games of the late 2010s, Gloomhaven, is a dungeon crawler. Video games The first computer-based dungeon crawl was pedit5, developed in 1975 by Rusty Rutherford on the PLATO interactive education system based in Urbana, Illinois. Although this game was quickly deleted from the system, several more like it appeared, including dnd and Moria. Computer games and series from the 1980s, such as Rogue, The Bard's Tale, Cosmic Soldier, Dungeon Master, Gauntlet, Madō Monogatari, Megami Tensei, Might and Magic, The Legend of Zelda, Phantasy Star, Ultima, and Wizardry, helped set the standards of the genre. Their primitive graphics were conducive to this style, due to the need for repetitive tiles or similar-looking graphics to create effective mazes. Game historian Matt Barton described Telengard (1982) as a "pure dungeon crawler" for its lack of diversions, and noted its expansive dungeons as a "key selling point". Some dungeon crawlers from this era also employed action role-playing game combat, such as Dragon Slayer and The Tower of Druaga. Games that grew out of this style are also considered dungeon crawlers, in that the player is limited to the confines of the walls of the dungeon, but they still allow for complex systems around combat, enemy behavior, and loot, as well as the potential for multiplayer and online play.
Gauntlet, Diablo, The Binding of Isaac and Enter the Gungeon are examples of these dungeon crawlers. Variations on the dungeon crawl trope can be found in other genres. In the early 2010s there was a modest resurgence in their popularity, particularly in Japan, largely due to the success of the Etrian Odyssey series by Atlus. Instance dungeon In massively multiplayer online games, an instance is a special area, typically a dungeon or a restricted dungeon-like environment, that generates a new copy of the location for each group, or certain number of players, that enters the area. Instancing, the general term for the use of this technique, addresses several problems encountered by players in the shared spaces of virtual worlds, but it also sacrifices the social element of shared spaces and realistic immersion in that virtual world. Instanced areas also tend to be much smaller and more linear than the shared world. First-person party-based dungeon crawlers This subgenre consists of RPGs where the player leads a party of adventurers in first-person perspective, typically in a grid-based environment. Examples include the aforementioned Wizardry, Might and Magic and Bard's Tale series, as well as the Etrian Odyssey and Elminage series. Games of this type are also known as "blobbers", since the player moves the entire party around the playing field as a single unit, or "blob". Many "blobbers" are turn-based, such as the play-by-mail game Heroic Fantasy, but some, such as the Dungeon Master, Legend of Grimrock and Eye of the Beholder series, are played in real time. Early games in this genre lack an automap feature, forcing players to draw their own maps in order to keep track of their progress. Spatial puzzles are common, and players may have to, for instance, move a stone in one part of the level in order to open a gate in another part of the level.
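The instancing mechanic described above can be sketched as a toy data model: a server keeps one template of the area and hands each entering party its own deep copy, so no group's actions affect another's. This is an illustrative sketch under invented names, not any particular game's implementation.

```python
import copy

# Toy data model of dungeon "instancing": each party that enters receives
# its own private copy of the area template, so groups never interact
# inside it. (Illustrative sketch, not any particular game's code.)

DUNGEON_TEMPLATE = {"name": "Crypt", "monsters": ["skeleton", "ghoul"], "loot": ["gem"]}

class InstanceServer:
    def __init__(self, template):
        self.template = template
        self.instances = {}            # party_id -> that party's private copy

    def enter(self, party_id):
        """Create the party's own copy on first entry, reuse it afterwards."""
        if party_id not in self.instances:
            self.instances[party_id] = copy.deepcopy(self.template)
        return self.instances[party_id]

server = InstanceServer(DUNGEON_TEMPLATE)
a = server.enter("party_a")
b = server.enter("party_b")
a["monsters"].clear()                  # party A clears its instance...
print(b["monsters"])                   # ...party B's copy is untouched
```

The deep copy per party is what buys the isolation; the trade-off the article notes (lost social contact between groups) falls directly out of that design.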
See also Play-by-mail game Role-playing game terms Roshia's Gauntlet (Code of the Rats) References 20th-century neologisms Role-playing game terminology Video game genres Video game terminology
Dungeon crawl
Technology
852
43,276,293
https://en.wikipedia.org/wiki/Aurora%20Generator%20Test
Idaho National Laboratory ran the Aurora Generator Test in 2007 to demonstrate how a cyberattack could destroy physical components of the electric grid. The experiment used a computer program to rapidly open and close a diesel generator's circuit breakers out of phase from the rest of the grid, thereby subjecting the engine to abnormal torques and ultimately causing it to explode. This vulnerability is referred to as the Aurora Vulnerability. This vulnerability is especially a concern because most grid equipment supports using Modbus and other legacy communications protocols that were designed without security in mind. As such, they do not support authentication, confidentiality, or replay protection. This means that any attacker that can communicate with the device can control it and use the Aurora Vulnerability to destroy it. Experiment To prepare for the experiment, the researchers procured and installed a 2.25 MW (3000 horsepower) generator and connected it to the substation. They also needed access to a programmable digital relay or another device capable of controlling the breaker. Although such access can be through a mechanical or digital interface, in this case the latter was used. A generator unit consists of a diesel engine mechanically linked to an alternator. In many commercial-industrial settings, multiple generators need to operate together in tandem, in order to provide power to the desired load. A generator that is operating normally is synchronized with either the power grid or with one or more additional generators (for example in an "islanded" independent power network as might be used in a remote location or for emergency backup power). When generators are operating in synchronicity, effectively their alternators are magnetically locked together. In the Aurora experiment, the researchers used a cyberattack to open and close the breakers out of sync, in order to deliberately maximize the stress. 
Each time the breakers were closed, the torque induced in the alternator (as a result of the out-of-synchrony connection) caused the entire generator to bounce and shake. The generator used in the experiment was equipped with a resilient rubber rotating coupling (located between the diesel engine and the alternator, thus indirectly connecting the engine's steel crankshaft to the alternator's steel shaft). During the initial steps of the attack, black rubber pieces were ejected as the rotating coupling was incrementally destroyed (as a result of the extremely abnormal torques induced by the out-of-synchronization alternator on the diesel engine's crankshaft). The rotating rubber coupling was soon destroyed outright, whereupon the diesel engine itself was then quickly ripped apart, with parts sent flying off. Some parts of the generator landed as far as 80 feet away from the generator. In addition to the massive and obvious mechanical damage to the diesel engine itself, evidence of overheating of the alternator was later observed (upon subsequent disassembly of the unit). In this attack, the generator unit was destroyed in roughly three minutes. However, this process took three minutes only because the researchers assessed the damage from each iteration of the attack. A real attack could have destroyed the unit much more quickly. For example, a generator built without a rotating rubber coupling between the diesel engine and the alternator would experience the crankshaft-destroying abnormal forces in its diesel engine immediately, given the absence of a shock-absorbing material between these two rotating components. A generator unit assembled in this way could see its diesel engine ruined by a single out-of-synchrony connection of the alternator. The Aurora experiment was designated as unclassified, for official use only. 
On September 27, 2007, CNN published an article based on the information and video DHS released to them, and on July 3, 2014, DHS released many of the documents related to the experiment as part of an unrelated FOIA request. Vulnerability The Aurora vulnerability is caused by the out-of-sync closing of the protective relays. "A close, but imperfect, analogy would be to imagine the effect of shifting a car into Reverse while it is being driven on a highway, or the effect of revving the engine up while the car is in neutral and then shifting it into Drive." "The Aurora attack is designed to open a circuit breaker, wait for the system or generator to slip out of synchronism, and reclose the breaker, all before the protection system recognizes and responds to the attack... Traditional generator protection elements typically actuate and block reclosing in about 15 cycles. Many variables affect this time, and every system needs to be analyzed to determine its specific vulnerability to the Aurora attack... Although the main focus of the Aurora attack is the potential 15-cycle window of opportunity immediately after the target breaker is opened, the overriding issue is how fast the generator moves away from system synchronism." Potential impact The failure of even a single generator could cause widespread outages and possibly cascading failure of the entire power grid as occurred in the Northeast blackout of 2003. Additionally, even if there are no outages from the removal of a single component (N-1 resilience), there is a large window for a second attack or failure as it could take more than a year to replace a destroyed generator, because many generators and transformers are custom-built. Mitigations The Aurora vulnerability can be mitigated by preventing the out-of-phase opening and closing of the breakers. Some suggested methods include adding functionality in protective relays to ensure synchronism and adding a time delay for closing breakers. 
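The synchronism-check idea just described can be caricatured in a few lines. The thresholds below are illustrative assumptions, not standard relay settings; a real synchronism-check element (ANSI device 25) uses limits engineered for each installation.

```python
# Toy synchronism-check (ANSI device 25) permissive, as a sketch only.
# The limits below are illustrative assumptions, not standard relay settings.

def sync_check_permit(v_gen, v_grid, f_gen, f_grid, angle_deg,
                      max_v_diff=0.05,   # 5 % voltage mismatch (assumed)
                      max_f_diff=0.1,    # 0.1 Hz frequency mismatch (assumed)
                      max_angle=10.0):   # 10 degrees phase mismatch (assumed)
    """Permit a breaker close only when generator and grid are nearly
    synchronized; an Aurora-style out-of-phase reclose is blocked."""
    v_ok = abs(v_gen - v_grid) / v_grid <= max_v_diff
    f_ok = abs(f_gen - f_grid) <= max_f_diff
    a_ok = abs(angle_deg) <= max_angle
    return v_ok and f_ok and a_ok

# The ~15-cycle window cited above is 15 / 60 Hz = 0.25 s, so any
# permissive logic has to act well inside that interval.
print(sync_check_permit(13800, 13800, 60.3, 60.0, 120.0))  # out of phase: False
print(sync_check_permit(13800, 13790, 60.02, 60.0, 3.0))   # near sync: True
```

The point of the sketch is only the gating logic: the breaker close command is blocked unless voltage, frequency, and phase angle all agree, which is exactly the condition an Aurora-style attack violates.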
One mitigation technique is to add a synchronism-check function to all protective relays that potentially connect two systems together. To implement this, the function must prevent the relay from closing unless the voltage and frequency are within a pre-set range. Devices such as the IEEE 25 Sync-Check relay and IEEE 50 can be used to prevent out-of-phase opening and closing of the breakers. Diesel engines can also be equipped with independent sensors that detect abnormal vibration signatures. It is possible to design such a sensor to immediately trigger a complete shutdown of the generator upon detection of a single major excursion from the vibration signature of a normally operating engine. However, the damage from that single excursion might already be substantial, particularly if a resilient rubber coupling between the engine and the alternator is not present. Criticisms There was some discussion as to whether Aurora hardware mitigation devices (HMD) can cause other failures. In May 2011, Quanta Technology published an article that used RTDS (Real Time Digital Simulator) testing to examine the "performance of multiple commercial relay devices available" of Aurora HMDs. To quote: "The relays were subject to different test categories to find out if their performance is dependable when they need to operate, and secure in response to typical power system transients such as faults, power swing and load switching... In general, there were technical shortcomings in the protection scheme’s design that were identified and documented using the real time testing results. RTDS testing showed that there is, as yet, no single solution that can be widely applied to any case, and that can present the required reliability level." A presentation from Quanta Technology and Dominion succinctly stated in their reliability assessment "HMDs are not dependable, nor secure." 
Joe Weiss, a cybersecurity and control system professional, disputed the findings from this report and claimed that it has misled utilities. He wrote: "This report has done a great deal of damage by implying that the Aurora mitigation devices will cause grid issues. Several utilities have used the Quanta report as a basis for not installing any Aurora mitigation devices. Unfortunately, the report has several very questionable assumptions. They include applying initial conditions that the hardware mitigation was not designed to address such as slower developing faults, or off nominal grid frequencies. Existing protection will address “slower” developing faults and off nominal grid frequencies (<59 Hz or >61 Hz). The Aurora hardware mitigation devices are for the very fast out-of-phase condition faults that are currently gaps in protection (i.e., not protected by any other device) of the grid." Timeline On March 4, 2007, Idaho National Laboratory demonstrated the Aurora vulnerability. On June 21, 2007, NERC notified industry about the Aurora vulnerability. On September 27, 2007, CNN released a previously classified demonstration video of the Aurora attack on their homepage. On October 13, 2010, NERC released a recommendation to industry on the Aurora vulnerability. On July 3, 2014, the US Department of Homeland Security released 840 pages of documents related to Aurora in response to an unrelated FOIA request.
See also Brittle Power Electromagnetic pulse Energy security List of power outages New York City blackout of 1977 Programmable logic controller Resilient control systems Vulnerability of nuclear plants to attack When Technology Fails Metcalf sniper attack References External links http://www.langner.com/en/2014/07/09/aurora-revisited-by-its-original-project-lead/ http://www.powermag.com/what-you-need-to-know-and-dont-about-the-aurora-vulnerability/?printmode=1 http://breakingenergy.com/2013/09/13/the-all-too-real-cyberthreat-nobody-is-prepared-for-aurora/ https://www.computerworld.com/article/1589185/simulated-attack-points-to-vulnerable-u-s-power-infrastructure.html http://www.computerworld.com/s/article/9249642/New_docs_show_DHS_was_more_worried_about_critical_infrastructure_flaw_in_07_than_it_let_on http://threatpost.com/dhs-releases-hundreds-of-documents-on-wrong-aurora-project http://news.infracritical.com/pipermail/scadasec/2014-July/thread.html https://web.archive.org/web/20140903052039/http://www.thepresidency.org.70-32-102-141.pr6m-p7xj.accessdomain.com/sites/default/files/Grid%20Report%20July%2015%20First%20Edition.pdf (Page 30) http://www.infosecisland.com/blogview/20925-Misconceptions-about-Aurora-Why-Isnt-More-Being-Done.html https://www.sce.com/wps/wcm/connect/c5fe765f-f66b-4d37-8e9f-7911fd6e7f3b/AURORACustomerOutreach.pdf?MOD=AJPERES Cyberattacks Cyberwarfare Computer security Energy infrastructure Industrial computing Electrical grid
Aurora Generator Test
Technology,Engineering
2,286
9,111,167
https://en.wikipedia.org/wiki/William%20Rutherford%20%28mathematician%29
William Rutherford (1798–1871) was an English mathematician famous for his calculation of 208 digits of the mathematical constant π in 1841. Only the first 152 calculated digits were later found to be correct, but that broke the record of the time, which had been held by the Slovenian mathematician Jurij Vega since 1789 (first 126 digits correct). Rutherford used the following Machin-like formula: π/4 = 4 arctan(1/5) − arctan(1/70) + arctan(1/99). Life Rutherford was born about 1798. He was a master at a school at Woodburn from 1822 to 1825, when he went to Hawick, Roxburghshire, and he was later (1832–1837) a master at Corporation Academy, Berwick-on-Tweed. In 1838 Rutherford obtained a mathematical post at the Royal Military Academy, Woolwich. He was a member of the council of the Royal Astronomical Society from 1844 to 1847, and honorary secretary in 1845 and 1846. He was a friend of Wesley S. B. Woolhouse. Rutherford retired from his post at Woolwich about 1864, and died on 16 September 1871, at his residence, Tweed Cottage, Maryon Road, Charlton, at the age of seventy-three. Works Rutherford was the editor, with Stephen Fenwick and (for the first volume only) with Thomas Stephen Davies, of The Mathematician, vol. i. 1845, vol. ii. 1847, vol. iii. 1850, to which he contributed many papers. He sent problems, solutions and papers to The Ladies' Diary from 1822 to 1869, and also contributed to the Gentlemen's Diary. His mathematical studies were of a traditional type. Rutherford edited Simson's Euclid (1841, 1847); Charles Hutton's Course of Mathematics, for Woolwich, 1841, 1846, 1854, 1860; John Bonnycastle's Algebra, with William Galbraith, 1848; Thomas Carpenter's Arithmetic, 1852, 1859; Edwin Colman Tyson's Key to Bonnycastle's Arithmetic, 1860. He also published: Computation of π to 208 Decimal Places (correct to 153), Philosophical Transactions, 1841. Demonstration of Pascal's Theorem, Philosophical Magazine, 1843. Theorems in Co-ordinate Geometry, Philosophical Magazine, 1843.
Elementary Propositions in the Geometry of Co-ordinates (with Stephen Fenwick), 1843. Earthwork Tables (with Charles K. Sibley), 1847. Complete Solution of Numerical Equations, 1849. Arithmetic, Algebra, and Differential and Integral Calculus in Course of Mathematics for R.M.A. Woolwich, 1850. The Extension of π to 440 Places (Royal Society Proceedings, 1853, p. 274). On Statical Friction and Revetments, 1859. He also wrote mathematical pamphlets, including one on the solution of spherical triangles. See also History of π History of numerical approximations of π Napoleon's theorem Yasumasa Kanada References and notes Attribution 1798 births Pi-related people 19th-century English mathematicians 1871 deaths
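Rutherford's record computation can be reproduced today in a few lines. The sketch below assumes the Machin-like identity π/4 = 4·arctan(1/5) − arctan(1/70) + arctan(1/99), the formula commonly attributed to his 1841 calculation, and evaluates each arctangent by its Taylor series using Python's decimal module:

```python
from decimal import Decimal, getcontext

def arctan_inv(x: int, digits: int) -> Decimal:
    """arctan(1/x) via its Taylor series, to roughly `digits` decimals."""
    getcontext().prec = digits + 10           # working precision + guard digits
    eps = Decimal(10) ** -(digits + 5)
    power = Decimal(1) / x                    # (1/x)**n, starting at n = 1
    total = power
    x2 = x * x
    n, sign = 1, 1
    while power > eps:
        power /= x2
        n += 2
        sign = -sign
        total += sign * power / n
    return total

def rutherford_pi(digits: int) -> str:
    """pi via pi/4 = 4*arctan(1/5) - arctan(1/70) + arctan(1/99)."""
    pi = 4 * (4 * arctan_inv(5, digits)
              - arctan_inv(70, digits)
              + arctan_inv(99, digits))
    return str(pi)[:digits + 2]               # "3." plus `digits` decimals

print(rutherford_pi(50))
```

With digits=208 the same code reproduces the scope of Rutherford's hand calculation almost instantly; as the article notes, only the first 152 digits of his published value were correct.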
William Rutherford (mathematician)
Mathematics
592
8,288,415
https://en.wikipedia.org/wiki/Regional%20differentiation
In the field of developmental biology, regional differentiation is the process by which different areas are identified in the development of the early embryo. The process by which the cells become specified differs between organisms. Cell fate determination In terms of developmental commitment, a cell can either be specified or it can be determined. Specification is the first stage in differentiation. A cell that is specified can have its commitment reversed while the determined state is irreversible. There are two main types of specification: autonomous and conditional. A cell specified autonomously will develop into a specific fate based upon cytoplasmic determinants with no regard to the environment the cell is in. A cell specified conditionally will develop into a specific fate based upon other surrounding cells or morphogen gradients. Another type of specification is syncytial specification, characteristic of most insect classes. Specification in sea urchins uses both autonomous and conditional mechanisms to determine the anterior/posterior axis. The anterior/posterior axis lies along the animal/vegetal axis set up during cleavage. The micromeres induce the nearby tissue to become endoderm while the animal cells are specified to become ectoderm. The animal cells are not determined because the micromeres can induce the animal cells to also take on mesodermal and endodermal fates. It was observed that β-catenin was present in the nuclei at the vegetal pole of the blastula. Through a series of experiments, one study confirmed the role of β-catenin in the cell-autonomous specification of vegetal cell fates and the micromeres' inducing ability. Treatment with lithium chloride sufficient to vegetalize the embryo resulted in increased nuclear localization of β-catenin. Reduction of expression of β-catenin in the nucleus correlated with loss of vegetal cell fates. Transplants of micromeres lacking nuclear accumulation of β-catenin were unable to induce a second axis.
For the molecular mechanism of β-catenin and the micromeres, it was observed that Notch was present uniformly on the apical surface of the early blastula but was lost in the secondary mesenchyme cells (SMCs) during late blastula and enriched in the presumptive endodermal cells in late blastula. Notch is both necessary and sufficient for determination of the SMCs. The micromeres express the ligand for Notch, Delta, on their surface to induce the formation of SMCs. The high nuclear levels of β-catenin result from the high accumulation of the Disheveled protein at the vegetal pole of the egg. Disheveled inactivates GSK-3 and prevents the phosphorylation of β-catenin. This allows β-catenin to escape degradation and enter the nucleus. The only important role of β-catenin is to activate the transcription of the gene Pmar1. This gene represses a repressor to allow micromere genes to be expressed. The aboral/oral axis (analogous to the dorsal/ventral axes in other animals) is specified by a nodal homolog. This nodal homolog was localized on the future oral side of the embryo. Experiments confirmed that nodal is both necessary and sufficient to promote development of the oral fate. Nodal also has a role in left/right axis formation. Tunicates Tunicates have been a popular choice for the study of regional specification because tunicates were the first organism in which autonomous specification was discovered and tunicates are evolutionarily related to vertebrates. Early observations in tunicates led to the identification of the yellow crescent (also called the myoplasm). This cytoplasm was segregated to future muscle cells and if transplanted could induce the formation of muscle cells. The cytoplasmic determinant macho-1 was isolated as the necessary and sufficient factor for muscle cell formation. Similar to sea urchins, the accumulation of β-catenin in the nuclei was identified as both necessary and sufficient to induce endoderm.
Two more cell fates are determined by conditional specification. The endoderm sends a fibroblast growth factor (FGF) signal to specify the notochord and the mesenchyme fates. Anterior cells respond to FGF to become notochord while posterior cells (identified by the presence of macho-1) respond to FGF to become mesenchyme. The cytoplasm of the egg not only determines cell fate, but also determines the dorsal/ventral axis. The cytoplasm in the vegetal pole specifies this axis and removing this cytoplasm leads to a loss of axis information. The yellow cytoplasm specifies the anterior/posterior axis. When the yellow cytoplasm moves to the posterior of the egg to become posterior vegetal cytoplasm (PVC), the anterior/posterior axis is specified. Removal of the PVC leads to a loss of the axis while transplantation to the anterior reverses the axis. C. elegans At the two-cell stage, the embryo of the nematode C. elegans exhibits mosaic behavior. There are two cells, the P1 cell and the AB cell. The P1 cell was able to make all of its fated cells while the AB cell could only make a portion of the cells it was fated to produce. Thus, the first division gives the autonomous specification of the two cells, but the AB cell requires a conditional mechanism to produce all of its fated cells. The AB lineage gives rise to neurons, skin, and pharynx. The P1 cell divides into EMS and P2. The EMS cell divides into MS and E. The MS lineage gives rise to pharynx, muscle, and neurons. The E lineage gives rise to intestines. The P2 cell divides into P3 and C founder cells. The C founder cells give rise to muscle, skin, and neurons. The P3 cell divides into P4 and D founder cells. The D founder cells give rise to muscle while the P4 lineage gives rise to the germ line. Axis specification The anterior/posterior axis is specified by the sperm at the posterior side. At the two-cell stage, the anterior cell is the AB cell while the posterior cell is the P1 cell.
The dorsal/ventral axis of the animal is set by a random position of cells during the four-cell stage of the embryo. The dorsal cell is the ABp cell while the ventral cell is the EMS cell. Localization of cytoplasmic determinants The autonomous specification of C. elegans arises from different cytoplasmic determinants. PAR proteins are responsible for partitioning these determinants in the early embryo. These proteins are located at the periphery of the zygote and play a role in intracellular signaling. The current model for the function of these proteins is that they cause local changes in the cytoplasm that lead to different protein accumulation in the posterior vs. the anterior. Mex-5 accumulates in the anterior while PIE-1 and P granules (see below) accumulate in the posterior. Specification of germ line P granules were identified as the cytoplasmic determinants. While uniformly present at fertilization, these granules become localized in the posterior P1 cell prior to the first division. These granules are further localized between each division into P cells (e.g., P2, P3) until after the fourth division, when they are put into the P4 cell, which becomes the germ line. Specification of EMS and P1 cells Other proteins that are likely to function as localized cytoplasmic determinants in the P1 lineage include SKN-1, PIE-1 and PAL-1. SKN-1 is a cytoplasmic determinant that is localized in the P1 cell lineage and determines EMS cell fate. PIE-1 is localized in the P2 cell lineage and is a general repressor of transcription. SKN-1 is repressed in P2 cells and is unable to specify an EMS fate in these cells. The repressive activity of PIE-1 is required to keep the germ line lineage from differentiating. Specification of C and D founder cells PAL-1 is required to specify the fates of the C and D founder cells (derived from the P2 lineage). PAL-1, however, is present in both EMS and P2. Normally, PAL-1 activity is repressed in EMS by SKN-1 but not repressed in P2.
Both C and D founder cells depend on PAL-1 but there is another factor that is required to distinguish C from D. Specification of E lineage The specification of the E lineage depends on signals from P2 to the EMS cell. Components of Wnt signaling were involved and were named mom genes. mom-2 is a member of the Wnt family of proteins (i.e. the signal) and mom-5 is a member of the frizzled family of proteins (i.e. the receptor). Specification of ABa and ABp The specification of ABa and ABp depends on another cell-cell signaling event. A difference between these two cell types is that ABa gives rise to anterior pharynx while ABp does not contribute to pharynx. A signal from MS at the 12-cell stage induces pharynx in ABa progeny cells but not in ABp progeny. Signals from the P2 cells prevent the ABp from forming pharynx. This signal from P2 was discovered to be APX-1, a member of the Delta family of proteins. These proteins are known to be ligands for the Notch protein. GLP-1, a Notch protein, is also required for specification of the fate of ABp. Drosophila Anterior/posterior axis The anterior/posterior patterning of Drosophila comes from three maternal groups of genes. The anterior group patterns the head and thoracic segments. The posterior group patterns the abdominal segments and the terminal group patterns the anterior and posterior terminal regions called the terminalia (the acron in the anterior and the telson in the posterior). The anterior group genes include bicoid. Bicoid functions as a graded morphogen transcription factor that localizes to the nucleus. The head of the embryo forms at the point of highest concentration of bicoid and the anterior pattern depends upon the concentration of bicoid. Bicoid works as a transcriptional activator of the gap genes hunchback (hb), buttonhead (btd), empty spiracles (ems), and orthodenticle (otd) while also acting to repress translation of caudal.
Differing affinities for bicoid in the promoters of the genes it activates allow for this concentration-dependent activation: hb has a high affinity for bicoid and so is activated even at low bicoid concentrations, while otd has a lower affinity and is activated only at the high concentrations found near the anterior. Two other anterior group genes, swallow and exuperantia, play a role in localizing bicoid mRNA to the anterior, directed by its 3' untranslated region (3'UTR); the microtubule cytoskeleton also plays a role in this localization.

The posterior group genes include nanos. Similar to bicoid, nanos is localized to the posterior pole as a graded morphogen. The only role of nanos is to repress the maternally transcribed hunchback mRNA in the posterior; another protein, pumilio, is required for nanos to repress hunchback. Other posterior proteins, oskar (which tethers nanos mRNA), tudor, vasa, and valois, localize the germ line determinants and nanos to the posterior.

In contrast to the anterior and the posterior, the positional information for the terminalia comes from the follicle cells of the ovary. The terminalia are specified through the action of the Torso receptor tyrosine kinase. The follicle cells secrete Torso-like into the perivitelline space only at the poles. Torso-like cleaves the pro-peptide Trunk, which appears to be the Torso ligand. Trunk activates Torso and triggers a signal transduction cascade that represses the transcriptional repressor Groucho, which in turn causes the activation of the terminal gap genes tailless and huckebein.

Segmentation and homeotic genes
The patterning from the maternal genes works to influence the expression of the segmentation genes, embryonically expressed genes that specify the number, size and polarity of the segments. The gap genes are directly influenced by the maternal genes and are expressed in local and overlapping regions along the anterior/posterior axis.
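The affinity-based threshold logic described above, in which target genes with different promoter affinities are switched on at different positions along the bicoid gradient, can be sketched numerically. This is an illustrative model only, not data from the article: the exponential gradient, Hill exponent, and Kd values are arbitrary numbers chosen to reproduce the qualitative behavior.

```python
# Illustrative sketch (assumed parameters, not measured values):
# concentration-dependent activation of gap genes by the bicoid
# gradient, modeled with Hill functions.
import math

def bicoid(x, b0=100.0, length_scale=0.2):
    """Exponential anterior-to-posterior bicoid gradient; x in [0, 1]."""
    return b0 * math.exp(-x / length_scale)

def activation(conc, kd, n=4):
    """Hill-function occupancy of a target promoter with affinity kd."""
    return conc**n / (kd**n + conc**n)

# hb binds bicoid with high affinity (low Kd), otd with low affinity
# (high Kd), so hb stays on further toward the posterior while otd is
# confined to the anterior. Kd values here are made up.
KD_HB, KD_OTD = 5.0, 40.0

for x in (0.0, 0.3, 0.6):
    b = bicoid(x)
    print(f"x={x:.1f}  bicoid={b:6.1f}  "
          f"hb={activation(b, KD_HB):.2f}  otd={activation(b, KD_OTD):.2f}")
```

Running the loop shows otd activation collapsing a short distance from the anterior pole while hb remains substantially active, which is the threshold readout the prose describes.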
These genes are influenced not only by the maternal genes but also by epistatic interactions among the other gap genes. The gap genes in turn work to activate the pair-rule genes. Each pair-rule gene is expressed in seven stripes as a result of the combined effect of the gap genes and interactions between the other pair-rule genes. The pair-rule genes can be divided into two classes: the primary pair-rule genes and the secondary pair-rule genes. The primary pair-rule genes are able to influence the secondary pair-rule genes, but not vice versa. The molecular mechanism regulating the primary pair-rule genes was understood through a complex analysis of the regulation of even-skipped: both positive and negative regulatory inputs from maternal and gap genes, in a unique combination of transcription factors, work to express even-skipped in different parts of the embryo, and the same gap gene can act positively on one stripe but negatively on another.

The expression of the pair-rule genes translates into the expression of the segment polarity genes in 14 stripes. The role of the segment polarity genes is to define the boundaries and the polarity of the segments. The means by which the genes accomplish this is believed to involve a graded distribution of wingless and hedgehog, or a cascade of signals initiated by these proteins. Unlike the gap and pair-rule genes, the segment polarity genes function within cells rather than within the syncytium; thus, segment polarity genes influence patterning through signaling rather than autonomously. Also, the gap and pair-rule genes are expressed transiently, while segment polarity gene expression is maintained throughout development by a feedback loop involving hedgehog and wingless. While the segmentation genes can specify the number, size, and polarity of segments, the homeotic genes specify the identity of each segment.
The homeotic genes are activated by the gap genes and pair-rule genes. The Antennapedia complex and the bithorax complex on the third chromosome contain the major homeotic genes required for specifying segmental identity (strictly, parasegmental identity). These genes encode transcription factors and are expressed in overlapping regions that correlate with their position along the chromosome. These transcription factors regulate other transcription factors, cell surface molecules with roles in cell adhesion, and other cell signals. Later during development, homeotic genes are expressed in the nervous system in a similar anterior/posterior pattern. Homeotic gene expression is maintained throughout development through modification of the condensation state of their chromatin: Polycomb genes maintain the chromatin in an inactive conformation, while trithorax genes maintain it in an active conformation.

All homeotic genes share a segment of protein with a similar sequence and structure called the homeodomain (the corresponding DNA sequence is called the homeobox). This region of the homeotic proteins binds DNA. The domain was found in other developmental regulatory proteins, such as bicoid, as well as in other animals including humans. Molecular mapping revealed that the HOX gene cluster has been inherited intact from a common ancestor of flies and mammals, which indicates that it is a fundamental developmental regulatory system.

Dorsal/ventral axis
The maternal protein Dorsal functions like a graded morphogen to set the ventral side of the embryo (the name comes from mutations which led to a dorsalized phenotype). Dorsal is like bicoid in that it is a nuclear protein; however, unlike bicoid, Dorsal is uniformly distributed throughout the embryo, and the concentration difference arises from differential nuclear transport. The mechanism by which Dorsal becomes differentially localized to the nuclei occurs in three steps. The first step happens on the dorsal side of the embryo.
The nucleus in the oocyte moves along a microtubule track to one side of the oocyte. This side sends a signal, Gurken, to the Torpedo receptors on the follicle cells. The Torpedo receptor is found in all follicle cells; however, the Gurken signal is present only on the anterior dorsal side of the oocyte. The follicle cells change shape and synthetic properties to distinguish the dorsal side from the ventral side; in particular, the dorsal follicle cells become unable to produce the Pipe protein required for step two.

The second step is a signal from the ventral follicle cells back to the oocyte. This signal acts after the egg has left the follicle cells, so it is stored in the perivitelline space. The follicle cells secrete Windbeutel, Nudel, and Pipe, which create a protease-activating complex; because the dorsal follicle cells do not express Pipe, they cannot create this complex. Later, the embryo secretes three inactive proteases (Gastrulation defective, Snake, and Easter) and an inactive ligand (Spätzle) into the perivitelline space. These proteases are activated by the complex and cleave Spätzle into an active form, which is distributed in a ventral-to-dorsal gradient. Toll, a transmembrane receptor for Spätzle, transduces the graded Spätzle signal through the cytoplasm, leading to phosphorylation of Cactus. Once phosphorylated, Cactus no longer binds Dorsal, leaving Dorsal free to enter the nucleus; the amount of released Dorsal depends on the amount of Spätzle protein present.

The third step is the regional expression of the zygotic genes decapentaplegic (dpp), zerknüllt, tolloid, twist, snail, and rhomboid in response to nuclear Dorsal. High levels of Dorsal are required to turn on transcription of twist and snail, while low levels suffice to activate transcription of rhomboid. Dorsal represses the transcription of zerknüllt, tolloid, and dpp. The zygotic genes also interact with each other to restrict their domains of expression.
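The threshold readout of the nuclear Dorsal gradient described above (high levels turn on twist and snail, low levels suffice for rhomboid, and Dorsal represses dpp, zerknüllt and tolloid) can be sketched as a simple lookup. The concentration scale and the two threshold values are invented for illustration; only the ordering of the responses comes from the text.

```python
# Illustrative sketch (assumed thresholds, not measured values):
# threshold readout of nuclear Dorsal. Ventral nuclei see high Dorsal,
# dorsal nuclei see none.

def target_genes(dorsal):
    """Zygotic genes active at a given nuclear Dorsal level
    (arbitrary units; 1.0 = ventral maximum, 0.0 = dorsal)."""
    active = set()
    if dorsal >= 0.8:            # only high levels activate twist/snail
        active |= {"twist", "snail"}
    if dorsal >= 0.2:            # per the text, low levels suffice
        active.add("rhomboid")
    if dorsal < 0.2:             # Dorsal absent: repression is lifted
        active |= {"dpp", "zerknullt", "tolloid"}
    return active

for level, side in [(1.0, "ventral"), (0.4, "lateral"), (0.0, "dorsal")]:
    print(side, sorted(target_genes(level)))
```

The three printed rows correspond to the ventral (mesodermal), lateral, and dorsal expression domains the prose describes.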
Amphibians

Dorsal/ventral axis and organizer
Between fertilization and the first cleavage in Xenopus embryos, the cortical cytoplasm of the zygote rotates relative to the central cytoplasm by about 30 degrees to uncover (in some species) a gray crescent in the marginal or middle region of the embryo. The cortical rotation is powered by microtubule motors moving along parallel arrays of cortical microtubules. This gray crescent marks the future dorsal side of the embryo, and blocking the rotation prevents formation of the dorsal/ventral axis. By the late blastula stage, Xenopus embryos have a clear dorsal/ventral axis.

In the early gastrula, most of the tissue in the embryo is not determined. The one exception is the anterior portion of the dorsal blastopore lip: when this tissue was transplanted to another part of the embryo, it developed as it normally would, and in addition it was able to induce the formation of another dorsal/ventral axis. Hans Spemann named this region the organizer, and the induction of the dorsal axis the primary induction.

The organizer is itself induced from a dorsal vegetal region called the Nieuwkoop center. Different regions of the blastula-stage embryo have different developmental potentials: the vegetal cap can give rise only to endodermal cell types and the animal cap only to ectodermal cell types, whereas the marginal zone can give rise to most structures in the embryo, including mesoderm. A series of experiments by Pieter Nieuwkoop showed that if the marginal zone is removed and the animal and vegetal caps placed next to each other, the mesoderm comes from the animal cap and the dorsal tissues are always adjacent to the dorsal vegetal cells. Thus, this dorsal vegetal region, named the Nieuwkoop center, was able to induce the formation of the organizer. Twinning assays identified Wnt proteins as molecules from the Nieuwkoop center that could specify the dorsal/ventral axis.
In twinning assays, molecules are injected into the ventral blastomere of a four-cell stage embryo; if the injected molecule can specify the dorsal axis, dorsal structures will form on the ventral side. Wnt proteins turned out not to be necessary to specify the axis, but examination of other proteins in the Wnt pathway led to the discovery that β-catenin is: β-catenin is present in the nuclei on the dorsal side but not on the ventral side. β-catenin levels are regulated by GSK-3, which, when active, phosphorylates free β-catenin and targets it for degradation. Two molecules that might regulate GSK-3 are GBP (GSK-3 Binding Protein) and Dishevelled, and the current model is that these act together to inhibit GSK-3 activity. Dishevelled is able to induce a secondary axis when overexpressed and is present at higher levels on the dorsal side after cortical rotation; depletion of Dishevelled, however, has no effect. GBP has an effect both when depleted and when overexpressed. More recent evidence showed that Xwnt11, a Wnt molecule expressed in Xenopus, is both necessary and sufficient for dorsal axis formation.

Mesoderm formation depends on two signals: one for the ventral portion and one for the dorsal portion. Animal cap assays were used to determine the molecular signals from the vegetal cap that are able to induce the animal cap to form mesoderm. In an animal cap assay, molecules of interest are either applied in the medium in which the cap is grown or injected as mRNA into an early embryo. These experiments identified a group of molecules, the transforming growth factor-β (TGF-β) family. Using dominant negative forms of TGF-β, early experiments could identify only the family of molecules involved, not the specific member. More recent experiments have identified the Xenopus nodal-related proteins (Xnr-1, Xnr-2, and Xnr-4) as the mesoderm-inducing signals.
Inhibitors of these ligands prevent mesoderm formation, and the proteins show a graded distribution along the dorsal/ventral axis. Vegetally localized mRNAs, VegT and possibly Vg1, are involved in inducing the endoderm, and it is hypothesized that VegT also activates the Xnr-1,2,4 proteins. VegT acts as a transcription factor to activate genes specifying endodermal fate, while Vg1 acts as a paracrine factor.

β-catenin in the nucleus activates two transcription factors, siamois and twin. β-catenin also acts synergistically with VegT to produce high levels of Xnr-1,2,4. Siamois in turn acts synergistically with Xnr-1,2,4 to activate high levels of transcription factors such as goosecoid in the organizer; areas of the embryo with lower levels of Xnr-1,2,4 will express ventral or lateral mesoderm. Thus, nuclear β-catenin works synergistically with the mesodermal cell fate signal to create the signaling activity of the Nieuwkoop center, inducing the formation of the organizer in the dorsal mesoderm.

Organizer function
Two classes of genes are responsible for the organizer's activity: transcription factors and secreted proteins. Goosecoid (which shares homology with both bicoid and gooseberry) is the first known gene to be expressed in the organizer and is both necessary and sufficient to specify a secondary axis. The organizer induces ventral mesoderm to become lateral mesoderm, induces the ectoderm to form neural tissue, and induces dorsal structures in the endoderm. The mechanism behind these inductions is inhibition of the bone morphogenetic protein 4 (BMP-4) signaling pathway that otherwise ventralizes the embryo; in the absence of BMP-4 signals, ectoderm reverts to its default state of neural tissue. Four of the secreted molecules from the organizer, chordin, noggin, follistatin and Xenopus nodal-related-3 (Xnr-3), directly interact with BMP-4 and block its ability to bind to its receptor.
Thus, these molecules create a gradient of BMP-4 activity along the dorsal/ventral axis of the mesoderm. BMP-4 acts mainly in the trunk and tail region of the embryo, while a different set of signals works in the head region. Xwnt-8 is expressed throughout the ventral and lateral mesoderm. The endomesoderm (which can give rise to either endoderm or mesoderm) at the leading edge of the archenteron (the future anterior) secretes three factors: Cerberus, Dickkopf, and Frzb. While Cerberus and Frzb bind directly to Xwnt-8 to prevent it from binding to its receptor, Cerberus is also capable of binding BMP-4 and Xnr1. Furthermore, Dickkopf binds to LRP-5, a transmembrane protein important for the Xwnt-8 signaling pathway, leading to endocytosis of LRP-5 and eventually to inhibition of the Xwnt-8 pathway.

Anterior/posterior axis
The anterior/posterior patterning of the embryo occurs sometime before or during gastrulation. The first cells to involute have anterior-inducing activity while the last cells have posterior-inducing activity. The anterior-inducing ability comes from the Xwnt-8-antagonizing signals Cerberus, Dickkopf and Frzb discussed above. Anterior head development also requires the function of IGFs (insulin-like growth factors) expressed in the dorsal midline and the anterior neural tube; IGFs are believed to function by activating a signal transduction cascade that interferes with and inhibits both Wnt signaling and BMP signaling. In the posterior, two candidates for posteriorizing signals are eFGF, a fibroblast growth factor homologue, and retinoic acid.

Fish
The basis for axis formation in zebrafish parallels what is known in amphibians. The embryonic shield has the same function as the dorsal lip of the blastopore and acts as the organizer: when transplanted, it is able to organize a secondary axis, and removing it prevents the formation of dorsal structures. β-catenin also has a role similar to its role in amphibians.
It accumulates in the nucleus only on the dorsal side, and ventrally misexpressed β-catenin induces a secondary axis. It activates the expression of Squint (a Nodal-related signaling protein, also known as Ndr1) and Bozozok (a homeodomain transcription factor similar to Siamois), which act together to activate goosecoid in the embryonic shield. As in Xenopus, mesoderm induction involves two signals: one from the vegetal pole to induce ventral mesoderm, and one from the dorsal vegetal cells, the equivalent of the Nieuwkoop center, to induce dorsal mesoderm.

The signals from the organizer also parallel those from amphibians. Noggin and the chordin homologue Chordino bind to a BMP family member, BMP2B, to block it from ventralizing the embryo, and Dickkopf binds to a Wnt homologue, Wnt8, to block it from ventralizing and posteriorizing the embryo. A third pathway is regulated by β-catenin in fish: β-catenin activates the transcription factor Stat3, which coordinates cell movements during gastrulation and contributes to establishing planar polarity.

Birds
The dorsal/ventral axis is defined in chick embryos by the orientation of the cells with respect to the yolk: ventral is down, toward the yolk, and dorsal is up. The axis is established by a pH difference between the "inside" and "outside" of the blastoderm, that is, between the subgerminal space (pH 6.5) and the albumin outside (pH 9.5). The anterior/posterior axis is defined during the initial tilting of the embryo while the eggshell is being deposited. The egg is constantly rotated in a consistent direction, and there is a partial stratification of the yolk: the lighter yolk components come to lie near one end of the blastoderm, which will become the future posterior. The molecular basis of this posterior determination is not known; however, the accumulation of cells eventually results in the posterior marginal zone (PMZ).
The PMZ is the equivalent of the Nieuwkoop center in that its role is to induce Hensen's node. Transplantation of the PMZ induces a primitive streak, but the PMZ does not itself contribute to the streak. Like the Nieuwkoop center, the PMZ expresses both Vg1 and nuclear-localized β-catenin.

Hensen's node is the equivalent of the organizer: transplanting it results in the formation of a secondary axis. Hensen's node is the site where gastrulation begins, and it becomes the dorsal mesoderm. It forms, under induction by the PMZ, from the anterior part of the PMZ called Koller's sickle; when the primitive streak forms, these cells expand out to become Hensen's node. The cells express goosecoid, consistent with their role as the organizer.

The function of the organizer in chick embryos is similar to that in amphibians and fish, with some differences. As in amphibians and fish, the organizer secretes Chordin, Noggin and Nodal proteins that antagonize BMP signaling and dorsalize the embryo. Neural induction, however, does not rely entirely on inhibiting BMP signaling: overexpression of BMP antagonists is not enough to induce the formation of neurons, nor does overexpressing BMP block their formation. While the whole story of neural induction is not known, FGFs seem to play a role in both mesoderm and neural induction. Anterior/posterior patterning of the embryo requires signals like cerberus from the hypoblast, as well as spatially regulated accumulation of retinoic acid to activate the 3' Hox genes in the posterior neuroectoderm (hindbrain and spinal cord).

Mammals
The earliest specification in mouse embryos occurs between the trophoblast and the inner cell mass, arising from the outer polar cells and the inner apolar cells respectively. These two groups become specified at the eight-cell stage during compaction, but do not become determined until they reach the 64-cell stage.
If an apolar cell is transplanted to the outside during the 8-32 cell stage, that cell will develop as a trophoblast cell.

The anterior/posterior axis in the mouse embryo is specified by two signaling centers. The mouse embryo forms an egg cylinder, with the epiblast forming a cup at the distal end of that cylinder. The epiblast is surrounded by the visceral endoderm, the equivalent of the hypoblast of humans and chicks. Signals for the anterior/posterior axis come from the primitive node. The other important site is the anterior visceral endoderm (AVE). The AVE lies anterior to the node's most anterior position, just under the epiblast, in the region that will later be occupied by migrating endomesoderm forming head mesoderm and foregut endoderm. The AVE interacts with the node to specify the most anterior structures: the node alone is able to form a normal trunk, but requires signals from the AVE to form a head.

The discovery of the homeobox in Drosophila and its conservation in other animals has led to advances in understanding anterior/posterior patterning. Most of the Hox genes in mammals show an expression pattern that parallels that of the homeotic genes in flies. In mammals there are four copies of the Hox gene cluster. Each set of Hox genes is paralogous to the others (Hox1a is a paralogue of Hox1b, etc.). These paralogues show overlapping expression patterns and could act redundantly; however, double mutations in paralogous genes can also act synergistically, indicating that the genes must work together for function.

See also
Pattern formation
https://en.wikipedia.org/wiki/Algebraically%20closed%20group
In group theory, a group A is algebraically closed if any finite set of equations and inequations that are applicable to A have a solution in A without needing a group extension. This notion will be made precise in the formal definition below.

Informal discussion
Suppose we wished to find an element x of a group G satisfying the conditions (equations and inequations):

x^2 = 1
x^3 = 1
x ≠ 1

Then it is easy to see that this is impossible, because the first two equations imply x = 1. In this case we say the set of conditions is inconsistent with G. (In fact this set of conditions is inconsistent with any group whatsoever.)

Now suppose G is the group of order two, with elements 1 and a and the multiplication table:

  · | 1  a
  1 | 1  a
  a | a  1

Then the conditions:

x^2 = 1
x a^{-1} = 1

have a solution in G, namely x = a. However, the conditions:

x^4 = 1
x^2 a^{-1} = 1

do not have a solution in G, as can easily be checked. However, if we extend G to the group H of order four, with elements 1, a, b, c and the multiplication table:

  · | 1  a  b  c
  1 | 1  a  b  c
  a | a  1  c  b
  b | b  c  a  1
  c | c  b  1  a

then the conditions have two solutions, namely x = b and x = c.

Thus there are three possibilities regarding such conditions:
They may be inconsistent with G and have no solution in any extension of G.
They may have a solution in G.
They may have no solution in G but nevertheless have a solution in some extension of G.

It is reasonable to ask whether there are any groups A such that whenever a set of conditions like these has a solution at all, it has a solution in A itself. The answer turns out to be "yes", and we call such groups algebraically closed groups.

Formal definition
We first need some preliminary ideas. If G is a group and F is the free group on countably many generators, then by a finite set of equations and inequations with coefficients in G we mean a pair of subsets E and I of the free product F ∗ G. This formalizes the notion of a set of equations and inequations consisting of variables and elements of G.
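The three possibilities in the informal discussion can be checked by brute force in small groups. The sketch below assumes (as an illustrative choice, not something fixed by the article) that the two groups in question are the cyclic groups of orders 2 and 4, written additively as Z/n, with a taken to be the unique element of order 2: the condition x^2 = a then has no solution in Z/2 but two solutions in Z/4.

```python
# Brute-force search (illustrative, not from the article) for solutions
# of group-theoretic conditions in small cyclic groups Z/n, written
# additively: x^2 becomes 2x mod n, and the identity 1 becomes 0.

def solutions(n, conds):
    """Elements x of Z/n satisfying every condition; each condition
    is a predicate taking (x, n)."""
    return [x for x in range(n) if all(c(x, n) for c in conds)]

# "a" is the unique element of order 2: a = n // 2.
sq_is_identity = lambda x, n: (2 * x) % n == 0          # x^2 = 1
sq_is_a        = lambda x, n: (2 * x) % n == n // 2     # x^2 = a

print(solutions(2, [sq_is_identity]))  # every element of Z/2 squares to 1
print(solutions(2, [sq_is_a]))         # x^2 = a: no solution in Z/2
print(solutions(4, [sq_is_a]))         # two solutions appear in Z/4
```

In the extension Z/4 the previously unsolvable condition x^2 = a gains exactly two solutions, mirroring the pair x = b, x = c of the informal discussion.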
The set E represents equations of the form w(x_1, …, x_n; g_1, …, g_m) = 1, where w is a word in the variables x_i and the coefficients g_j from G. The set I represents inequations of the form v(x_1, …, x_n; g_1, …, g_m) ≠ 1.

By a solution in G to this finite set of equations and inequations, we mean a homomorphism f : F → G such that f̃(e) = 1 for all e ∈ E and f̃(i) ≠ 1 for all i ∈ I, where f̃ is the unique homomorphism f̃ : F ∗ G → G that equals f on F and is the identity on G. This formalizes the idea of substituting elements of G for the variables to get true identities and inidentities.

We say the finite set of equations and inequations is consistent with G if we can solve it in a "bigger" group. More formally: the equations and inequations are consistent with G if there is a group H and an embedding h : G → H such that the finite set of equations and inequations h̃(E) and h̃(I) has a solution in H, where h̃ is the unique homomorphism h̃ : F ∗ G → F ∗ H that equals h on G and is the identity on F.

Now we formally define the group A to be algebraically closed if every finite set of equations and inequations that has coefficients in A and is consistent with A has a solution in A.

Known results
It is difficult to give concrete examples of algebraically closed groups, as the following results indicate:
Every countable group can be embedded in a countable algebraically closed group.
Every algebraically closed group is simple.
No algebraically closed group is finitely generated.
An algebraically closed group cannot be recursively presented.
A finitely generated group has a solvable word problem if and only if it can be embedded in every algebraically closed group.

The proofs of these results are in general very complex. However, a sketch of the proof that a countable group C can be embedded in an algebraically closed group follows. First we embed C in a countable group C_1 with the property that every finite set of equations and inequations with coefficients in C that is consistent with C has a solution in C_1, as follows: there are only countably many finite sets of equations and inequations with coefficients in C; fix an enumeration S_0, S_1, S_2, … of them.
Define groups D_0, D_1, D_2, … inductively by:

D_0 = C
D_{i+1} = D_i, if S_i is not consistent with D_i; otherwise, D_{i+1} is a group extending D_i in which S_i has a solution.

Now let:

C_1 = the union of the chain D_0 ⊆ D_1 ⊆ D_2 ⊆ …

Now iterate this construction to get a sequence of groups C = C_1 ⊆ C_2 ⊆ C_3 ⊆ … and let:

C* = the union of the chain C_1 ⊆ C_2 ⊆ C_3 ⊆ …

Then C* is a countable group containing C. It is algebraically closed because any finite set of equations and inequations that is consistent with C* must have coefficients in some C_i and so must have a solution in C_{i+1}.

See also
Algebraic closure
Algebraically closed field

References
A. Macintyre: On algebraically closed groups. Ann. of Math. 96, 53-97 (1972)
B.H. Neumann: A note on algebraically closed groups. J. London Math. Soc. 27, 227-242 (1952)
B.H. Neumann: The isomorphism problem for algebraically closed groups. In: Word Problems, pp. 553-562. Amsterdam: North-Holland 1973
W.R. Scott: Algebraically closed groups. Proc. Amer. Math. Soc. 2, 118-121 (1951)
https://en.wikipedia.org/wiki/Mill%20scale
Mill scale, often shortened to just scale, is the flaky surface of hot rolled steel, consisting of the mixed iron oxides iron(II) oxide (FeO, wüstite), iron(III) oxide (Fe2O3, hematite), and iron(II,III) oxide (Fe3O4, magnetite). Mill scale is formed on the outer surfaces of plates, sheets or profiles when they are produced by passing red hot iron or steel billets through rolling mills.

Mill scale is bluish-black in color. It is usually thin, and initially adheres to the steel surface and protects it from atmospheric corrosion, provided no break occurs in this coating. Because it is electrochemically cathodic to steel, any break in the mill scale coating will cause accelerated corrosion of the steel exposed at the break. Mill scale is thus a boon for a while, until its coating breaks due to handling of the steel product or any other mechanical cause.

Mill scale becomes a nuisance when the steel is to be processed. Any paint applied over it is wasted, since it will come off with the scale as moisture-laden air gets under it. Mill scale can be removed from steel surfaces by flame cleaning, pickling, or abrasive blasting, all of which are tedious operations that consume energy. This is why shipbuilders and steel fixers used to leave steel and rebar, delivered freshly rolled from the mills, out in the open to 'weather' until most of the scale fell off due to atmospheric action. Nowadays, most steel mills can supply their product with mill scale removed and coated with shop primers over which welding or painting can be done safely. Mill scale generated in rolling mills is collected and sent to a sinter plant for recycling.

In art
Mill scale is sought after by some abstract expressionist artists, as its effect on steel can cause unpredictable and seemingly random abstract organic visual effects.
Although the majority of mill scale is removed from the steel during its passage through scale-breaker rolls during manufacturing, smaller, structurally inconsequential residues can remain visible. Leveraging this processing vestige by accelerating its corrosive effects, through the metallurgical use of phosphoric acid alone or in conjunction with selenium dioxide, can create a high-contrast visual substrate onto which other compositional elements can be added.

In refractory production
Mill scale can be used as a raw material in granular refractory. When such a refractory is cast and preheated, the scale particles provide escape routes for evaporating water vapor, thus preventing cracks and resulting in a strong, monolithic structure.

In reduced iron powder production
Mill scale is a complex oxide that contains around 70% iron with traces of nonferrous metals and alkaline compounds. Reduced iron powder may be obtained by converting mill scale into the single highest oxide, i.e. hematite (Fe2O3), followed by reduction with hydrogen. Shahid and Choi reported a reverse co-precipitation method for the synthesis of magnetite (Fe3O4) from mill scale and used it for multiple environmental applications, such as nutrient recovery, ballasted coagulation in the activated sludge process, and heavy metal remediation in aqueous environments.

See also
Dross
Firescale
Hammer paint
Slag
Slag (welding)
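The "around 70% iron" figure quoted above for mill scale can be sanity-checked arithmetically: the iron mass fraction of each constituent oxide (FeO, Fe3O4, Fe2O3) follows directly from standard atomic weights, and all three fall in the 70-78% range. A minimal sketch:

```python
# Quick arithmetic check (not from the article's sources): mass
# fraction of Fe in each iron oxide found in mill scale.
FE, O = 55.845, 15.999  # standard atomic weights, g/mol

def fe_fraction(n_fe, n_o):
    """Mass fraction of iron in an oxide with n_fe Fe and n_o O atoms."""
    return n_fe * FE / (n_fe * FE + n_o * O)

for name, n_fe, n_o in [("FeO", 1, 1), ("Fe3O4", 3, 4), ("Fe2O3", 2, 3)]:
    print(f"{name}: {fe_fraction(n_fe, n_o):.1%} Fe")
```

Hematite comes out at about 69.9% Fe and magnetite at about 72.4%, so a scale made of these mixed oxides being "around 70% iron" is consistent.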
https://en.wikipedia.org/wiki/Bioinvent
BioInvent International is a Swedish clinical-stage biotech company that discovers and develops novel and first-in-class immunomodulatory antibodies for cancer therapy. The company's validated, proprietary F.I.R.S.T™ technology platform simultaneously identifies both targets and the antibodies that bind to them, generating new drug candidates to fuel the company's own broad clinical development pipeline or for additional licensing and partnering. The company currently has five clinical programs for the treatment of hematological cancers and solid tumors, as well as a fully integrated, proprietary, state-of-the-art manufacturing facility. Martin Welschof has been CEO since 2018. The company is a partner of The Leukemia & Lymphoma Society's Therapy Acceleration Program, an initiative that develops blood cancer treatments.
https://en.wikipedia.org/wiki/Molecular%20Borromean%20rings
In chemistry, molecular Borromean rings are an example of a mechanically-interlocked molecular architecture in which three macrocycles are interlocked in such a way that breaking any one macrocycle allows the others to dissociate. They are the smallest examples of Borromean rings. The synthesis of molecular Borromean rings was reported in 2004 by the group of J. Fraser Stoddart.

The so-called Borromeate is made up of three interpenetrated macrocycles formed through templated self-assembly as complexes of zinc. The synthesis of the macrocyclic system involves the self-assembly of two organic building blocks: 2,6-diformylpyridine (an aromatic compound with two aldehyde groups positioned ortho to the nitrogen atom of the pyridine ring) and a symmetric diamine containing a meta-substituted 2,2'-bipyridine group. Zinc acetate is added as the template for the reaction, resulting in one zinc cation in each of the six pentacoordinate complexation sites, and trifluoroacetic acid (TFA) is added to catalyse the imine bond-forming reactions. The preparation of the tri-ring Borromeate involves a total of 18 precursor molecules and is only possible because the building blocks self-assemble through 12 aromatic pi-pi interactions and 30 zinc-to-nitrogen dative bonds. Because of these interactions, the Borromeate is thermodynamically the most stable of the potentially many reaction products, and since all of the reactions taking place are equilibria, it is the predominant product.

Reduction with sodium borohydride in ethanol affords the neutral Borromeand. With the zinc removed, the three macrocycles are no longer chemically bonded but remain "mechanically entangled in such a way that if only one of the rings is removed the other two can part company."
The Borromeand is thus a true Borromean system, as cleavage of just one imine bond (to an amine and an acetal) in this structure breaks the mechanical bond between the three constituent macrocycles, releasing the other two individual rings. A Borromeand differs from a [3]catenane in that none of its three macrocycles is concatenated with any other; if one bond in a [3]catenane is broken and a cycle removed, a [2]catenane can remain. Organic synthesis of this seemingly complex compound is in reality fairly simple; for this reason, the Stoddart group has suggested it as a gram-scale laboratory activity for undergraduate organic chemistry courses. See also Molecular knot Topology (chemistry) Dynamic covalent chemistry References External links Borromean chemistry overview website Supramolecular chemistry Borromean rings
Molecular Borromean rings
Chemistry,Materials_science,Mathematics
600
201,851
https://en.wikipedia.org/wiki/Mortise%20and%20tenon
A mortise and tenon (occasionally mortice and tenon) joint connects two pieces of wood or other material. Woodworkers around the world have used it for thousands of years to join pieces of wood, mainly when the adjoining pieces connect at right angles. Mortise and tenon joints are strong and stable joints that can be used in many projects. They connect by either gluing or friction-fitting into place. The mortise and tenon joint also gives an attractive look. One drawback to this joint is the difficulty in making it because of the precise measuring and tight cutting required. In its most basic form, a mortise and tenon joint is both simple and strong. There are many variations of this type of joint, and the basic mortise and tenon has two components: the mortise hole, and the tenon tongue. The tenon, formed on the end of a member generally referred to as a rail, fits into a square or rectangular hole cut into the other, corresponding member. The tenon is cut to fit the mortise hole exactly. It usually has shoulders that seat when the joint fully enters the mortise hole. The joint may be glued, pinned, or wedged to lock it in place. This joint is also used with other materials. For example, it is traditionally used by both stonemasons and blacksmiths. Etymology The noun mortise, "a hole or groove in which something is fitted to form a joint", comes from Old French (13th century), possibly from Arabic , "fastened", past participle of , "cut a mortise in". The word tenon, a noun in English since the late 14th century, developed its sense of "a projection inserted to make a joint" from the Old French "to hold". History and ancient examples The mortise and tenon joint is an ancient joint. One of the earliest mortise-tenon structure examples dates back 7,000 years to the Hemudu culture in China's Zhejiang Province.
Tusked joints were found in a well near Leipzig, created by early Neolithic Linear Pottery culture, and used in construction of the wooden lining of the wells. Mortise and tenon joints have also been found joining the wooden planks of the "Khufu ship", a long vessel sealed into a pit in the Giza pyramid complex of the Fourth Dynasty around 2500 BC. They were also found in the Uluburun shipwreck (14th century BC). Mortise and tenon joints have also been found in ancient furniture from archaeological sites in the Middle East, Europe and Asia. Many instances are found, for example, in ruins of houses in the Silk Road kingdom of Cadota, dating from the first to the 4th century BC. In traditional Chinese architecture, wood components such as beams, brackets, roof frames, and struts were made to interlock with perfect fit, without using fasteners or glues, enabling the wood to expand and contract according to humidity. Archaeological evidence from Chinese sites shows that, by the end of the Neolithic, mortise and tenon joinery was employed in Chinese construction. The thirty sarsen stones of Stonehenge were dressed and fashioned with mortise and tenon joints before they were erected between 2600 and 2400 BC. A variation of the mortise and tenon technique, called Phoenician joints (from the Latin ) was extensively used in ancient shipbuilding to assemble hull planks and other watercraft components together. It is a locked (pegged) mortise and tenon technique that consists of cutting two mortises into the edges of two planks; a separate rectangular tenon is then inserted in the two mortises. The assembly is then locked in place by driving a dowel through one or more holes drilled through mortise side wall and tenon. Description Generally, the size of the mortise and tenon is related to the thickness of the timbers. It is good practice to proportion the tenon as one third the thickness of the rail, or as close to this as is practical. 
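The rule-of-thumb proportions described in this section (tenon thickness one third of the rail thickness; for a haunched joint, haunch length one third of the tenon length and haunch depth one sixth of the tenon width) can be sketched as a small calculation. The function name and the millimetre units below are illustrative only, not part of any joinery standard.

```python
def tenon_proportions(rail_thickness_mm: float, tenon_length_mm: float, tenon_width_mm: float):
    """Rule-of-thumb mortise-and-tenon proportions.

    - tenon thickness: one third of the rail thickness
    - haunch length:   one third of the tenon length
    - haunch depth:    one sixth of the tenon width
    """
    return {
        "tenon_thickness": rail_thickness_mm / 3,
        "haunch_length": tenon_length_mm / 3,
        "haunch_depth": tenon_width_mm / 6,
    }

# Example: a 30 mm rail with a 45 mm long, 60 mm wide tenon
print(tenon_proportions(30, 45, 60))
# {'tenon_thickness': 10.0, 'haunch_length': 15.0, 'haunch_depth': 10.0}
```

The figures are in millimetres, but the proportions themselves are unit-free.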
The haunch, the cut-away part of a sash corner joint that prevents the tenon coming loose, is one third the length of the tenon and one-sixth of the width of the tenon in its depth. The remaining two-thirds of the rail, the tenon shoulders, help to counteract lateral forces that might tweak the tenon from the mortise, contributing to its strength. These also serve to hide imperfections in the opening of the mortise. Types Mortises A mortise is a hole cut into a timber to receive a tenon. There are several kinds of mortise: Open mortise: a mortise that has only three sides. (See bridle joint). Stub mortise: a shallow mortise, the depth of which depends on the size of the timber; also a mortise that does not go through the workpiece (as opposed to a "through mortise"). Through mortise: a mortise that passes entirely through a piece. Wedged half-dovetail: a mortise in which the back is wider, or taller, than the front, or opening. The space for the wedge initially leaves room to insert the tenon. The wedge, after the tenon is engaged, prevents its withdrawal. Through-wedged half-dovetail: a wedged half-dovetail mortise that passes entirely through the piece. Tenons A tenon is a projection on the end of a timber for insertion into a mortise. Usually, the tenon is taller than it is wide. There are several kinds of tenons: Stub tenon: a short tenon, the depth of which depends on the size of the timber; also a tenon that is shorter than the width of the mortised piece so the tenon does not show (as opposed to a "through tenon"). Through tenon: a tenon that passes entirely through the piece of wood it is inserted into, being clearly visible on the rear side. Loose tenon: a tenon that is a separate part of the joint, as opposed to a fixed tenon that is an integral part of one of the pieces to be joined. 
Biscuit tenon: a thin oval piece of wood, shaped like a biscuit Pegged (or pinned) tenon: the joint is strengthened by driving a peg or dowel pin (treenail) through one or more holes drilled through the mortise side wall and tenon; this is common in timber framing joints. Tusk tenon: a kind of mortise and tenon joint that uses a wedge-shaped key to hold the joint together. Teasel (or teazle) tenon: a term used for the tenon on top of a jowled or gunstock post, which is typically received by the mortise in the underside of a tie beam. A common element of the English tying joint. Top tenon: the tenon that occurs on top of a post. Hammer-headed tenon: a method of forming a tenon joint when the shoulders cannot be tightened with a clamp. Half shoulder tenon: an asymmetric tenon with a shoulder on one side only. A common use is in framed, ledged, and braced doors. Gallery See also Box joint Dado Dovetail joint Kumiko References This article is partly based on a Quicksilver wiki article at A Glossary of Terms For Traditional Timber Framing (Timberbee) under the terms of the GNU Free Documentation License. Joinery Timber framing
Mortise and tenon
Technology
1,559
6,248,691
https://en.wikipedia.org/wiki/Bake-out
Bake-out, in several areas of technology and fabrication, and in building construction, refers to the process of using high temperature (heat), and possibly vacuum, to remove volatile compounds from materials and objects before placing them into situations where the slow release of the same volatile compounds would contaminate the contents of a container or vessel, spoil a vacuum, or cause discomfort (odor or irritation) or illness. Bake-out is an artificial acceleration of the process of outgassing. In manufacturing In various physics and vacuum device engineering, such as particle accelerators, semiconductor fabrication, and vacuum tubes, bake-out is a manufacturing process, the period of time when a part or device is placed in a vacuum chamber (or its operating vacuum state, for devices which operate in vacuum) and heated, usually by built-in heaters. This drives off gases, which are removed by continued operation of the vacuum pump. Low hydrogen annealing, or hydrogen bake-out, is used to help reduce or remove hydrogen in bulk stainless steel. In construction In building construction, bake-out is the use of heat to remove volatile organic compounds such as solvents remaining in paint, carpets, and other building materials from a building after its construction, to reduce annoying odors or improve indoor air quality. The building interior is heated to a much higher temperature than normal and kept at that temperature for an extended period of time, to encourage such compounds to vaporize into the air, which is vented (released to the atmosphere). See also Vacuum Indoor air quality Volatile organic compounds References Building biology Vacuum systems
Bake-out
Physics,Engineering
334
65,909,127
https://en.wikipedia.org/wiki/Brave%20Robot
Brave Robot is a brand of vegan ice cream made using Perfect Day's synthesized milk proteins. It has no lactose, but does include synthetic molecules reproducing those found in milk. The ice cream comes in 8 flavors: Raspberry White Truffle, Blueberry Pie, A Lot of Chocolate, Peanut Butter 'n Fudge, Hazelnut Chocolate Chunk, Buttery Pecan, Vanilla 'n Cookies, and Vanilla. By the end of 2021, the company had sold one million pints of ice-cream. By 2022 the product was available in 8000 stores across the USA. In 2023, the company’s parent group, The Urgent Company, was bought by Superlatus; Superlatus announced plans for Brave Robot to launch a line of pulse protein snacks in 2024. See also Coolhaus References Dairy-free frozen dessert brands Vegan cuisine Cellular agriculture American companies established in 2020 Food and drink companies of the United States
Brave Robot
Engineering,Biology
197
50,328,015
https://en.wikipedia.org/wiki/Surface
A surface, as the term is most generally used, is the outermost or uppermost layer of a physical object or space. It is the portion or region of the object that can first be perceived by an observer using the senses of sight and touch, and is the portion with which other materials first interact. The surface of an object is more than "a mere geometric solid", but is "filled with, spread over by, or suffused with perceivable qualities such as color and warmth". The concept of surface has been abstracted and formalized in mathematics, specifically in geometry. Depending on the properties on which the emphasis is given, there are several non equivalent such formalizations, that are all called surface, sometimes with some qualifier, such as algebraic surface, smooth surface or fractal surface. The concept of surface and its mathematical abstraction are both widely used in physics, engineering, computer graphics, and many other disciplines, primarily in representing the surfaces of physical objects. For example, in analyzing the aerodynamic properties of an airplane, the central consideration is the flow of air along its surface. The concept also raises certain philosophical questions—for example, how thick is the layer of atoms or molecules that can be considered part of the surface of an object (i.e., where does the "surface" end and the "interior" begin), and do objects really have a surface at all if, at the subatomic level, they never actually come in contact with other objects. Perception of surfaces The surface of an object is the part of the object that is primarily perceived. Humans equate seeing the surface of an object with seeing an object. For example, in looking at an automobile, it is normally not possible to see the engine, electronics, and other internal structures, but the object is still recognized as an automobile because the surface identifies it as one. Conceptually, the "surface" of an object can be defined as the topmost layer of atoms. 
Many objects and organisms have a surface that is in some way distinct from their interior. For example, the peel of an apple has very different qualities from the interior of the apple, and the exterior surface of a radio may have very different components from the interior. Peeling the apple constitutes removal of the surface, ultimately leaving a different surface with a different texture and appearance, identifiable as a peeled apple. Removing the exterior surface of an electronic device may render its purpose unrecognizable. By contrast, removing the outermost layer of a rock or the topmost layer of liquid contained in a glass would leave a substance or material with the same composition, only slightly reduced in volume. In mathematics In the physical sciences The concept of a surface in the physical sciences encompasses the structures and dynamics of surfaces and of the processes occurring at them. The field underlies many practical disciplines such as semiconductor physics and applied nanotechnology but is also of fundamental interest. Synchrotron x-ray and neutron scattering measurements are used to provide experimental data on the structure and motion of molecular adsorbates adsorbed on surfaces. The aim of such methods is to provide the data needed to benchmark the latest developments in the modelling of surface systems, their electronic and physical structures and the energetics and friction associated with surface motion. Current research focuses on the surface adsorption of polyaromatic hydrocarbons (PAHs), a class of molecules key to the refinement of the modelling of dispersive forces through approaches such as density functional theory, and builds on complementary work applying helium atom scattering and scanning tunnelling microscopy to small molecules with aromatic functionality. Many surfaces considered in physics and chemistry (physical sciences in general) are interfaces.
For example, a surface may be the idealized limit between two fluids, liquid and gas (the surface of the sea in air) or the idealized boundary of a solid (the surface of a ball). In fluid dynamics, the shape of a free surface may be defined by surface tension. However, they are surfaces only at macroscopic scale. At microscopic scale, they may have some thickness. At atomic scale, they do not look like a surface at all, because of holes formed by spaces between atoms or molecules. Other surfaces considered in physics are wavefronts. One of these, discovered by Fresnel, is called wave surface by mathematicians. The surface of the reflector of a telescope is a paraboloid of revolution. Other occurrences: Soap bubbles, which are physical examples of minimal surfaces Equipotential surface in, e.g., gravity fields Earth's surface Surface science, the study of physical and chemical phenomena that occur at the interface of two phases Surface metrology Surface wave, a mechanical wave Atmospheric boundaries (tropopause, edge of space, plasmapause, etc.) In computer graphics In computer graphics, a surface is a mathematical representation of a 3D object or shape. Surfaces are used to model and render the outer layer of an object, giving it form, texture, and color in a virtual space. A surface is essentially a collection of points in 3D space that are mathematically defined and visualized to form the shape of an object. Surfaces are crucial for creating realistic 3D models, as they define the "skin" or "outer boundary" of an object. Surfaces can be categorized based on how they are defined or represented: Polygonal surfaces are made up of polygons, which are typically triangles or quadrilaterals. They are approximate and sometimes visibly faceted. They are common in games and other real-time rendering because they are computationally efficient. Parametric surfaces are defined using equations that depend on parameters. They include Bézier surfaces and NURBS.
They are smooth and exact. They are used in CAD and animation. Implicit surfaces are the solution sets of equations of the form F(x, y, z) = 0. They capture some complex shapes well. Surfaces in computer graphics have several important attributes that define their behavior and appearance. Geometry is a key attribute that determines the shape, size, and position of the surface in 3D space, forming the foundational structure of the model. Material properties, such as texture, color, shininess, and transparency, influence how the surface interacts with light and contribute to its visual appeal. Additionally, normals, which are perpendicular vectors to the surface at each point, are essential for accurate lighting and shading calculations, ensuring that the surface responds realistically to light sources. Surfaces in computer graphics have a wide range of applications. They are extensively used in modeling objects, such as designing characters, cars, and buildings, where the surface defines the shape and structure of the model. In rendering, surfaces play a critical role in determining how objects appear in a scene by influencing their shading, reflections, and textures, which contribute to the overall realism. Additionally, surfaces are vital in simulations, where they help replicate physical properties such as the movement of water waves or the dynamics of fabrics, enhancing the accuracy of visual and interactive experiences. One of the main challenges in computer graphics is creating realistic simulations of surfaces. In technical applications of 3D computer graphics (CAx) such as computer-aided design and computer-aided manufacturing, surfaces are one way of representing objects. The other ways are wireframe (lines and curves) and solids. Point clouds are also sometimes used as temporary ways to represent an object, with the goal of using the points to create one or more of the three permanent representations.
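As a small illustration of the implicit representation and of surface normals described above, the sketch below defines a sphere implicitly by F(x, y, z) = x² + y² + z² − r² = 0 and obtains the unit normal at a surface point as the normalized gradient of F. The function names are illustrative, not from any particular graphics library.

```python
import math

def F(x, y, z, r=1.0):
    """Implicit surface of a sphere of radius r: points with F == 0 lie on the surface."""
    return x * x + y * y + z * z - r * r

def unit_normal(x, y, z):
    """Unit normal at (x, y, z): the gradient of F, (2x, 2y, 2z), normalized to length 1."""
    gx, gy, gz = 2 * x, 2 * y, 2 * z
    length = math.sqrt(gx * gx + gy * gy + gz * gz)
    return (gx / length, gy / length, gz / length)

# The point (1, 0, 0) lies on the unit sphere; its normal points along +x.
print(F(1.0, 0.0, 0.0))            # 0.0
print(unit_normal(1.0, 0.0, 0.0))  # (1.0, 0.0, 0.0)
```

Polygonal meshes instead store per-vertex normals, but the gradient construction sketched here is the standard way normals are obtained for implicit surfaces, for example when ray-marching them.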
One technique used for enhancing surface realism in computer graphics is the use of physically-based rendering (PBR) algorithms which simulate the interaction of light with surfaces based on their physical properties, such as reflectance, roughness, and transparency. By incorporating mathematical models and algorithms, PBR can generate highly realistic renderings that resemble the behavior of real-world materials. PBR has found practical applications beyond entertainment, extending its impact to architectural design, product prototyping, and scientific simulations. References Geometric shapes Broad-concept articles
Surface
Mathematics
1,610
48,167,283
https://en.wikipedia.org/wiki/Offset%20ink
Offset ink is a specific type of ink used in conjunction with offset printing presses, such as those used to produce letterpress or lithography prints. Such ink must be specially formulated to resist other chemicals it will come in contact with on the printing press. It is widely used for printing high-quality images and text on various substrates such as paper, cardboard, and certain plastics. It is crucial that offset ink resist water-in-ink emulsification (i.e., repel rather than absorb water). It also should withstand degradation by the fountain solution that covers the non-printing areas of the engraved plate. Offset ink needs to be very rich in pigment so that its full color vibrancy is perceptible, even in minute quantity. References Ainsworth, Mitchell, C., "Inks and Their Composition and Manufacture," Charles Griffin and Company Ltd, 1904. Inks Printing materials Visual arts materials
Offset ink
Physics
190
22,373,877
https://en.wikipedia.org/wiki/Environmental%20impact%20of%20irrigation
The environmental impact of irrigation relates to the changes in quantity and quality of soil and water as a result of irrigation and the subsequent effects on natural and social conditions in river basins and downstream of an irrigation scheme. The effects stem from the altered hydrological conditions caused by the installation and operation of the irrigation scheme. Amongst these problems is the depletion of underground aquifers through overdrafting. Soil can be over-irrigated because of poor distribution uniformity or poor management; this wastes water and chemicals and may lead to water pollution. Over-irrigation can cause deep drainage and rising water tables, which can lead to problems of irrigation salinity requiring watertable control by some form of subsurface land drainage. However, if the soil is under-irrigated, it gives poor soil salinity control, which leads to increased soil salinity with the consequent buildup of toxic salts on the soil surface in areas with high evaporation. This requires either leaching to remove these salts or a method of drainage to carry the salts away. Irrigation with saline or high-sodium water may damage soil structure owing to the formation of alkaline soil. Direct effects An irrigation scheme draws water from groundwater, rivers, lakes, or overland flow, and distributes it over a certain area. Hydrological, or direct, effects of doing this include reduction in downstream river flow, increased evaporation in the irrigated area, an increased water-table level as groundwater recharge in the area is increased, and increased flow in the irrigated area. Likewise, irrigation has immediate effects on providing moisture to the atmosphere, inducing atmospheric instabilities, and increasing rainfall downwind, or in other cases modifies the atmospheric circulation, delivering rain to different downwind areas.
Increases or decreases in irrigation are a key area of concern in precipitationshed studies, which examine how significant modifications to the delivery of evaporation to the atmosphere can alter downwind rainfall. Indirect effects Indirect effects are those that have consequences that take longer to develop and may also be longer-lasting. The indirect effects of irrigation include the following: Waterlogging Soil salination Ecological damage Socioeconomic impacts The indirect effects of waterlogging and soil salination occur directly on the land being irrigated. The ecological and socioeconomic consequences take longer to happen but can be more far-reaching. Some irrigation schemes use water wells for irrigation. As a result, the overall water level decreases. This may cause water mining, land/soil subsidence, and, along the coast, saltwater intrusion. Irrigated land area worldwide occupies about 16% of the total agricultural area, and the crop yield of irrigated land is roughly 40% of the total yield. In other words, irrigated land produces 2.5 times more product than non-irrigated land. Adverse impacts Reduced river flow The reduced downstream river flow may cause: reduced downstream flooding; disappearance of ecologically and economically important wetlands or flood forests; reduced availability of industrial, municipal, household, and drinking water; reduced shipping routes. Water withdrawal poses a serious threat to the Ganges. In India, barrages control all of the tributaries to the Ganges and divert roughly 60 percent of river flow to irrigation. Reduced fishing opportunities: the Indus River in Pakistan faces scarcity due to the over-extraction of water for agriculture. The Indus is inhabited by 25 amphibian species and 147 fish species, of which 22 are found nowhere else. It harbors the endangered Indus river dolphin, one of the world's rarest mammals.
Fish populations, the main source of protein and overall life support systems for many communities, are also being threatened. Reduced discharge into the sea may have various consequences like coastal erosion (e.g. in Ghana) and saltwater intrusion in deltas and estuaries (e.g. in Egypt, see Aswan dam). Current water withdrawal from the river Nile for irrigation is so high that, despite its size, the river does not reach the sea in dry periods. The Aral Sea has suffered an "environmental catastrophe" due to the interception of river water for irrigation purposes. Increased groundwater recharge, waterlogging, soil salinity Increased groundwater recharge stems from the unavoidable deep percolation losses in the irrigation scheme. The lower the irrigation efficiency, the higher the losses. Although reasonably high irrigation efficiencies of 70% or more (i.e., losses of 30% or less) can occur with sophisticated techniques like sprinkler irrigation and drip irrigation or with well-managed surface irrigation, in practice the losses are commonly in the order of 40% to 60%. This may cause the following issues: rising water tables; increased storage of groundwater that may be used for irrigation, municipal, household, and drinking water by pumping from wells; waterlogging and drainage problems in villages, agricultural lands, and along roads, with mostly negative consequences (the increased level of the water table can lead to reduced agricultural production); shallow water tables, a sign that the aquifer is unable to cope with the groundwater recharge stemming from the deep percolation losses (where water tables are shallow, the irrigation applications are reduced; as a result, the soil is no longer leached, and soil salinity problems develop); stagnant water tables at the soil surface, which are known to increase the incidence of water-borne diseases like malaria, filariasis, yellow fever, dengue, and schistosomiasis (Bilharzia) in many areas.
Health costs, appraisals of health impacts, and mitigation measures are rarely part of irrigation projects. To mitigate the adverse effects of shallow water tables and soil salinization, some form of watertable control, soil salinity control, and drainage system is needed. As drainage water moves through the soil profile, it may dissolve nutrients (either fertilizer-based or naturally occurring) such as nitrates, leading to a buildup of those nutrients in the groundwater aquifer. High nitrate levels in drinking water can be harmful to humans, particularly infants under six months, in whom it is linked to "blue-baby syndrome" (see Methemoglobinemia). Reduced downstream river water quality Owing to drainage of surface water and groundwater in the project area, which may be salinized and polluted by agricultural chemicals like biocides and fertilizers, the quality of the river water below the project area can deteriorate, making it less fit for industrial, municipal and household use. It may lead to reduced public health. Polluted river water entering the sea may adversely affect the ecology along the seashore (see Aswan dam). The detention of sediments behind dams can eliminate the natural contribution of sediments, which is critical to surface water irrigation diversions. Sedimentation is an essential part of the ecosystem that requires the natural flux of the river flow. This natural cycle of sediment dispersion replenishes the nutrients in the soil, which will, in turn, determine the livelihood of the plants and animals that rely on the sediments carried downstream. The benefits of heavy sedimentation deposits can be seen in large rivers like the Nile River. The sediment from the delta has built up to form a giant aquifer during flood season and retains water in the wetlands. The wetlands created and sustained due to built-up sediment are a habitat for numerous species of birds.
However, heavy sedimentation can reduce downstream river water quality and can exacerbate floods upstream. This has been known to happen in the Sanmenxia reservoir in China. The Sanmenxia reservoir is part of a larger man-made project of hydroelectric dams called the Three Gorge Project (Ellen Wohl, "The Chang Jiang: Bridling a Dragon", in A World of Rivers, pp. 275, 283). Calculating the amount of sediment that will be carried downstream to the Sanmenxia reservoir is difficult. In 1998, uncertain calculations and heavy sediment greatly affected the reservoir's ability to fulfill its flood-control function properly. This also reduces the downstream river water quality. Shifting toward mass irrigation installations to meet growing socioeconomic demands works against the natural balance of rivers; Donald Worster argued for using water pragmatically, where it is found ("Thinking Like a River", in The Wealth of Nature: Environmental History and the Ecological Imagination, New York: Oxford University Press, 1993, p. 133). Affected downstream water users Downstream water users often have no legal water rights and may fall victim to irrigation development. Pastoralists and nomadic tribes may find their land and water resources blocked by new irrigation developments without having legal recourse. Flood-recession cropping may be seriously affected by the upstream interception of river water for irrigation purposes. In Baluchistan, Pakistan, the development of new small-scale irrigation projects depleted the water resources of nomadic tribes traveling annually between Baluchistan and Gujarat or Rajasthan, India. After the closure of the Kainji dam, Nigeria, 50 to 70 percent of the downstream area of flood-recession cropping was lost. Lost land use opportunities Irrigation projects may reduce the fishing opportunities of the original population and the grazing opportunities for cattle.
The livestock pressure on the remaining lands may increase considerably, because the ousted traditional pastoralist tribes will have to find their subsistence and existence elsewhere; overgrazing may increase, followed by serious soil erosion and the loss of natural resources. The Manantali reservoir formed by the Manantali dam in Mali intersects the migration routes of nomadic pastoralists and destroyed 43000 ha of savannah, probably leading to overgrazing and erosion elsewhere. Further, the reservoir destroyed 120 km2 of forest. The depletion of groundwater aquifers, which is caused by the suppression of the seasonal flood cycle, is damaging the forests downstream of the dam. Groundwater mining with wells, land subsidence When more groundwater is pumped from wells than is replenished, the water stored in the aquifer is being mined, and the use of that water is no longer sustainable. As levels fall, it becomes more difficult to extract water, and pumps will struggle to maintain the design flow rate, which may consume more energy per unit of water. Eventually, extracting groundwater may become so difficult that farmers may be forced to abandon irrigated agriculture. Some notable examples include: The hundreds of tube wells installed in Uttar Pradesh, India, with World Bank funding, have operating periods of 1.4 to 4.7 hours/day.
In contrast, they were designed to operate 16 hours/day. In Baluchistan, Pakistan, the development of tube well irrigation projects was at the expense of the traditional qanat or karez users. Groundwater-related subsidence of the land due to mining of groundwater occurred in the United States at a rate of 1 m for every 13 m that the water table was lowered. Homes at Greens Bayou near Houston, Texas, where 5 to 7 feet of subsidence has occurred, were flooded during a storm in June 1989. Simulation and prediction The effects of irrigation on the water table, soil salinity, and salinity of drainage and groundwater, and the effects of mitigative measures, can be simulated and predicted using agro-hydro-salinity models like SaltMod and SahysMod. Case studies In India, 2.19 million ha of land has been reported to suffer from waterlogging in irrigation canal commands. Also, 3.47 million ha were reported to be seriously salt-affected. In the Indus Plains in Pakistan, more than 2 million hectares of land are waterlogged. The soil of 13.6 million hectares within the Gross Command Area was surveyed, which revealed that 3.1 million hectares (23%) were saline; 23% of this was in Sindh and 13% in the Punjab. More than 3 million ha of water-logged lands have been provided with tube-wells and drains at the cost of billions of rupees, but the reclamation objectives were only partially achieved. The Asian Development Bank (ADB) states that 38% of the irrigated area is now waterlogged and 14% of the surface is too saline for use. In the Nile delta of Egypt, drainage is being installed in millions of hectares to combat the waterlogging resulting from the introduction of massive perennial irrigation after the completion of the High Dam at Aswan. In Mexico, 15% of the 3 million ha of irrigable land is salinized, and 10% is waterlogged. In Peru, some 0.3 million ha of the 1.05 million ha of irrigable land suffers from degradation (see Irrigation in Peru).
Estimates indicate that roughly one-third of the irrigated land in the major irrigation countries is already badly affected by salinity or is expected to become so in the near future. Present estimates are 13% of the irrigated land for Israel, 20% for Australia, 15% for China, 50% for Iraq, and 30% for Egypt. Irrigation-induced salinity occurs in large and small irrigation systems alike. FAO has estimated that by 1990 about 52 million ha of irrigated land would need improved drainage systems installed, much of it subsurface drainage to control salinity. Reduced downstream drainage and groundwater quality The downstream drainage water quality may deteriorate owing to the leaching of salts, nutrients, herbicides and pesticides; with high salinity and alkalinity, there is the threat of soils converting into saline or alkali soils. This may negatively affect the health of the population at the tail end of the river basin and downstream of the irrigation scheme, as well as the ecological balance. The Aral Sea, for example, is seriously polluted by drainage water. The downstream quality of the groundwater may deteriorate in a similar way to the downstream drainage water, with similar consequences. Mitigation of adverse effects Irrigation can have a variety of negative impacts on ecology and socioeconomy, which may be mitigated in a number of ways: Siting the irrigation project in a location that minimizes negative impacts. Improving the efficiency of existing projects and restoring existing degraded croplands rather than establishing a new irrigation project. Developing small-scale, individually owned irrigation systems as an alternative to large-scale, publicly owned and managed schemes. Using sprinkler irrigation and micro-irrigation systems, which decrease the risk of waterlogging and erosion.
Where practicable, using treated wastewater makes more water available to other users. Maintaining flood flows downstream of the dams can ensure that an adequate area is flooded each year, supporting, amongst other objectives, fishery activities. Delayed environmental impacts It often takes time to accurately predict the impact that new irrigation schemes will have on the ecology and socioeconomy of a region. By the time these predictions are available, considerable time and resources may already have been expended on the implementation of the project. When that is the case, project managers will often only change the project if the impact would be considerably greater than originally expected. Case study in Malawi Irrigation schemes are frequently seen as necessary for socioeconomic well-being, especially in developing countries. One example comes from a proposal for an irrigation scheme in Malawi. Here it was shown that the potential positive effects of the proposed irrigation project "outweighed the potential negative impacts". It was stated that the impacts would mostly "be localized, minimal, a short term occurring during the construction and operation phases of the Project". To help alleviate and prevent major environmental impacts, techniques that minimize the potential negative impacts would be used. As for the region's socioeconomic well-being, there would be no "displacement and/or resettlement envisioned during the implementation of the project activities". The primary purposes of the irrigation project were to reduce poverty, improve food security, create local employment, increase household income and enhance the sustainability of land use. Due to this careful planning, the project was successful both in improving socioeconomic conditions in the region and in ensuring that land and water use remain sustainable into the future.
See also Environmental issues with agriculture Environmental impacts of reservoirs Alkali soils Irrigation in viticulture Routing (hydrology) Indian Council of Forestry Research and Education Further reading T.C. Dougherty and A.W. Hall, 1995. Environmental impact assessment of irrigation and drainage projects. FAO Irrigation and Drainage Paper 53. Online: http://www.fao.org/docrep/v8350e/v8350e00.htm R.E. Tillman, 1981. Environmental guidelines for irrigation. New York Botanical Garden Cary Arboretum. Thayer Scudder and John Gray, a comparative survey of dam-induced resettlement in 50 cases. External links Download of the simulation and prediction model SaltMod Download of the simulation and prediction model SahysMod "SaltMod: A tool for the interweaving of irrigation and drainage for salinity control" "Modern interferences with traditional irrigation in Baluchistan" References Irrigation Environmental issues with water Environmental issues with soil Environmental impact of agriculture
Environmental impact of irrigation
Environmental_science
https://en.wikipedia.org/wiki/Re-recording%20mixer
A re-recording mixer in North America, also known as a dubbing mixer in Europe, is a post-production audio engineer who mixes recorded dialogue, sound effects and music to create the final version of a soundtrack for a feature film, television program, or television advertisement. The final mix must achieve a desired sonic balance between its various elements, and must match the director's or sound designer's original vision for the project. For material intended for broadcast, the final mix must also comply with all applicable laws governing sound mixing (e.g., the CALM Act in the United States and the EBU R 128 loudness protocol in Europe). Both names for this profession reflect the fact that the mixer is neither mixing a live performance for a live audience nor recording live on a set. That is, the mixer is re-recording sound already recorded elsewhere (the basis of the North American name) after passing it through mixing equipment such as a digital audio workstation, and may dub in additional sounds in the process (the basis of the European name). While mixing can be performed in a recording studio or home office, a full-size mixing stage or dubbing stage is used for feature films intended for release to movie theaters, in order to help the mixer envision how the final mix will be heard in such large spaces. During production or earlier parts of post-production, sound editors, sound designers, sound engineers, production sound mixers and/or music editors assemble the tracks that become the raw materials for the re-recording mixer to work with. Those tracks in turn originate with sounds created by professional musicians, singers, actors, or Foley artists. The first part of the traditional re-recording process is called the "premix." In the dialog premix, the re-recording mixer does preliminary processing, including making initial loudness adjustments, cross-fading, and reducing environmental noise or spill that the on-set microphone picked up.
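Premix operations such as cross-fading are ultimately simple arithmetic on audio samples. A toy sketch, with plain Python lists standing in for audio buffers (this is not any real DAW API):

```python
# Toy linear cross-fade between two mono "takes", represented as plain
# Python lists of samples. Real premixing operates on audio buffers in
# a DAW, but the underlying arithmetic is the same: fade one take out
# while fading the other in, then sum.

def crossfade(take_a, take_b):
    """Linearly fade out take_a while fading in take_b."""
    n = len(take_a)
    assert n == len(take_b), "takes must be the same length"
    out = []
    for i in range(n):
        w = i / (n - 1)  # fade weight: 0.0 -> 1.0 across the region
        out.append((1 - w) * take_a[i] + w * take_b[i])
    return out

a = [1.0, 1.0, 1.0, 1.0, 1.0]  # steady tone, take A
b = [0.0, 0.0, 0.0, 0.0, 0.0]  # silence, take B
print(crossfade(a, b))  # [1.0, 0.75, 0.5, 0.25, 0.0]
```

The same weighted-sum idea, with per-stem gains instead of a time-varying weight, underlies balancing dialogue, effects and music in the final mix.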
In most instances, audio restoration software may be employed. For film or television productions, the mixers may add a temporary or permanent music soundtrack prepared by the music editor; the resulting work is then previewed by test audiences, after which the film or television program is re-cut and the soundtrack must be mixed again. Re-recording mixers may also augment or minimize audience reactions for television programs recorded in front of a studio audience. In some cases, a laugh track may augment these reactions. During the "final mix" the re-recording/dubbing mixers, guided by the director or producer, must make creative decisions from moment to moment in each scene about how loud each major sound element (dialog, sound effects, laugh track and music) should be relative to the others. They also modify individual sounds when desired by adjusting their loudness and spectral content and by adding artificial reverberation. They can place sounds in the three-dimensional space of the listening environment for a variety of venues and release formats: movie theaters, home theater systems, and other setups with stereo and multi-channel (5.1, 7.1, etc.) surround sound systems. Today, films may be mixed in 'object-based' audio formats such as Dolby Atmos, which adds height channels and metadata to allow for real-time rendering of audio objects in a three-dimensional coordinate space. References Filmmaking occupations Audio engineering Sound recording
Re-recording mixer
Engineering
https://en.wikipedia.org/wiki/Rachis
In biology, a rachis (from the Ancient Greek word for "backbone, spine") is a main axis or "shaft". In zoology and microbiology In vertebrates, rachis can refer to the series of articulated vertebrae which encase the spinal cord. In this case the rachis usually forms the supporting axis of the body and is then called the spine or vertebral column. Rachis can also mean the central shaft of pennaceous feathers. In the gonad of the invertebrate nematode Caenorhabditis elegans, a rachis is the central cell-free core or axis of the gonadal arm of both adult males and hermaphrodites, where the germ cells have reached pachytene and are attached to the walls of the gonadal tube. The rachis is filled with cytoplasm. In botany In plants, a rachis is the main axis of a compound structure. It can be the main stem of a compound leaf, such as in Acacia or ferns, or the main, flower-bearing portion of an inflorescence above a supporting peduncle. Where it subdivides into further branches, these are known as rachillae (singular rachilla). The central spine that remains when an Abies seed cone disintegrates is also called the rachis. A ripe head of wild-type wheat is easily shattered into dispersal units when touched or blown by the wind. A series of abscission layers forms that divides the rachis into dispersal units, each consisting of a small group of flowers (a single spikelet) attached to a short segment of the rachis. This is significant in the history of agriculture, and is referred to by archaeologists as a "brittle rachis", one type of shattering in crop plants. See also Stipe (botany) References Vertebrate anatomy Plant morphology
Rachis
Biology
https://en.wikipedia.org/wiki/Alfred%20Makower
Alfred Jacques Makower (9 May 1876 in London – 1 February 1941) was an electrical engineer and community activist. He was head of the Electrical Engineering Department of the South-Western Polytechnic. Alfred was the son of a German silk merchant. He attended University College School from 1884, University College itself from 1894, and then Trinity College, Cambridge, from 1895, where he took the Mathematical Tripos, before moving on to the Technische Hochschule in Charlottenburg (now Technische Universität Berlin) in 1898. In 1900 he was given a job by Union-Elektricitäts-Gesellschaft (UEG), a subsidiary of the Thomson-Houston Electric Company. He then returned to England to work for the British Thomson-Houston Company in 1902. In 1904 he was appointed head of the Electrical Engineering Department of the South-Western Polytechnic. In 1913 he became a founding director of Mossay and Co., a company established by Paul Mossay, along with A. Berkeley and Alfred Mays-Smith. Alfred was chair of the Professional Committee of the German Jewish Aid Committee, in which capacity he helped several refugee German engineers with financial support and with finding employment amongst his contacts in the engineering sector. He was vice-president of the Jewish Board of Guardians, whose General Relief Committee he also chaired. He had a son, Ernest S. Makower. References 1876 births 1941 deaths Electrical engineers Technische Universität Berlin alumni People educated at University College School
Alfred Makower
Engineering
https://en.wikipedia.org/wiki/Map-based%20controller
In the field of control engineering, a map-based controller is a controller whose outputs are based on values derived from a pre-defined lookup table. The inputs to the controller are usually values taken from one or more sensors and are used to index the output values in the lookup table. By effectively placing the transfer function as discrete entries within a lookup table, engineers are free to modify smaller sections or update the whole list of entries as required. References Control engineering
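The idea can be sketched in a few lines. The breakpoint values below are purely hypothetical (e.g. an engine-speed-to-actuator map), and the linear interpolation between entries is one common refinement rather than something the definition above requires:

```python
# Minimal map-based controller sketch: a sensor reading is mapped to a
# control output through a pre-defined lookup table. Readings between
# breakpoints are linearly interpolated; readings outside the table are
# clamped to the end values. The table is hypothetical.

from bisect import bisect_right

# (input, output) breakpoints, sorted by input.
TABLE = [(0, 0.0), (1000, 0.25), (3000, 0.75), (6000, 1.0)]

def lookup(x):
    """Return the interpolated controller output for sensor input x."""
    inputs = [p[0] for p in TABLE]
    if x <= inputs[0]:
        return TABLE[0][1]          # clamp below the table
    if x >= inputs[-1]:
        return TABLE[-1][1]         # clamp above the table
    i = bisect_right(inputs, x)     # first breakpoint strictly above x
    (x0, y0), (x1, y1) = TABLE[i - 1], TABLE[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

print(lookup(2000))  # 0.5, halfway between the 1000 and 3000 entries
```

Tuning the controller then amounts to editing table entries rather than re-deriving a closed-form transfer function, which is the practical appeal noted above.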
Map-based controller
Engineering
https://en.wikipedia.org/wiki/SipXecs
SipXecs is a free software enterprise communications system. It was initially developed in 2003 by Pingtel Corporation, located in Boston, MA, as a voice over IP telephony server. The server was later extended with additional collaboration capabilities as part of the SIPfoundry project. Since its extension, sipXecs acts as a software implementation of the Session Initiation Protocol (SIP), making it a full IP-based communications system. Competitors to sipXecs include other open-source telephony and SoftSwitch solutions such as Asterisk, FreeSWITCH, and the SIP Express Router. History Development of sipXecs began in 2003 at Pingtel Corporation. In 2004, Pingtel adopted an open-source business model and contributed the codebase to the not-for-profit organization SIPfoundry. It has been an open source project since then. Pingtel's assets were acquired by Bluesocket in July 2007. In August 2008 the Pingtel assets were acquired from Bluesocket by Nortel. Subsequent to the acquisition, Nortel released the SCS500 product based on sipXecs. SCS500 was positioned as an open, software-only telephony server for the SMB market of up to 500 users and received some recognition. It was later renamed SCS and positioned as an enterprise communications system. Subsequent to the Nortel bankruptcy and the acquisition of the Nortel assets by Avaya, sipXecs continued to be used as the basis for the Avaya Live cloud-based communications service. In April 2010 the founders of SIPfoundry founded a company offering a commercial version of the software. Information SipXecs is designed as a software-only, distributed cloud application. It runs on the Linux operating system CentOS or RHEL, on either virtualized or physical servers. A minimum configuration allows running all of the sipXecs components on a single server, including the database, all available services, and the sipXecs management.
Global clusters can be built using built-in auto-configuration capabilities from the centralized management system. SipXecs uses MongoDB as a distributed and partition-tolerant database for global transactions, includes CFEngine for orchestration of clusters and JasperReports for reporting. The management and configuration system is based on the Spring Framework. sipXecs includes FreeSWITCH as its media server and Openfire for presence and instant messaging services. SipXecs follows standards such as the Session Initiation Protocol (SIP), SRTP, the Extensible Messaging and Presence Protocol (XMPP), SIP and XMPP over TLS, and several Web standards including WebRTC, WebSocket and Representational State Transfer (REST). Adoption Amazon.com was an early adopter of sipXecs. This initial 5,000-user deployment expanded considerably in the following years. OnRelay, a company in the UK, selected sipXecs for its fixed-mobile convergence solution sold to carriers. Colorado State University and Cedarville University of Ohio committed to sipXecs in 2010. Red Hat deployed a commercial version of sipXecs globally in 2012. Under the SIPfoundry Higher Education Program (HEP), as of 2014 Lafayette College, St. Mary's University, Messiah College, Colorado School of Mines, and Carthage College had deployed sipXecs to replace their respective PBX systems. SipXecs is used by small and large enterprises ranging up to about 20,000 users per cluster. SIPfoundry lists the following users on its Web site: Brevard County FL, Dutch Police, Easter Seals, Siemens Transportation, British Airways. Availability SipXecs is available for Red Hat Linux and CentOS. It runs virtualized in different cloud environments such as the Amazon Elastic Compute Cloud, the Google Compute Engine, the HP Cloud, IBM SoftLayer, VMware vCloud and VMware ESX, OpenStack environments, and clouds from other vendors using these technologies.
Licensing and Copyright SIPfoundry distributes the sipXecs source code under the AGPL-3.0-or-later license. Many different corporate and individual contributors have contributed to sipXecs, with Pingtel, Bluesocket, Nortel, and Avaya among the larger corporate contributors, representing 864,791 lines of code. In addition, the sipXecs solution includes many other open-source components. SIPfoundry holds copyright on all derivative work. Contributions to sipXecs are made under a Contributor Agreement, which grants SIPfoundry shared copyright with the original author on all contributed code. Hardware SipXecs supports a wide range of SIP-compatible hardware, such as PSTN gateways, desk phones, softphones and mobile phone applications. A plug-and-play auto-configuration capability is available for phones from, as of software release 14.04, 18 different vendors. SIP reference implementation The SipXecs system represents a reference implementation of the SIP standard. It was used at SIPit interoperability events organized by the SIP Forum to test interoperability of SIP solutions from many different vendors. See also Comparison of VoIP software List of free and open-source software packages List of SIP software References External links SIPfoundry official website Collaborative software Free VoIP software Instant messaging Open-source cloud applications Software using the GNU Affero General Public License
SipXecs
Technology
https://en.wikipedia.org/wiki/Yospace
Yospace is a digital video distribution company. Its technology allows live and on-demand video content to be delivered to connected devices such as smartphones, feature phones, tablets and web browsers, with user-targeted ad insertion. Its clients are primarily broadcasters, multi-service operators, and digital publishers. Yospace is based in Staines outside London, along with many other technology businesses in the so-called M4 corridor. Company history SmartPhone Emulator Yospace was founded in 1999. Its first product, the SmartPhone Emulator, released in April 2000, was a WAP handset emulator of the Nokia 7110. The product was unique at the time, as it was the only solution available to developers that provided an accurate rendering of the handset display. The SmartPhone Emulator was released as a downloadable developer tool and as an applet that could be embedded into a web page. The developer tool allowed simultaneous display of a variety of different handsets, including those from Nokia, Ericsson and Motorola. User-generated video communities In 2005, Yospace launched a user-generated video community called SeeMeTV in partnership with the mobile operator 3 in the UK, which was launched shortly afterwards as LookAtMe! on the O2 network in the UK. BeOnTV followed on T-Mobile in 2007. The services offered a means for subscribers to send in videos and pictures via MMS to a moderated gallery, from which other community members could download and rate the entries. The service charged for video downloads and paid a percentage of the revenue back to the contributors via PayPal. By March 2006, the operator 3 claimed that over £100,000 had been paid back to contributors. The company won numerous awards, including being voted number 1 in Real Business' "Top 50 Companies to Watch in Mobile" and winning the Mobile Entertainment Forum's "Mobile Innovation Award 2006". The services were since combined into a single community under the name EyeVibe.
Yospace sold the service to the Australian company Moko.mobi in 2010. Acquisition by EMAP In February 2007, the company was acquired by EMAP for £8.7 million. EMAP was subsequently acquired by Europe's largest privately held publishing group, H. Bauer Publishing. In March 2009, as part of Bauer Media's strategy review, it sold its interest in Yospace to private investors for £1. During the period under Bauer ownership, Yospace launched yospaceCDS, a SaaS version of the video technology it had been developing for its own consumer services, as a platform for digital publishers, media production houses and software developers alike to take video content out to mobile devices. Acquisition by RTL Group In January 2019, RTL Group announced their planned acquisition of Yospace for up to US$33 million (approx. €29 million), with the transaction scheduled for completion on 1 February 2019. Yospace today The company is managed by Tim Sewell (CEO) and David Springall (CTO and co-founder). Sky, BBC, the Canadian Broadcasting Corporation, GSMA, ITN, Thomson Reuters, Vodafone, CBS Interactive, and Hearst Television have used Yospace technology, with BT Sport most recently adopting it to replace Microsoft Silverlight. References External links Technology companies of the United Kingdom British companies established in 1999 Technology companies established in 1999 Digital electronics
Yospace
Engineering
https://en.wikipedia.org/wiki/HD%2096819
HD 96819 is a star in the equatorial constellation of Hydra. It was formerly known by its designation 10 Crateris, but that name fell into disuse after the constellations were redrawn and the star was no longer in Crater. It is visible to the naked eye as a dim, white-hued star with an apparent visual magnitude of 5.43. Parallax measurements put it at a distance of 182 light-years from the Sun. It is most likely (98.7% chance) a member of the TW Hydrae association. This is a rapidly rotating A-type main-sequence star with about double the mass of the Sun. It emits 20.66 times as much energy as the Sun at an effective temperature of 8,954 K. HD 96819 is currently 31.5% of the way through its life as a main-sequence star; after that it will swell up into a red giant. It is a young star around nine million years of age, and is a suspected variable star. Previously thought to be a single star, in 2022 a companion star was discovered, making HD 96819 a binary star. The companion star has about half the mass of the Sun. References A-type main-sequence stars TW Hydrae association Hydra (constellation) Durchmusterung objects Crateris, 10 096819 054477 4334
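The quoted distance follows from the standard parallax relation d[pc] = 1/p[arcsec]. In the sketch below the parallax value is back-computed from the article's ~182 light-year figure, purely for illustration:

```python
# Distance from parallax: d[parsec] = 1 / p[arcsec], equivalently
# d[pc] = 1000 / p[mas]. The parallax used below is back-computed from
# the ~182 light-year figure in the text, purely for illustration.

LY_PER_PARSEC = 3.2616  # light-years per parsec

def distance_ly(parallax_mas):
    """Convert a parallax in milliarcseconds to a distance in light-years."""
    parsecs = 1000.0 / parallax_mas
    return parsecs * LY_PER_PARSEC

print(round(distance_ly(17.9), 1))  # roughly 182 light-years
```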
HD 96819
Astronomy
https://en.wikipedia.org/wiki/Oracle%20Fusion%20Middleware
Oracle Fusion Middleware (FMW, also known as Fusion Middleware) consists of several software products from Oracle Corporation. FMW spans multiple services, including Java EE and developer tools, integration services, business intelligence, collaboration, and content management. FMW depends on open standards such as BPEL, SOAP, XML and JMS. Oracle Fusion Middleware provides software for the development, deployment, and management of service-oriented architecture (SOA). It includes what Oracle calls "hot-pluggable" architecture, designed to facilitate integration with existing applications and systems from other software vendors such as IBM, Microsoft, and SAP AG. Evolution Many of the products included under the FMW banner do not themselves qualify as middleware products: "Fusion Middleware" essentially represents a re-branding of many Oracle products outside of Oracle's core database and applications-software offerings (compare Oracle Fusion). Oracle acquired many of its FMW products via acquisitions, including products from BEA Systems and Stellent. In order to provide standards-based software to assist with business process automation, HP has incorporated FMW into its "service-oriented architecture (SOA) portfolio". Oracle leveraged the Configurable Network Computing (CNC) technology acquired in its 2005 purchase of PeopleSoft/JD Edwards. Oracle Fusion Applications, based on Oracle Fusion Middleware, were finally released in September 2010. According to Oracle, as of 2013, over 120,000 customers were using Fusion Middleware. This includes over 35 of the world's 50 largest companies and more than 750 of the BusinessWeek Global 1000, with FMW also supported by 7,500 partners. Assessments In January 2008, Oracle WebCenter Content (formerly Universal Content Management) won InfoWorld's "Technology of the Year" award for "Best Enterprise Content Manager", with Oracle SOA Suite winning the award for "Best Enterprise Service Bus".
In 2007, Gartner wrote that "Oracle Fusion Middleware has reached a degree of completeness that puts it on par with, and in some cases ahead of, competing software stacks", and reported revenue from the suite of over US$1 billion during FY06, estimating the revenue from the genuinely middleware aspects at US$740 million.
Oracle Fusion Middleware components
Infrastructure / Application server:
Oracle WebLogic Server (WLS)
Oracle Application Server (IAS)
JRockit – a JVM whose functionality has now been merged into OpenJDK
Tuxedo (software)
Oracle Coherence
Oracle Service Registry – metadata registry
Application-server security
Oracle Web Cache
Integration and process management:
BPEL Process Manager
Oracle Business Activity Monitoring (Oracle BAM) – business activity monitoring (BAM)
Business rules
Business Process Analysis Suite
Oracle BPM – business process management
Oracle Data Integrator (ODI) – an application using the database for set-based data integration
Enterprise connectivity (adapters)
Oracle Enterprise Messaging Service
Oracle Enterprise Service Bus
Oracle Application Server B2B
Oracle Web Services Manager (OWSM) – a security and monitoring product for web services
Application development tools:
Oracle Application Development Framework (ADF)
JDeveloper
Oracle SOA Suite
TopLink – a Java object-relational mapping package
Oracle Forms services
Oracle Developer Suite
Business intelligence:
Oracle Business Intelligence (OBIEE)
Oracle Crystal Ball – enables stochastic forecasting and simulation using spreadsheet models
Oracle Discoverer
Data hubs
Oracle BI Publisher
Oracle Reports services
Systems management:
Oracle Enterprise Manager
Web services manager
User interaction / content management:
Oracle Beehive – collaboration platform
Unified messaging
Workspaces
Oracle WebCenter
Oracle Imaging and Process Management
Web content management
Records management
Enterprise search
Digital asset management
Email archiving
Identity management:
Oracle Identity Management
Enterprise Single sign-on
Oracle Entitlements Server
Oracle Identity Manager
Oracle Access Manager
Oracle Adaptive Access Manager
Oracle Virtual Directory
See also BEA Systems Oracle Fusion Applications Oracle Technology Network (OTN) Stellent References External links Oracle Fusion Middleware overview KMWorld article on Oracle acquisition of Stellent Stellent acquisition page WebCenter Content Users Group - Yahoo! Groups OTN Forum - WebCenter Content Fujitsu.com-42 Real Life Examples of Fusion Middleware with Applications Amazon.com Oracle Fusion Middleware Patterns - Harish Gaur (Author), Markus Zirn (Contributor) Content.FM Content Management On Air. Broadcasting news, product updates and general purpose information about ECM and Oracle Universal Content Management Fusion Middleware Service-oriented architecture-related products
Oracle Fusion Middleware
Technology,Engineering
https://en.wikipedia.org/wiki/Desulfurococcus
In taxonomy, Desulfurococcus is a genus of the Desulfurococcaceae. See also List of Archaea genera References Further reading External links Type strain of Desulfurococcus mobilis at BacDive - the Bacterial Diversity Metadatabase Archaea genera Formatotrophs Thermoproteota
Desulfurococcus
Biology
https://en.wikipedia.org/wiki/Penicillium%20pullum
Penicillium pullum is a species of fungus in the genus Penicillium. References pullum Fungi described in 2002 Fungus species
Penicillium pullum
Biology
https://en.wikipedia.org/wiki/Grifola%20frondosa
Grifola frondosa (also known as hen-of-the-woods, maitake in Japanese, ram's head or sheep's head) is a polypore mushroom that grows at the base of trees, particularly old-growth oaks or maples. It is native to China, Europe, and North America. Description Like the sulphur shelf mushroom, G. frondosa is a perennial fungus that often grows in the same place for several years in succession. G. frondosa grows from an underground tuber-like structure known as a sclerotium, about the size of a potato. The fruiting body, individually up to across but whole clumps up to , rarely , is a cluster consisting of multiple grayish-brown caps which are often curled or spoon-shaped, with wavy margins and broad. The undersurface of each cap bears about one to three pores per millimeter, with the tubes rarely deeper than . The milky-white stipe (stalk) has a branchy structure and becomes tough as the mushroom matures. In Japan, the maitake can grow to more than . Identification This is a very distinctive mushroom; the only similar species is its cousin, the black-staining mushroom, which is similar in taste but rubbery. Edible species which look similar to G. frondosa include Meripilus sumstinei (which stains black), Sparassis spathulata and Laetiporus sulphureus, another edible bracket fungus that is commonly called chicken of the woods or "sulphur shelf". Distribution and habitat It is native to China, Europe (August to October), and North America. It occurs most prolifically in the northeastern regions of the United States, but has been found as far west as Idaho. Uses The species is a choice edible mushroom. Maitake has been consumed for centuries in China and Japan, where it is one of the major culinary mushrooms. The mushroom is used in many Japanese dishes, such as nabemono. The softer caps must be thoroughly cooked.
Research Although under laboratory and preliminary clinical research for many years, particularly for the possible biological effects of its polysaccharides, there are no completed, high-quality clinical studies for the species. See also Medicinal fungi References External links Edible fungi Experimental cancer treatments Fungi in cultivation Fungi described in 1785 Fungi of Europe Fungi of North America Medicinal fungi Meripilaceae Taxa named by James Dickson (botanist) Fungus species Fungi used for fiber dyes
Grifola frondosa
Biology
https://en.wikipedia.org/wiki/Ileocecal%20fold
The ileocecal fold (or ileocaecal fold) is an anatomical structure of the human abdomen formed by a layer of peritoneum between the ileum and cecum. The upper border of the ileocecal fold is fixed to the ileum opposite its mesenteric attachment, and the lower border passes over the ileocecal junction to join the mesentery of the appendix (and sometimes the appendix itself as well). Behind the ileocecal fold is the inferior ileocecal fossa. The ileocecal fold is also called a ligament, veil, or bloodless fold of Treves (after English surgeon Sir Frederick Treves). Despite the latter name, the ileocecal fold in fact often contains a vessel. Additional images References External links Description at UMich.edu Digestive system
Ileocecal fold
Biology
https://en.wikipedia.org/wiki/Vacuum%20tube
A vacuum tube, electron tube, valve (British usage), or tube (North America) is a device that controls electric current flow in a high vacuum between electrodes to which an electric potential difference has been applied. The type known as a thermionic tube or thermionic valve utilizes thermionic emission of electrons from a hot cathode for fundamental electronic functions such as signal amplification and current rectification. Non-thermionic types such as the vacuum phototube, however, achieve electron emission through the photoelectric effect, and are used for such purposes as the detection of light intensities. In both types, the electrons are accelerated from the cathode to the anode by the electric field in the tube. The simplest vacuum tube, the diode (i.e. the Fleming valve), was invented in 1904 by John Ambrose Fleming. It contains only a heated electron-emitting cathode and an anode. Electrons can flow in only one direction through the device, from the cathode to the anode. Adding one or more control grids within the tube allows the current between the cathode and anode to be controlled by the voltage on the grids. These devices became a key component of electronic circuits for the first half of the twentieth century. They were crucial to the development of radio, television, radar, sound recording and reproduction, long-distance telephone networks, and analog and early digital computers. Although some applications had used earlier technologies such as the spark gap transmitter for radio or mechanical computers for computing, it was the invention of the thermionic vacuum tube that made these technologies widespread and practical, and created the discipline of electronics. In the 1940s, the invention of semiconductor devices made it possible to produce solid-state devices, which are smaller, safer, cooler, and more efficient, reliable, durable, and economical than thermionic tubes. Beginning in the mid-1960s, thermionic tubes were being replaced by the transistor.
However, the cathode-ray tube (CRT) remained the basis for television monitors and oscilloscopes until the early 21st century. Thermionic tubes are still employed in some applications, such as the magnetron used in microwave ovens, certain high-frequency amplifiers, and high-end audio amplifiers, which many audio enthusiasts prefer for their "warmer" tube sound, and amplifiers for electric musical instruments such as guitars (for desired effects, such as "overdriving" them to achieve a certain sound or tone). Not all electronic circuit valves or electron tubes are vacuum tubes. Gas-filled tubes are similar devices, but contain a gas, typically at low pressure, and exploit phenomena related to electric discharge in gases, usually without a heater. Classifications One classification of thermionic vacuum tubes is by the number of active electrodes. A device with two active elements is a diode, usually used for rectification. Devices with three elements are triodes used for amplification and switching. Additional electrodes create tetrodes, pentodes, and so forth, which have multiple additional functions made possible by the additional controllable electrodes.
Other classifications are:
by frequency range (audio, radio, VHF, UHF, microwave)
by power rating (small-signal, audio power, high-power radio transmitting)
by cathode/filament type (indirectly heated, directly heated) and warm-up time (including "bright-emitter" or "dull-emitter")
by characteristic curves design (e.g., sharp- versus remote-cutoff in some pentodes)
by application (receiving, transmitting, amplifying or switching, rectification, mixing)
specialized parameters (long life, very low microphonic sensitivity and low-noise audio amplification, rugged or military versions)
specialized functions (light or radiation detectors, video imaging tubes)
tubes used to display information ("magic eye" tubes, vacuum fluorescent displays, CRTs)
Vacuum tubes may have other components and functions than those described above, and are described elsewhere. These include cathode-ray tubes, which create a beam of electrons for display purposes (such as the television picture tube, in electron microscopy, and in electron beam lithography); X-ray tubes; and phototubes and photomultipliers (which rely on electron flow through a vacuum, where electron emission from the cathode depends on energy from photons rather than on thermionic emission). Description A vacuum tube consists of two or more electrodes in a vacuum inside an airtight envelope. Most tubes have glass envelopes with a glass-to-metal seal based on kovar sealable borosilicate glasses, although ceramic and metal envelopes (atop insulating bases) have been used. The electrodes are attached to leads which pass through the envelope via an airtight seal. Most vacuum tubes have a limited lifetime, due to the filament or heater burning out or other failure modes, so they are made as replaceable units; the electrode leads connect to pins on the tube's base which plug into a tube socket. Tubes were a frequent cause of failure in electronic equipment, and consumers were expected to be able to replace tubes themselves.
In addition to the base terminals, some tubes had an electrode terminating at a top cap. The principal reason for doing this was to avoid leakage resistance through the tube base, particularly for the high impedance grid input. The bases were commonly made with phenolic insulation which performs poorly as an insulator in humid conditions. Other reasons for using a top cap include improving stability by reducing grid-to-anode capacitance, improved high-frequency performance, keeping a very high plate voltage away from lower voltages, and accommodating one more electrode than allowed by the base. There was even an occasional design that had two top cap connections. The earliest vacuum tubes evolved from incandescent light bulbs, containing a filament sealed in an evacuated glass envelope. When hot, the filament in a vacuum tube (a cathode) releases electrons into the vacuum, a process called thermionic emission. This can produce a controllable unidirectional current through the vacuum, known as the Edison effect. A second electrode, the anode or plate, will attract those electrons if it is at a more positive voltage. The result is a net flow of electrons from the filament to plate. However, electrons cannot flow in the reverse direction because the plate is not heated and does not emit electrons. The filament has a dual function: it emits electrons when heated; and, together with the plate, it creates an electric field due to the potential difference between them. Such a tube with only two electrodes is termed a diode, and is used for rectification. Since current can only pass in one direction, such a diode (or rectifier) will convert alternating current (AC) to pulsating DC. Diodes can therefore be used in a DC power supply, as a demodulator of amplitude modulated (AM) radio signals and for similar functions. Early tubes used the filament as the cathode; this is called a "directly heated" tube.
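The diode's conversion of AC to pulsating DC can be sketched numerically. The model below treats the valve as an ideal rectifier (a simplification: real tubes also exhibit a forward voltage drop), and the waveform values are illustrative, not taken from the text:

```python
import math

def ideal_diode_rectify(samples):
    """Ideal half-wave rectifier: current flows only when the
    anode is positive with respect to the cathode."""
    return [max(0.0, v) for v in samples]

# One cycle of a 1 V amplitude AC waveform (illustrative values)
t = [i / 100 for i in range(100)]
ac = [math.sin(2 * math.pi * x) for x in t]
dc = ideal_diode_rectify(ac)

# The output never goes negative: pulsating DC
assert min(dc) == 0.0 and max(dc) == 1.0
```

The negative half-cycles are simply blocked, which is why a single diode yields "pulsating" rather than smooth DC; a power supply adds filtering after this stage.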
Most modern tubes are "indirectly heated" by a "heater" element inside a metal tube that is the cathode. The heater is electrically isolated from the surrounding cathode and simply serves to heat the cathode sufficiently for thermionic emission of electrons. The electrical isolation allows all the tubes' heaters to be supplied from a common circuit (which can be AC without inducing hum) while allowing the cathodes in different tubes to operate at different voltages. H. J. Round invented the indirectly heated tube around 1913. The filaments require constant and often considerable power, even when amplifying signals at the microwatt level. Power is also dissipated when the electrons from the cathode slam into the anode (plate) and heat it; this can occur even in an idle amplifier due to the quiescent current necessary to ensure linearity and low distortion. In a power amplifier, this heating can be considerable and can destroy the tube if driven beyond its safe limits. Since the tube contains a vacuum, the anodes in most small and medium power tubes are cooled by radiation through the glass envelope. In some special high power applications, the anode forms part of the vacuum envelope to conduct heat to an external heat sink, usually cooled by a blower, or water-jacket. Klystrons and magnetrons often operate their anodes (called collectors in klystrons) at ground potential to facilitate cooling, particularly with water, without high-voltage insulation. These tubes instead operate with high negative voltages on the filament and cathode. Except for diodes, additional electrodes are positioned between the cathode and the plate (anode). These electrodes are referred to as grids as they are not solid electrodes but sparse elements through which electrons can pass on their way to the plate. The vacuum tube is then known as a triode, tetrode, pentode, etc., depending on the number of grids. A triode has three electrodes: the anode, cathode, and one grid, and so on. 
The first grid, known as the control grid, (and sometimes other grids) transforms the diode into a voltage-controlled device: the voltage applied to the control grid affects the current between the cathode and the plate. When held negative with respect to the cathode, the control grid creates an electric field that repels electrons emitted by the cathode, thus reducing or even stopping the current between cathode and anode. As long as the control grid is negative relative to the cathode, essentially no current flows into it, yet a change of several volts on the control grid is sufficient to make a large difference in the plate current, possibly changing the output by hundreds of volts (depending on the circuit). The solid-state device which operates most like the pentode tube is the junction field-effect transistor (JFET), although vacuum tubes typically operate at over a hundred volts, unlike most semiconductors in most applications. History and development The 19th century saw increasing research with evacuated tubes, such as the Geissler and Crookes tubes. The many scientists and inventors who experimented with such tubes include Thomas Edison, Eugen Goldstein, Nikola Tesla, and Johann Wilhelm Hittorf. With the exception of early light bulbs, such tubes were only used in scientific research or as novelties. The groundwork laid by these scientists and inventors, however, was critical to the development of subsequent vacuum tube technology. Although thermionic emission was originally reported in 1873 by Frederick Guthrie, it was Thomas Edison's apparently independent discovery of the phenomenon in 1883, referred to as the Edison effect, that became well known. Although Edison was aware of the unidirectional property of current flow between the filament and the anode, his interest (and patent) concentrated on the sensitivity of the anode current to the current through the filament (and thus filament temperature). 
It was years later that John Ambrose Fleming applied the rectifying property of the Edison effect to detection of radio signals, as an improvement over the magnetic detector. Amplification by vacuum tube became practical only with Lee de Forest's 1907 invention of the three-terminal "audion" tube, a crude form of what was to become the triode. Being essentially the first electronic amplifier, such tubes were instrumental in long-distance telephony (such as the first coast-to-coast telephone line in the US) and public address systems, and introduced a far superior and versatile technology for use in radio transmitters and receivers. Diodes At the end of the 19th century, radio or wireless technology was in an early stage of development and the Marconi Company was engaged in development and construction of radio communication systems. Guglielmo Marconi appointed English physicist John Ambrose Fleming as scientific advisor in 1899. Fleming had been engaged as scientific advisor to Edison Telephone (1879), as scientific advisor at Edison Electric Light (1882), and was also technical consultant to Edison-Swan. One of Marconi's needs was for improvement of the detector, a device that extracts information from a modulated radio frequency. Marconi had developed a magnetic detector, which was less responsive to natural sources of radio frequency interference than the coherer, but the magnetic detector only provided an audio frequency signal to a telephone receiver. A reliable detector that could drive a printing instrument was needed. As a result of experiments conducted on Edison effect bulbs, Fleming developed a vacuum tube that he termed the oscillation valve because it passed current in only one direction. The cathode was a carbon lamp filament, heated by passing current through it, that produced thermionic emission of electrons. 
Electrons that had been emitted from the cathode were attracted to the plate (anode) when the plate was at a positive voltage with respect to the cathode. Electrons could not pass in the reverse direction because the plate was not heated and not capable of thermionic emission of electrons. Fleming filed a patent for these tubes, assigned to the Marconi company, in the UK in November 1904 and this patent was issued in September 1905. Later known as the Fleming valve, the oscillation valve was developed for the purpose of rectifying radio frequency current as the detector component of radio receiver circuits. While offering no advantage in electrical sensitivity over crystal detectors, the Fleming valve was, particularly in shipboard use, free of the crystal detector's need for delicate adjustment and its susceptibility to being dislodged from adjustment by vibration or bumping. Triodes In the 19th century, telegraph and telephone engineers had recognized the need to extend the distance that signals could be transmitted. In 1906, Robert von Lieben filed for a patent for a cathode-ray tube which used an external magnetic deflection coil and was intended for use as an amplifier in telephony equipment. This von Lieben magnetic deflection tube was not a successful amplifier, however, because of the power used by the deflection coil. Von Lieben would later make refinements to triode vacuum tubes. Lee de Forest is credited with inventing the triode tube in 1907 while experimenting to improve his original (diode) Audion. By placing an additional electrode between the filament (cathode) and plate (anode), he discovered the ability of the resulting device to amplify signals. As the voltage applied to the control grid (or simply "grid") was lowered from the cathode's voltage to somewhat more negative voltages, the amount of current from the filament to the plate would be reduced.
The negative electrostatic field created by the grid in the vicinity of the cathode would inhibit the passage of emitted electrons and reduce the current to the plate. With the voltage of the grid less than that of the cathode, no direct current could pass from the cathode to the grid. Thus a change of voltage applied to the grid, requiring very little power input to the grid, could make a change in the plate current and could lead to a much larger voltage change at the plate; the result was voltage and power amplification. In 1908, de Forest was granted a patent for such a three-electrode version of his original Audion for use as an electronic amplifier in radio communications. This eventually became known as the triode. De Forest's original device was made with conventional vacuum technology. The vacuum was not a "hard vacuum" but rather left a very small amount of residual gas. The physics behind the device's operation was also not settled. The residual gas would cause a blue glow (visible ionization) when the plate voltage was high (above about 60 volts). In 1912, de Forest and John Stone Stone brought the Audion for demonstration to AT&T's engineering department. Dr. Harold D. Arnold of AT&T recognized that the blue glow was caused by ionized gas. Arnold recommended that AT&T purchase the patent, and AT&T followed his recommendation. Arnold developed high-vacuum tubes which were tested in the summer of 1913 on AT&T's long-distance network. The high-vacuum tubes could operate at high plate voltages without a blue glow. Finnish inventor Eric Tigerstedt significantly improved on the original triode design in 1914, while working on his sound-on-film process in Berlin, Germany. Tigerstedt's innovation was to make the electrodes concentric cylinders with the cathode at the centre, thus greatly increasing the collection of emitted electrons at the anode.
Irving Langmuir at the General Electric research laboratory (Schenectady, New York) had improved Wolfgang Gaede's high-vacuum diffusion pump and used it to settle the question of thermionic emission and conduction in a vacuum. Consequently, General Electric started producing hard vacuum triodes (which were branded Pliotrons) in 1915. Langmuir patented the hard vacuum triode, but de Forest and AT&T successfully asserted priority and invalidated the patent. Pliotrons were closely followed by the French type 'TM' and later the English type 'R' which were in widespread use by the allied military by 1916. Historically, vacuum levels in production vacuum tubes typically ranged from 10 μPa down to 10 nPa. The triode and its derivatives (tetrodes and pentodes) are transconductance devices, in which the controlling signal applied to the grid is a voltage, and the resulting amplified signal appearing at the anode is a current. Compare this to the behavior of the bipolar junction transistor, in which the controlling signal is a current and the output is also a current. For vacuum tubes, transconductance or mutual conductance (g_m) is defined as the change in the plate (anode) to cathode current divided by the corresponding change in the grid to cathode voltage, with a constant plate (anode) to cathode voltage. Typical values of g_m for a small-signal vacuum tube are 1 to 10 millisiemens. It is one of the three 'constants' of a vacuum tube, the other two being its gain μ and its plate resistance r_p (also written r_a). The Van der Bijl equation defines their relationship as follows: μ = g_m × r_p. The non-linear operating characteristic of the triode caused early tube audio amplifiers to exhibit harmonic distortion at low volumes. Plotting plate current as a function of applied grid voltage, it was seen that there was a range of grid voltages for which the transfer characteristics were approximately linear.
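The Van der Bijl relation ties the three tube constants together, so any one can be computed from the other two. A quick numerical check, using a g_m within the 1 to 10 millisiemens small-signal range cited above and an assumed, illustrative plate resistance:

```python
# Van der Bijl relation for a triode: mu = g_m * r_p
g_m = 2e-3    # transconductance: 2 millisiemens (within the cited 1-10 mS range)
r_p = 10_000  # plate (anode) resistance in ohms -- assumed illustrative value
mu = g_m * r_p  # amplification factor, dimensionless

# A tube with 2 mS transconductance and 10 kOhm plate resistance has a gain of 20
assert abs(mu - 20.0) < 1e-9
```

Note the units work out: siemens (A/V) times ohms (V/A) leaves μ dimensionless, consistent with its role as a voltage gain factor.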
To use this range, a negative bias voltage had to be applied to the grid to position the DC operating point in the linear region. This was called the idle condition, and the plate current at this point the "idle current". The controlling voltage was superimposed onto the bias voltage, resulting in a linear variation of plate current in response to positive and negative variation of the input voltage around that point. This concept is called grid bias. Many early radio sets had a third battery called the "C battery" (unrelated to the present-day C cell, for which the letter denotes its size and shape). The C battery's positive terminal was connected to the cathode of the tubes (or "ground" in most circuits) and its negative terminal supplied this bias voltage to the grids of the tubes. Later circuits, after tubes were made with heaters isolated from their cathodes, used cathode biasing, avoiding the need for a separate negative power supply. For cathode biasing, a relatively low-value resistor is connected between the cathode and ground. This makes the cathode positive with respect to the grid, which is at ground potential for DC. However C batteries continued to be included in some equipment even when the "A" and "B" batteries had been replaced by power from the AC mains. That was possible because there was essentially no current draw on these batteries; they could thus last for many years (often longer than all the tubes) without requiring replacement. When triodes were first used in radio transmitters and receivers, it was found that tuned amplification stages had a tendency to oscillate unless their gain was very limited. This was due to the parasitic capacitance between the plate (the amplifier's output) and the control grid (the amplifier's input), known as the Miller capacitance. Eventually the technique of neutralization was developed whereby the RF transformer connected to the plate (anode) would include an additional winding in the opposite phase.
This winding would be connected back to the grid through a small capacitor, and when properly adjusted would cancel the Miller capacitance. This technique was employed and led to the success of the Neutrodyne radio during the 1920s. However, neutralization required careful adjustment and proved unsatisfactory when used over a wide range of frequencies. Tetrodes and pentodes To combat the stability problems of the triode as a radio frequency amplifier due to grid-to-plate capacitance, the physicist Walter H. Schottky invented the tetrode or screen grid tube in 1919. He showed that the addition of an electrostatic shield between the control grid and the plate could solve the problem. This design was refined by Hull and Williams. The added grid became known as the screen grid or shield grid. The screen grid is operated at a positive voltage significantly less than the plate voltage and it is bypassed to ground with a capacitor of low impedance at the frequencies to be amplified. This arrangement substantially decouples the plate and the control grid, eliminating the need for neutralizing circuitry at medium wave broadcast frequencies. The screen grid also largely reduces the influence of the plate voltage on the space charge near the cathode, permitting the tetrode to produce greater voltage gain than the triode in amplifier circuits. While the amplification factors of typical triodes commonly range from below ten to around 100, tetrode amplification factors of 500 are common. Consequently, higher voltage gains from a single tube amplification stage became possible, reducing the number of tubes required. Screen grid tubes were marketed by late 1927. However, the useful region of operation of the screen grid tube as an amplifier was limited to plate voltages greater than the screen grid voltage, due to secondary emission from the plate. In any tube, electrons strike the plate with sufficient energy to cause the emission of electrons from its surface. 
In a triode this secondary emission of electrons is not important since they are simply re-captured by the plate. But in a tetrode they can be captured by the screen grid since it is also at a positive voltage, robbing them from the plate current and reducing the amplification of the tube. Since secondary electrons can outnumber the primary electrons over a certain range of plate voltages, the plate current can decrease with increasing plate voltage. This is the dynatron region or tetrode kink and is an example of negative resistance which can itself cause instability. Another undesirable consequence of secondary emission is that screen current is increased, which may cause the screen to exceed its power rating. The otherwise undesirable negative resistance region of the plate characteristic was exploited with the dynatron oscillator circuit to produce a simple oscillator only requiring connection of the plate to a resonant LC circuit to oscillate. The dynatron oscillator operated on the same principle of negative resistance as the tunnel diode oscillator many years later. The dynatron region of the screen grid tube was eliminated by adding a grid between the screen grid and the plate to create the pentode. The suppressor grid of the pentode was usually connected to the cathode and its negative voltage relative to the anode repelled secondary electrons so that they would be collected by the anode instead of the screen grid. The term pentode means the tube has five electrodes. The pentode was invented in 1926 by Bernard D. H. Tellegen and became generally favored over the simple tetrode. Pentodes are made in two classes: those with the suppressor grid wired internally to the cathode (e.g. EL84/6BQ5) and those with the suppressor grid wired to a separate pin for user access (e.g. 803, 837). An alternative solution for power applications is the beam tetrode or beam power tube, discussed below. 
Multifunction and multisection tubes Superheterodyne receivers require a local oscillator and mixer, combined in the function of a single pentagrid converter tube. Various alternatives such as using a combination of a triode with a hexode and even an octode have been used for this purpose. The additional grids include control grids (at a low potential) and screen grids (at a high voltage). Many designs use such a screen grid as an additional anode to provide feedback for the oscillator function, whose current adds to that of the incoming radio frequency signal. The pentagrid converter thus became widely used in AM receivers, including the miniature tube version of the "All American Five". Octodes, such as the 7A8, were rarely used in the United States, but much more common in Europe, particularly in battery operated radios where the lower power consumption was an advantage. To further reduce the cost and complexity of radio equipment, two separate structures (triode and pentode for instance) can be combined in the bulb of a single multisection tube. An early example is the Loewe 3NF. This 1920s device has three triodes in a single glass envelope together with all the fixed capacitors and resistors required to make a complete radio receiver. As the Loewe set had only one tube socket, it was able to substantially undercut the competition, since, in Germany, state tax was levied by the number of sockets. However, reliability was compromised, and production costs for the tube were much greater. In a sense, these were akin to integrated circuits. In the United States, Cleartron briefly produced the "Multivalve" triple triode for use in the Emerson Baby Grand receiver. This Emerson set also has a single tube socket, but because it uses a four-pin base, the additional element connections are made on a "mezzanine" platform at the top of the tube base. By 1940 multisection tubes had become commonplace. 
There were constraints, however, due to patents and other licensing considerations (see British Valve Association). Constraints due to the number of external pins (leads) often forced the functions to share some of those external connections such as their cathode connections (in addition to the heater connection). The RCA Type 55 is a double diode triode used as a detector, automatic gain control rectifier and audio preamplifier in early AC powered radios. These sets often include the 53 Dual Triode Audio Output. Another early type of multi-section tube, the 6SN7, is a "dual triode" which performs the functions of two triode tubes while taking up half as much space and costing less. The 12AX7 is a dual "high mu" (high voltage gain) triode in a miniature enclosure, and became widely used in audio signal amplifiers, instruments, and guitar amplifiers. The introduction of the miniature tube base (see below) which can have 9 pins, more than previously available, allowed other multi-section tubes to be introduced, such as the 6GH8/ECF82 triode-pentode, quite popular in television receivers. The desire to include even more functions in one envelope resulted in the General Electric Compactron which has 12 pins. A typical example, the 6AG11, contains two triodes and two diodes. Some otherwise conventional tubes do not fall into standard categories; the 6AR8, 6JH8 and 6ME8 have several common grids, followed by a pair of beam deflection electrodes which deflected the current towards either of two anodes. They were sometimes known as the 'sheet beam' tubes and used in some color TV sets for color demodulation. The similar 7360 was popular as a balanced SSB (de)modulator. 
Beam power tubes A beam tetrode (or "beam power tube") forms the electron stream from the cathode into multiple partially collimated beams to produce a low potential space charge region between the anode and screen grid to return anode secondary emission electrons to the anode when the anode potential is less than that of the screen grid. Formation of beams also reduces screen grid current. In some cylindrically symmetrical beam power tubes, the cathode is formed of narrow strips of emitting material that are aligned with the apertures of the control grid, reducing control grid current. This design helps to overcome some of the practical barriers to designing high-power, high-efficiency power tubes. Manufacturer's data sheets often use the terms beam pentode or beam power pentode instead of beam power tube, and use a pentode graphic symbol instead of a graphic symbol showing beam forming plates. Beam power tubes offer the advantages of a longer load line, less screen current, higher transconductance and lower third harmonic distortion than comparable power pentodes. Beam power tubes can be connected as triodes for improved audio tonal quality but in triode mode deliver significantly reduced power output. Gas-filled tubes Gas-filled tubes such as discharge tubes and cold cathode tubes are not hard vacuum tubes, though they are always filled with gas at less than sea-level atmospheric pressure. Types such as the voltage-regulator tube and thyratron resemble hard vacuum tubes and fit in sockets designed for vacuum tubes. Their distinctive orange, red, or purple glow during operation indicates the presence of gas; electrons flowing in a vacuum do not produce light within that region. These types may still be referred to as "electron tubes" as they do perform electronic functions. High-power rectifiers use mercury vapor to achieve a lower forward voltage drop than high-vacuum tubes. Miniature tubes Early tubes used a metal or glass envelope atop an insulating Bakelite base.
In 1938 a technique was developed to use an all-glass construction with the pins fused in the glass base of the envelope. This allowed the design of a much smaller tube profile, known as the miniature tube, having seven or nine pins. Making tubes smaller reduced the voltage where they could safely operate, and also reduced the power dissipation of the filament. Miniature tubes became predominant in consumer applications such as radio receivers and hi-fi amplifiers. However, the larger older styles continued to be used especially as higher-power rectifiers, in higher-power audio output stages and as transmitting tubes. Sub-miniature tubes Sub-miniature tubes with a size roughly that of half a cigarette were used in consumer applications as hearing-aid amplifiers. These tubes did not have pins plugging into a socket but were soldered in place. The "acorn tube" (named due to its shape) was also very small, as was the metal-cased RCA nuvistor from 1959, about the size of a thimble. The nuvistor was developed to compete with the early transistors and operated at higher frequencies than those early transistors could. The small size supported especially high-frequency operation; nuvistors were used in aircraft radio transceivers, UHF television tuners, and some HiFi FM radio tuners (Sansui 500A) until replaced by high-frequency capable transistors. Improvements in construction and performance The earliest vacuum tubes strongly resembled incandescent light bulbs and were made by lamp manufacturers, who had the equipment needed to manufacture glass envelopes and the vacuum pumps required to evacuate the enclosures. De Forest used Heinrich Geissler's mercury displacement pump, which left behind a partial vacuum. The development of the diffusion pump in 1915 and improvement by Irving Langmuir led to the development of high-vacuum tubes.
After World War I, specialized manufacturers using more economical construction methods were set up to fill the growing demand for broadcast receivers. Bare tungsten filaments operated at a temperature of around 2200 °C. The development of oxide-coated filaments in the mid-1920s reduced filament operating temperature to a dull red heat (around 700 °C), which in turn reduced thermal distortion of the tube structure and allowed closer spacing of tube elements. This in turn improved tube gain, since the gain of a triode is inversely proportional to the spacing between grid and cathode. Bare tungsten filaments remain in use in small transmitting tubes but are brittle and tend to fracture if handled roughly, e.g. in the postal services. These tubes are best suited to stationary equipment where impact and vibration are not present. Indirectly heated cathodes The desire to power electronic equipment using AC mains power faced a difficulty with respect to the powering of the tubes' filaments, as these were also the cathode of each tube. Powering the filaments directly from a power transformer introduced mains-frequency (50 or 60 Hz) hum into audio stages. The invention of the "equipotential cathode" reduced this problem, with the filaments being powered by a balanced AC power transformer winding having a grounded center tap. A superior solution, and one which allowed each cathode to "float" at a different voltage, was that of the indirectly heated cathode: a cylinder of oxide-coated nickel acted as an electron-emitting cathode and was electrically isolated from the filament inside it. Indirectly heated cathodes enable the cathode circuit to be separated from the heater circuit. The filament, no longer electrically connected to the tube's electrodes, became simply known as a "heater", and could as well be powered by AC without any introduction of hum. In the 1930s, indirectly heated cathode tubes became widespread in equipment using AC power.
Directly heated cathode tubes continued to be widely used in battery-powered equipment as their filaments required considerably less power than the heaters required with indirectly heated cathodes. Tubes designed for high gain audio applications may have twisted heater wires to cancel out stray electric fields, fields that could induce objectionable hum into the program material. Heaters may be energized with either alternating current (AC) or direct current (DC). DC is often used where low hum is required. Use in electronic computers Vacuum tubes used as switches made electronic computing possible for the first time, but the cost and relatively short mean time to failure of tubes were limiting factors. "The common wisdom was that valves, which, like light bulbs, contained a hot glowing filament, could never be used satisfactorily in large numbers, for they were unreliable, and in a large installation too many would fail in too short a time". Tommy Flowers, who later designed Colossus, "discovered that, so long as valves were switched on and left on, they could operate reliably for very long periods, especially if their 'heaters' were run on a reduced current". In 1934 Flowers built a successful experimental installation using over 3,000 tubes in small independent modules; when a tube failed, it was possible to switch off one module and keep the others going, thereby reducing the risk of another tube failure being caused; this installation was accepted by the Post Office (who operated telephone exchanges). Flowers was also a pioneer of using tubes as very fast (compared to electromechanical devices) electronic switches. Later work confirmed that tube unreliability was not as serious an issue as generally believed; the 1946 ENIAC, with over 17,000 tubes, had a tube failure (which took 15 minutes to locate) on average every two days.
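The ENIAC figure above implies a remarkably long life for each individual tube. If 17,000 tubes collectively produce one failure every two days, and failures are assumed independent and evenly spread across the tubes (an idealization), the implied mean time between failures of any single tube is the system figure multiplied by the tube count:

```python
tubes = 17_000
system_mtbf_days = 2  # one tube failure every two days, per the text

# Under the independence assumption, per-tube MTBF scales with the tube count
per_tube_mtbf_days = tubes * system_mtbf_days
per_tube_mtbf_years = per_tube_mtbf_days / 365

assert per_tube_mtbf_days == 34_000
assert round(per_tube_mtbf_years) == 93  # roughly 93 years per tube
```

This back-of-envelope figure illustrates why Flowers' observation held: tubes left running continuously were individually very reliable, and the apparent unreliability of large installations was a consequence of sheer numbers.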
The quality of the tubes was a factor, and the diversion of skilled people during the Second World War lowered the general quality of tubes. During the war Colossus was instrumental in breaking German codes. After the war, development continued with tube-based computers, including the military computers ENIAC and Whirlwind, the Ferranti Mark 1 (one of the first commercially available electronic computers), and UNIVAC I, also available commercially. Advances using subminiature tubes included the Jaincomp series of machines produced by the Jacobs Instrument Company of Bethesda, Maryland. Models such as its Jaincomp-B employed just 300 such tubes in a desktop-sized unit that offered performance to rival many of the then room-sized machines.

Colossus

Colossus I and its successor Colossus II (Mk2) were designed by Tommy Flowers and built by the General Post Office for Bletchley Park (BP) during World War II to substantially speed up the task of breaking the German high-level Lorenz encryption. Colossus replaced an earlier machine based on relay and switch logic (the Heath Robinson). Colossus was able to break in a matter of hours messages that had previously taken several weeks; it was also much more reliable. Colossus was the first use of vacuum tubes working in concert on such a large scale for a single machine. Tommy Flowers (who conceived Colossus) wrote that most radio equipment was "carted round, dumped around, switched on and off and generally mishandled. But I'd introduced valves into telephone equipment in large numbers before the war and I knew that if you never moved them and never switched them on and off they would go on forever". Colossus was "that reliable, extremely reliable". On its first day at BP a problem with a known answer was set. To the amazement of BP (Station X), after running for four hours with each run taking half an hour, the answer was the same every time (the Robinson did not always give the same answer).
Colossus I used about 1600 valves, and Colossus II about 2400 valves (some sources say 1500 (Mk I) and 2500 (Mk II); the Robinson used about a hundred valves; some sources say fewer).

Whirlwind and "special-quality" tubes

To meet the reliability requirements of the 1951 US digital computer Whirlwind, "special-quality" tubes with extended life, and a long-lasting cathode in particular, were produced. The problem of short lifetime was traced largely to evaporation of silicon, used in the tungsten alloy to make the heater wire easier to draw. The silicon forms barium orthosilicate at the interface between the nickel sleeve and the cathode barium oxide coating. This "cathode interface" is a high-resistance layer (with some parallel capacitance) which greatly reduces the cathode current when the tube is switched into conduction mode. Elimination of silicon from the heater wire alloy (and more frequent replacement of the wire-drawing dies) allowed the production of tubes that were reliable enough for the Whirlwind project. High-purity nickel tubing and cathode coatings free of materials such as silicates and aluminum that can reduce emissivity also contribute to long cathode life. The first such "computer tube" was Sylvania's 7AK7 pentode of 1948 (these replaced the 7AD7, which was supposed to be better quality than the standard 6AG7 but proved too unreliable). Computers were the first tube devices to run tubes at cutoff (enough negative grid voltage to make them cease conduction) for quite extended periods of time. Running in cutoff with the heater on accelerates cathode poisoning, and the output current of the tube will be greatly reduced when it is switched into conduction mode. The 7AK7 tubes improved the cathode poisoning problem, but that alone was insufficient to achieve the required reliability.
Further measures included switching off the heater voltage when the tubes were not required to conduct for extended periods, turning the heater voltage on and off with a slow ramp to avoid thermal shock on the heater element, and stress testing the tubes during offline maintenance periods to bring on early failure of weak units. Another commonly used computer tube was the 5965, also labeled as E180CC. This, according to a memorandum from MIT for Project Whirlwind, was developed for IBM by General Electric, primarily for use in the IBM 701 calculators, and was designated as a general-purpose triode tube. The tubes developed for Whirlwind were later used in the giant SAGE air-defense computer system. By the late 1950s, it was routine for special-quality small-signal tubes to last for hundreds of thousands of hours if operated conservatively. This increased reliability also made mid-cable amplifiers in submarine cables possible.

Heat generation and cooling

A considerable amount of heat is produced when tubes operate, from both the filament (heater) and the stream of electrons bombarding the plate. In power amplifiers, this source of heat is greater than cathode heating. A few types of tube permit operation with the anodes at a dull red heat; in other types, red heat indicates severe overload. The requirements for heat removal can significantly change the appearance of high-power vacuum tubes. High-power audio amplifiers and rectifiers required larger envelopes to dissipate heat. Transmitting tubes could be much larger still. Heat escapes the device by black-body radiation from the anode (plate) as infrared radiation, and by convection of air over the tube envelope. Convection is not possible inside most tubes, since the anode is surrounded by vacuum. Tubes which generate relatively little heat, such as the 1.4-volt filament directly heated tubes designed for use in battery-powered equipment, often have shiny metal anodes; the 1T4, 1R5 and 1A7 are examples.
Gas-filled tubes such as thyratrons may also use a shiny metal anode, since the gas present inside the tube allows for heat convection from the anode to the glass enclosure. The anode is often treated to make its surface emit more infrared energy. High-power amplifier tubes are designed with external anodes that can be cooled by convection, forced air or circulating water. The water-cooled 80 kg, 1.25 MW 8974 is among the largest commercial tubes available today. In a water-cooled tube, the anode voltage appears directly on the cooling water surface, thus requiring the water to be an electrical insulator to prevent high-voltage leakage through the cooling water to the radiator system. Water as usually supplied has ions that conduct electricity; deionized water, a good insulator, is required. Such systems usually have a built-in water-conductance monitor which will shut down the high-tension supply if the conductance becomes too high. The screen grid may also generate considerable heat. Limits to screen grid dissipation, in addition to plate dissipation, are listed for power devices. If these are exceeded then tube failure is likely.

Tube packages

Most modern tubes have glass envelopes, but metal, fused quartz (silica) and ceramic have also been used. A first version of the 6L6 used a metal envelope sealed with glass beads, while a glass disk fused to the metal was used in later versions. Metal and ceramic are used almost exclusively for power tubes above 2 kW dissipation. The nuvistor was a modern receiving tube using a very small metal and ceramic package. The internal elements of tubes have always been connected to external circuitry via pins at their base which plug into a socket. Subminiature tubes were produced using wire leads rather than sockets; however, these were restricted to rather specialized applications.
In addition to the connections at the base of the tube, many early triodes connected the grid using a metal cap at the top of the tube; this reduces stray capacitance between the grid and the plate leads. Tube caps were also used for the plate (anode) connection, particularly in transmitting tubes and tubes using a very high plate voltage. High-power tubes such as transmitting tubes have packages designed more to enhance heat transfer. In some tubes, the metal envelope is also the anode. The 4CX1000A is an external-anode tube of this sort. Air is blown through an array of fins attached to the anode, thus cooling it. Power tubes using this cooling scheme are available up to 150 kW dissipation. Above that level, water or water-vapor cooling is used. The highest-power tube currently available is an Eimac forced water-cooled power tetrode capable of dissipating 2.5 megawatts. By comparison, the largest power transistor can only dissipate about 1 kilowatt.

Names

The generic name "[thermionic] valve" used in the UK derives from the unidirectional current flow allowed by the earliest device, the thermionic diode emitting electrons from a heated filament, by analogy with a non-return valve in a water pipe. The US names "vacuum tube", "electron tube", and "thermionic tube" all simply describe a tubular envelope which has been evacuated ("vacuum"), has a heater and controls electron flow. In many cases, manufacturers and the military gave tubes designations that said nothing about their purpose (e.g., 1614). In the early days some manufacturers used proprietary names which might convey some information, but only about their products; the KT66 and KT88 were "kinkless tetrodes". Later, consumer tubes were given names that conveyed some information, with the same name often used generically by several manufacturers. In the US, Radio Electronics Television Manufacturers' Association (RETMA) designations comprise a number, followed by one or two letters, and a number.
The first number is the (rounded) heater voltage; the letters designate a particular tube but say nothing about its structure; and the final number is the total number of electrodes (without distinguishing between, say, a tube with many electrodes and a tube with two sets of electrodes in a single envelope, such as a double triode). For example, the 12AX7 is a double triode (two sets of three electrodes plus heater) with a 12.6 V heater (which, as it happens, can also be connected to run from 6.3 V). The "AX" designates this tube's characteristics. Similar, but not identical, tubes are the 12AD7, 12AE7...12AT7, 12AU7, 12AV7, 12AW7 (rare), 12AY7, and the 12AZ7. A system widely used in Europe, known as the Mullard–Philips tube designation and also extended to transistors, uses a letter, followed by one or more further letters, and a number. The type designator specifies the heater voltage or current (one letter), the functions of all sections of the tube (one letter per section), the socket type (first digit), and the particular tube (remaining digits). For example, the ECC83 (equivalent to the 12AX7) is a 6.3 V (E) double triode (CC) with a miniature base (8). In this system special-quality tubes (e.g., for long-life computer use) are indicated by moving the number to immediately after the first letter: the E83CC is a special-quality equivalent of the ECC83, the E55L a power pentode with no consumer equivalent.

Special-purpose tubes

Some special-purpose tubes are constructed with particular gases in the envelope. For instance, voltage-regulator tubes contain various inert gases such as argon, helium or neon, which will ionize at predictable voltages. The thyratron is a special-purpose tube filled with low-pressure gas or mercury vapor. Like vacuum tubes, it contains a hot cathode and an anode, but also a control electrode which behaves somewhat like the grid of a triode.
When the control electrode starts conduction, the gas ionizes, after which the control electrode can no longer stop the current; the tube "latches" into conduction. Removing anode (plate) voltage lets the gas de-ionize, restoring its non-conductive state. Some thyratrons can carry large currents for their physical size. One example is the miniature type 2D21, often seen in 1950s jukeboxes as control switches for relays. A cold-cathode version of the thyratron, which uses a pool of mercury for its cathode, is called an ignitron; some can switch thousands of amperes. Thyratrons containing hydrogen have a very consistent time delay between their turn-on pulse and full conduction; they behave much like modern silicon-controlled rectifiers, also called thyristors due to their functional similarity to thyratrons. Hydrogen thyratrons have long been used in radar transmitters. A specialized tube is the krytron, which is used for rapid high-voltage switching. Krytrons are used to initiate the detonations used to set off a nuclear weapon; krytrons are heavily controlled at an international level. X-ray tubes are used in medical imaging among other uses. X-ray tubes used for continuous-duty operation in fluoroscopy and CT imaging equipment may use a focused cathode and a rotating anode to dissipate the large amounts of heat thereby generated. These are housed in an oil-filled aluminum housing to provide cooling. The photomultiplier tube is an extremely sensitive detector of light, which uses the photoelectric effect and secondary emission, rather than thermionic emission, to generate and amplify electrical signals. Nuclear medicine imaging equipment and liquid scintillation counters use photomultiplier tube arrays to detect low-intensity scintillation due to ionizing radiation. The Ignatron tube was used in resistance welding equipment in the early 1970s. The Ignatron had a cathode, anode and an igniter. 
The tube base was filled with mercury and the tube was used as a very high current switch. A large current potential was placed between the anode and cathode of the tube but was only permitted to conduct when the igniter in contact with the mercury had enough current to vaporize the mercury and complete the circuit. Because this was used in resistance welding there were two Ignatrons for the two phases of an AC circuit. Because of the mercury at the bottom of the tube they were extremely difficult to ship. These tubes were eventually replaced by SCRs (silicon controlled rectifiers).

Powering the tube

Batteries

Batteries provided the voltages required by tubes in early radio sets. Three different voltages were generally required, using three different batteries designated as the A, B, and C battery. The "A" battery or LT (low-tension) battery provided the filament voltage. Tube heaters were designed for single, double or triple-cell lead-acid batteries, giving nominal heater voltages of 2 V, 4 V or 6 V. In portable radios, dry batteries were sometimes used with 1.5 or 1 V heaters. Reducing filament consumption improved the life span of batteries. By 1955, towards the end of the tube era, tubes using only 50 mA down to as little as 10 mA for the heaters had been developed. The high voltage applied to the anode (plate) was provided by the "B" battery or the HT (high-tension) supply or battery. These were generally of dry cell construction and typically came in 22.5-, 45-, 67.5-, 90-, 120- or 135-volt versions. After the use of B-batteries was phased out and rectified line power was employed to produce the high voltage needed by tubes' plates, the term "B+" persisted in the US when referring to the high voltage source. Most of the rest of the English-speaking world refers to this supply as just HT (high tension). Early sets used a grid bias battery or "C" battery which was connected to provide a negative voltage.
Since no current flows through a tube's grid connection, these batteries had no current drain and lasted the longest, usually limited by their own shelf life. The supply from the grid bias battery was rarely, if ever, disconnected when the radio was otherwise switched off. Even after AC power supplies became commonplace, some radio sets continued to be built with C batteries, as they would almost never need replacing. However, more modern circuits were designed using cathode biasing, eliminating the need for a third power supply voltage; this became practical with tubes using indirect heating of the cathode, along with the development of resistor/capacitor coupling, which replaced earlier interstage transformers. The "C battery" for bias is a designation having no relation to the "C cell" battery size.

AC power

Battery replacement was a major operating cost for early radio receiver users. The development of the battery eliminator and, in 1925, batteryless receivers operated by household power reduced operating costs and contributed to the growing popularity of radio. A power supply using a transformer with several windings, one or more rectifiers (which may themselves be vacuum tubes), and large filter capacitors provided the required direct current voltages from the alternating current source. As a cost reduction measure, especially in high-volume consumer receivers, all the tube heaters could be connected in series across the AC supply, using heaters requiring the same current and with a similar warm-up time. In one such design, a tap on the tube heater string supplied the 6 volts needed for the dial light. By deriving the high voltage from a half-wave rectifier directly connected to the AC mains, the heavy and costly power transformer was eliminated. This also allowed such receivers to operate on direct current, a so-called AC/DC receiver design.
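The series-heater scheme described above works because the heater voltages of the chosen tubes sum to roughly the mains voltage, while every tube in the string is rated for the same heater current. A minimal check (Python) using the heater ratings of one commonly cited five-tube complement; the exact line-up varied by manufacturer, so treat these values as illustrative:

```python
# Heater voltages (volts) of a commonly cited series-string complement.
# All five tubes use 150 mA heaters, so they can share one series loop
# directly across the AC line with no power transformer.
heaters = {
    "12BE6 (converter)":    12.6,
    "12BA6 (IF amplifier)": 12.6,
    "12AV6 (detector/AF)":  12.6,
    "50C5 (audio output)":  50.0,
    "35W4 (rectifier)":     35.0,
}
total = sum(heaters.values())
print(f"Series string total: {total:.1f} V")  # close to a 110-120 V line
```

Because the string drops approximately the full line voltage on its own, no dropping resistor or transformer winding is needed for the heaters.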
Many different US consumer AM radio manufacturers of the era used a virtually identical circuit, given the nickname All American Five. Where the mains voltage was in the 100–120 V range, this limited voltage proved suitable only for low-power receivers. Television receivers either required a transformer or could use a voltage-doubling circuit. Where 230 V nominal mains voltage was used, television receivers too could dispense with a power transformer. Transformer-less power supplies required safety precautions in their design to limit the shock hazard to users, such as electrically insulated cabinets and an interlock tying the power cord to the cabinet back, so the line cord was necessarily disconnected if the user or service person opened the cabinet. A cheater cord was a power cord ending in the special socket used by the safety interlock; servicers could then power the device with the hazardous voltages exposed. To avoid the warm-up delay, "instant on" television receivers passed a small heating current through their tubes even when the set was nominally off. At switch-on, full heating current was provided and the set would play almost immediately.

Reliability

One reliability problem of tubes with oxide cathodes is the possibility that the cathode may slowly become "poisoned" by gas molecules from other elements in the tube, which reduce its ability to emit electrons. Trapped gases or slow gas leaks can also damage the cathode or cause plate (anode) current runaway due to ionization of free gas molecules. Vacuum hardness and proper selection of construction materials are the major influences on tube lifetime. Depending on the material, temperature and construction, the surface material of the cathode may also diffuse onto other elements. The resistive heaters that heat the cathodes may break in a manner similar to incandescent lamp filaments, but rarely do, since they operate at much lower temperatures than lamps.
The heater's failure mode is typically a stress-related fracture of the tungsten wire or at a weld point and generally occurs after accruing many thermal (power on-off) cycles. Tungsten wire has a very low resistance when at room temperature. A negative temperature coefficient device, such as a thermistor, may be incorporated in the equipment's heater supply or a ramp-up circuit may be employed to allow the heater or filaments to reach operating temperature more gradually than if powered-up in a step-function. Low-cost radios had tubes with heaters connected in series, with a total voltage equal to that of the line (mains). Some receivers made before World War II had series-string heaters with total voltage less than that of the mains. Some had a resistance wire running the length of the power cord to drop the voltage to the tubes. Others had series resistors made like regular tubes; they were called ballast tubes. Following World War II, tubes intended to be used in series heater strings were redesigned to all have the same ("controlled") warm-up time. Earlier designs had quite-different thermal time constants. The audio output stage, for instance, had a larger cathode and warmed up more slowly than lower-powered tubes. The result was that heaters that warmed up faster also temporarily had higher resistance, because of their positive temperature coefficient. This disproportionate resistance caused them to temporarily operate with heater voltages well above their ratings, and shortened their life. Another important reliability problem is caused by air leakage into the tube. Usually oxygen in the air reacts chemically with the hot filament or cathode, quickly ruining it. Designers developed tube designs that sealed reliably. This was why most tubes were constructed of glass. Metal alloys (such as Cunife and Fernico) and glasses had been developed for light bulbs that expanded and contracted in similar amounts, as temperature changed. 
These made it easy to construct an insulating envelope of glass, while passing connection wires through the glass to the electrodes. When a vacuum tube is overloaded or operated past its design dissipation, its anode (plate) may glow red. In consumer equipment, a glowing plate is universally a sign of an overloaded tube. However, some large transmitting tubes are designed to operate with their anodes at red, orange, or in rare cases, white heat. "Special quality" versions of standard tubes were often made, designed for improved performance in some respect, such as a longer-life cathode, low-noise construction, mechanical ruggedness via ruggedized filaments, low microphony, suitability for applications where the tube will spend much of its time cut off, and so on. The only way to know the particular features of a special quality part is by reading the datasheet. Names may reflect the standard name (12AU7 becoming 12AU7A, its equivalent ECC82 becoming E82CC, etc.), or be absolutely anything (standard and special-quality equivalents of the same tube include 12AU7, ECC82, B329, CV491, E2163, E812CC, M8136, CV4003, 6067, VX7058, 5814A and 12AU7A). The longest recorded valve life was earned by a Mazda AC/P pentode valve (serial No. 4418) in operation at the BBC's main Northern Ireland transmitter at Lisnagarvey. The valve was in service from 1935 until 1961 and had a recorded life of 232,592 hours. The BBC maintained meticulous records of their valves' lives, with periodic returns to their central valve stores.

Vacuum

A vacuum tube needs an extremely high vacuum (or hard vacuum, from X-ray terminology) to avoid the consequences of generating positive ions within the tube. Residual gas atoms ionize when struck by an electron and can adversely affect the cathode, reducing emission. Larger amounts of residual gas can create a visible glow discharge between the tube electrodes and cause overheating of the electrodes, producing more gas, damaging the tube and possibly other components due to excess current.
To avoid these effects, the residual pressure within the tube must be low enough that the mean free path of an electron is much longer than the size of the tube (so an electron is unlikely to strike a residual atom and very few ionized atoms will be present). Commercial vacuum tubes are evacuated at manufacture to about . To prevent gases from compromising the tube's vacuum, modern tubes are constructed with getters, which are usually metals that oxidize quickly, barium being the most common. For glass tubes, while the tube envelope is being evacuated, the internal parts except the getter are heated by RF induction heating to evolve any remaining gas from the metal parts. The tube is then sealed and the getter trough or pan, for flash getters, is heated to a high temperature, again by radio frequency induction heating, which causes the getter material to vaporize and react with any residual gas. The vapor is deposited on the inside of the glass envelope, leaving a silver-colored metallic patch that continues to absorb small amounts of gas that may leak into the tube during its working life. Great care is taken with the valve design to ensure this material is not deposited on any of the working electrodes. If a tube develops a serious leak in the envelope, this deposit turns a white color as it reacts with atmospheric oxygen. Large transmitting and specialized tubes often use more exotic getter materials, such as zirconium. Early gettered tubes used phosphorus-based getters, and these tubes are easily identifiable, as the phosphorus leaves a characteristic orange or rainbow deposit on the glass. The use of phosphorus was short-lived and was quickly replaced by the superior barium getters. Unlike the barium getters, the phosphorus did not absorb any further gases once it had fired. Getters act by chemically combining with residual or infiltrating gases, but are unable to counteract (non-reactive) inert gases. 
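The mean-free-path criterion described above can be checked numerically with the standard kinetic-theory estimate λ = k_B·T / (√2 · π · d² · p). The molecular diameter and residual pressure used below are illustrative assumptions, not figures from the source:

```python
import math

def mean_free_path_m(pressure_pa: float, temp_k: float = 300.0,
                     molecule_diameter_m: float = 3.7e-10) -> float:
    """Kinetic-theory mean free path: lambda = kT / (sqrt(2) * pi * d^2 * p).
    Diameter default is roughly that of a small gas molecule (assumed)."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return k_B * temp_k / (math.sqrt(2) * math.pi
                           * molecule_diameter_m ** 2 * pressure_pa)

# ~1e-6 torr (a typical "hard vacuum" order of magnitude) is about 1.33e-4 Pa
lam = mean_free_path_m(1.33e-4)
print(f"Mean free path ~ {lam:.0f} m")  # tens of meters, far exceeding any envelope
```

At such pressures the computed mean free path comes out to tens of meters, vastly longer than any tube envelope, so an electron is very unlikely to strike a residual gas atom in transit, which is exactly the condition the text describes.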
A known problem, mostly affecting valves with large envelopes such as cathode-ray tubes and camera tubes such as iconoscopes, orthicons, and image orthicons, comes from helium infiltration. The effect appears as impaired or absent functioning, and as a diffuse glow along the electron stream inside the tube. This effect cannot be rectified (short of re-evacuation and resealing), and is responsible for working examples of such tubes becoming rarer and rarer. Unused ("New Old Stock") tubes can also exhibit inert gas infiltration, so there is no long-term guarantee of these tube types surviving into the future.

Transmitting tubes

Large transmitting tubes have carbonized tungsten filaments containing a small trace (1% to 2%) of thorium. An extremely thin (molecular) layer of thorium atoms forms on the outside of the wire's carbonized layer and, when heated, serves as an efficient source of electrons. The thorium slowly evaporates from the wire surface, while new thorium atoms diffuse to the surface to replace them. Such thoriated tungsten cathodes usually deliver lifetimes in the tens of thousands of hours. The end-of-life scenario for a thoriated-tungsten filament is when the carbonized layer has mostly been converted back into another form of tungsten carbide and emission begins to drop off rapidly; a complete loss of thorium has never been found to be a factor in the end of life of a tube with this type of emitter. WAAY-TV in Huntsville, Alabama achieved 163,000 hours (18.6 years) of service from an Eimac external-cavity klystron in the visual circuit of its transmitter; this is the highest documented service life for this type of tube. It has been said that transmitters with vacuum tubes are better able to survive lightning strikes than transistor transmitters are.
While it was commonly believed that vacuum tubes were more efficient than solid-state circuits at RF power levels above approximately 20 kilowatts, this is no longer the case, especially in medium wave (AM broadcast) service, where solid-state transmitters at nearly all power levels have measurably higher efficiency. FM broadcast transmitters with solid-state power amplifiers up to approximately 15 kW also show better overall power efficiency than tube-based power amplifiers.

Receiving tubes

Cathodes in small "receiving" tubes are coated with a mixture of barium oxide and strontium oxide, sometimes with the addition of calcium oxide or aluminium oxide. An electric heater is inserted into the cathode sleeve and insulated from it electrically by a coating of aluminum oxide. This complex construction causes barium and strontium atoms to diffuse to the surface of the cathode and emit electrons when heated to about 780 degrees Celsius.

Failure modes

Catastrophic failures

A catastrophic failure is one that suddenly makes the vacuum tube unusable. A crack in the glass envelope will allow air into the tube and destroy it. Cracks may result from stress in the glass, bent pins or impacts; tube sockets must allow for thermal expansion, to prevent stress in the glass at the pins. Stress may accumulate if a metal shield or other object presses on the tube envelope and causes differential heating of the glass. Glass may also be damaged by high-voltage arcing. Tube heaters may also fail without warning, especially if exposed to overvoltage or as a result of manufacturing defects. Tube heaters do not normally fail by evaporation like lamp filaments, since they operate at much lower temperature. The surge of inrush current when the heater is first energized causes stress in the heater and can be avoided by warming the heaters slowly, gradually increasing current with an NTC thermistor included in the circuit.
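The inrush problem comes from tungsten's strong positive temperature coefficient: a cold heater presents only a small fraction of its hot resistance, so full voltage at switch-on drives several times the rated current. A minimal sketch (Python); the cold-to-hot resistance ratio of one tenth is an assumed round figure, and real tubes vary:

```python
def inrush_current_a(rated_v: float, rated_i: float,
                     cold_to_hot_ratio: float = 0.1) -> float:
    """Peak switch-on current if full voltage hits a cold tungsten heater.
    cold_to_hot_ratio is an assumed illustrative figure."""
    hot_r = rated_v / rated_i            # resistance at operating temperature
    cold_r = hot_r * cold_to_hot_ratio   # much lower when cold
    return rated_v / cold_r

# A 6.3 V / 300 mA heater: hot resistance 21 ohms, assumed cold ~2.1 ohms
print(f"Inrush ~ {inrush_current_a(6.3, 0.3):.1f} A vs 0.3 A rated")  # 3.0 A
```

An NTC thermistor in series starts out with high resistance while cold and drops as it warms, which limits this initial surge in just the way the text describes.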
Tubes intended for series-string operation of the heaters across the supply have a specified controlled warm-up time to avoid excess voltage on some heaters as others warm up. Directly heated filament-type cathodes, as used in battery-operated tubes or some rectifiers, may fail if the filament sags, causing internal arcing. Excess heater-to-cathode voltage in indirectly heated cathodes can break down the insulation between elements and destroy the heater. Arcing between tube elements can destroy the tube. An arc can be caused by applying voltage to the anode (plate) before the cathode has come up to operating temperature, or by drawing excess current through a rectifier, which damages the emission coating. Arcs can also be initiated by any loose material inside the tube, or by excess screen voltage. An arc inside the tube allows gas to evolve from the tube materials, and may deposit conductive material on internal insulating spacers. Tube rectifiers have limited current capability and exceeding ratings will eventually destroy a tube.

Degenerative failures

Degenerative failures are those caused by the slow deterioration of performance over time. Overheating of internal parts, such as control grids or mica spacer insulators, can result in trapped gas escaping into the tube; this can reduce performance. A getter is used to absorb gases evolved during tube operation but has only a limited ability to combine with gas. Control of the envelope temperature prevents some types of gassing. A tube with an unusually high level of internal gas may exhibit a visible blue glow when plate voltage is applied. The getter (being a highly reactive metal) is effective against many atmospheric gases but has no (or very limited) chemical reactivity to inert gases such as helium. One progressive type of failure, especially with physically large envelopes such as those used by camera tubes and cathode-ray tubes, comes from helium infiltration.
The exact mechanism is not clear: the metal-to-glass lead-in seals are one possible infiltration site. Gas and ions within the tube contribute to grid current, which can disturb operation of a vacuum-tube circuit. Another effect of overheating is the slow deposit of metallic vapors on internal spacers, resulting in inter-element leakage. Tubes on standby for long periods, with heater voltage applied, may develop high cathode interface resistance and display poor emission characteristics. This effect occurred especially in pulse and digital circuits, where tubes had no plate current flowing for extended times. Tubes designed specifically for this mode of operation were made. Cathode depletion is the loss of emission after thousands of hours of normal use. Sometimes emission can be restored for a time by raising the heater voltage, either for a short time or as a permanent increase of a few percent. Cathode depletion was uncommon in signal tubes but was a frequent cause of failure of monochrome television cathode-ray tubes. The usable life of this expensive component was sometimes extended by fitting a boost transformer to increase the heater voltage.

Other failures

Vacuum tubes may develop defects in operation that make an individual tube unsuitable in a given device, although it may perform satisfactorily in another application. Microphonics refers to internal vibrations of tube elements which modulate the tube's signal in an undesirable way; sound or vibration pick-up may affect the signals, or even cause uncontrolled howling if a feedback path (with greater than unity gain) develops between a microphonic tube and, for example, a loudspeaker. Leakage current between AC heaters and the cathode may couple into the circuit, or electrons emitted directly from the ends of the heater may also inject hum into the signal. Leakage current due to internal contamination may also inject noise.
Some of these effects make tubes unsuitable for small-signal audio use, although unobjectionable for other purposes. Selecting the best of a batch of nominally identical tubes for critical applications can produce better results. Tube pins can develop non-conducting or high resistance surface films due to heat or dirt. Pins can be cleaned to restore conductance. Testing Vacuum tubes can be tested outside of their circuitry using a vacuum tube tester. Other vacuum tube devices Most small signal vacuum tube devices have been superseded by semiconductors, but some vacuum tube electronic devices are still in common use. The magnetron is the type of tube used in all microwave ovens. In spite of the advancing state of the art in power semiconductor technology, the vacuum tube still has reliability and cost advantages for high-frequency RF power generation. Some tubes, such as magnetrons, traveling-wave tubes, Carcinotrons, and klystrons, combine magnetic and electrostatic effects. These are efficient (usually narrow-band) RF generators and still find use in radar, microwave ovens and industrial heating. Traveling-wave tubes (TWTs) are very good amplifiers and are even used in some communications satellites. High-powered klystron amplifier tubes can provide hundreds of kilowatts in the UHF range. Cathode-ray tubes The cathode-ray tube (CRT) is a vacuum tube used particularly for display purposes. Although there are still many televisions and computer monitors using cathode-ray tubes, they are rapidly being replaced by flat panel displays whose quality has greatly improved even as their prices drop. This is also true of digital oscilloscopes (based on internal computers and analog-to-digital converters), although traditional analog scopes (dependent upon CRTs) continue to be produced, are economical, and preferred by many technicians. 
At one time many radios used "magic eye tubes", a specialized sort of CRT used in place of a meter movement to indicate signal strength or input level in a tape recorder. A modern indicator device, the vacuum fluorescent display (VFD), is also a sort of cathode-ray tube. The X-ray tube is a type of cathode-ray tube that generates X-rays when high voltage electrons hit the anode. Gyrotrons or vacuum masers, used to generate high-power millimeter band waves, are magnetic vacuum tubes in which a small relativistic effect, due to the high voltage, is used for bunching the electrons. Gyrotrons can generate very high powers (hundreds of kilowatts). Free-electron lasers, used to generate high-power coherent light and even X-rays, are highly relativistic vacuum tubes driven by high-energy particle accelerators; thus, these too are sorts of cathode-ray tubes. Electron multipliers A photomultiplier is a phototube whose sensitivity is greatly increased through the use of electron multiplication. This works on the principle of secondary emission, whereby a single electron emitted by the photocathode strikes a special sort of anode known as a dynode, causing more electrons to be released from that dynode. Those electrons are accelerated toward another dynode at a higher voltage, releasing more secondary electrons; as many as 15 such stages provide a huge amplification. Despite great advances in solid-state photodetectors (e.g. single-photon avalanche diodes), the single-photon detection capability of photomultiplier tubes makes this vacuum tube device excel in certain applications. Such a tube can also be used for detection of ionizing radiation as an alternative to the Geiger–Müller tube (itself not an actual vacuum tube). Historically, the image orthicon TV camera tube widely used in television studios prior to the development of modern CCD arrays also used multistage electron multiplication.
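As a rough worked example of the multistage amplification described above, the overall gain of a photomultiplier grows as the per-dynode secondary-emission ratio raised to the number of stages. This is an illustrative sketch only; the ratio of 4 used below is an assumed typical value, not a figure from this article.

```python
# Illustrative sketch: total photomultiplier gain from cascaded dynodes.
# The secondary-emission ratio delta = 4 is an assumed typical value.

def pmt_gain(delta: float, stages: int) -> float:
    """Electrons out per initial photoelectron, after `stages` dynodes."""
    return delta ** stages

for stages in (10, 12, 15):
    print(f"{stages} stages, delta=4: gain ~ {pmt_gain(4, stages):.1e}")
```

Even modest per-stage multiplication compounds quickly: with the 15 stages mentioned in the text, the gain reaches the order of a billion, which is why single-photon detection becomes practical.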
For decades, electron-tube designers tried to augment amplifying tubes with electron multipliers in order to increase gain, but these suffered from short life because the material used for the dynodes "poisoned" the tube's hot cathode. (For instance, the interesting RCA 1630 secondary-emission tube was marketed, but did not last.) However, eventually, Philips of the Netherlands developed the EFP60 tube that had a satisfactory lifetime and was used in at least one product, a laboratory pulse generator. By that time, however, transistors were rapidly improving, making such developments superfluous. One variant called a "channel electron multiplier" does not use individual dynodes but consists of a curved tube, such as a helix, coated on the inside with material with good secondary emission. One type had a funnel of sorts to capture the secondary electrons. The continuous dynode was resistive, and its ends were connected to enough voltage to create repeated cascades of electrons. The microchannel plate consists of an array of single stage electron multipliers over an image plane; several of these can then be stacked. This can be used, for instance, as an image intensifier in which the discrete channels substitute for focusing. Tektronix made a high-performance wideband oscilloscope CRT with a channel electron multiplier plate behind the phosphor layer. This plate was a bundled array of a huge number of short individual c.e.m. tubes that accepted a low-current beam and intensified it to provide a display of practical brightness. (The electron optics of the wideband electron gun could not provide enough current to directly excite the phosphor.) Vacuum tubes in the 21st century Industrial, commercial, and military niche applications Although vacuum tubes have been largely replaced by solid-state devices in most amplifying, switching, and rectifying applications, there are certain exceptions. 
In addition to the special functions noted above, tubes have some niche applications. In general, vacuum tubes are much less susceptible than corresponding solid-state components to transient overvoltages, such as mains voltage surges or lightning, the electromagnetic pulse effect of nuclear explosions, or geomagnetic storms produced by giant solar flares. This property kept them in use for certain military applications long after more practical and less expensive solid-state technology was available for the same applications, as for example with the MiG-25. Vacuum tubes are practical alternatives to solid-state devices in generating high power at radio frequencies in applications such as industrial radio frequency heating, particle accelerators, and broadcast transmitters. This is particularly true at microwave frequencies where such devices as the klystron and traveling-wave tube provide amplification at power levels unattainable using semiconductor devices. The household microwave oven uses a magnetron tube to efficiently generate hundreds of watts of microwave power. Solid-state devices such as gallium nitride are promising replacements, but are very expensive and in early stages of development. In military applications, a high-power vacuum tube can generate a 10–100 megawatt signal that can burn out an unprotected receiver's frontend. Such devices are considered non-nuclear electromagnetic weapons; they were introduced in the late 1990s by both the U.S. and Russia. In music Tube amplifiers remain commercially viable in three niches where their warm sound, performance when overdriven, and ability to replicate prior-era tube-based recording are prized: audiophile equipment, musical instrument amplifiers, and devices used in recording studios. Many guitarists prefer using valve amplifiers to solid-state models, often due to the way they tend to distort when overdriven. 
Any amplifier can only accurately amplify a signal to a certain volume; past this limit, the amplifier will begin to distort the signal. Different circuits will distort the signal in different ways; some guitarists prefer the distortion characteristics of vacuum tubes. Most popular vintage models use vacuum tubes. Displays Cathode-ray tube The cathode-ray tube was the dominant display technology for televisions and computer monitors at the start of the 21st century. However, rapid advances and falling prices of LCD flat panel technology soon took the place of CRTs in these devices. By 2010, most CRT production had ended. Vacuum tubes using field electron emitters In the early years of the 21st century there has been renewed interest in vacuum tubes, this time with the electron emitter formed on a flat silicon substrate, as in integrated circuit technology. This subject is now called vacuum nanoelectronics. The most common design uses a cold cathode in the form of a large-area field electron source (for example a field emitter array). With these devices, electrons are field-emitted from a large number of closely spaced individual emission sites. Such integrated microtubes may find application in microwave devices including mobile phones, for Bluetooth and Wi-Fi transmission, and in radar and satellite communication. They were also being studied for possible applications in field emission display technology, but there were significant production problems. As of 2014, NASA's Ames Research Center was reported to be working on vacuum-channel transistors produced using CMOS techniques. Characteristics Space charge of a vacuum tube When a cathode is heated and reaches its operating temperature, free electrons are driven from its surface. These free electrons form a cloud in the empty space between the cathode and the anode, known as the space charge. This space charge cloud supplies the electrons that create the current flow from the cathode to the anode.
As electrons are drawn to the anode during the operation of the circuit, new electrons will boil off the cathode to replenish the space charge. The space charge is an example of an electric field. Voltage–current characteristics of a vacuum tube All tubes with one or more control grids are controlled by an AC (alternating current) input voltage applied to the control grid, while the resulting amplified signal appears at the anode as a current. Due to the high voltage placed on the anode, a relatively small anode current can represent a considerable increase in energy over the value of the original signal voltage. The space charge electrons driven off the heated cathode are strongly attracted by the positive anode. The control grid(s) in a tube mediate this current flow by combining the small AC signal voltage with the grid's slightly negative bias. When the signal sine (AC) wave is applied to the grid, it rides on this negative value, driving it both positive and negative as the AC signal wave changes. This relationship is shown with a set of plate characteristic curves (see example above), which visually display how the output current from the anode can be affected by a small input voltage applied on the grid, for any given voltage on the plate (anode). Every tube has a unique set of such characteristic curves. The curves graphically relate the changes to the instantaneous plate current driven by a much smaller change in the grid-to-cathode voltage as the input signal varies. The V–I characteristic depends upon the size and material of the plate and cathode, and expresses the ratio between plate voltage and plate current.
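The curve-derived quantities discussed here can be sketched numerically. The following is a minimal illustration under assumed values, not data for any real tube: the operating-point numbers are hypothetical, chosen only to show how DC plate resistance, AC (dynamic) plate resistance, and transconductance are read off a set of plate characteristic curves.

```python
# Sketch of quantities read off a tube's plate characteristic curves.
# All operating-point numbers below are hypothetical.

def dc_plate_resistance(v_a: float, i_a: float) -> float:
    """DC plate resistance: total plate voltage over total plate current."""
    return v_a / i_a

def ac_plate_resistance(dv_a: float, di_a: float) -> float:
    """AC (dynamic) plate resistance: slope of the curve at the operating
    point, delta-Va over delta-Ia at constant grid voltage."""
    return dv_a / di_a

def transconductance(di_a: float, dv_g: float) -> float:
    """g_m: change in plate current per change in grid voltage,
    at constant plate voltage."""
    return di_a / dv_g

# Hypothetical triode operating point: 250 V on the plate, 10 mA plate current.
print(dc_plate_resistance(250, 0.010))   # about 25 kilohms
# A 20 V plate swing that moves the current by 2 mA:
print(ac_plate_resistance(20, 0.002))    # about 10 kilohms
# A 1 V grid swing that moves the plate current by 2 mA:
print(transconductance(0.002, 1.0))      # about 2 mA/V
```

The distinction mirrors the definitions that the article gives next: the DC figure is a simple ratio at one point, while the AC figure and the transconductance are slopes, which is why a small grid voltage can control a much larger change in plate current.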
V–I curve (voltage across filaments, plate current); plate current, plate voltage characteristics. DC plate resistance of the plate – resistance of the path between anode and cathode to direct current. AC plate resistance of the plate – resistance of the path between anode and cathode to alternating current. Size of electrostatic field Size of electrostatic field is the distance between two or more plates in the tube. Patents Instrument for converting alternating electric currents into continuous currents (Fleming valve patent) Device for amplifying feeble electrical currents – de Forest's three-electrode Audion See also Bogey value – close to manufacturer's stated parameter values Fetron – a solid-state, plug-compatible replacement for vacuum tubes List of vacuum tubes – a list of type numbers List of vacuum-tube computers Mullard–Philips tube designation Nixie tube – a gas-filled display device sometimes misidentified as a vacuum tube RETMA tube designation RMA tube designation Russian tube designations Tube caddy Tube tester Valve amplifier Zetatron References Bibliography Further reading Eastman, Austin V., Fundamentals of Vacuum Tubes, McGraw-Hill, 1949 Millman, J. & Seely, S. Electronics, 2nd ed. McGraw-Hill, 1951. Philips Technical Library. Books published in the UK in the 1940s and 1950s by Cleaver Hume Press on design and application of vacuum tubes. RCA. Radiotron Designer's Handbook, 1953 (4th Edition). Contains chapters on the design and application of receiving tubes. RCA. Receiving Tube Manual, RC15, RC26 (1947, 1968) Issued every two years, contains details of the technical specs of the tubes that RCA sold. Shiers, George, "The First Electron Tube", Scientific American, March 1969, p. 104. Stokes, John, 70 Years of Radio Tubes and Valves, Vestal Press, New York, 1982, pp. 3–9. Thrower, Keith, History of the British Radio Valve to 1940, MMA International, 1982, pp. 9–13. Tyne, Gerald, Saga of the Vacuum Tube, Ziff Publishing, 1943, (reprint 1994 Prompt Publications), pp. 30–83.
Basic Electronics: Volumes 1–5; Van Valkenburgh, Nooger & Neville Inc.; John F. Rider Publisher; 1955. Wireless World. Radio Designer's Handbook. UK reprint of the above. "Vacuum Tube Design"; 1940; RCA. External links The Vacuum Tube FAQ – FAQ from rec.audio. The invention of the thermionic valve – Fleming discovers the thermionic (or oscillation) valve, or 'diode'. "Tubes Vs. Transistors: Is There an Audible Difference?" – 1972 AES paper on audible differences in sound quality between vacuum tubes and transistors. The cathode-ray tube site. O'Neill's Electronic museum – vacuum tube museum. Vacuum tubes for beginners – Japanese version. NJ7P Tube Database – data manual for tubes used in North America. Vacuum tube data sheet locator. Characteristics and datasheets. Tuning eye tubes.
Vacuum tube
https://en.wikipedia.org/wiki/Inhibitor%20of%20DNA-binding%20protein
Inhibitor of DNA-binding/differentiation proteins, also known as ID proteins, comprise a family of proteins that heterodimerize with basic helix-loop-helix (bHLH) transcription factors to inhibit DNA binding of bHLH proteins. ID proteins also contain the HLH-dimerization domain but lack the basic DNA-binding domain and thus regulate bHLH transcription factors when they heterodimerize with bHLH proteins. The first helix-loop-helix proteins identified were named E-proteins because they bind to Ephrussi-box (E-box) sequences. In normal development, E proteins form dimers with other bHLH transcription factors, allowing transcription to occur. However, in cancerous phenotypes, ID proteins can regulate transcription by binding E proteins, so no dimers can be formed and transcription is inactive. E proteins are members of the class I bHLH family and form dimers with bHLH proteins from class II to regulate transcription. Four ID proteins exist in humans: ID1, ID2, ID3, and ID4. The ID homologue gene in Drosophila is called extramacrochaetae (EMC) and encodes a transcription factor of the helix-loop-helix family that lacks a DNA binding domain. EMC regulates cell proliferation, formation of organs like the midgut, and wing development. ID proteins could be potential targets for systemic cancer therapies without inhibiting the functioning of most normal cells because they are highly expressed in embryonic stem cells, but not in differentiated adult cells. Evidence suggests that ID proteins are overexpressed in many types of cancer. For example, ID1 is overexpressed in pancreatic, breast, and prostate cancers. ID2 is upregulated in neuroblastoma, Ewing's sarcoma, and squamous cell carcinoma of the head and neck. Function ID proteins are key regulators of development, where they function to prevent premature differentiation of stem cells.
By inhibiting the formation of E-protein dimers that promote differentiation, ID proteins can regulate the timing of differentiation of stem cells during development. An increase in ID expression is seen in embryonic and adult stem cells. ID proteins also promote cell cycle progression, delay senescence, and help facilitate cell migration. In contrast, inappropriate regulation of ID proteins in differentiated cells can contribute to tumorigenesis. Generally, IDs function as oncogenes. When ID proteins are overexpressed, cell proliferation is enhanced and cells become insensitive to growth factor depletion. Expression of ID proteins in neurons halts axon growth and allows neuronal elongation. Knockout mouse data show that ID genes are essential for heart development. There is some controversy surrounding the ID proteins and their role in cancer, but overexpression is seen in most tumor types. There are a few exceptions: for example, an increase in ID1 expression in brain cancer is correlated with a better prognosis, while a decrease in ID4 expression in colon and rectal cancers is linked to a poorer prognosis. ID proteins can bind E-proteins, preventing them from binding bHLH proteins and halting transcription, a case often seen in cancerous phenotypes. Subtypes Humans express four types of Id proteins (called ID1, ID2, ID3, and ID4). A recent publication in Cancer Research (August 2010) has shown that ID1 can be used to mark endothelial progenitor cells, which are critical to tumour growth and angiogenesis. This publication has demonstrated that targeting ID1 resulted in decreased tumour growth. Therefore, ID1 could be used to design a novel cancer therapy. Perk, Iavarone, and Benezra (2005) reviewed fifteen studies and compiled a list of the phenotypic effects of each ID gene when knocked out in mice. When ID1 was knocked out, a defect in T-cell migration was seen.
A knockout of ID2 showed that 25% of mice died perinatally, and those born lacked lymph nodes and showed defects in mammary proliferation. Generally, normal development was seen in mice with an ID3 knockout, but they did have a defect in B-cell proliferation. Neural defects and premature differentiation were seen in mice lacking ID4. Knockout of both ID1 and ID3 resulted in embryonic lethality due to brain hemorrhages and abnormalities in cardiac development. References External links Enzyme inhibitors Protein families
Inhibitor of DNA-binding protein
https://en.wikipedia.org/wiki/Tatra%20marmot
The Tatra marmot (Marmota marmota latirostris) is an endemic subspecies of marmot found in the Tatra Mountains. In the past, it was a game animal, but in the 19th century, its population drastically declined. It is a herbivore active in the summer, living in territorial family clans in the mountains from the upper montane to the alpine zone. It is one of the rarest vertebrates in Poland and is subject to strict legal protection. It is also legally protected in Slovakia. The and the classify the Tatra marmot as a strongly endangered subspecies (EN), while the Red List for the Carpathians in Poland designates it as "CR" – critically endangered. It is a relatively poorly researched animal. History of discovery and research The wider recognition of the marmot in Poland was influenced by the slow progress of settlement in the Tatra region, dating back to the privilege granted by Bolesław V the Chaste in 1255 to the Cistercian Abbey of Szczyrzyc: We also grant to the abbot: free hunting, all in the surrounding forests up to the mountains called Tatras. Historical accounts mention that over time, a group of hunters specializing in marmot hunting emerged, known as "whistlers". Descriptions of Tatra marmots were very modest and mostly related to their hunting value. Hungarian pastor wrote in 1774: The Carpathian marmot resides in the highest mountain peaks' dens in summer and winter. It feeds on roots and herbs and has fat, tasty meat. The meat, skins, and especially the fat, which had wide applications in folk medicine, were valued. Various authors wrote about marmots; in 1719, teacher , in 1721, Polish naturalist and Jesuit Gabriel Rzączyński, in 1750, Gdańsk naturalist Jacob Theodor Klein, and in 1779, Polish naturalist and clergyman Jan Krzysztof Kluk mentioned "whistlers" in the Carpathians. Marmots were hunted almost without restrictions until around 1868, and on the Hungarian side of the Tatras until 1883 when regulations protecting this species were introduced. 
In 1865, the first description of marmot biology was published. The author of the work About the Marmot was Maksymilian Nowicki – co-founder of the , researcher of Tatra fauna and flora, and a pioneer of nature conservation in Poland. Nowicki, using the name Arctomys (Greek árktos – bear, and mýs – mouse, rat) as the generic name for marmots, used the term "little bear mouse," while Ludwik Zejszner, writing at the same time, used the term "bear mouse": this peculiar little animal looks like it consists of two others; with a head similar to a mouse, and the rest of the body like a bear, covered with long hair similar to martens. Highlanders call it a marmot because of its special barking, as the individual barking sounds are so drawn out that they bear a great resemblance to whistling... The marmot makes very long burrows in the Tatra hollows, lining them with moss, grass, and, like many animals, undergoes hibernation. At the end of summer, it stores numerous root supplies in its burrows and, having fattened up excessively, falls asleep in this winter abode, only waking up completely emaciated with spring. Systematics and evolution Belonging to the family Sciuridae, the species Marmota marmota began to inhabit European territories already in the Pleistocene. It occurred over a vast area – from present-day Belgium and the shores of the English Channel to the Pannonian Basin. In the Holocene, with the warming climate, marmots had to choose more favorable locations. The moderate warmth of forested areas was not suitable for them, as their bodies are not well adapted to higher temperatures. They found cooler habitats in the elevated mountain ranges of Europe. Over time, they had to narrow their territories to the Alpine range and the Tatra Mountains.
The separation of the Alpine population from the Carpathian one could have occurred from 15 to 50 thousand years ago, but for many years, no differences were noticed between the marmot living in the Tatras and its Alpine cousin, and both populations were treated as geographically separated locations of the same species. In the late 1950s and early 1960s, Czechoslovak scientists undertook comparative studies. For this purpose, on 31 May 1961, a marmot was shot from a colony in . The holotype studies were conducted by Josef Kratochvíl, a zoologist from the agricultural university in Brno, who found significant differences in the structure of the nasal bones compared to representatives of Alpine populations. After conducting additional comparisons of the results of cranial measurements of the skulls of 10 Tatra marmots (selected from 16 obtained for study) and 40 marmots from the Alps (27 skulls were personally examined, and the measurement data of 13 were the result of Gerrit Miller's work from 1912), Kratochvíl concluded that there was a regularity in the comparative group, namely, that the anterior, facial part of the nasal bone was significantly wider and longer in Tatra marmots than in animals from the Alps. Additionally, he found that marmots living in the Tatras are smaller than their cousins and have slightly different fur coloration. Ultimately, Kratochvíl classified marmots from the Tatras as a separate subspecies, M. marmota latirostris, while marmots from the Alpine population were designated as M. marmota marmota. Some zoologists question whether the observed differences could be merely manifestations of individual variability within the small population. However, genetic studies have not yet been conducted to settle this matter. Etymology The generic name Marmota may derive from Gallo-Romance languages, meaning "murmuring" or "purring", or from Latin, being associated with the term mus montanus, which translates to "mountain mouse".
The subspecies epithet latirostris originates from two Latin words: the element lati– comes from Latin lātus, meaning "wide", and –rostris from Latin rōstrum, referring to the nose or beak, and together it can be translated as "broad-nosed". The epithet refers to the flattened and wider facial part of the nasal bone (os nasale) of the animal compared to representatives of the Alpine population. Morphology The Tatra marmot is one of the largest rodents in Europe. It is similar in size to a domestic cat, with a massive torso. Its length, including the head, ranges from 45 cm to 65 cm (although another source by the same author provides a range of 40–60 cm). In spring, the body mass of adult males ranges from 2.7 to 3.4 kg, while females weigh between 2.5 and 3.0 kg. During the season from spring to autumn, marmots start to consume more calories, taking in more carbohydrates from grass seeds, and their brown adipose tissue significantly expands, creating an energy reserve for the next hibernation period. Consequently, the body mass of marmots begins to increase noticeably during this period and can reach over 6 kg by autumn, with over 2 kg attributed to fat tissue. The fluffy tail measures between 13 and 17 cm. The primary fur consists of long, strong, and thick guard hairs, with the down hair being dense, composed of shorter, woolly, and slightly twisted hairs. The fur color is described as reddish-brown transitioning to dark brown-black. The hair coloration is highly varied, with black or dark brown dominating at the base, while shades of fawn, black, or reddish prevail higher up, fading to fawn or beige at the ends. The fur on the abdomen is lighter, ranging from light beige to yellowish. The head is covered with shorter hair, usually dark, black, or gray, with a light patch between the eyes. The muzzle is lighter, with a grayish tone, while the tail is blackish-brown, with a black tip. The fur of young individuals under 1 year old is notably darker and fluffier. 
Moulting typically occurs once a year, around June. Females, weakened by nursing new offspring, may have incomplete fur, and moulting may be delayed by about four weeks. As they age, the fur becomes more twisted and bushy. Older marmots, especially after hibernation, may have areas of thinning fur on their backs and tails. Five rows of whiskers, measuring up to 8 cm in length, grow on the sides of the marmot's muzzle, with sensory hairs also distributed on the eyebrows. Marmots do not have sweat glands. The forehead is flat and wide, with short (2–2.5 cm), densely furred ears almost entirely concealed in the head's fur. The eyes are small and black. The front paws are short, robust, and dexterous, equipped with four hairy toes ending in claws measuring from 2 to 2.5 cm, which serve as useful tools for digging burrows and holding food. The muscular hind limbs end in five toes with sharp claws. The external surface of the incisors is covered with hard enamel with longitudinal grooves, whose color changes with the animal's age, starting white in juveniles and darkening to orange and nearly brown with age. The posterior, concave surface of the incisors is made up of brittle dentin. The incisors' occlusal arrangement is scissor-like. The dentition for marmots is: . In females, there are five pairs of mammary glands. Morphological differences in comparison to the alpine marmot The main feature that allowed the Tatra subspecies of marmot to be distinguished was the dimensions of the facial part of the nasal bone: Craniological comparisons (Kratochvíl, 1961) Furthermore, Tatra marmots are characterized by lighter fur and have a grayish-brown coloration, while the alpine subspecies has a darker brown coloration, often with a reddish hue. M. marmota latirostris has a smaller body mass. However, sources do not provide details of this difference.
Despite differences in the habitats of both subspecies (alpine marmots occupy locations at altitudes up to 3200 m above sea level), no differences have been observed in behavior and way of life. Lifestyle The Tatra marmot is a diurnal animal with territorial and social behavior. It is monogamous, and individual families join colonies, with the nucleus typically being a dominant pair of animals. Annual life cycle The annual life cycle of the marmot consists of two periods: summer activity and hibernation. Summer activity The summer activity of the marmot begins around April and May and lasts until the second half of September or the first half of October. The earliest emergence of marmots on the surface has been observed on April 22, while the latest on May 10. This period lasts 139–158 days (or 139–161), with an average of 148 (or 150.7). The first spring activity after awakening from winter hibernation is associated with the mating season. The estrous cycle typically occurs in the second decade of May. Mating takes place both inside and outside the burrow. Gestation lasts about 33 days, and the young are born in the second decade of June. Usually, from 1 to 6 offspring are born, most commonly from 2 to 3. They remain under the care of the female in the burrow until the second half of July. They start consuming solid food at the age of 8 weeks. They will only become independent after 3–4 years when they reach sexual maturity. The beginning of summer activity is also marked by the search for new locations. Marmots migrate up to a distance of 3000 meters. During the summer period, marmots spend their time foraging (43.9%) and patrolling the territory (40.3%). Other activities, such as moving, gathering winter supplies, digging and tidying burrows, playing, and hygiene, typically account for only 8.9% of their time. Spending time inside the burrow during the day constitutes an average of 6.9%. There are three different types of contacts between marmots.
The locomotor contact is the most common, followed by acoustic and visual contact. Hibernation At the end of September and beginning of October, the period of summer activity ends for the Tatra marmot, and the hibernation period begins. Changes in the environment signal the marmot's organism. The air temperature drops, the days become shorter, and the vegetation, which serves as food, becomes scarce. Marmots curl up into a ball and collectively arrange themselves at the bottom of the winter burrow chamber. Metabolic processes slow down, allowing the body temperature to decrease from 37.7 °C to 8–10 °C, which is about 2–3 °C lower than the temperature inside the burrow. In exceptional situations, the body temperature may drop to 3–5 °C. Respiratory rate decreases from 16 to as low as 2–3 (or 4–5) breaths per minute, as oxygen consumption during hibernation decreases by about thirty times, and the heart rate decreases from 220 to 30 contractions per minute. Other sources specify the frequency of heart ventricular contractions in the summer period as 130, decreasing during hibernation to 15 per minute. Young marmots usually hibernate in the middle of the family, enveloped by older individuals, which helps them survive the burden of winter sleep more easily. Adult marmots accumulate more fat for this period. They also maintain a slightly higher body temperature. During hibernation, the marmot wakes up approximately every 3 weeks. The awakening lasts for about 12–30 hours, after which the animal returns to sleep. The trigger for awakening may be a decrease in the temperature inside the burrow to 0 °C. Too many awakenings during hibernation are very energetically costly for the marmot's organism. During these short awakenings, the marmot's body temperature rises to about 34 °C, sharply increasing energy expenditure, and up to 90% of the reserves accumulated in the form of brown adipose tissue are consumed. 
In the case of a prolonged winter, the animal's reserves may become excessively depleted, leading to its death. During hibernation, the marmot's body mass decreases significantly. Hibernation lasts for 201–227 days, 215 on average. Social structure A family colony can consist of several individuals – the dominant pair, their offspring from different years, and adopted individuals. According to a former director of Tatra National Park, several marmot families can make up a colony. Marmot territorialism manifests in the dominant male's daily need to patrol the boundaries demonstratively, systematically marking the territory with its scent. This is most likely done through secretions from glands located in the anal region, cheek glands, and glands in the paw pads. The marked territory is defended by the male against individuals from neighboring colonies. The first warnings are visual signals: the male raises its fur and waves its tail vertically. If the warning is not enough, it leads to a fight, in which wounds are inflicted with the incisors. If the fight is evenly matched and neither male surrenders or flees, the brawl can last the whole day. Aggressive defense is mainly directed towards foreign adult males. Young individuals can visit even the center of a foreign colony without fear. They are seen as adoption candidates for the colony and pose no threat to the dominant male's position. The social nature of marmot colonies is evident in their collective actions. They build burrows together, sleep together, and help each other groom. Young ones play together in groups, wrestling, standing in a "pillar" position, or chasing each other around the colony area. During a meeting, marmots greet each other by touching noses and sniffing. Sometimes, they also show excitement by moving their tails up and down. Reproduction is also subject to specific rules and is reserved for the dominant pair alone.
Mature individuals in the colony must defer to the dominant pair in this regard. Periodic battles for leadership may result in the overthrow of the dominant male. He is then expelled from the colony, and the victor may sometimes kill his offspring to solidify his own position. Sounds Marmots also communicate with each other using whistles – or rather screams, as the sound emanates from their wide-open mouths. Their primary function is to warn colony members of danger. Several different levels of alarm whistle can be distinguished. The presence of a human, dog, or fox is signaled by a series of moderately intense whistles – the marmots then swiftly move to the nearest burrow and observe from its entrance how events unfold. The series of whistles usually also discourages a fox, which learns that once the alarm signals have sounded, no marmot will become its prey. The greatest threat, such as the appearance of an eagle, is signaled by a single sharp whistle. The marmots then flee without hesitation and hide in their burrows. Marmots also emit other sounds, which can be described as squeaks or murmurs. These do not convey warning information. Geographical distribution The range of the species Marmota marmota spans between 44° and 49° N. Natural populations have survived in two mountain ranges: the Alps and the Tatra Mountains. The Tatra Mountains represent the northern limit of the species' range (49°14′ N). The Alps are inhabited by the subspecies Marmota marmota marmota, while Marmota marmota latirostris is an endemic subspecies living in Poland and Slovakia. The vertical distribution range of Tatra marmots' habitats extends from 1380 to 2050 meters above sea level, or according to other sources: in the Polish Tatra Mountains from 1750 to 1950 meters above sea level (with an average of 1870 meters above sea level), and in the entire Tatras massif from 1380 to 2330 meters above sea level.
In the main Tatra range, 207 main burrows were inventoried, and in the Low Tatras, 46 burrows were counted, or according to other sources, 40 burrows. The population of Tatra marmots on the Polish side of the Tatras was estimated at 150–200 individuals, while the total population in the entire Tatras is less than 1000 individuals. Precisely determining the locations of M. marmota latirostris in the Tatra Mountains encounters certain difficulties. While it is easy to establish that the autochthonous subspecies of the Tatra marmot occurs in the Western and Eastern Tatras, the origin of the marmots occurring in the Slovakian Low Tatras (with the highest peak being Ďumbier), separated from the main massif of the Tatras, is not clear. Although Ludwik Zejszner claimed that marmots were already living in the area of Solisko and Ďumbier in 1845, some researchers believe that the population in the Low Tatras was introduced from other parts of the Tatras. A 19th-century introduction in the area of Kráľova hoľa in the eastern part of the Low Tatras is reported by many researchers. Some even specify a time frame for it, around 1859–1867. Others believe that the marmots from the western part of the Low Tatras constitute a post-glacial population because, in their opinion, marmots from earlier introductions would not have been able to penetrate the forested areas of the central part of the Low Tatras. In 1961, Milíč Blahout, an expert on Tatra fauna and a zoologist at the national park's research station, wrote directly about an introduction of marmots from the Austrian Alps released in the Low Tatras between 1859 and 1867. Barbara Chovancová, who leads the marmot and chamois protection program in the Slovak Tatra National Park (TANAP), has no doubt that introductions were carried out twice in the Ďumbier area of the Low Tatras.
Alpine marmots were released there in 1859, and in 1867, two pairs of marmots from the High Tatras were also released. If this was indeed the case, hybridization between representatives of the two marmot subspecies could have occurred. Slovak authors mention a possible gene exchange between populations from the western and eastern parts of the Low Tatras. However, most believe that only Marmota marmota latirostris inhabits the Low Tatras. Kratochvíl proposed a hypothesis that the historical accounts of introductions may have been confused. He pointed out that historical information about marmots in the Carpathians could refer to populations still living in the 19th century in the Carpathians in Romania and Ukrainian Zakarpattia. However, there is a lack of reliable cranial or genetic studies that would resolve the issue of the origin of the populations in the Slovak Low Tatras. Data about historical locations of marmots in the Tatra Mountains were derived from the extensive local nomenclature referring to marmot habitats: Svišťový štít, Svišťová dolina, Svišťovský potok, Świstówka Roztocka, Świstówka Waksmundzka, Malá Svišťovka, Veľká Svišťovka, Svišťové sedlo, Świstowa Grań with Svišťové veže included, Świstowa Rówień, Svišťové plieska, Svišťový roh, Svišťová kôpka, Svišťový priechod, Svišťový chrbát, Sedlo pod Svišťovkou, Svišťovka, Nižná svišťová jaskyňa, Vyšná svišťová jaskyňa. These names begin to appear in the literature only in the 17th century, but already in 1721 Gabriel Rzączyński mentioned that the marmot "is found in the Alps and Carpathian mountains in a valley named Świszcza" (probably referring to Svišťová dolina). In the last decade, a small number of marmots from the population in the Slovak Tatras were introduced into the Ukrainian part of the Eastern Carpathians. Fossil traces of occurrence Fossil traces of marmot occurrence have been found in Moravia, and in Poland in the vicinity of Jasło. Ecology The Tatra marmot is a herbivore.
The main components of its diet include herbaceous vegetation, shrubs and prostrate shrubs, roots, and tubers. The composition of its diet changes with the vegetation periods in the Tatra Mountains. Favored spring foods include grasses, and in summer the marmot most willingly eats spotted gentian, Luzula alpinopilosa, alpine coltsfoot, alpine avens, Mutellina purpurea, and Poa granitica. More than 40 species of consumed plants are mentioned; among them are alpine bartsia, wood cranesbill, Oreochloa disticha, European blueberry, Veratrum lobelianum, Campanula alpina, Ranunculus pseudomontanus, large white buttercup, alpine hawkweed, brown clover, colorful fescue, Valeriana sambucifolia, Valeriana tripteris, Thymus alpestris, Thymus pulcherrimus, Adenostyles alliariae, dandelions, Solidago alpestris, Doronicum austriacum, Doronicum clusii, golden cinquefoil, bistort, alpine bistort, golden root, alpine pasqueflower, alpine sainfoin, Juncus trifidus, Rumex alpestris, wavy hair-grass, eyebright, Soldanella carpatica, alpine meadow-grass, narcissus anemone and round-headed rampion. An adult marmot consumes about 1.5 kg of food daily. Water requirements are met by consuming juicy plants. The Tatra marmot is vulnerable to attacks from predators such as the golden eagle, gray wolf, Eurasian lynx, and red fox. The mere presence of humans also causes pressure and changes in behavior. In relation to humans, the marmot maintains a safe escape distance of several dozen to several hundred meters. In relation to predators, this distance is significantly extended and can reach several hundred meters. Habitat The entire population of the Tatra marmot is located within the territory of the Slovak Tatra National Park (TANAP) and the Polish Tatra National Park, so its habitat retains a largely undisturbed character. The typical habitat of M.
marmota latirostris consists of Tatran areas in the altitudinal zonation: the alpine zone, the subalpine zone (grassy fragments), the mountain zone (open spaces), and to a limited extent the foothill zone, with an average annual temperature ranging from −3 to +3 °C. The marmot prefers sunny places. The slope angle of the mountain habitats does not exceed 40°. The marmot favors the vicinity of rocks, which provide a good observation point and can also serve as shelter. The soil must be thick enough to allow the digging of a burrow. In the Tatras, the upper limit of the vertical distribution of habitats is determined by orographic conditions. Above 2300 m above sea level, the low thickness of the soil effectively prevents the digging of burrows. The lowest recorded site was inventoried at an altitude of 1350 m above sea level. The central part of a colony typically occupies an area of about 2.5 hectares. The surface area of the territory depends on the vegetation coverage and the size of the colony. However, most authors report much smaller surface areas occupied by a colony: Peter Bopp 2000–2500 m², Josef Kratochvíl 7900 m², Milíč Blahout 2500–3600 m², Dymitr I. Bibikov 500–4500 m². The maximum occupied area can reach 2–7 hectares. Burrow The burrow is the most important element of the marmot's habitat. It spends the entire period of hibernation in it, and during the summer the burrow provides shelter. Therefore, suitable soil conditions, which allow easy excavation of tunnels, largely determine the location of a given colony. Both the thickness of the soil and the arrangement of aquifers are important, as they influence the avoidance of flooding or inundation. The nesting chamber is lined with dried grasses from the local vegetation. The marmot utilizes Juncus trifidus, Oreochloa disticha, fescue, reed grass, moss, and lichens. The gathered lining is not used by the marmots as food.
It has also been observed that marmots collect tissues or fabrics discarded by tourists for this purpose. Marmots keep their burrows clean, so each burrow is equipped with a latrine, created in alcoves along the side tunnels. Only there does the marmot relieve itself. Similar local marmot latrines can also be found outside formal burrows. They are created temporarily on the family's territory, in short tunnels, or in depressions under rocks. Hibernation burrow The hibernation burrow plays a crucial role in the life of these animals. A marmot family spends an average of 215 days per year in it, hence the term "main burrow"; it also serves as the center of the family's life. The winter burrow consists of an entire system of branching tunnels, which can have a total length of over 10 meters. The tunnels have an oval cross-section with a diameter of 15–18 cm and are relatively shallow, but below the frost line – typically 1.2 meters below ground level. The central chamber of the burrow can lie as deep as 7 meters, accessed by several tunnels allowing the use of distant entrances. The chamber is abundantly lined with hay collected by the animals. To prepare one nesting chamber for winter, these rodents can use up to 15 kg of grass. Near the entrances, there is usually an earth mound formed from material removed during tunneling. The tunnels are equipped with small widenings, which allow family members to pass each other when going in opposite directions. Before winter, the entrances to the burrows are carefully sealed by the marmots from the inside with packed earth, stones, and feces. Each burrow houses a hibernating family, usually consisting of 4 to 10 marmots. In the past, accounts from hunters indicated that they found between 2 and 15 individuals dug out of a winter burrow. Summer burrow The summer burrow, sometimes referred to as transitional, is a system of tunnels built for temporary use during the summer season.
It is excavated more shallowly (it does not need to protect against the cold of frozen ground) and has a shorter tunnel system. Sometimes, after summer, it may be deepened and used for hibernation. However, if a family uses the hibernation burrow in the summer, it does not build a separate summer burrow. Escape burrow The most commonly encountered underground structure of the marmot is the escape burrow, also known as an emergency or rescue burrow. These are makeshift shelters from predators. Their construction is very simple – sometimes their length does not exceed 1 meter, and they are equipped with only one or two entrances and do not have many branches. The marmot tries to cover its territory with a network of such makeshift shelters. Within the area of one colony, there can be from 16 to 20 such hiding places, and when disturbed, the animal can remain underground for several hours. The marmot tries not to stray more than 10–15 meters from the nearest shelter. Over time, even escape burrows can be expanded and elevated to a higher status. Tatra marmot in captivity Tatra marmots are not currently bred in Polish zoos. Between 1924 and 1935, the zoo in Poznań possessed three individuals from the Polish population of M. marmota, while from 1927 onwards, the Kraków Bażantarnia (which was succeeded by the local zoo) bred four Tatra marmots. After World War II, individual specimens of this subspecies were exhibited in Czechoslovak zoos. Threats and conservation Legal conservation International law: Bern Convention – Appendix III; Habitats Directive – Annexes II and IV. National law: species protection in Poland – strict protection. Threat categories – EN, EN; Carpathian List – EN (in Poland – CR). IUCN classification: Marmota marmota marmota is abundant and of least concern (LC). The subspecies Marmota marmota latirostris is rare and requires strict protection. It is highly endangered in Poland due to its small population.
Threats Until the 19th century, regular hunting of marmots was conducted in the Tatra Mountains, where their skins, meat, and fat were highly sought-after commodities. There was a particularly high demand for marmot fat due to its use in traditional medicine and the widespread belief in its miraculous healing properties. Marmot fat was used both externally and internally for various purposes, including the treatment of hernias. It was also given with milk to women in labor to facilitate childbirth and as a tonic. Additionally, it was believed to heal wounds, treat swollen glands (lymph nodes), and alleviate coughs. Towards the end of the 19th century, when Zakopane gained the status of a spa town, many tuberculosis patients sought "miraculous remedies" in the form of marmot fat. Marmot skins were used to cover horse collars or sold as fur to urban residents. However, the highlanders of the Podhale region did not use them to make their own clothing. Freshly removed marmot skins were applied to painful areas as a treatment for rheumatism. Marmot meat was also considered the best of all game meats. In the second half of the 19th century, only refuges inaccessible to hunters remained. In 1881, only 30 individuals were inventoried on the Polish side of the Tatras, and by 1888, there were 35 individuals left. On 5 October 1868, under pressure from members of the local natural history commission, including Maksymilian Nowicki and Ludwik Zejszner, the Diet of Galicia and Lodomeria in Lviv adopted a law prohibiting the capture, extermination, and sale of alpine animals proper to the Tatras – marmots and wild goats. This was the world's first case of a parliament enacting a law protecting animal species. The situation of the Tatra marmot worsened again during World War I, which brought an increase in hunting. Around that time, sheep grazing in marmot habitats also began.
A significant improvement in the situation came with the establishment of the Tatra National Park in 1954, which encompasses all marmot locations in the Polish Tatras. The main threats to M. marmota latirostris include pressure from predators and hunting. Changes in behavior are induced by excessive tourist and sports activity. Human presence also limits contacts between colonies (e.g., in the Kasprowy Wierch area), which are necessary for genetic exchange, and increases the risk of additional diseases, including parasites. Population Since the significant decline of marmots in the 19th century, their population has slowly been increasing. In 1881, there were 30 marmots in the Polish Tatras; in 1888 – 35; in 1928 and 1952 – 50; in 1982 – between 108 and 132; and around 190 individuals in 2003. Altogether, on both sides of the Tatras at the beginning of the 21st century, there were approximately 700–800 individuals of this subspecies. See also List of mammals of Poland Fauna of Poland References
https://en.wikipedia.org/wiki/Circlip
A circlip (a portmanteau of "circle" and "clip"), also known as a C-clip or snap ring, is a type of fastener or retaining ring that consists of a semi-flexible metal ring with open ends that can be snapped into place in a machined groove on a dowel pin or other part to permit rotation but prevent axial movement. There are two basic types of circlip: internal (fitted into a bore) and external (fitted over a shaft). Circlips are used to secure pinned connections. Details The term "Jesus clip" is a comical nickname that refers to the clip's tendency to come loose and launch itself at high speed during removal or installation, often leading to the remark "Oh Jesus, where did it go?" E-clip Common examples include e-clips (e-rings) and snap rings (both internal and external) or circlips. These general types of fasteners are sized to provide an interference fit onto (or into, in the case of an internal fastener) a groove or land when in use, such that they must be elastically deformed in order to be installed or removed. Installation and maintenance The name snap ring generally refers to circlips that have ends formed to aid installation and removal and that are not formed from wire (i.e., do not have a round cross-section). These rings are designed to be installed and removed with special pliers. Some of these pliers can be configured for internal or external clips, while in other cases one pair of pliers is used for internal clips and another for external clips. For expediency in the field, a pair of needle-nose pliers (for internal clips) or a flat-headed screwdriver used as a lever (internal or external) is sometimes substituted. Since most snap rings are stamped from sheet steel, one side is slightly rounded and the other has sharp, rough edges. This is because the stamping die behaves like a cookie cutter, causing a slight rounding of the upper edge of the cut clip.
The snap ring must always be installed such that force is transmitted to the retaining groove from the rounded side of the ring, not the rough or square-edged side. If a snap ring is positioned such that its flat side is pressed into the rounded edge of the groove, then when load or force is applied, the flat edge of the snap ring will "bite" into the rounded edge of the retaining groove. The snap ring will distort and ride up the rounded edge, spreading an external snap ring and compressing an internal snap ring. This leaves the clip prone to being forced out of its groove and failing at its retaining function. The accompanying images illustrate the correct orientation of the snap ring in its groove. Wet or dry lubrication is recommended to reduce friction against the circlip and maintain function. References External links Definition with picture Historical background
https://en.wikipedia.org/wiki/Schwyll%20Aquifer
Schwyll Aquifer (pronounced "Shwill") was historically known as "the Great Spring of Glamorgan". Welsh Water has used the resurgence at the Schwyll Spring near Ewenny in the Vale of Glamorgan as the main source of water for the Bridgend area. Now functioning as a backup supply, it has a number of associated source protection zones policed by Natural Resources Wales. The aquifer is an underground layer of water-bearing permeable rock; in this case, it consists of an underground waterway in the Carboniferous limestone. Because of the delay between local heavy rain and discoloration of the water at the spring, the main source is believed to be about 20 miles away, which would locate it in the limestone of the southern edge of the Brecon Beacons. The lack of access at the rising hinders exploration of the cave system. The only known point of access is the pumping shaft of the extraction plant at Schwyll. The system was explored by cave divers in 1998 to 440 metres from the shaft. At 400 metres, a large cavern was discovered containing bones, which were identified as horse. The bones were submitted for dating. The outflow of the spring is far larger than that of any other spring in Wales and greater than that of the Wookey Hole or Cheddar Gorge risings in England. External links Well diving in Wales – Feb 1998 Environment Agency 2008 Porthcawl Schwyll Aquifer report
https://en.wikipedia.org/wiki/Political%20views%20of%20American%20academics
The political views of American academics began to receive attention in the 1930s, and investigation into faculty political views expanded rapidly after the rise of McCarthyism. Demographic surveys of faculty that began in the 1950s and continue to the present have found higher percentages of liberals than of conservatives, particularly among those who work in the humanities and social sciences. Researchers and pundits disagree about survey methodology and about the interpretation of the findings. History Pre- and post-WWII Carol Smith and Stephen Leberstein have documented investigations of professors' political views at the City College of New York (CCNY) during the 1930s and 1940s. Citing the tactics of private hearings, requiring respondents to name others, and denying rights of legal representation, Smith calls the investigations a "dress rehearsal for McCarthyism". Smith described the case of Max Yergan, who was the first African American professor hired at CCNY. After complaints that he expressed liberal and progressive views in his classes on Negro History and Culture, Yergan was terminated in 1936. In 1938, the U.S. House of Representatives created the House Un-American Activities Committee; one of the committee's first actions was an attempt to investigate the political views of faculty in the New York public colleges. In 1940, Bertrand Russell was denied employment as a philosophy professor at CCNY because of his political beliefs. That same year, the New York State Legislature created the Rapp-Coudert Committee, which held hearings in 1940–41 during which faculty accused of holding communist political beliefs were interrogated. More than 50 faculty and staff at CCNY resigned or were terminated as a result of the hearings. One professor, Morris Schappes, served a year in prison on perjury charges for refusing to name colleagues who may have been affiliated with the Communist Party.
Smith believes that the investigations caused the largest political purge on a single campus in the history of the US. In 1942, the Federal Bureau of Investigation (FBI) began investigating the political views of W.E.B. DuBois, an African American sociologist who taught at Atlanta University. The investigation centered on DuBois's 1940 autobiography, Dusk of Dawn. Although the investigation was dismissed, Atlanta University fired DuBois in 1943. Public outcry led the university to reinstate DuBois, but he retired in 1944. In 1949, the House Un-American Activities Committee summoned faculty members from the University of Washington, and three tenured faculty members were fired. Public concern about the political opinions of college teachers intensified after World War II ended in 1945. Sociologists who were investigated by the FBI for their political beliefs during this period include Ernest Burgess, William Fielding Ogburn, Robert Staughton Lynd, Helen Lynd, E. Franklin Frazier, Pitirim A. Sorokin, Talcott Parsons, Herbert Blumer, Samuel Stouffer, C. Wright Mills, and Edwin H. Sutherland. McCarthyism and loyalty oaths Although government employees and entertainment figures were most often investigated for alleged communist sympathies during the "Second Red Scare" of the 1950s, many university faculty were accused as well. In their 1955 study of 2,451 social scientists who taught at American colleges and universities, Lazarsfeld and Thielens noted that the period of 1945–55 was especially marked by suspicion of and attacks on colleges for the political views of their faculty. These authors label this period "the difficult years."
In 1950, the University of California Board of Regents and its administration began to require faculty to sign a two-part political loyalty oath: one part required faculty to declare they were not Communists, and did not believe in the tenets of Communism; the other part was an oath of loyalty to the state of California and the US Constitution in accordance with the Levering Act. In early March, 1950, the faculty, who numbered 900, unanimously refused to sign even though the Regents threatened non-signers with termination. Faculty who refused to sign the loyalty oath were terminated, although most of the terminations were later overturned by a California state court. In 1951, members of the American Legion began accusing various university faculty of being communists. University administrations responded by banning left-wing student groups and communist speakers. Joseph McCarthy's Senate committee investigated 18 faculty members at Sarah Lawrence College, some of whom were pressured to resign. According to historian Ellen Schrecker, "it is very clear that an academic blacklist was in operation during the McCarthy era." An estimated 100 university faculty were terminated during the McCarthy era due to suspicions about their political beliefs. In 1970, Federal Bureau of Investigation Director J. Edgar Hoover sent an open letter to US college students, advising them to reject leftist politics, and throughout the 1970s and 1980s, the FBI conducted a secret counterintelligence program in libraries. Surveys Ford Foundation In 1955, Robert Maynard Hutchins led an effort within the Ford Foundation to document and analyze the effects of McCarthyism on academic freedom. He commissioned sociologist Paul Lazarsfeld to conduct a study of university faculty in the United States, and the results were published by Lazarsfeld and Wagner Thielens in a book, The Academic Mind. 
As part of a survey of faculty views about academic freedom during the "Second Red Scare", they asked 2,451 professors of social science a large number of questions, and found that about two thirds of these faculty members had been visited by the FBI and had been asked questions about the political beliefs of their colleagues, students, and themselves. They also included a few questions about political party affiliations and recent voting patterns, and reported that there were more Democrats than Republicans, 47% to 16%. According to sociologist Neil Gross, the study was significant because it was the first effort to poll university faculty specifically about their political views. Carnegie Commission on Higher Education The Lazarsfeld and Thielens study had examined a sample of 2,451 social science faculty members. A second study, conducted in 1969 on behalf of the Carnegie Commission on Higher Education, was the first to be performed with a large survey sample, extensive questions about political views, and what Neil Gross characterized as highly rigorous analytic methods. The study was conducted in 1969 by political scientist Everett Carll Ladd and sociologist Seymour Martin Lipset, who surveyed 60,000 academics in multiple fields of study at 303 institutions about their political views. Publishing their results in the 1975 book The Divided Academy, Ladd and Lipset found that about 46% of professors described themselves as liberal, 27% described themselves as moderates, and 28% described themselves as conservative. They also reported that faculty in the humanities and social sciences tended to be the most liberal, while those in "applied professional schools such as nursing and home economics" and in agriculture were the most conservative. Younger faculty tended to be more liberal than older faculty, and faculty across the political spectrum tended to disapprove of the student activism of the 1960s. 
Smaller follow-up surveys on behalf of the Carnegie Foundation held in 1975, 1984, 1989, and 1997 showed an increasing trend among professors toward the left, apart from a small movement to the right in 1984. By the 1997 study, 57% of the professors surveyed identified as liberals, 20% as moderates, and 24% as conservatives. Later surveys As later surveys were published, some scholars pointed to the harmful effects of a political imbalance in the faculty, and one editorial described the effects as "ruining college". Other scholars said that there were serious methodological problems that led to overestimates of the disparity between liberals and conservatives, and that there were political motivations for such overestimates. Higher Education Research Institute Beginning in 1989, the Higher Education Research Institute (HERI) at the University of California, Los Angeles has conducted a survey of full-time faculty at American four-year colleges and universities every three years. The HERI Faculty Survey gathers comprehensive information about the faculty experience, such as position, field, institutional details, and personal opinions and views, including a single question asking respondents to self-identify their political orientation as "far left", "liberal", "moderate/middle of the road", "conservative", or "far right". Between 1989 and 1998, the survey showed negligible change in the proportion of professors who described themselves as far left or liberal, approximately 45%. In a later administration of the survey, covering 16,112 professors, the percentage identifying as liberal or far left had increased to 60%. When asked in 2012 about the significance of the findings on political views, the director of HERI, Sylvia Hurtado, said that the numbers on political views attract a lot of attention, but that this attention may be misplaced because there may be trivial reasons for the shifts.
North American Academic Survey Study

Ladd and Lipset, who had conducted the original Carnegie survey, designed a telephone survey in 1999 of approximately 4,000 faculty, administrators, and students, called the North American Academic Survey Study (NAASS). The survey found the ratio of those identifying themselves as Democrats to those identifying as Republicans to be 12 to 1 in the humanities, and 6.5 to 1 in the social sciences. Stanley Rothman, the project lead after the deaths of Ladd and Lipset, published a paper using NAASS data along with Neil Nevitte and S. Robert Lichter which concluded that "complaints of ideologically-based discrimination in academic advancement deserve serious consideration and further study". Rothman, along with co-authors Matthew Woessner and April Kelly-Woessner, reported their extended findings in a book titled The Still Divided Academy.

Politics of the American Professoriate

Neil Gross and Solon Simmons conducted a survey starting in 2006 called the Politics of the American Professoriate, which led to several study papers and books. They designed their survey to improve on past studies which they felt had not included community college professors, addressed low response rates, or used standardized questions. The survey drew upon a sample size of 1,417 full-time professors from 927 institutions. In 2007, Gross and Simmons concluded in The Social and Political Views of American Professors that the professors surveyed were 44% liberal, 46% moderate, and 9% conservative. Inside Higher Ed reported that economist Lawrence H. Summers made his own analysis of the data collected by Gross and Simmons and found a larger gap among faculty teaching "core disciplines for undergraduate education" at selective research universities, but the report also concluded that "there was widespread praise for the way the survey was conducted, with Summers and others predicting that their data may become the definitive source for understanding professors' political views."
Gross published a more extensive analysis in his 2013 book Why Are Professors Liberal and Why Do Conservatives Care? and, with Simmons, in their 2014 compilation Professors and Their Politics. They strongly criticized what they saw as conservative political influence on the interpretation of data about faculty political views, arising from activists and think tanks seeking political reform of American higher education. Sociologist Joseph Hermanowicz described Professors and Their Politics as "a welcome addition to sociological literature examining higher education, which, in the case of its intersection with politics, has not received serious attention since Paul Lazarsfeld and Wagner Thielens's classic study of 1958 and Seymour Martin Lipset and Everett Carll Ladd's 1976 work."

Regional and disciplinary variations

Several studies have found that the political views of academics vary considerably between different regions of the United States, and between academic disciplines. In a 2016 opinion column in The New York Times, for example, political scientist Samuel J. Abrams used HERI data to argue that the ratio of liberal to conservative faculty varied greatly between regions. According to Abrams, the ratio of liberal to conservative professors was highest in New England, where it was 28:1, compared to 6:1 nationally. Abrams also commented on these findings: "This previously unspecified ideological imbalance on campuses has led to cries of discrimination against right of center professors and scores of reports from both academic and popular press sources which have chronicled the concerns with this "beleaguered" and "oppressed" minority on campus... The data clearly reveal that conservative faculty are not only as satisfied with their career choice – if not more so – as their liberal counterparts, but that these faculty are also as progressive in their teaching methods and maintain almost identical outlooks toward their personal and professional lives."
Mitchell Langbert examined variations in political party registration in 2018, describing a higher concentration of Democrats in elite liberal arts institutions in the northeast, and found more Democrats among female faculty than male faculty. He also found the greatest ratio of Democrats to Republicans in interdisciplinary studies and the humanities, and the lowest ratio in professional studies and in science and engineering. Focusing specifically on social psychology academics, a 2014 study found that "[b]y 2006, however, the ratio of Democrats to Republicans had climbed to more than 11:1." The six authors, all from different universities and members of the Heterodox Academy, also said that by 2012, "for every politically conservative social psychologist in academia there are about 14 liberal psychologists", according to Arthur C. Brooks. Academy member Steven Pinker described the study as "one of the most important papers in the recent history of the social sciences". Russell Jacoby questioned the focus of the study on the social sciences rather than STEM fields, saying that the "reason is obvious: Liberals do not outnumber conservatives in many of those disciplines".

Effects

On research

A 2020 study asked participants to read the abstracts of 194 psychology papers and judge which political side (if any) the findings seemed to support. The researchers found no relationship between perceived political slant and replicability, impact factor, or the quality of the research design. They did, however, find modest evidence that research with a greater perceived political slant, whether liberal or conservative, was less replicable.

On students

Since the modern conservative movement in the United States began in the mid-20th century, conservative authors have argued that college students are no longer taught how to think, but what to think, as a result of the domination of far-left faculty. William F.
Buckley's God and Man at Yale: The Superstitions of "Academic Freedom", Allan Bloom's The Closing of the American Mind, Dinesh D'Souza's Illiberal Education, and Roger Kimball's Tenured Radicals have made such arguments. George Yancey argues that there is little evidence that the political orientation of faculty members affects the political attitudes of their students. A study by Mack D. Mariani and Gordon J. Hewitt published in 2008 examined ideological changes in college students between their first and senior years. It found that these changes correlated with those of most Americans between the ages of 18 and 24 during the same time period, that there was no evidence that faculty ideology was "associated with changes in students' ideological orientation", and that students at more liberal schools "were not statistically more likely to move to the left" than students at other institutions. Similarly, Stanley Rothman, April Kelly-Woessner, and Matthew Woessner found in 2010 that students' "aggregate attitudes do not appear to vary much between their first and final years," and wrote that this "raises some questions about charges that campuses politically indoctrinate students." Analysis of a survey of students' political attitudes by M. Kent Jennings and Laura Stoker found that the tendency of college graduates to be more liberal is largely due to "the fact that more liberal students are more likely to go to college in the first place." According to a 2020 study, there is a regression-to-the-mean effect among individuals who go to college: both left-wing and right-wing students become more moderate during their time in college.

On faculty

Rothman, Kelly-Woessner, and Woessner also found in 2010 that 33% of conservative faculty say they are "very satisfied" with their careers, while 24% of liberal faculty say so. Over 90% of Republican-voting professors said that they would still become professors if they could do it all over again.
The authors concluded that, although such numbers are not definitive as to how faculty members feel that they have been treated, they provide some evidence against the idea that conservative faculty members are systematically discriminated against. Woessner and Kelly-Woessner also examined what might have given rise to the differences in the numbers of liberals and conservatives. They looked at the choices made by undergraduate students when planning future careers. They found that there were no differences in intellectual ability between conservative and liberal students, but that liberal students were significantly more likely to choose to pursue PhD degrees and academic careers, whereas conservative students of identical academic accomplishments were more likely to pursue business careers. They concluded that the greater numbers of liberal than conservative professors could be accounted for by self-selection in career paths, rather than by bias in hiring or promotion. Lawrence Summers said at a symposium about The Social and Political Views of American Professors that he considers it a problem that some academics express an "extreme hostility" to conservative opinions. He observed that faculty who were invited to give Tanner Lectures on Human Values were almost always liberals, and expressed concern that an imbalance in political representation at universities could impede rigorous examination of issues. He also attributed the small numbers of conservative professors largely to the career choices made by people comparing academic careers with other options. One outcome of these controversies was the founding of the Heterodox Academy in 2015, a bipartisan organization of professors seeking to increase the acceptance of diverse political viewpoints in academic discourse. As of February 2018, over 1500 college professors had joined Heterodox Academy. 
The group publishes a ranking which rates the top 150 universities in the United States based on their commitment to diversity of viewpoint. Jon Shields and Joshua Dunn surveyed 153 conservative professors for their 2016 study Passing on the Right: Conservative Professors in the Progressive University. The authors wrote that these professors sometimes have to use "coping strategies that gays and lesbians have used in the military and other inhospitable work environments" in order to preserve their political identity. One tactic, used by about one-third of the professors, was to "pass" (or pretend) to hold liberal views around their colleagues. Shields stated his view that the populist right may overstate the bias that does exist and that conservatives can succeed using mechanisms like academic tenure to protect their freedom.

See also

Academic bias
Media bias
Political issues in higher education in the United States
Political correctness in education

References

Further reading
Political views of American academics
https://en.wikipedia.org/wiki/Journal%20of%20Polymer%20Science
Journal of Polymer Science is a peer-reviewed journal of polymer science currently published by John Wiley & Sons. It was originally established as the Journal of Polymer Science in 1946 by Interscience Publishers and the founding editor Herman F. Mark, but it was split into various parts in 1962. The journal has undergone reorganization several times since; in 2020 it consolidated back into a single publication. The editor-in-chief is Joseph W. Krumpfer.

History

Establishment
Journal of Polymer Science (1946–1962)

First re-organization
Journal of Polymer Science Part A: General Papers (1963–1965)
Journal of Polymer Science Part A-1: Polymer Chemistry (1966–September 1972)
Journal of Polymer Science Part A-2: Polymer Physics (1966–September 1972)
Journal of Polymer Science Part B: Polymer Letters (1963–September 1972)
Journal of Polymer Science Part C: Polymer Symposia (1963–1972)

The coverage of biopolymers was split into a distinct journal, Biopolymers.

Second re-organization
Journal of Polymer Science: Polymer Physics Edition (October 1972 – 1985)
Journal of Polymer Science: Polymer Letters Edition (October 1972 – 1985)
Journal of Polymer Science: Polymer Chemistry Edition (1973–1985)
Journal of Polymer Science: Polymer Symposia (1973–1986)

Third re-organization
Journal of Polymer Science Part A: Polymer Chemistry (1986–2019)
Journal of Polymer Science Part B: Polymer Physics (1986–2019)
Journal of Polymer Science Part C: Polymer Letters (1986–1990)

Fourth re-organization
Journal of Polymer Science (2020 onwards)

References

External links
Journal of Polymer Science
https://en.wikipedia.org/wiki/Geon%20%28psychology%29
Geons are the simple 2D or 3D forms such as cylinders, bricks, wedges, cones, circles and rectangles corresponding to the simple parts of an object in Biederman's recognition-by-components theory. The theory proposes that the visual input is matched against structural representations of objects in the brain. These structural representations consist of geons and their relations (e.g., an ice cream cone could be broken down into a sphere located above a cone). Only a modest number of geons (< 40) are assumed. When combined in different relations to each other (e.g., on-top-of, larger-than, end-to-end, end-to-middle) and coarse metric variation such as aspect ratio and 2D orientation, billions of possible 2- and 3-geon objects can be generated. Two classes of shape-based visual identification that are not done through geon representations are those involved in: a) distinguishing between similar faces, and b) classifications that don’t have definite boundaries, such as that of bushes or a crumpled garment. Typically, such identifications are not viewpoint-invariant.

Properties of geons

There are four essential properties of geons:

View-invariance: Each geon can be distinguished from the others from almost any viewpoint except for “accidents” at highly restricted angles in which one geon projects an image that could be a different geon, as, for example, when an end-on view of a cylinder can be a sphere or circle. Objects represented as an arrangement of geons would, similarly, be viewpoint invariant.
Stability or resistance to visual noise: Because the geons are simple, they are readily supported by the Gestalt property of smooth continuation, rendering their identification robust to partial occlusion and degradation by visual noise as, for example, when a cylinder might be viewed behind a bush.
Invariance to illumination direction and surface markings and texture.
High distinctiveness: The geons differ qualitatively, with only two or three levels of an attribute, such as straight vs. curved, parallel vs. non-parallel, positive vs. negative curvature. These qualitative differences can be readily distinguished, rendering the geons, and the objects composed of them, readily distinguishable.

Derivation of invariant properties of geons

Viewpoint invariance: The viewpoint invariance of geons derives from their being distinguished by three nonaccidental properties (NAPs) of contours that do not change with orientation in depth:

Whether the contour is straight or curved,
The vertex that is formed when two or three contours coterminate (that is, end together at the same point) in the image, i.e., an L (2 contours), fork (3 contours with all angles < 180°), or an arrow (3 contours, with one angle > 180°), and
Whether a pair of contours is parallel or not (with allowance for perspective). When not parallel, the contours can be straight (converging or diverging) or curved, with positive or negative curvature, forming a convex or concave envelope, respectively (see Figure below).

NAPs can be distinguished from metric properties (MPs), such as the degree of non-zero curvature of a contour or its length, which do vary with changes in orientation in depth.

Invariance to lighting direction and surface characteristics

Geons can be determined from the contours that mark the edges at orientation and depth discontinuities of an image of an object, i.e., the contours that specify a good line drawing of the object’s shape or volume. Orientation discontinuities define those edges where there is a sharp change in the orientation of the normal to the surface of a volume, as occurs at the contour at the boundaries of the different sides of a brick.
A depth discontinuity is where the observer’s line of sight jumps from the surface of an object to the background (i.e., is tangent to the surface), as occurs at the sides of a cylinder. The same contour might mark both an orientation and a depth discontinuity, as with the back edge of a brick. Because the geons are based on these discontinuities, they are invariant to variations in the direction of lighting, shadows, and surface texture and markings.

Geons and generalized cones

The geons constitute a partition of the set of generalized cones, which are the volumes created when a cross section is swept along an axis. For example, a circle swept along a straight axis would define a cylinder (see Figure). A rectangle swept along a straight axis would define a "brick" (see Figure). Four dimensions with contrastive values (i.e., mutually exclusive values) define the current set of geons (see Figure):

Shape of cross section: round vs. straight. For example, as stated above, a rectangle swept along a straight axis would define a "brick" and the cross section would be straight.
Axis: straight vs. curved.
Size of cross section as it is swept along the axis: constant vs. expanding (or contracting) vs. expanding then contracting vs. contracting then expanding. The cross-section size of a "brick" would be constant.
Termination of a geon with a constant-sized cross section: truncated vs. converging to a point vs. rounded.

These variations in the generation of geons create shapes that differ in NAPs.

Experimental tests of the viewpoint invariance of geons

There is now considerable support for the major assumptions of geon theory (see Recognition-by-components theory). One issue that generated some discussion was the finding that the geons were viewpoint invariant, with little or no cost in the speed or accuracy of recognizing or matching a geon from an orientation in depth not previously experienced.
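The viewpoint invariance of a NAP such as straightness, versus the variability of an MP such as length, can be checked numerically: points on a straight 3D contour remain collinear after any non-accidental rotation in depth followed by orthographic projection, while the projected length changes with viewpoint. The sketch below is a generic geometric illustration, not code from Biederman's work:

```python
import math

def rotate_y(p, theta):
    """Rotate a 3D point about the y-axis (a rotation in depth)."""
    x, y, z = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x + s * z, y, -s * x + c * z)

def project(p):
    """Orthographic projection onto the image plane (drop the depth axis)."""
    return (p[0], p[1])

def collinear(a, b, c, eps=1e-9):
    """2D collinearity test via the cross product of the two edge vectors."""
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (b[1] - a[1]) * (c[0] - a[0])) < eps

# Three collinear points on a straight contour in 3D.
pts = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0), (2.0, 2.0, 2.0)]

theta = 0.7  # an arbitrary, non-accidental viewing angle
img = [project(rotate_y(p, theta)) for p in pts]

# NAP: straightness survives the change of viewpoint...
assert collinear(*img)
# ...while the MP "projected length" depends on the viewpoint.
length = math.dist(img[0], img[2])
```

Repeating the projection at a different angle preserves collinearity but yields a different projected length, which is exactly the NAP/MP contrast described above.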
Some studies reported modest costs in matching geons at new orientations in depth, but these studies had several methodological shortcomings.

Research on geons

There is much research about geons and how they are interpreted. Kim Kirkpatrick-Steger, Edward A. Wasserman and Irving Biederman have found that the individual geons, along with their spatial composition, are important in recognition. Furthermore, the findings in this research seem to indicate that non-accidental sensitivity can be found in all shape-discriminating species.

Notes
Geon (psychology)
https://en.wikipedia.org/wiki/Davis%20reagent
Davis reagent (3-phenyl-2-(phenylsulfonyl)-1,2-oxaziridine or 2-(benzenesulfonyl)-3-phenyloxaziridine) is a reagent used for oxidation in the Davis oxidation reaction, as well as oxidation of thiols to sulfones. It is named for Franklin A. Davis.

References
Davis reagent
https://en.wikipedia.org/wiki/OpenBSD%20security%20features
The OpenBSD operating system focuses on security and the development of security features. According to author Michael W. Lucas, OpenBSD "is widely regarded as the most secure operating system available anywhere, under any licensing terms."

API and build changes

Bugs and security flaws are often caused by programmer error. A common source of error is the misuse of the strcpy and strcat string functions in the C programming language. There are two common alternatives, strncpy and strncat, but they can also be difficult to understand and easy to misuse, so OpenBSD developers Todd C. Miller and Theo de Raadt designed the strlcpy and strlcat functions. These functions are intended to make it harder for programmers to accidentally leave buffers unterminated or allow them to be overflowed. They have been adopted by the NetBSD and FreeBSD projects but not by the GNU C Library. On OpenBSD, the linker has been changed to issue a warning when unsafe string manipulation functions, such as strcpy, strcat, or sprintf, are found. All occurrences of these functions in the OpenBSD source tree have been replaced. In addition, a static bounds checker is included in OpenBSD in an attempt to find other common programming mistakes at compile time. Other security-related APIs developed by the OpenBSD project include issetugid and arc4random.

Kernel randomization

In a June 2017 email, Theo de Raadt stated that a problem with stable systems was that they could be running for months at a time. Although there is considerable randomization within the kernel, some key addresses remain the same. The project in progress modifies the linker so that on every boot the kernel is relinked, along with all other randomizations. This differs from kernel ASLR; in the email he states that "As a result, every new kernel is unique. The relative offsets between functions and data are unique ... [The current] change is scaffolding to ensure you boot a newly-linked kernel upon every reboot ...
so that a new random kernel can be linked together ... On a fast machine it takes less than a second ... A reboot runs the new kernel, and yet another kernel is built for the next boot. The internal deltas between functions inside the kernel are not where an attacker expects them to be, so he'll need better info leaks".

Memory protection

OpenBSD integrates several technologies to help protect the operating system from attacks such as buffer overflows or integer overflows. Developed by Hiroaki Etoh, ProPolice is a GCC extension designed to protect applications from stack-smashing attacks. It does this through a number of operations: local stack variables are reordered to place buffers after pointers, protecting them from corruption in case of a buffer overflow; pointers from function arguments are also placed before local buffers; and a canary value is placed after local buffers which, when the function exits, can sometimes be used to detect buffer overflows. ProPolice chooses whether or not to protect a buffer based on automatic heuristics which judge how vulnerable it is, reducing the performance overhead of the protection. It was integrated into OpenBSD's version of GCC in December 2002, and first made available in OpenBSD 3.3; it was applied to the kernel in release 3.4. The extension works on all the CPU architectures supported by OpenBSD and is enabled by default, so any C code compiled will be protected without user intervention. In May 2004, OpenBSD on the SPARC platform received further stack protection in the form of StackGhost. This makes use of features of the SPARC architecture to help prevent exploitation of buffer overflows. Support for SPARC64 was added in March 2005. OpenBSD 3.4 introduced W^X, a memory management scheme to ensure that memory is either writable or executable, but never both, which provides another layer of protection against buffer overflows.
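The W^X rule just described (a page may be writable or executable, never both) amounts to a simple invariant on page permissions. A toy state-machine model of that invariant, for illustration only and not OpenBSD's actual implementation:

```python
class Page:
    """Toy page-permission model enforcing the W^X invariant:
    a page may be writable or executable, but never both at once."""

    def __init__(self):
        # Freshly mapped data page: writable, not executable.
        self.writable = True
        self.executable = False

    def mprotect(self, writable: bool, executable: bool):
        """Change permissions; refuse any W+X combination."""
        if writable and executable:
            raise PermissionError("W^X violation: page cannot be W and X")
        self.writable, self.executable = writable, executable

page = Page()
# e.g. a JIT finalizing generated code: drop W before gaining X.
page.mprotect(writable=False, executable=True)

try:
    page.mprotect(writable=True, executable=True)  # attempt a W+X mapping
    allowed = True
except PermissionError:
    allowed = False  # the invariant held
```

Under this model, code that needs to modify and then run a buffer must transition through separate write-only and execute-only states, which is what defeats the classic "write shellcode, then jump to it" pattern.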
While this is relatively easy to implement on a platform like x86-64, which has hardware support for the NX bit, OpenBSD is one of the few operating systems to support this on the generic i386 platform, which lacks built-in per-page execute controls. During the development cycle of the 3.8 release, changes were made to the malloc memory management functions. In traditional Unix operating systems, malloc allocates more memory by extending the Unix data segment, a practice that has made it difficult to implement strong protection against security problems. The malloc implementation now in OpenBSD makes use of the mmap system call, which was modified so that it returns random memory addresses and ensures that different areas are not mapped next to each other. In addition, allocation of small blocks in shared areas is now randomized, and the free function was changed to return memory to the kernel immediately rather than leaving it mapped into the process. A number of additional, optional checks were also added to aid in development. These features make program bugs easier to detect and harder to exploit: instead of memory being corrupted or an invalid access being ignored, they often result in a segmentation fault and termination of the process. This has brought to light several issues with software running on OpenBSD 3.8, particularly with programs reading beyond the start or end of a buffer, a type of bug that would previously not be detected directly but can now cause an error. These abilities took more than three years to implement without considerable performance loss.

Cryptography and randomization

One of the goals of the OpenBSD project is the integration of facilities and software for strong cryptography into the core operating system. To this end, a number of low-level features are provided, including a source of strong pseudorandom numbers; built-in cryptographic hash functions and transforms; and support for cryptographic hardware (OpenBSD Cryptographic Framework).
These abilities are used throughout OpenBSD, including the bcrypt password-hashing algorithm derived from Bruce Schneier's Blowfish block cipher, which takes advantage of the CPU-intensive Blowfish key schedule, making brute-force attacks less practical. In OpenBSD 5.3, support for full disk encryption was introduced, but enabling it during the installation of OpenBSD required manual intervention from the user, who had to exit the installer and enter some commands. Starting from OpenBSD 7.3, the installer supports enabling full disk encryption using a guided procedure, no longer requiring manual intervention. To protect sensitive information such as passwords from leaking onto disk, where it can persist for many years, OpenBSD supports encryption of swap space. The swap space is split up into many small regions that are each assigned their own encryption key, which is generated randomly and automatically with no input from the user, held entirely in memory, and never written to disk except when hibernating; as soon as the data in a region is no longer required, OpenBSD discards its encryption key, effectively transforming the data in that region into useless garbage. This feature can be toggled with a single sysctl configuration option and requires no prior setup, disk partitioning, or partition-related changes; furthermore, there is no choice of encryption parameters (such as the algorithm or key length to use), as strong parameters are always used. There is no harm and no loss of functionality with this feature: the encryption keys used to access swapped processes are only lost when the computer crashes (e.g. power loss), after which all operating systems discard the previous contents of memory and swap anyway, and hibernation continues to work as usual.
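The per-region key scheme described above can be caricatured in a few lines. The XOR "cipher" here is a stand-in for the real block cipher OpenBSD uses, and all names and the region size are illustrative assumptions:

```python
import os

REGION = 16  # toy region size in bytes (illustrative, not OpenBSD's value)

class ToySwap:
    """Caricature of per-region swap encryption: each region gets its own
    random key, held only in memory; discarding the key destroys the data."""

    def __init__(self):
        self.keys = {}   # region index -> key (memory only, never on disk)
        self.disk = {}   # region index -> ciphertext (persists on disk)

    def _xor(self, data: bytes, key: bytes) -> bytes:
        return bytes(d ^ k for d, k in zip(data, key))

    def swap_out(self, region: int, data: bytes):
        key = os.urandom(REGION)           # fresh random key, no user input
        self.keys[region] = key
        self.disk[region] = self._xor(data, key)

    def swap_in(self, region: int) -> bytes:
        return self._xor(self.disk[region], self.keys[region])

    def discard(self, region: int):
        del self.keys[region]              # on-disk ciphertext is now garbage

swap = ToySwap()
secret = b"hunter2 password"
swap.swap_out(0, secret)
restored = swap.swap_in(0)   # readable while the region key is held
swap.discard(0)              # region no longer needed: key gone, data gone
```

After `discard`, the ciphertext still sits on "disk", but without the in-memory key there is no way to turn it back into the plaintext, which is the property the paragraph above describes.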
This feature is enabled by default in OpenBSD 3.8 (released in November 2005) and later; OpenBSD, as of 2022, remains the only prominent operating system to have swap encrypted by default independently of disk encryption and its user-provided password. (Windows requires toggling a configuration setting that is not presented in its user-facing Control Panel and Settings apps, and other operating systems, including macOS, FreeBSD, and every Linux-based operating system, rely on the existing disk encryption features to encrypt the swap, which often (a) need to be enabled by the user manually, (b) require setup (if disk encryption wasn't chosen during the operating system's installation) which is not as trivial as toggling swap encryption on OpenBSD, and (c) use the user-provided password, which users need to remember and which could be weak, guessable, or even extracted from the users.) The network stack also makes heavy use of randomization to increase security and reduce the predictability of various values that may be of use to an attacker, including TCP initial sequence numbers and timestamps, and ephemeral source ports. A number of features to increase network resilience and availability, including countermeasures for problems with ICMP and software for redundancy, such as CARP and pfsync, are also included. The project was the first to disable the plain-text telnet daemon in favor of the encrypted SSH daemon, in 1999, and features other integrated cryptographic software such as IPsec. The telnet daemon was completely removed from OpenBSD in 2005, before the release of OpenBSD version 3.8.

Signify

The OpenBSD project invented its own utility for cryptographic signing and verification of files, signify, instead of using existing standards and software such as OpenPGP and GnuPG. The creator of the signify utility, Ted Unangst, wrote in 2015, speaking of OpenPGP and GnuPG: "The concerns I had using an existing tool were complexity, quality, and complexity."
This is in line with the project's longtime tendency to reduce complexity and, in turn, reduce the probability of vulnerabilities existing in the software, and to help users understand the software better and make more security-educated decisions. signify is integrated into the base operating system and used for verification of all releases, patches, and packages starting with OpenBSD 5.5. In contrast, other free-software operating systems and security-focused software tend to use OpenPGP for release verification, and as of 2022 continue to do so, including: Debian, a prominent operating system that is also used as a base for other operating systems, including Ubuntu; Kali Linux, a specialized operating system for penetration testing, security research, digital forensics, and reverse engineering; Qubes OS, a security-focused operating system; Tor Browser, an anonymous Web browser; SecureDrop, a software package for journalists and whistleblowers to exchange information securely and anonymously over the Internet; and VeraCrypt, a software program for on-the-fly encryption and full disk encryption.

X11

In X11 on OpenBSD, neither the X server nor X clients normally have any escalated direct memory or hardware privileges: when driving X with the Intel(4) or Radeon(4) drivers, these normally interact with the underlying hardware via the Direct Rendering Management(4) kernel interface only, so that low-level memory/hardware access is handled solely by the kernel. Other drivers such as WSFB follow a similar pattern. For this reason, X11 on OpenBSD does not open up low-level memory or hardware access to user/root programs as is done on some other systems, and as was done in the past, which then needed the user to escalate the machdep.allowaperture setting from its default zero setting to an insecure setting. OpenBSD's version of the X Window System (named Xenocara) has some security modifications.
The server and some of the default applications are patched to make use of privilege separation, and OpenBSD provides an "aperture" driver to limit X's access to memory. However, after work on X security flaws by Loïc Duflot, Theo de Raadt commented that the aperture driver was merely "the best we can do" and that X "violates all the security models you will hear of in a university class." He went on to castigate X developers for "taking their time at solving this > 10-year-old problem." On November 29, 2006, a VESA kernel driver was developed that permitted X to run, albeit more slowly, without the use of the aperture driver. On February 15, 2014, X was further modified to allow it to run without root privileges. After the discovery of a security vulnerability in X, OpenBSD no longer supports running X as root; it only supports running X via a display manager as a dedicated _x11 user.

Other features

Privilege separation, privilege revocation, chrooting and randomized loading of libraries also play a role in increasing the security of the system. Many of these have been applied to the OpenBSD versions of common programs such as tcpdump and Apache, and to the BSD Authentication system. OpenBSD has a history of providing its users with full disclosure in relation to various bugs and security breaches detected by the OpenBSD team. This is exemplified by the project's slogan: "Only two remote holes in the default install, in a heck of a long time!" OpenBSD is intended to be secure by default, which includes (but is not limited to) having all non-essential services disabled by default. This is done not only to spare users from having to learn how to secure their computers after installing OpenBSD, but also in the hope of making users more aware of security considerations, by requiring them to make conscious decisions to enable features that could reduce their security.
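Privilege revocation, mentioned above, follows a one-way pattern: a daemon starts privileged, does its privileged setup, then permanently drops to an unprivileged account before handling untrusted input. The toy model below is illustrative only; a real daemon would call the setuid(2)/setgid(2) system calls:

```python
class ToyProcess:
    """Toy model of one-way privilege revocation: once a process drops
    from root (uid 0) to an unprivileged uid, it cannot regain root."""

    ROOT = 0

    def __init__(self):
        # A daemon typically starts as root to bind ports, open devices, etc.
        self.uid = self.ROOT

    def setuid(self, uid: int):
        # Mirrors the basic kernel rule: only root may change its uid;
        # in this simplified model the change is permanent (no saved-uid).
        if self.uid != self.ROOT:
            raise PermissionError("EPERM: only root can change uid")
        self.uid = uid

daemon = ToyProcess()
daemon.setuid(100)  # drop to a dedicated unprivileged account, like _x11

try:
    daemon.setuid(ToyProcess.ROOT)  # a compromised daemon tries to regain root
    regained = True
except PermissionError:
    regained = False
```

The point of the pattern is that a later compromise of the daemon happens in the unprivileged state, where the attempt to climb back to root fails.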
OpenBSD 5.9 included support for the then–new pledge system call (introduced in OpenBSD 5.8 as tame and renamed in 5.9 to pledge) for restricting process capabilities to a minimal subset required for correct operation. If the process is compromised and attempts to perform an unintended behavior, it will be terminated by the kernel. OpenBSD 6.4 introduced the unveil system call for restricting filesystem visibility to a minimum level. pledge and unveil are used together to confine applications, further limiting what they're otherwise permitted to do under the user account they're running as. Since the introduction of pledge, base OpenBSD programs (included out of the box in OpenBSD), applications (handled by their developers), and ports (of applications, handled by the OpenBSD team) have been updated to be confined with pledge and/or unveil. Some examples of third-party applications updated with these features (by their developers or in OpenBSD's app ports) include the Chromium and Firefox web browsers. References External links Exploit Mitigation Techniques: an Update After 10 Years Theo de Raadt's email about secure programming: On the matter of strlcpy/strlcat acceptance by industry Security Operating system security Embedded operating systems OpenBSD
OpenBSD security features
Technology
3,163
8,690,896
https://en.wikipedia.org/wiki/Hydroamination
In organic chemistry, hydroamination is the addition of an N–H bond of an amine across a carbon-carbon multiple bond of an alkene, alkyne, diene, or allene. In the ideal case, hydroamination is atom economical and green. Amines are common in the fine-chemical, pharmaceutical, and agricultural industries. Hydroamination can be used intramolecularly to create heterocycles or intermolecularly with a separate amine and unsaturated compound. The development of catalysts for hydroamination remains an active area of research, especially for alkenes. Although practical hydroamination reactions can be effected for dienes and electrophilic alkenes, the term hydroamination often implies metal-catalyzed processes. History Hydroamination is a well-established technology for generating fragrances from myrcene. In this conversion, diethylamine adds across the diene substituent, the reaction being catalyzed by lithium diethylamide. Intramolecular hydroaminations were reported by Tobin J. Marks in 1989 using metallocene catalysts derived from rare-earth metals such as lanthanum, lutetium, and samarium. Catalytic rates correlated inversely with the ionic radius of the metal, perhaps as a consequence of steric interference from the ligands. In 1992, Marks developed the first chiral hydroamination catalysts by using a chiral auxiliary; these were the first hydroamination catalysts to favor one specific stereoisomer. Chiral auxiliaries on the metallocene ligands were used to dictate the stereochemistry of the product. The first non-metallocene chiral catalysts were reported in 2003; they used bisarylamido and aminophenolate ligands and gave higher enantioselectivity. Reaction scope Hydroamination has been examined with a variety of amines, unsaturated substrates, and vastly different catalysts. Amines that have been investigated span a wide scope, including primary, secondary, cyclic, and acyclic amines as well as anilines with diverse steric and electronic substituents.
The unsaturated substrates that have been investigated include alkenes, dienes, alkynes, and allenes. For intramolecular hydroamination, various aminoalkenes have been examined. Products Addition across the unsaturated carbon-carbon bond can be Markovnikov or anti-Markovnikov depending on the catalyst. When considering the possibility of R/S chirality, four products can be obtained: Markovnikov addition with R or S and anti-Markovnikov addition with R or S. Although there have been many reports of catalytic hydroamination with a wide range of metals, there are far fewer describing enantioselective catalysis to selectively make one of the four possible products. Recently, there have been reports of selectively making the thermodynamic or kinetic product, which can be related to the racemic Markovnikov or anti-Markovnikov structures (see Thermodynamic vs kinetic product below). Catalysts and catalytic cycle Hydroamination reactions are atom-efficient processes that generally use readily available and cheap starting materials; therefore, a general catalytic strategy is highly desirable. Also, direct catalytic hydroamination strategies have in principle significant benefits over more classical methods of preparing amine-containing compounds, including a reduction in the number of synthetic steps required. However, hydroamination reactions pose some tough challenges for catalysis: strong electron repulsion between the nitrogen atom lone pair and the electron-rich carbon-carbon multiple bond, coupled with hydroamination reactions being entropically disfavoured (particularly the intermolecular version), results in a large reaction barrier. Regioselectivity issues also hamper the synthetic utility of the resulting products, with Markovnikov addition of the amine being the most common outcome over the less favoured anti-Markovnikov addition.
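The product space described under Products above (Markovnikov or anti-Markovnikov regiochemistry, crossed with R or S configuration) can be enumerated mechanically; a minimal sketch, with product labels that are purely illustrative and not tied to any particular substrate:

```python
from itertools import product

# The two independent selectivity axes discussed above.
regiochemistries = ["Markovnikov", "anti-Markovnikov"]
configurations = ["R", "S"]

# Crossing the two axes yields the four possible hydroamination products.
products = [f"{regio} ({config})"
            for regio, config in product(regiochemistries, configurations)]

for p in products:
    print(p)
```

An enantioselective catalyst, in these terms, is one that funnels the reaction toward a single entry of this four-element set.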
As a result, there are now numerous catalysts that can be utilised in the hydroamination of alkene, allene and alkyne substrates, including various metal-based heterogeneous catalysts, early-transition-metal complexes (e.g. titanium and zirconium), late-transition-metal complexes (e.g. ruthenium and palladium), lanthanide and actinide complexes (e.g. samarium and lanthanum), as well as Brønsted acids and bases. Catalysts Many metal-ligand combinations have been reported to catalyze hydroamination, including main group elements such as the alkali metals (e.g. lithium), group 2 metals such as calcium, and p-block metals such as aluminum, indium, and bismuth. In addition to these main group examples, extensive research has been conducted on the transition metals, with reports of early, mid, and late metals, as well as first, second, and third row elements. Finally, the lanthanides have been thoroughly investigated. Zeolites have also shown utility in hydroamination. Catalytic cycles The mechanism of metal-catalyzed hydroamination has been well studied. Particularly well studied is the organolanthanide-catalyzed intramolecular hydroamination of alkenes. First, the catalyst is activated by amide exchange, generating the active catalyst (i). Next, the alkene inserts into the Ln-N bond (ii). Finally, protonolysis occurs, generating the cyclized product while also regenerating the active catalyst (iii). Although this mechanism depicts the use of a lanthanide catalyst, it is the basis for rare-earth, actinide, and alkali metal based catalysts. Late transition metal hydroamination catalysts have multiple models based on the regioselectivity-determining step. The main categories are (1) nucleophilic attack on an alkene, alkyne, or allyl ligand and (2) insertion of the alkene into the metal-amide bond. Mechanisms are supported by rate studies, isotopic labeling, and trapping of the proposed intermediates.
Thermodynamics and kinetics The hydroamination reaction is approximately thermochemically neutral. The reaction, however, suffers from a high activation barrier, perhaps owing to the repulsion between the electron-rich substrate and the amine nucleophile. The intermolecular reaction is also accompanied by a highly negative entropy change, making it unfavorable at higher temperatures. Consequently, catalysts are necessary for this reaction to proceed. As usual in chemistry, intramolecular processes occur at faster rates than their intermolecular versions. Thermodynamic vs kinetic product In general, most hydroamination catalysts require elevated temperatures to function efficiently, and as such, only the thermodynamic product is observed. The isolation and characterization of the rarer and more synthetically valuable kinetic allyl amine product was reported when allenes were used as the unsaturated substrate. One system utilized temperatures of 80 °C with a rhodium catalyst and aniline derivatives as the amine. The other reported system utilized a palladium catalyst at room temperature with a wide range of primary and secondary, cyclic and acyclic amines. Both systems produced the desired allyl amines in high yield, which contain an alkene that can be further functionalized through traditional organic reactions. Base catalyzed hydroamination Strong bases catalyze hydroamination, an example being the ethylation of piperidine using ethene. Such base-catalyzed reactions proceed well with ethene, but higher alkenes are less reactive. Hydroamination catalyzed by group (IV) complexes Certain titanium and zirconium complexes catalyze intermolecular hydroamination of alkynes and allenes. Both stoichiometric and catalytic variants were initially examined with zirconocene bis(amido) complexes.
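The practical effect of the high activation barrier, and of a catalyst lowering it, can be illustrated with the Arrhenius equation k = A·exp(−Ea/RT). The barrier heights and pre-exponential factor below are hypothetical round numbers for illustration, not measured values for any hydroamination system:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_k(A, Ea_kJ, T):
    """Rate constant k = A * exp(-Ea / (R*T)); Ea given in kJ/mol."""
    return A * math.exp(-Ea_kJ * 1000 / (R * T))

A = 1e12      # hypothetical pre-exponential factor, 1/s
T = 298.15    # room temperature, K

k_uncat = arrhenius_k(A, 150, T)  # hypothetical uncatalyzed barrier, kJ/mol
k_cat = arrhenius_k(A, 80, T)     # hypothetical catalyzed barrier, kJ/mol

print(f"uncatalyzed k ~ {k_uncat:.3e} 1/s")
print(f"catalyzed   k ~ {k_cat:.3e} 1/s")
print(f"rate enhancement ~ {k_cat / k_uncat:.1e}")
```

Even though the reaction is roughly thermoneutral, lowering a (hypothetical) 150 kJ/mol barrier to 80 kJ/mol accelerates the reaction by roughly twelve orders of magnitude at room temperature, which is why catalysis dominates this field.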
Titanocene amido and sulfonamido complexes catalyze the intramolecular hydroamination of aminoalkenes via a [2+2] cycloaddition that forms the corresponding azametallacyclobutane. Subsequent protonolysis by incoming substrate gives the α-vinyl-pyrrolidine (1) or tetrahydropyridine (2) product. Experimental and theoretical evidence support the proposed imido intermediate and mechanism with neutral group IV catalysts. Formal hydroamination The addition of hydrogen and an amino group (NR2) using reagents other than the amine HNR2 is known as a "formal hydroamination" reaction. Although the advantages of atom economy and/or ready availability of the nitrogen source are diminished as a result, the greater thermodynamic driving force, as well as the ability to tune the aminating reagent, are potentially useful. In place of the amine, hydroxylamine esters and nitroarenes have been reported as nitrogen sources. Applications Hydroamination could find applications due to the valuable nature of the resulting amines, as well as the greenness of the process. Functionalized allylamines, which can be produced through hydroamination, have extensive pharmaceutical application, although presently such species are not prepared by hydroamination. Hydroamination has been utilized to synthesize the allylamine cinnarizine in quantitative yield. Cinnarizine treats both vertigo and motion sickness related nausea. Hydroamination is also promising for the synthesis of alkaloids. An example was the hydroamination step used in the total synthesis of (-)-epimyrtine. See also Ammoxidation - reaction of ammonia with alkenes to give nitriles Hydroboration Hydrosilylation (Olefin) Hydration Hydrofunctionalization References Addition reactions Organometallic chemistry Homogeneous catalysis Catalysis
Hydroamination
Chemistry
2,084
42,334,151
https://en.wikipedia.org/wiki/Brain%20atlas
A brain atlas is composed of serial sections along different anatomical planes of the healthy or diseased, developing or adult, animal or human brain, where each relevant brain structure is assigned a number of coordinates to define its outline or volume. Brain atlases are contiguous, comprehensive results of visual brain mapping and may include anatomical, genetic or functional features. A functional brain atlas is made up of regions of interest, where these regions are typically defined as spatially contiguous and functionally coherent patches of gray matter. In most atlases, the three dimensions are: latero-lateral (x), dorso-ventral (y) and rostro-caudal (z). The possible sections are coronal, sagittal, and transverse. Surface maps are sometimes used in addition to the 3D serial section maps. Besides the human brain, brain atlases exist for the brains of the mouse, rhesus macaque, Drosophila, pig and others. Notable examples include the Allen Brain Atlas, BrainMaps, BigBrain, Infant Brain Atlas, and the work of the International Consortium for Brain Mapping (ICBM). See also Brain mapping Connectome Neuroanatomy Stereotaxic atlas Stereotaxy References Neuroanatomy Biological databases
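The coordinate convention described above (x latero-lateral, y dorso-ventral, z rostro-caudal, with structures defined by sets of coordinates) can be made concrete with a toy atlas; the region names and voxel coordinates below are invented for illustration and do not correspond to any real atlas:

```python
# Toy atlas: each region is a set of (x, y, z) voxel coordinates,
# with x latero-lateral, y dorso-ventral, z rostro-caudal.
atlas = {
    "region_A": {(0, 0, 0), (1, 0, 0), (1, 1, 0)},
    "region_B": {(5, 5, 5), (5, 5, 6)},
}

def region_of(voxel):
    """Return the name of the atlas region containing a voxel, or None."""
    for name, voxels in atlas.items():
        if voxel in voxels:
            return name
    return None

print(region_of((1, 1, 0)))  # region_A
print(region_of((9, 9, 9)))  # None: outside every labeled structure
```

Real atlases store such labels as dense 3-D volumes rather than explicit coordinate sets, but the lookup operation, mapping a coordinate to a named structure, is the same.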
Brain atlas
Biology
251
38,218,600
https://en.wikipedia.org/wiki/53%20Persei
53 Persei is a single variable star in the northern constellation of Perseus. It has the Bayer designation d Persei, while 53 Persei is the Flamsteed designation. The star is visible to the naked eye as a faint, blue-white hued point of light with an apparent visual magnitude of 4.80. It is located approximately 480 light years away from the Sun, as determined from parallax, and is drifting further away with a radial velocity of +7.3 km/s. This star has a stellar classification of B4IV, and was the prototype of a class of variable stars known as slowly pulsating B stars. It was one of the first mid-B type variable stars in the northern hemisphere to be studied. The star undergoes non-radial pulsations with a primary period of 2.36 days. Observation of the star with the BRITE satellite revealed eight separate frequencies in the star's light curve. 53 Persei is around 50 million years old with a projected rotational velocity of 15 km/s. It has six times the mass of the Sun and four times the Sun's radius. The star is radiating 780 times the luminosity of the Sun from its photosphere at an effective temperature of 16,720 K. References B-type subgiants Slowly pulsating B-type stars Perseus (constellation) Persei, d Persei, 53 BD+46 872 HD 27396 HIP 20354 HR 1350 Persei, V469
53 Persei
Astronomy
314
26,511,501
https://en.wikipedia.org/wiki/Landscape%20evolution%20model
A landscape evolution model is a physically-based numerical model that simulates changing terrain over the course of time. The change in, or evolution of, terrain can be due to: glacial or fluvial erosion; sediment transport and deposition; regolith production; the slow movement of material on hillslopes; more intermittent events such as rockfalls, debris flows, and landslides; and other surface processes. These changes occur in response to the land surface being uplifted above sea level (or other base level) by surface uplift, and also in response to subsidence. A typical landscape evolution model takes many of these factors into account. Landscape evolution models are used primarily in the field of geomorphology. As they improve, they are beginning to be consulted by land managers to aid in decision making, most recently in the area of degraded landscapes. The earliest landscape evolution models were developed in the 1970s. In those models, flow of water across a mesh was simulated, and cell elevations were changed in response to calculated erosional power. Modern landscape evolution models can leverage graphics processing units and other acceleration hardware and software to run more quickly. See also Hillslope evolution SIBERIA CAESAR-Lisflood LANDIS-II, an open-source forest landscape model that simulates future forests pyBadlands Community Surface Dynamics Modeling System References Geomorphology models Mathematical modeling
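The basic loop described above (simulate flow, compute erosional power, update elevations) can be sketched in one dimension with a detachment-limited stream-power erosion rule, dz/dt = U − K·A^m·S^n. All parameter values and the drainage-area proxy below are illustrative, not taken from any particular published model:

```python
# Minimal 1-D landscape evolution sketch: dz/dt = U - K * A^m * S^n.
n_nodes = 50
dx = 100.0       # node spacing (m); node 0 is the outlet / base level
dt = 1000.0      # time step (yr)
U = 0.001        # rock uplift rate (m/yr)
K = 2e-6         # erodibility coefficient (illustrative)
m_exp, n_exp = 0.5, 1.0

z = [0.0] * n_nodes                        # initially flat surface
for _ in range(500):                       # simulate 500,000 years
    for i in range(1, n_nodes):
        A = (n_nodes - i) * dx * dx        # crude drainage-area proxy (m^2)
        S = max(z[i] - z[i - 1], 0.0) / dx # downstream slope
        z[i] += dt * (U - K * A**m_exp * S**n_exp)
    z[0] = 0.0                             # hold the outlet at base level

print(f"outlet: {z[0]:.1f} m, headwater: {z[-1]:.1f} m")
```

The qualitative behavior matches the description in the text: uplift raises the surface everywhere, while erosion, strongest where drainage area and slope are large, cuts a concave profile upstream from base level.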
Landscape evolution model
Mathematics
289
327,817
https://en.wikipedia.org/wiki/Kaseya%20Center
Kaseya Center (Pat Riley Court at Kaseya Center) is a multi-purpose arena on Biscayne Bay in Miami, Florida. The arena is home to the Miami Heat of the National Basketball Association. The arena was previously named American Airlines Arena from its opening in 1999 until 2021, FTX Arena from 2021 until 2023 following the bankruptcy of FTX, and Miami-Dade Arena during an interim period in 2023. Since April 2023, the naming rights to the arena have been owned by Kaseya under a 17-year, $117.4 million agreement. The arena has capacity for 19,500 people, including 2,105 club seats, 80 luxury suites, and 76 private boxes. Additionally, for more intimate performances, The Waterfront Theater, the largest indoor theater in Florida, is within the arena complex, seating between 3,000 and 5,800 patrons. The theater can be configured for concerts, worship events, family events, musical theatre shows and other stage productions. American Airlines, which has a hub at Miami International Airport, maintains a travel center at the venue. The arena is directly served by the Miami Metrorail at Government Center station via free transfers to the Metromover Omni Loop, providing direct service to the Freedom Tower and Park West stations, within walking distance. It is also within walking distance from the Historic Overtown/Lyric Theatre station. The arena has 939 parking spaces, with those spaces reserved for premium seat and Dewar's 12 Clubhouse ticket holders during Heat games. ParkJockey manages the arena's on-site parking. History In 1997, the owners of the Miami Heat of the National Basketball Association, which then played in the eight-year-old, publicly financed Miami Arena, threatened to move the team to Broward County unless they were given the $38 million parcel of land for the new arena by Alex Penelas, then-mayor of Miami-Dade County. The agreement provided that the county receive 40% of annual arena profits above $14 million. Construction began on February 6, 1998.
The arena was designed by Arquitectonica and 360 Architecture. Kaseya Center opened as the American Airlines Arena on December 31, 1999, and its construction cost was $213 million. Architectural design team members included George Heinlein, Cristian Petschen, Reinaldo Borges, and Lance Simon. The arena's opening was inaugurated with a concert by Gloria Estefan. Two days later, on January 2, 2000, the Miami Heat played its first game in the new arena, defeating the Orlando Magic 111–103. As part of its sponsorship arrangement, American Airlines had a giant aircraft painted atop the arena's roof, with an American Airlines logo in the center. The design was visible from airplanes taking off and landing at Miami International Airport, where American has a hub. The arena also has luxury skyboxes called "Flagship Lounges", a trademark originally used for American's premium-class lounges at certain airports. Until it was renamed in 2020–2021, the arena used the 1967–2013 logo of American Airlines. The arena was sometimes referred to as "Triple-A" or "A3" (A cubed). The arena is known for its unusual scoreboard, designed by artist Christopher Janney and installed in 1998 as part of the original construction. Drawing on underwater anemone forms, the scoreboard also changes colors depending on the atmosphere. For concerts in an arena configuration, end-stage capacity is 12,202 for 180° shows, 15,402 for 270° shows, and 18,309 for 360° shows. For center-stage concerts the arena can seat 19,146. WTVJ, the NBC owned-and-operated station in Miami, had its Downtown Miami studios in the back of the arena from 2001 until 2011. In 2013, the Miami Heat paid rent on the arena for the first time pursuant to the percentage rent agreement with the county; the payment was $3.32 million. On September 10, 2019, American Airlines said that it would not renew its naming rights upon expiration at the end of 2019.
The American Airlines Arena court decals were removed from the Heat's floor before the 2020–21 season and replaced temporarily with the logo of team/league vehicle sponsor Kia Motors. In March 2021, FTX acquired the naming rights to the arena in a $135 million, 19-year agreement. The NBA approved the deal in early April, and the arena was renamed FTX Arena in June 2021, just after the Miami Heat were swept by the Milwaukee Bucks in the first round of the 2021 NBA playoffs. As part of the bankruptcy of FTX, the naming rights agreement was terminated effective January 2023. After three months under the temporary name of Miami-Dade Arena, a 17-year naming rights agreement was reached with Miami-based software company Kaseya to name the arena Kaseya Center beginning April 2023. Under the terms of the contract, the county receives the majority of the naming rights revenue while the Heat receives $2 million annually. In October 2024, it was announced that the court would be dedicated to longtime coach and executive Pat Riley, who led the Heat to three championships and helped the team acquire LeBron James and Chris Bosh in 2010. Notable events Circus In January 2017, the closing of the Ringling Bros. and Barnum & Bailey Circus was announced after shows at the arena. Basketball The then-named American Airlines Arena, along with the American Airlines Center in Dallas, hosted the 2006 NBA Finals and the 2011 NBA Finals as the Miami Heat played the Dallas Mavericks. The Heat won the championship in 2006 in Dallas and the Mavericks won in the 2011 rematch in Miami. These series were the first and second appearances in the NBA Finals for both franchises. As the airline held naming rights to both venues, people nicknamed the matchups as the "American Airlines series". The arena hosted the 2012, 2013 and 2014 NBA Finals along with the Chesapeake Energy Arena in Oklahoma City in 2012, and the AT&T Center in San Antonio in 2013 and 2014. 
In 2012, the Heat defeated the Oklahoma City Thunder in five games, winning the championship at home. In 2013, the Heat played the San Antonio Spurs. The Heat faced a 3–2 series deficit returning to Miami but won games 6 and 7 to defend their championship. In 2014, the Spurs defeated the Heat in five games in San Antonio and won the championship and the rematch. The arena hosted the 2023 NBA Finals under its current name of Kaseya Center, along with the Ball Arena in Denver as the Heat played the Denver Nuggets. The Nuggets defeated the Heat in five games to win their first championship. Since 2015, the arena has hosted the annual Hoophall Miami Invitational, an NCAA Division I college basketball showcase event. Professional wrestling The arena hosted Uncensored (2000), the World Championship Wrestling WCW Uncensored pay-per-view. Four major WWE pay-per-view events have been held at the arena: the Royal Rumble in 2006, Survivor Series in 2007 and 2010, and WWE Hell in a Cell in 2013. It has also hosted various episodes of WWE Raw and WWE SmackDown. Mixed martial arts On April 25, 2003, the arena hosted the first Ultimate Fighting Championship event in Florida, UFC 42: Sudden Impact. The UFC returned to the arena after twenty years on April 8, 2023, for UFC 287: Pereira vs. Adesanya 2. The promotion returned again on March 9, 2024, for UFC 299: O'Malley vs. Vera 2. Other sports The arena features a regulation NHL ice rink, though the arena has never hosted the sport, as the Florida Panthers have played in Sunrise at the Amerant Bank Arena since October 1998. The rink, lined with a smaller wall, instead accommodates ice shows such as Disney on Ice. The Waterfront Theatre at the arena hosted the 2020 NFL Honors on February 1, 2020, which was broadcast by Fox Broadcasting Company. 
Music Notable musicians to perform at the arena include Olivia Rodrigo, Doja Cat, Gloria Estefan, Phish, Shakira, Dua Lipa, Kylie Minogue, Mariah Carey, Cher, Kelly Clarkson, Clay Aiken, Britney Spears, U2, Soda Stereo, Kanye West, Tina Turner, Celine Dion, Justin Bieber, Lady Gaga, Coldplay, Jennifer Lopez, SZA, Madonna, Miley Cyrus, Hillsong United, Justin Timberlake, One Direction, Katy Perry, Demi Lovato, Ariana Grande, Chris Brown, Janet Jackson, Taylor Swift, The Weeknd, Rihanna, Selena Gomez, Maroon 5, Adele, Carrie Underwood, Jimmie Allen, Ricardo Arjona, RBD, and Tini. The 2004 and 2005 MTV Video Music Awards, Sensation, and the For Darfur benefit concert were also held at the arena. Awards ceremonies The arena has hosted the annual Premio Lo Nuestro Latin music awards since 2001. The awards are held on a Thursday night in late February. The Kaseya Center hosted the Latin Grammy Awards in 2003, in 2020, and on November 14, 2024. Gallery See also List of indoor arenas by capacity References External links Satellite view from Google Maps 1999 establishments in Florida American Airlines Arquitectonica buildings Basketball venues in Florida Boxing venues in the United States Leadership in Energy and Environmental Design certified buildings Miami Heat Miami Sol Mixed martial arts venues in Florida Music venues completed in 1999 Music venues in Florida NBA venues Sports venues completed in 1999 Sports venues in Miami Tourist attractions in Miami Women's National Basketball Association venues
Kaseya Center
Engineering
1,961
11,420,830
https://en.wikipedia.org/wiki/FinP
FinP is a gene encoding an antisense non-coding RNA that is complementary to part of the TraJ 5' UTR. The FinOP system regulates the transfer of F-like plasmids. The traJ gene encodes a protein required for transcription from the major transfer promoter, pY. The FinO protein is essential for effective repression, acting by binding to FinP and protecting it from degradation by RNase E. References External links Antisense RNA
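Antisense complementarity of the kind described here, where one RNA base-pairs with part of another transcript, can be checked programmatically. The sequence below is a made-up fragment for illustration only, not the real FinP or traJ sequence:

```python
# Watson-Crick base-pairing partners for RNA.
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna):
    """Return the reverse complement of an RNA sequence."""
    return "".join(PAIR[b] for b in reversed(rna))

# Hypothetical 5' UTR fragment (NOT the real traJ sequence):
utr_fragment = "AUGGCUUAC"
antisense = reverse_complement(utr_fragment)

print(antisense)  # an antisense RNA fully complementary to the fragment
```

Applying the function twice recovers the original sequence, which is the sense in which an antisense RNA like FinP "matches" its target region.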
FinP
Chemistry
97
77,350,508
https://en.wikipedia.org/wiki/Nots%C3%A9%20Walls
The walls of Notsé (; ) or the Agbogbo and the Agbobovi are a sacred enclosure erected in Notsé, Togo, between the 16th and 17th centuries. The walls delineate two different areas, one called "Agbogbo" and the other called "Agbogbovi". Associated with the figure of Agokoli, the ruler of the city-state, they gained significant importance in West Africa, as the refusal to participate in their construction is said to have caused the exodus of the Ewe from Notsé, an event considered by the Ewe as the origin of their people. Although they were never completed, as the construction undertaken under Agokoli is said to have led to the ruin of the city, parts of the walls still remain at the beginning of the 21st century. History Context The ancestors of the Ewe were likely a people already present in the region of Togo and Ghana in the 13th century. However, it is difficult to trace their journey and evolution before their settlement in Notsé, where they founded a prosperous city and kingdom during the 15th century. According to surviving oral traditions, they were guided to the site of Notsé by the hunter Afotsè, also called Ndétsi, or under the leadership of an ancestor named Noin or Da. There, they merged with the populations already present in the area and founded the city. Although it was prosperous and housed the regional sanctuary of the god Mawu, political unrest quickly broke out among the city's ruling classes, weakening the priest-king. In the 17th century, one of these kings, Agokoli, took power after the death of his father, Ago. It appears that this king attempted to expand beyond the limited scope of his duties, purging his advisors and replacing them with his supporters. Construction and exodus In this context, Agokoli undertook the construction of the walls of Notsé, intended to be a sacred enclosure of monumental proportions for the time. 
The walls enclose two different areas, known as "Agbogbovi" for the older one and "Agbogbo" for the more recent one. Despite the religious objections of several leaders, which reflected hostility to the project among part of the population, especially since the construction had to be carried out under difficult conditions due to the project's size, Agokoli persisted with his plans. The enclosure wall, as uncovered by archaeologists, is not a fortification but rather a religious and symbolic enclosure. In some traditional accounts, this wall is reinterpreted as having been made of "human blood and clay". Agokoli is a very negative figure among the Ewe people, although this portrayal of him as an entirely negative and tyrannical king may be a later historical reconstruction. The erection of this wall triggered the exodus of the Ewe from Notsé, and the project was never completed. The walls extend over a little more than 14 kilometers and are two meters wide. Restoration work was undertaken in 2017 on the remaining portions of the structure. Legacy An important ceremony of the Ewe people, called "Agbogbo-Za", takes place in Notsé. It reenacts the original exile of the people and their crossing of the sacred enclosure. References Building types Types of monuments and memorials Sacral architecture Religious buildings and structures Kingdom of Notsé Sacred sites in traditional African religions
Notsé Walls
Engineering
697
14,833,531
https://en.wikipedia.org/wiki/PcrA
PcrA, standing for plasmid copy reduced, is a helicase that was originally discovered in a screen for chromosomally encoded genes that affect plasmid rolling-circle replication in the Gram-positive pathogen Staphylococcus aureus. Biological functions Genetic and biochemical studies have shown that the helicase is essential for plasmid rolling-circle replication and for repair of DNA damage caused by exposure to ultraviolet radiation. It catalyzes the unwinding of double-stranded plasmid DNA that has been nicked at the replication origin by the replication initiation protein. Genetic and biochemical studies have also shown that the helicase plays an important role in cell survival by regulating the levels of RecA-mediated recombination in Gram-positive bacteria. Biochemical properties The helicase is a monomeric translocase and utilizes ATP to unwind DNA. The preferred substrates are single-stranded DNA molecules containing 3' overhangs. The processivity of PcrA is increased in the presence of the plasmid replication initiation protein. Crystal structure The structure of the helicase has been solved at high resolution and indicates "inchworming" as the mechanism of translocation on single-stranded DNA. A Mexican-wave model has been proposed based on the changes in conformation of the helicase observed in the product versus substrate complex. Classification PcrA belongs to the SF1 superfamily of helicases, which also includes the E. coli helicases UvrD and Rep and the eukaryotic helicase Srs2. Literature External links Enzymes DNA replication DNA repair
PcrA
Biology
335
520,869
https://en.wikipedia.org/wiki/List%20of%20New%20Zealand%20railway%20museums%20and%20heritage%20lines
This is a list of groups involved in railway preservation in New Zealand. Members of the Federation of Rail Organisations New Zealand Members of the Federation of Rail Organisations of New Zealand: railway museums, heritage lines, societies, clubs, trusts, etc., in New Zealand. This also includes model engineering clubs and narrow gauge railways. North Island Northland Bay of Islands Vintage Railway Charitable Trust Whangarei Steam & Model Railway Club Whangarei Model Engineering Club Auckland Mainline Steam Glenbrook Vintage Railway Railway Enthusiasts Society The Waitakere Tramline Society Watercare Services Western Springs Railway Museum of Transport and Technology Western Springs Tramway Museum of Transport and Technology Auckland Society of Model Engineers Incorporated Manukau Live Steamers Waikato / Coromandel Bush Tramway Club DF 1501 Restoration Charitable Trust Driving Creek Railway Goldfields Railway Victoria Battery Tramway Society Te Aroha Mountain Railway Thames Small Gauge Railway Society Hamilton Model Engineers Cambridge Model Engineering Society Inc Waihi Small Gauge Railway Bay of Plenty Rotorua - Ngongotaha Rail Trust Geyserland Express Trust Tauranga Model Marine and Engineering Club Eastern Bay of Plenty Model Engineering Society East Cape / Hawke's Bay East Coast Museum of Technology Gisborne City Vintage Railway Hawkes Bay Steam Society Ormondville Rail Preservation Group Hawkes Bay Model Engineering Society Havelock North Live Steamers & Associates Taranaki Hooterville Heritage Charitable Trust (no longer operating) Pioneer Village Soc Inc Waitara Railway Preservation Society New Plymouth Society of Model & Experimental Engineers Wairarapa Friends of the Fell Society Fell Engine Museum Featherston Pahiatua Railcar Society Wairarapa Railway Restoration Society, based at the Carterton railway station Manawatu Feilding and District Steam Rail Society Steamrail Wanganui Incorporated Palmerston North Model Engineering Club Inc Esplanade Scenic
Railway Wellington Craven Crane Preservation Group Department of Conservation Mainline Steam New Zealand Railway and Locomotive Society (or see website) Rail Heritage Trust of New Zealand Rimutaka Incline Railway Heritage Trust Silver Stream Railway Steam Incorporated (Engine Shed - Paekakariki) Wellington and Manawatu Railway Trust Wellington Tramway Museum (or see website) Paekakariki Station Precinct Trust Wellington CableCar Museum Kapiti Miniature Railway & Model Engineering Society Inc Featherston Miniature Fell Society Maidstone Model Engineering Society Hutt Valley Model Engineering Society South Island Nelson / Marlborough Blenheim Riverside Railway Nelson Railway Society (Founders Heritage Park) Picton Society of Model Engineers Marlborough Associated Modellers Society Nelson Society of Modellers Westland Charming Creek Railway Reefton Historic Trust Board West Coast Historical & Mechanical Society, Shantytown Westport Railway Preservation Society Canterbury Mainline Steam Ashburton Railway & Preservation Society Canterbury Railway Society (Ferrymead Railway) Canterbury Steam Preservation Society (McLeans Island Steamscene) Christchurch Tramway Ltd Diesel Traction Group Heritage Tramways Trust Midland Rail Heritage Trust Midland Railway Company (NZ) Ltd National Railway Museum of New Zealand Pleasant Point Museum and Railway Tramway Historical Society (Ferrymead Tramway) Weka Pass Railway Canterbury Society of Model & Experimental Engineers Christchurch Live Steamers Ashburton Steam Model & Engineering Club South Canterbury Model Engineers Otago Oamaru Steam and Railway Restoration Society Otago Excursion Train Trust (part owners of Dunedin Railways) Otago Railway & Locomotive Society (Ocean Beach Railway) Project Steam (Dunedin) Inc The Otago Model Engineering Society Otago Miniature Road & Rail Society Inc Southland Ohai Railway Board Heritage Trust Gore Model Engineering Club Southland Society of Model Engineers Cook Islands Rarotonga Steam Railway in 
Cook Islands Other organisations Kingston Flyer Ferrymead Heritage Park RM 133 Trust Otago Settlers Museum has two steam engines on display See also List of museums in New Zealand References External links Federation of Rail Organisations New Zealand website Lists of heritage railways Railway museums Museums and heritage lines Lists of railway museums
List of New Zealand railway museums and heritage lines
Engineering
746
19,335,893
https://en.wikipedia.org/wiki/CAIFI
The Customer Average Interruption Frequency Index (CAIFI) is a popular reliability index used in the reliability analysis of an electric power system. It is defined as the total number of customer interruptions divided by the number of distinct customers interrupted, so it shows trends in interruptions as experienced by the customers actually affected, rather than averaged over the whole customer base. References Electric power Reliability indices
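As a sketch of the definition above, CAIFI can be computed from a log of interruption events; the function name and data layout here are illustrative, not from any standard library:

```python
from collections import Counter

def caifi(interruption_records):
    """Customer Average Interruption Frequency Index.

    CAIFI = total number of customer interruptions
            / number of distinct customers interrupted at least once.

    `interruption_records` is a list of customer IDs, one entry per
    interruption experienced (IDs repeat when a customer is
    interrupted more than once).
    """
    if not interruption_records:
        return 0.0
    counts = Counter(interruption_records)
    total_interruptions = sum(counts.values())
    customers_affected = len(counts)
    return total_interruptions / customers_affected

# Example: customers A and B interrupted twice each, C once.
print(caifi(["A", "A", "B", "B", "C"]))  # 5 interruptions / 3 customers ≈ 1.67
```

Because the denominator counts only affected customers, CAIFI is always at least 1, unlike system-wide averages that dilute interruptions over every customer served.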
CAIFI
Physics,Engineering
58
52,907,991
https://en.wikipedia.org/wiki/Front%20%28physics%29
In physics, a front can be understood as an interface between two different possible states (either stable or unstable) in a physical system. For example, a weather front is the interface between two air masses of different density; in combustion the flame is the interface between burned and unburned material; and in population dynamics the front is the interface between populated and unpopulated places. Fronts can be static or mobile depending on the conditions of the system, and the motion can be caused by the variation of a free energy, where the most energetically favorable state invades the less favorable one, according to Pomeau, or by shape-induced motion due to non-variational dynamics in the system, according to Alvarez-Socorro, Clerc, González-Cortés and Wilson. From a mathematical point of view, fronts are solutions of spatially extended systems connecting two steady states, and from a dynamical systems point of view, a front corresponds to a heteroclinic orbit of the system in the co-moving frame (or proper frame). Fronts connecting stable - unstable homogeneous states The simplest example of a front solution connecting a homogeneous stable state with a homogeneous unstable state is given by the one-dimensional Fisher–Kolmogorov equation ∂u/∂t = D ∂²u/∂x² + r u(1 − u), which describes a simple model for the density u of a population. This equation has two steady states, u = 0 and u = 1. These solutions correspond to extinction and saturation of the population. Observe that this model is spatially extended, because it includes a diffusion term given by the second derivative. The state u = 1 is stable, as a simple linear analysis shows, and the state u = 0 is unstable. There exists a family of front solutions connecting u = 1 with u = 0, and such solutions are propagative. In particular, there exists one solution of the form u(x, t) = U(x − vt), with v a velocity that only depends on D and r. References Concepts in physics
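The invasion of the unstable state by the stable one can be sketched numerically. The following is a minimal explicit finite-difference integration of the dimensionless Fisher–Kolmogorov equation (taking D = r = 1); the grid size, time step and step-like initial condition are illustrative choices, not from any reference implementation:

```python
import numpy as np

# Explicit finite-difference sketch of u_t = u_xx + u(1 - u) on a 1-D grid,
# showing a front where the stable state u = 1 invades the unstable u = 0.
L, N = 100.0, 500                       # domain length, number of grid points
dx = L / N
dt = 0.2 * dx**2                        # satisfies the diffusive stability limit dt <= dx^2/2
x = np.linspace(0.0, L, N)
u = np.where(x < 10.0, 1.0, 0.0)        # stable state on the left, unstable on the right

for _ in range(3000):
    lap = (np.roll(u, -1) - 2*u + np.roll(u, 1)) / dx**2
    lap[0] = lap[-1] = 0.0              # crude fix at the boundaries
    u = u + dt * (lap + u * (1.0 - u))

# The front (where u crosses 0.5) has propagated to the right,
# approaching the minimal speed v = 2 sqrt(D r) = 2 in these units.
front_position = x[np.argmin(np.abs(u - 0.5))]
```

Tracking `front_position` over successive runs shows the interface advancing at a roughly constant speed, the hallmark of a propagative front.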
Front (physics)
Physics
365
10,795,926
https://en.wikipedia.org/wiki/PEAKS
PEAKS is a proteomics software program for tandem mass spectrometry designed for peptide sequencing, protein identification and quantification. Description PEAKS is commonly used for peptide identification (protein ID) through de novo peptide sequencing-assisted database searching. PEAKS has also integrated PTM and mutation characterization through automatic peptide sequence tag based searching (SPIDER) and PTM identification. PEAKS provides a complete sequence for each peptide, confidence scores on individual amino acid assignments, and simple reporting for high-throughput analysis, amongst other information. The software can compare the results of multiple search engines. PEAKS inChorus automatically cross-checks results with other protein ID search engines, such as Sequest, OMSSA, X!Tandem and Mascot. This approach guards against false positive peptide assignments. PEAKS Q is an add-on tool for protein quantification, supporting labeled (ICAT, iTRAQ, SILAC, TMT, 18O, etc.) and label-free techniques. SPIDER is a sequence tag based search tool within PEAKS, which deals with the possible overlaps between de novo sequencing errors and homology mutations. It reconstructs the real peptide sequence by combining both the de novo sequence tag and the homolog, automatically and efficiently. A collection of algorithms used within the PEAKS software has been adapted and configured into a specialized product, PEAKS AB, which has proven to be the first method for automatic monoclonal antibody sequencing. Notes Mass spectrometry software Proteomic sequencing
PEAKS
Physics,Chemistry,Biology
304
32,430,563
https://en.wikipedia.org/wiki/Net%20smelter%20return
Net Smelter Return (NSR) is the net revenue that the owner of a mining property receives from the sale of the mine's metal and non-metal products, less transportation and refining costs. As a royalty it refers to the fraction of net smelter return that a mine operator is obligated to pay the owner of the royalty agreement. The royalty is paid in variable or fixed payments based on sales revenue received by a mining operator in return for mining output. It is contingent only on the sales price and quantity of product sold. The term reflects the fact that, most of the time, mining output requires further processing by smelters; the mining products purchased directly by smelters are sold to them at a discounted (net) price based on how much further processing is needed. The mining lease specifies the selling price (prices differ in spot and forward markets) and is used to verify the exact amount of product produced and sold between royalty payments. One advantage NSR royalties have over other royalties is that payments are usually higher in the short term, because capital costs and exploration costs cannot be used as deductions (some royalties do not have to be paid until after other costs such as loans and amortization are taken care of). Also, mine life and royalty expiration dates need to be taken into consideration. The royalty can be called a Net Value Royalty when deductions are based solely on the contract. Alternatively, the Gross Smelter Return is a percentage of gross revenue paid by the mine owner that is not subject to any deductions. Examples of transactions involving net smelter return royalties Franco-Nevada's 7.29% NSR royalty on Newmont Mining's Gold Quarry open pit mine in Nevada, which cost the company US$103.5 million, realized $250 million in royalty payments before being acquired. The NSR royalty in this case gave Franco-Nevada the option of collecting in cash or in-kind (metal product output). References Energy economics
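The deduction structure described above can be sketched as a short calculation. The function name, the figures and the flat deduction model are illustrative assumptions; real royalty agreements define their own allowable deductions:

```python
def nsr_royalty(gross_revenue, transport_costs, refining_costs, royalty_rate):
    """Payment due under a Net Smelter Return royalty.

    NSR = gross revenue from metal sales minus transportation and
    refining/smelting deductions; the royalty holder receives
    royalty_rate * NSR. Capital and exploration costs are NOT
    deductible, which is why NSR royalties tend to pay out earlier
    than profit-based royalties.
    """
    nsr = gross_revenue - transport_costs - refining_costs
    return royalty_rate * nsr

# Example: $10M in sales, $0.5M transport, $1.5M smelter charges, 2% NSR royalty.
payment = nsr_royalty(10_000_000, 500_000, 1_500_000, 0.02)
print(round(payment, 2))  # 160000.0
```

Note that the deductions reduce only the revenue base, not the royalty rate; a Gross Smelter Return royalty would instead apply the rate to the full $10M.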
Net smelter return
Environmental_science
414
4,833,901
https://en.wikipedia.org/wiki/Ecological%20indicator
Ecological indicators are used to communicate information about ecosystems and the impact human activity has on ecosystems to groups such as the public or government policy makers. Ecosystems are complex and ecological indicators can help describe them in simpler terms that can be understood and used by non-scientists to make management decisions. For example, the number of different beetle taxa found in a field can be used as an indicator of biodiversity. Many different types of indicators have been developed. They can be used to reflect a variety of aspects of ecosystems, including biological, chemical and physical. Due to this variety, the development and selection of ecological indicators is a complex process. Using ecological indicators is a pragmatic approach since direct documentation of changes in ecosystems as related to management measures, is cost and time intensive. For example, it would be expensive and time-consuming to count every bird, plant and animal in a newly restored wetland to see if the restoration was a success. Instead, a few indicator species can be monitored to determine the success of the restoration. "It is difficult and often even impossible to characterize the functioning of a complex system, such as an eco-agrosystem, by means of direct measurements. The size of the system, the complexity of the interactions involved, or the difficulty and cost of the measurements needed are often crippling" The terms ecological indicator and environmental indicator are often used interchangeably. However, ecological indicators are actually a sub-set of environmental indicators. Generally, environmental indicators provide information on pressures on the environment, environmental conditions and societal responses. Ecological indicators refer only to ecological processes; however, sustainability indicators are seen as increasingly important for managing humanity's coupled human-environmental systems. 
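As a toy illustration of the beetle-taxa example above, two common biodiversity indicators can be computed from a list of observed specimens; the survey data and taxon names here are invented:

```python
import math
from collections import Counter

# Two simple ecological indicators from a field survey: taxon richness
# (the beetle-taxa count mentioned above) and the Shannon diversity
# index H' = -sum(p_i * ln p_i) over taxon proportions p_i.
survey = ["carabid", "carabid", "staphylinid", "carabid",
          "coccinellid", "staphylinid", "scarab"]

counts = Counter(survey)
richness = len(counts)                 # number of distinct taxa observed
total = sum(counts.values())
shannon = -sum((n / total) * math.log(n / total) for n in counts.values())

print(richness)  # 4
```

Comparing such indicator values before and after a management intervention (e.g. a wetland restoration) is far cheaper than a full census, which is exactly the pragmatic appeal described above.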
The Marine Ecosystem Marine ecosystem status and functioning are influenced by various anthropogenic and environmental stressors that necessitate ecosystem-based, integrative approaches to fisheries management. Ecological indicators play an important role in evaluating policy regarding the environment. A large number of ecological indicators have been documented and reported worldwide, and an increasing number of studies have been conducted to assess the properties of ecological indicators and determine how they should be selected to assist fisheries management. One comparative study contrasted the sensitivity of indicators to fishing and to primary productivity, by looking at indicators' responses to directional change in fishing pressure and to directional change in primary productivity separately. For all ecosystems except the Black Sea, the Southern Catalan Sea and, to some extent, Southeastern Australia, the cumulative importance shifts (in R²f units) of the indicator B/C in response to fishing pressure were high even under the lowest fishing levels. The study concluded that the performance of biomass indicators for evaluating fishing impacts was low, but that they were well suited for assessing the impacts of changes in primary productivity on ecosystem status. Human Effects Building construction is one of the largest final consumers of environmental resources as well as one of the largest emitters of greenhouse gases and other pollution. Green building construction constitutes one of the most important elements of sustainable building requirements. Energy and global warming issues have spurred rapid development of green building construction, so a thorough understanding of it is important, especially for strengthening current energy and environmental policies. Indicators contribute to evaluation of policy development by: Providing decision-makers and the general public with relevant information on the current state and trends in the environment. 
Helping decision-makers better understand cause and effect relationships between the choices and practices of businesses and policy-makers and the environment. Helping to monitor and assess the effectiveness of measures taken to increase and enhance ecological goods and services. Based on the United Nations Convention to Combat Desertification and the Convention on Biological Diversity, indicators are planned in order to evaluate the evolution of these factors. For instance, for the CCD, the UNESCO-funded Observatoire du Sahara et du Sahel (OSS) has created the Réseau d'Observatoires du Sahara et du Sahel (ROSELT) as a network of cross-Saharan observatories to establish ecological indicators. Limitations There are limitations and challenges to using indicators for evaluating policy programs. For indicators to be useful for policy analysis, it is necessary to be able to use and compare indicator results on different scales (local, regional, national and international). Currently, indicators face the following spatial limitations and challenges: Variable availability of data and information on local, regional and national scales. Lack of methodological standards on an international scale. Different ranking of indicators on an international scale, which can result in different legal treatment. Averaged values across a national level may hide regional and local trends. When compiled, local indicators may be too diverse to provide a national result. Indicators also face other limitations and challenges, such as: Lack of reference levels, so it is unknown whether trends in environmental change are strong or weak. Indicator measures can overlap, causing overestimation of single parameters. Long-term monitoring is necessary to identify long-term environmental changes. Attention to more easily handled measurable indicators distracts from less quantifiable indicators such as aesthetics, ethics or cultural values. 
See also Ecological science Ecology movement Ecosystem valuation Ecological yield Deep ecology Human ecology Systems ecology Ecosystem ecology Ecoinformatics Ecosystem Environmental ethics Environmental economics Indicator plants Indicator species Measurement of biodiversity References Specific External links Journal of Political Ecology Journals of the British Ecological Society Institute of Ecology and Environmental Management Chartered Institute of Ecology and Environmental Management Ecology and Society U.S. EPA's Report on the Environment Environmental impact assessment Systems ecology Ecology terminology Indicators
Ecological indicator
Biology,Environmental_science
1,095
68,462,656
https://en.wikipedia.org/wiki/Landsupport
Landsupport (spelling: LANDSUPPORT) is a pilot consulting project funded by the European Union for land use, aimed at the near-natural modeling of different types and methods of land use while at the same time protecting the environment. Project goal In the long term, sustainable use of the soil must be guaranteed in order to meet the needs of the world's population. The project brings together numerous universities, research institutions, companies and stakeholders with the aim of creating a web-based, free system to support practical agriculture and land users in making decisions about sustainable land use, environmental protection and agricultural use. With the active participation of various and numerous stakeholders in and outside Europe, the consortium also aims at legislation at the European level, based on scientific data that is processed and modeled in the system. In the research framework program Horizon 2020, the project is organized under the direction of Fabio Terribile at the University of Naples Federico II. Project consortium The Landsupport consortium consists of the following partners: University of Naples, Italy ARIESPACE, Italy Barcelona Supercomputing Center, Spain University of Natural Resources and Life Sciences, Vienna, Austria Consiglio Nazionale delle Ricerche, Italy Crops for the Future, Malaysia ICARDA, Tunisia Institute of Advanced Studies, Hungary Institute for Environmental Protection and Research, Italy Rasdaman GmbH, Germany Joint Research Center, European Commission Regione Campania, Italy University of Milan, Italy Zala County, Hungary CMAST / Modis, Belgium Acteon, France Federal Environment Agency, Austria Slovenian Forestry Institute, Slovenia Results and advice The results of the investigations are internationally evaluated by the members in specialist committees and made available to practitioners and the responsible bodies at regional and state level, as well as to the European Union for legislative and approval procedures. 
See also Bioeconomy Biofector Edaphon Microbiology Pedology External links Webpage Landsupport Report of EU concerning Landsupport Balkan green Deal BW BeyondSoil Initiative of the University Hohenheim Greenerde.eu Scoalaagricola.eu Danube Strategy Baden-Württemberg Green Deal and Biodiversity in Europe References Agroecology Agronomy Botany Biology and pharmacology of chemical elements Ecology Ecological economics Edaphology Fertilizers Organic fertilizers Organic food Recycling Soil Soil improvers Sustainable agriculture Sustainable gardening Sustainable technologies Systems ecology Waste management
Landsupport
Chemistry,Biology,Environmental_science
483
24,948,413
https://en.wikipedia.org/wiki/Virtual%20number
A virtual number, also known as direct inward dialing (DID) or an access number, is a telephone number without a directly associated telephone line. Usually, these numbers are programmed to forward incoming calls to one of the pre-set telephone numbers chosen by the client: fixed, mobile or VoIP. A virtual number can work like a gateway between traditional calls (PSTN) and VoIP. Subscribers to virtual numbers may use their existing phones without needing to purchase additional hardware, relying instead on the numerous available software clients. A virtual private number is a telephone number that forwards incoming calls to any of a number of pre-set telephone numbers. These are also called a follow-me number, a virtual telephone number or (in the UK) a Personal Number. Usually, a virtual telephone number can be set to forward calls to different telephone numbers depending on the time of day and the day of the week using time-of-day routing; for example, between 9 and 5 on working days incoming calls will be forwarded to one's workplace, but on weekends to one's cellphone. The availability (and acceptable use) of virtual phone numbers is subject to the regulatory situation in the issuing country. Applications and example of use Businesses – a company located in China can have a phone number in Los Angeles or London without paying for a fixed foreign exchange line. Virtual numbers are very popular among call centers which appear to be located in one country when they are actually in one or more other countries in different time zones, delivering efficient 24/7 cover. Virtual numbers gained popularity with the growing trend of remote working, as they allow employees to make and receive calls on their business's phone number remotely. They are especially popular with small businesses, startups and businesses with remote workers, who use virtual numbers to manage their inbound and outbound calling. 
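The time-of-day routing described above can be sketched as a simple rule table; the phone numbers and cut-off hours here are invented for illustration, and a real provider would expose this as configuration rather than code:

```python
from datetime import datetime

# Hypothetical routing targets for one virtual number: forward to the
# office landline during working hours on weekdays, otherwise to a mobile.
OFFICE = "+44 20 7946 0000"
MOBILE = "+44 7700 900000"

def route_call(now: datetime) -> str:
    """Return the destination number for an incoming call at time `now`."""
    is_weekday = now.weekday() < 5       # Monday=0 .. Friday=4
    in_office_hours = 9 <= now.hour < 17
    return OFFICE if is_weekday and in_office_hours else MOBILE

print(route_call(datetime(2024, 1, 10, 11, 0)))  # Wednesday 11:00 -> office
print(route_call(datetime(2024, 1, 13, 11, 0)))  # Saturday -> mobile
```

The caller always dials the same virtual number; only the forwarding destination changes, which is what makes the number "virtual".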
Individuals – individual usage of virtual numbers varies, with many using the service to keep two separate numbers, one for business and one for personal use. Services like Google Voice, TextNow, and Skype offer virtual numbers to individuals. One popular use case is travelers, migrants and immigrants, who can often enjoy cheaper calling rates than making an international call via their carrier. Specific businesses – calling cards or callback services. Virtual numbers work as access numbers, e.g. the phone number that a calling card or callback user has to dial to place the call. Marketing – some companies use virtual numbers for various marketing campaigns, or different media channels; this allows them to track which campaign or medium brings what kind of traffic, as well as send different marketing materials to different audiences. Virtual services – various providers of virtual business services (virtual address, virtual receptionist, virtual office) will use a virtual number to tie their other virtual services together. This allows their customers to have a phone, address and voice presence almost anywhere in the world. Disposable numbers are phone numbers that are used for a limited time. In San Antonio, fake businesses used disposable numbers printed on pizza fliers to deliver inferior-quality pizzas. Most voice over IP providers offer virtual numbers; unbundled providers label these as "DIDs" (direct inward dial). These are typically offered as local geographic numbers in various selected cities or as toll-free numbers, with the non-geographic number carrying a higher per-minute cost to receive calls. In the North American Numbering Plan, area code 500 and area code 533 are used for follow-me numbers, referred to as Personal Communications Service. In the United Kingdom, geographic numbers in the 01 and 02 local exchange codes (spanning over 600 area codes) can be set up as virtual numbers, as can 0800 and 0345 numbers. 
See also Follow-me, the same concept for PBXs Unified Messaging Universal Personal Telecommunications References Telephone numbers
Virtual number
Mathematics
882
3,526,857
https://en.wikipedia.org/wiki/Livens%20Projector
The Livens Projector was a simple mortar-like weapon that could throw large drums filled with flammable or toxic chemicals. In the First World War, the Livens Projector became the standard means of delivering gas attacks by the British Army and it remained in its arsenal until the early years of the Second World War. History The Livens Projector was created by Captain William Livens of the Royal Engineers. Livens designed a number of novel weapons, including a large-calibre flame thrower designed to engulf German trenches in burning oil, which was deployed at the Somme in 1916. (One of these weapons was partially excavated in 2010 for an episode of the archaeological television programme Time Team, having been buried when the tunnel in which it was being built was hit by a German shell.) In the Second World War, he worked on petroleum warfare weapons such as the flame fougasse and various other flame weapons. Prior to the invention of the Livens Projector, chemical weapons had been delivered either by cloud attacks or by chemical-filled shells fired from howitzers. Cloud attacks at first were made by burying gas-filled cylinders just beyond the parapet of the attacker's trenches and then opening valves on the tanks when the wind was right. (Later British practice was to bring up flatcars with gas cylinders on a line parallel to the front to be attacked, and open the cylinders without removing them from the rail car.) This allowed a useful amount of gas to be released but there was danger that the wind would change and the gas drift back over the attacking troops. Chemical shells were much easier to aim but could not deliver nearly as much gas as a cylinder. Livens was in command of Z Company, the unit charged with developing and using flame and chemical weapons. Flame throwers and various means of dispensing chemicals had proven frustratingly limited in effect. During an attack on the Somme, Z Company encountered a party of Germans who were well dug in. 
Grenades did not shift them and Livens improvised a giant Molotov cocktail using two oil cans. When these were thrown into the German positions they were so effective that Harry Strange wondered whether it would be better to use containers to carry the flame to the enemy rather than relying on a complex flame thrower. Reflecting on the incident, Livens and Strange considered how a really large shell filled with fuel might be thrown by a mortar. Livens went on to develop a large, simple mortar that could throw a drum of oil which would burst when it landed, spreading burning oil over the target. Livens came to the attention of General Hubert Gough, who was impressed by his ideas and "wangled" everything that Livens needed for his large projector. On 25 July 1916 at Ovillers-la-Boisselle during the Battle of the Somme, Z Company used eighty projectors when the Australians were due to attack Pozières. Since the early versions had a short range, it was necessary first to neutralize German machine-gun nests and then to place the projectors forward into no-man's-land. Z Company rapidly developed the Livens Projector, increasing its range to and eventually an electrically triggered version with a range of used at the Battle of Messines in June 1917. The Livens Projector was then modified to fire canisters of poison gas rather than oil. This system was tested in secret, at Thiepval in September 1916 and Beaumont-Hamel in November. The Livens Projector was able to deliver a high concentration of gas a considerable distance. Each canister delivered as much gas as several gas shells. Without the need to reload, a barrage could be launched quickly, catching the enemy by surprise. Although the projectors were single-shot weapons they were cheap and used in hundreds or even thousands. The Livens Projector was also used to fire other substances. At one time or another the drums contained high explosive, oil and cotton-waste pellets, thermite, white phosphorus and "stinks". 
Used as giant stink bombs to trick the enemy, "stinks" were malodorous but harmless substances such as bone oil and amyl acetate used to simulate a poison gas attack, compelling the opponents to don cumbersome masks (which reduced the efficiency of German troops) on occasions when gas could not be safely employed. Alternatively, "stinks" could be used to artificially prolong the scale, discomfort and duration of genuine gas attacks, i.e. alternating projectiles containing "stinks" with phosgene, adamsite or chloropicrin. There was even a design for ammunition containing a dozen Mills bombs in the manner of a cluster bomb. The Livens Projector remained in the arsenal of the British Army until the early years of the Second World War. In the context of the invasion scare in the early years of the Second World War, over 25,000 Livens Projectors were produced for the defence of Great Britain between 1939 and 1942. Description The Livens Projector was designed to combine the advantages of gas cylinders and shells by firing a cylinder tank at the enemy. It consisted of a simple metal pipe that was set in the ground at a 45-degree angle. Specifications varied during the war. The early field improvisations in July 1916 near la Boisselle based the barrel on oil drums, and the projectile was an oil can. The production model was decided on in December 1916 after further successful field trials on the Somme. It was based on spare oxy-acetylene welded tubing. The 8-inch barrel became standard and was first used in numbers when 2,000 fired a salvo in the Battle of Vimy Ridge in April 1917. Barrels were supplied in three lengths depending on required range: for short range, for medium range and for maximum range. A drum in diameter and long, containing of gas, was shot out by an electrically initiated charge, giving it a range of about . On impact with the target, a burster charge would disperse the chemical filling over the area. 
It was also used to project flammable oil, as with 1,500 drums fired before the Battle of Messines in June 1917. Oil was also tried on 20 September 1917 during the Battle of the Menin Road Ridge with 290 projectors used in support of an attempt to capture Eagle Trench east of Langemarck. This included concrete bunkers and machine gun nests but the drums did not land in the trenches and failed to suppress the German defenders there. Use As a rule, the projectors were sited out in the open some little way behind the front line so that digging, aiming (either by direct line of sight or by compass) and wiring up the electrical leads were easier. When camouflaged the positions would be unknown to the enemy so that although the enemy was able to recognise the direction of the location by the discharge flash he would be uncertain of the range. As such these installations could only be carried out at night. The digging of the narrow trenches did not involve much labour and later in the war the projectors were only buried to a depth of about , instead of up to their muzzles. The projector was somewhat unreliable. To safeguard friendly forces from 'shorts' an area immediately ahead of the projector battery was cleared of troops before firing. This area allowed for the possibility of drums reaching only 60% of the estimated range and veering 20 degrees from the central line of fire by the wind or from some other cause. The projectors were also inaccurate: A British training manual of 1940 described it as, The projector's unreliability and inaccuracy were more than made up for by the weapon's principal advantages: it was a cheap, simple and extremely effective method of delivering chemical weapons. Typically, hundreds, or even thousands, of Livens projectors would be fired in unison during an attack to saturate the enemy lines with poison gas. German equivalent The Livens projector provided the Germans with inspiration for a similar device, known as the . 
Over eight hundred of these were used against the Italian Army at the Battle of Caporetto. Surviving examples Several barrels with bases are displayed at Sanctuary Wood Museum Hill 62 Zillebeke, Belgium Memorial Museum Passchendaele 1917 in Zonnebeke Several barrels in the ground at the Yorkshire Trench & Dug-out in Ypres In Flanders Fields Museum in Ypres A barrel and two projectiles are displayed at the Museum of Lincolnshire Life, Lincoln, United Kingdom A barrel and base are displayed at the Purfleet Heritage and Military Centre, Purfleet-on-Thames, Essex, United Kingdom. See also Poison gas in World War I Heavy mortars Citations General and cited references United States Department of War (1942). Livens Projector M1 TM 3-325 Further reading External links Worldscapes : Chemical & Biological Warfare Royal Engineers Museum, First World War – Livens Projector Chemical weapon delivery systems United Kingdom chemical weapons program World War I chemical weapons World War I mortars of the United Kingdom
Livens Projector
Chemistry
1,840