Dataset columns (name: dtype, observed range):
id: int64, 39 to 79M
url: string, lengths 31 to 227
text: string, lengths 6 to 334k
source: string, lengths 1 to 150
categories: list, lengths 1 to 6
token_count: int64, 3 to 71.8k
subcategories: list, lengths 0 to 30
2,970,490
https://en.wikipedia.org/wiki/Corporate%20taxonomy
Corporate taxonomy is the hierarchical classification of entities of interest of an enterprise, organization or administration, used to classify documents, digital assets and other information. Taxonomies can cover virtually any type of physical or conceptual entities (products, processes, knowledge fields, human groups, etc.) at any level of granularity. Corporate taxonomies are increasingly used in information systems (particularly content management and knowledge management systems), as a way to promote discoverability and allow instant access to the right information within exponentially growing volumes of data in learning organizations. Relatively simple systems based on semantic networks and taxonomies proved to be a serious competitor to heavy data mining systems and behavior analysis software in contextual filtering applications used for routing customer requests, "pushing" content on a Web site or delivering product advertising in a targeted and pertinent way. A powerful approach to map and retrieve unstructured data, taxonomies allow efficient solutions in the management of corporate knowledge, in particular in complex organizational models for workflows, human resources or customer relations. As an extension of traditional thesauri and classifications used in a company, a corporate taxonomy is usually the fruit of a large harmonization effort involving most departments of the organization. It is often developed, deployed and fine-tuned over the years, while setting up knowledge management systems, in order to assure the survival and good use of valuable corporate know-how. Enterprises have varying interests in the use of taxonomies, from routine enterprise information searches to the direct business benefit of quicker and more accurate searches for the merchandise or services of e-commerce or e-library sites. Such organisations may need to build large and complex vocabularies and deal with information assets that are largely in the public domain. Consequently, they are looking to shortcut their metadata schema development and avoid reinventing the wheel. Such shortcuts include the licensing of ready-built taxonomies and vocabularies with which to enhance their search results quickly. References Information intelligence: Content classification and enterprise taxonomy practice. Delphi Group. 2004. Last checked 29 January 2016. This whitepaper defines taxonomy and classification within an enterprise information architecture, analyzes trends in taxonomy software applications, and provides examples of approaches to using this technology to solve business problems. External links Taxonomy Strategies - Bibliography of resources. Most of the items selected are written for a general business audience, or are a basic primer on the particular topic. Taxonomy Warehouse - Listing of most taxonomies that are available. Resource for finding taxonomies on a variety of topics WAND Taxonomies. Source for pre-built corporate taxonomies covering many different industry and business segments. Eugene Stakhov/ARMA International. Object-oriented taxonomy approach Information technology management Taxonomy
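As a minimal illustration of the hierarchical classification idea described above, the sketch below models a tiny corporate taxonomy as a tree of category nodes and shows how documents tagged under a subcategory can be retrieved from any ancestor node. The category names, document ids and helper methods are all hypothetical, not drawn from any particular vendor's system.

```python
# Minimal sketch of a corporate taxonomy used to file and retrieve documents.
# Category names, document ids and the API are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class Category:
    name: str
    children: list["Category"] = field(default_factory=list)
    documents: list[str] = field(default_factory=list)  # ids tagged at this node

    def find(self, name: str) -> "Category | None":
        """Depth-first search for a category by name."""
        if self.name == name:
            return self
        for child in self.children:
            hit = child.find(name)
            if hit:
                return hit
        return None

    def all_documents(self) -> list[str]:
        """Documents tagged at this node or anywhere below it."""
        docs = list(self.documents)
        for child in self.children:
            docs.extend(child.all_documents())
        return docs

# A tiny example hierarchy: Products > Adhesives > Epoxy, etc.
root = Category("Products", children=[
    Category("Adhesives", children=[
        Category("Epoxy", documents=["datasheet-041"]),
        Category("Cyanoacrylate", documents=["datasheet-112"]),
    ]),
    Category("Sealants", documents=["spec-007"]),
])

# Browsing "Adhesives" surfaces every document filed under its subcategories.
print(root.find("Adhesives").all_documents())  # ['datasheet-041', 'datasheet-112']
```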
Corporate taxonomy
[ "Technology" ]
562
[ "Information technology", "Information technology management" ]
2,970,491
https://en.wikipedia.org/wiki/Johnjoe%20McFadden
Johnjoe McFadden (born 17 May 1956) is an Anglo-Irish scientist, academic and writer. He is Professor of Molecular Genetics at the University of Surrey, United Kingdom. Life McFadden was born in Donegal, Ireland but raised in the UK. He holds joint British and Irish nationality. He obtained his BSc in Biochemistry from the University of London in 1977 and his PhD at Imperial College London in 1982. He went on to work on human genetic diseases and then infectious diseases, at St Mary's Hospital Medical School, London (1982–84) and St George's Hospital Medical School, London (1984–88) and then at the University of Surrey in Guildford, UK. For more than a decade, McFadden has researched the genetics of microbes such as the agents of tuberculosis and meningitis and invented a test for the diagnosis of meningitis. He has published more than 100 articles in scientific journals on subjects as wide-ranging as bacterial genetics, tuberculosis, idiopathic diseases and computer modelling of evolution. He has contributed to more than a dozen books and has edited a book on the genetics of mycobacteria. He produced a widely reported artificial life computer model which modelled evolution in organisms. McFadden has lectured extensively in the UK, Europe, the US and Japan and his work has been featured on radio and television and in national newspaper articles, particularly for the Guardian. His present post, which he has held since 2001, is Professor of Molecular Genetics at the University of Surrey. Living in London, he is married and has one son. Quantum evolution McFadden wrote the popular science book, Quantum Evolution. The book examines the role of quantum mechanics in life, evolution and consciousness. The book has been described as offering an alternative evolutionary mechanism, beyond the neo-Darwinian framework. The book received positive reviews by Kirkus Reviews and Publishers Weekly. It was negatively reviewed in the journal Heredity by evolutionary biologist Wallace Arthur. Writing In 2006 McFadden co-edited the book, Human Nature: Fact and Fiction on the insights of both science and literature on human nature, with contributions from Ian McEwan, Philip Pullman, Steven Pinker, A.C. Grayling and others. In 2014 McFadden co-wrote the popular science book, Life on the Edge: The Coming of Age of Quantum Biology, in which he and Jim Al-Khalili further explore quantum biology and particularly recent findings in photosynthesis, enzyme catalysis, avian navigation, olfaction, mutation and neurobiology. The book received positive reviews, for example: "'Life on the Edge’ gives the clearest account I’ve ever read of the possible ways in which the very small events of the quantum world can affect the world of middle-sized living creatures like us. With great vividness and clarity it shows how our world is tinged, even saturated, with the weirdness of the quantum." (Philip Pullman) "Hugely ambitious ... the skill of the writing provides the uplift to keep us aloft as we fly through the strange and spectacular terra incognita of genuinely new science." (Tom Whipple, The Times) McFadden regularly writes articles for The Guardian newspaper on topics as varied as quantum mechanics, evolution and genetically modified crops, and has reviewed books there. The Washington Post and Frankfurter Allgemeine Sonntagszeitung have also published his articles.
Life Is Simple: How Occam’s Razor Set Science Free and Unlocked the Universe (Basic Books, 384pp) ISBN 9781529364934 See also Electromagnetic theories of consciousness Mind's eye Quantum Aspects of Life References External links - Johnjoe McFadden's Homepage Johnjoe McFadden's Machines Like Us interview - Johnjoe McFadden's homepage at the University of Surrey, UK. Quantum Evolution - Explore the role of quantum mechanics in life, evolution and consciousness. - Life on the Edge: The Coming of Age of Quantum Biology. Johnjoe McFadden and Jim Al-Khalili (2014) Living people 1956 births Alumni of Imperial College London Academics of the University of Surrey British science writers British biologists Evolutionary biologists Extended evolutionary synthesis Quantum biology Writers from County Donegal Scientists from County Donegal 21st-century Irish biologists
Johnjoe McFadden
[ "Physics", "Biology" ]
880
[ "Quantum mechanics", "nan", "Quantum biology" ]
2,970,534
https://en.wikipedia.org/wiki/Lufenuron
Lufenuron is the active ingredient in the veterinary flea control medication Program, and one of the two active ingredients in the flea, heartworm, and anthelmintic medicine milbemycin oxime/lufenuron (Sentinel). Lufenuron is stored in the animal's body fat and transferred to adult fleas through the host's blood when they feed. Adult fleas transfer it to their growing eggs through their blood, and to hatched larvae feeding on their excrement. It does not kill adult fleas. Lufenuron, a benzoylurea pesticide, inhibits the production of chitin in insects. Without chitin, a larval flea will never develop a hard outer shell (exoskeleton). With its inner organs exposed to air, the insect dies from dehydration soon after hatching or molting (shedding its old, smaller shell). Lufenuron is also used to fight fungal infections, since fungal cell walls are about one-third chitin. Lufenuron is also sold as an agricultural pesticide for use against lepidopterans, eriophyid mites, and western flower thrips. It is an effective antifungal in plants. References External links Veterinary drugs Insecticides Ureas Antifungals Chloroarenes Organofluorides Phenol ethers Benzamides Dog medications Fungicides Fluoroarenes Trifluoromethyl compounds
Lufenuron
[ "Chemistry", "Biology" ]
305
[ "Organic compounds", "Fungicides", "Biocides", "Ureas" ]
2,970,552
https://en.wikipedia.org/wiki/Terra%20Nova%20FPSO
Terra Nova is a Floating Production Storage and Offloading Vessel (FPSO) for servicing the Terra Nova oil and gas field. Since 2019 the vessel has been off-field undergoing a life extension programme, initially at Bull Arm Fabrication Site, then at Navantia, El Ferrol. In February 2023, the vessel returned to Canada and, as of 12 April, is undergoing recommissioning at Bull Arm. The Terra Nova field is operated by Suncor Energy Inc., with a 37.675% interest, located approximately east off the coast of Newfoundland, Canada in the North Atlantic Ocean. The Terra Nova field is south of the successful Hibernia field and the more recent White Rose field. All three fields are in the Jeanne d'Arc Basin on the eastern edge of the famous Grand Banks fishing territory. Terra Nova lost 165 m³ of oil into the ocean in 2004 because of two mechanical failures. In June 2006, production on Terra Nova was halted as the platform was sent to Rotterdam for a refit. She returned to the Terra Nova field on 25 September 2006. External links Terra Nova field @ Offshore Technology Floating production storage and offloading vessels Service vessels of Canada Economy of Newfoundland and Labrador Petroleum industry in New Brunswick 1999 ships Petroleum industry in Atlantic Canada Petroleum industry in Canada Oil platforms off Canada
Terra Nova FPSO
[ "Chemistry" ]
262
[ "Petroleum", "Floating production storage and offloading vessels", "Petroleum technology", "Petroleum stubs" ]
2,970,774
https://en.wikipedia.org/wiki/Radiant%20flux
In radiometry, radiant flux or radiant power is the radiant energy emitted, reflected, transmitted, or received per unit time, and spectral flux or spectral power is the radiant flux per unit frequency or wavelength, depending on whether the spectrum is taken as a function of frequency or of wavelength. The SI unit of radiant flux is the watt (W), one joule per second (J/s), while that of spectral flux in frequency is the watt per hertz (W/Hz) and that of spectral flux in wavelength is the watt per metre (W/m)—commonly the watt per nanometre (W/nm). Mathematical definitions Radiant flux Radiant flux, denoted $\Phi_\mathrm{e}$ ('e' for "energetic", to avoid confusion with photometric quantities), is defined as $\Phi_\mathrm{e} = \frac{dQ_\mathrm{e}}{dt}$ with $Q_\mathrm{e} = \int_T \int_\Sigma \vec{S} \cdot \hat{n}\, dA\, dt$, where $t$ is the time; $Q_\mathrm{e}$ is the radiant energy passing out of a closed surface $\Sigma$; $\vec{S}$ is the Poynting vector, representing the current density of radiant energy; $\hat{n}$ is the normal vector of a point on $\Sigma$; $A$ represents the area of $\Sigma$; $T$ represents the time period. The rate of energy flow through the surface fluctuates at the frequency of the radiation, but radiation detectors only respond to the average rate of flow. This is represented by replacing the Poynting vector with the time average of its norm, giving $\Phi_\mathrm{e} \approx \int_\Sigma \langle |\vec{S}| \rangle \cos\alpha \, dA$, where $\langle\,\cdot\,\rangle$ is the time average, and $\alpha$ is the angle between $\hat{n}$ and $\langle \vec{S} \rangle$. Spectral flux Spectral flux in frequency, denoted $\Phi_{\mathrm{e},\nu}$, is defined as $\Phi_{\mathrm{e},\nu} = \frac{\partial \Phi_\mathrm{e}}{\partial \nu}$, where $\nu$ is the frequency. Spectral flux in wavelength, denoted $\Phi_{\mathrm{e},\lambda}$, is defined as $\Phi_{\mathrm{e},\lambda} = \frac{\partial \Phi_\mathrm{e}}{\partial \lambda}$, where $\lambda$ is the wavelength. SI radiometry units See also Luminous flux Heat flux Power (physics) Radiosity (heat transfer) References Further reading Power (physics) Physical quantities Radiometry Temporal rates
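As a small worked example of the definitions above, the snippet below numerically integrates an assumed spectral flux in wavelength (in W/nm) over a wavelength band to recover the corresponding radiant flux in watts. The flat 2 mW/nm spectrum and the 400-700 nm band are arbitrary illustrative choices.

```python
# Sketch: recovering radiant flux (W) from spectral flux in wavelength (W/nm)
# by numerical integration. The spectral shape below is an arbitrary assumption.
import numpy as np

wavelengths_nm = np.linspace(400.0, 700.0, 301)            # visible band, 1 nm steps
# Assumed spectral flux Phi_e,lambda: a flat 2 mW/nm across the band.
spectral_flux_w_per_nm = np.full_like(wavelengths_nm, 2e-3)

# Radiant flux is the integral of the spectral flux over wavelength:
#   Phi_e = integral of Phi_e,lambda dlambda
radiant_flux_w = np.trapz(spectral_flux_w_per_nm, wavelengths_nm)
print(f"Radiant flux ~ {radiant_flux_w:.3f} W")            # ~ 0.600 W over a 300 nm band
```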
Radiant flux
[ "Physics", "Mathematics", "Engineering" ]
330
[ "Temporal quantities", "Physical phenomena", "Force", "Telecommunications engineering", "Physical quantities", "Quantity", "Temporal rates", "Power (physics)", "Energy (physics)", "Wikipedia categories named after physical quantities", "Physical properties", "Radiometry" ]
2,970,950
https://en.wikipedia.org/wiki/OmniPage
OmniPage is an optical character recognition (OCR) application available from Kofax Incorporated. OmniPage was one of the first OCR programs to run on personal computers. It was developed in the late 1980s and sold by Caere Corporation, a company headed by Robert Noyce. The original developers were Philip Bernzott, John Dilworth, David George, Bryan Higgins, and Jeremy Knight. Caere was acquired by ScanSoft in 2000. ScanSoft acquired Nuance Communications in 2005 and adopted the Nuance name. By 2019 OmniPage had been sold to Kofax Inc. OmniPage supports more than 120 different languages. OmniPage provides software development kits for integrating OCR functionality into other applications, such as Microsoft Office Document Imaging and UiPath. References External links Nuance software Optical character recognition software
OmniPage
[ "Technology" ]
169
[ "Computing stubs", "Software stubs" ]
2,971,012
https://en.wikipedia.org/wiki/Manufacturing%20process%20management
Manufacturing process management (MPM) is a collection of technologies and methods used to define how products are to be manufactured. MPM differs from ERP/MRP which is used to plan the ordering of materials and other resources, set manufacturing schedules, and compile cost data. A cornerstone of MPM is the central repository for the integration of all these tools and activities, which aids in the exploration of alternative production line scenarios; it makes assembly lines more efficient with the aim of reduced lead time to product launch, shorter product times and reduced work in progress (WIP) inventories, as well as allowing rapid response to product or process changes. Production process planning Manufacturing concept planning Factory layout planning and analysis work flow simulation. walk-path assembly planning plant design optimization Mixed model line balancing. Workloads on multiple stations. Process simulation tools e.g. die press lines, manufacturing lines Ergonomic simulation and assessment of production assembly tasks Resource planning Computer-aided manufacturing (CAM) Numerical control CNC Direct numerical control (DNC) Tooling/equipment/fixtures development Tooling and Robot work-cell setup and offline programming (OLP) Generation of shop floor work instructions Time and cost estimates ABC – Manufacturing activity-based costing Outline of industrial organization Quality computer-aided quality assurance (CAQ) Failure mode and effects analysis (FMEA) Statistical process control (SPC) Computer aided inspection with coordinate-measuring machine (CMM) Tolerance stack-up analysis using PMI models. Success measurements Overall equipment effectiveness (OEE) Communication with other systems Enterprise resource planning (ERP) Manufacturing operations management (MOM) Product data management (PDM) SCADA (supervisory control and data acquisition) real time process monitoring and control Human–machine interface (HMI) (or man-machine interface (MMI)) Distributed control system (DCS) See also List of production topics Process management Quality management system processes Operations Management Industrial Management Industrial technology Industrial Engineering References Further reading Materials and Manufacturing Processes, (electronic) (paper), Taylor & Francis Product lifecycle management Engineering management Manufacturing
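Since the list above mentions time and cost estimates and manufacturing activity-based costing (ABC), here is a minimal sketch of the general ABC idea (cost equals the sum over activities of rate times driver quantity) for one hypothetical production order. The activities, rates and quantities are made-up illustrative values, not data from the article.

```python
# Illustrative activity-based costing (ABC) for one production order.
# Activity rates and driver quantities are hypothetical example values.
activity_rates = {            # cost per unit of cost driver
    "machine_setup": 80.0,    # $ per setup
    "milling": 1.20,          # $ per machine-minute
    "inspection": 15.0,       # $ per inspection
}

order_drivers = {             # driver quantities consumed by this order
    "machine_setup": 2,       # setups
    "milling": 340,           # machine-minutes
    "inspection": 5,          # inspections
}

order_cost = sum(activity_rates[a] * q for a, q in order_drivers.items())
print(f"Estimated order cost: ${order_cost:,.2f}")   # $643.00
```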
Manufacturing process management
[ "Engineering" ]
418
[ "Engineering economics", "Engineering management", "Manufacturing", "Mechanical engineering" ]
2,971,159
https://en.wikipedia.org/wiki/Takahashi%20method
The Takahashi method is a technique deploying extremely simple and distilled visual slides for presentations. It is similar to the Lessig method, created by Harvard professor and former presidential candidate Lawrence Lessig. It is named for its inventor, Masayoshi Takahashi. Unlike a typical presentation, no pictures and no charts are used. Only a few words are printed on each slide—often only one or two short words, using very large characters. To make up for this, a presenter will use many more slides than in a traditional presentation, each slide being shown for a much shorter duration. Further information Takahashi, a programmer, first used the method when he had to give a short presentation at a conference (RubyConf), and found it helpful, at least with Japanese. Takahashi never used PowerPoint or similar software; he uses only text in his slides. He started thinking about how to choose the best word for each slide as he took the audience through his presentation. The words or phrases resemble Japanese newspaper headlines rather than sentences which must be read. The slides use plain text in a visual manner, to help the audience quickly read and understand the material. It is said to be helpful with Japanese and other eastern languages, which use non-Latin scripts. Many presenters at developer conferences use their own variant of the Takahashi method. Notably, Audrey Tang's stock presentations at Perl and Open Source conferences use this method. External links Living large: "Takahashi Method" uses king-sized text as a visual Big – small JavaScript tool for making Takahashi-style presentations on the web Weenote – minimal JavaScript tool for making Takahashi-style presentations on the web takahashi.sty – package for making Takahashi-style presentation in Beamer (LaTeX) Slide – open-source Android application for making Takahashi-style presentations sent – open-source tool for Takahashi method, developed for Unix, and Unix-like operating systems by suckless.org. Presentation
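As a rough illustration of the technique described above (a few large words per slide, many slides), the snippet below splits a sentence into Takahashi-style slide texts. The two-words-per-slide rule is an arbitrary choice for the example and is not Takahashi's own tooling.

```python
# Sketch: turn a sentence into Takahashi-style slides (very few words per slide).
# The two-words-per-slide rule is an arbitrary illustrative choice.
def takahashi_slides(text: str, words_per_slide: int = 2) -> list[str]:
    words = text.split()
    return [" ".join(words[i:i + words_per_slide])
            for i in range(0, len(words), words_per_slide)]

for i, slide in enumerate(takahashi_slides("Plain text shown very large on many short slides"), 1):
    print(f"--- slide {i} ---")
    print(slide.upper())   # presentation software would render this in huge type
```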
Takahashi method
[ "Technology" ]
392
[ "Multimedia", "Presentation" ]
2,971,205
https://en.wikipedia.org/wiki/Van%20Deemter%20equation
The van Deemter equation in chromatography, named for Jan van Deemter, relates the variance per unit length of a separation column to the linear mobile phase velocity by considering physical, kinetic, and thermodynamic properties of a separation. These properties include pathways within the column, diffusion (axial and longitudinal), and mass transfer kinetics between stationary and mobile phases. In liquid chromatography, the mobile phase velocity is taken as the exit velocity, that is, the ratio of the flow rate in ml/second to the cross-sectional area of the ‘column-exit flow path.’ For a packed column, the cross-sectional area of the column exit flow path is usually taken as 0.6 times the cross-sectional area of the column. Alternatively, the linear velocity can be taken as the ratio of the column length to the dead time. If the mobile phase is a gas, then the pressure correction must be applied. The variance per unit length of the column is taken as the ratio of the column length to the column efficiency in theoretical plates. The van Deemter equation is a hyperbolic function that predicts that there is an optimum velocity at which there will be the minimum variance per unit column length and, thence, a maximum efficiency. The van Deemter equation was the result of the first application of rate theory to the chromatography elution process. Van Deemter equation The van Deemter equation relates height equivalent to a theoretical plate (HETP) of a chromatographic column to the various flow and kinetic parameters which cause peak broadening, as follows: $\mathit{HETP} = A + \frac{B}{u} + C \cdot u$ Where HETP = a measure of the resolving power of the column [m] A = Eddy-diffusion parameter, related to channeling through a non-ideal packing [m] B = diffusion coefficient of the eluting particles in the longitudinal direction, resulting in dispersion [m² s⁻¹] C = Resistance to mass transfer coefficient of the analyte between mobile and stationary phase [s] u = speed [m s⁻¹] In open tubular capillaries, the A term will be zero as the lack of packing means channeling does not occur. In packed columns, however, multiple distinct routes ("channels") exist through the column packing, which results in band spreading. In the latter case, A will not be zero. The form of the Van Deemter equation is such that HETP achieves a minimum value at a particular flow velocity. At this flow rate, the resolving power of the column is maximized, although in practice, the elution time is likely to be impractical. Differentiating the van Deemter equation with respect to velocity, setting the resulting expression equal to zero, and solving for the optimum velocity yields the following: $u_\text{opt} = \sqrt{\frac{B}{C}}$ Plate count The plate height, given as $H = \frac{L}{N}$ with the column length $L$ and the number of theoretical plates $N$, can be estimated from a chromatogram by analysis of the retention time $t_R$ for each component and its standard deviation $\sigma$ as a measure for peak width, provided that the elution curve represents a Gaussian curve. In this case the plate count is given by: $N = \left(\frac{t_R}{\sigma}\right)^2$ By using the more practical peak width at half height $W_{1/2}$ the equation is: $N = 5.54 \left(\frac{t_R}{W_{1/2}}\right)^2$ or with the width at the base of the peak $W_\text{base}$: $N = 16 \left(\frac{t_R}{W_\text{base}}\right)^2$ Expanded van Deemter The Van Deemter equation can be further expanded to: $H = 2\lambda d_p + \frac{2\gamma D_m}{u} + \left(\frac{\omega\,(d_p\ \text{or}\ d_c)^2}{D_m} + \frac{R\, d_f^2}{D_s}\right) u$ Where: H is plate height λ is particle shape (with regard to the packing) dp is particle diameter γ, ω, and R are constants Dm is the diffusion coefficient of the mobile phase dc is the capillary diameter df is the film thickness Ds is the diffusion coefficient of the stationary phase. 
u is the linear velocity Rodrigues equation The Rodrigues equation, named for Alírio Rodrigues, is an extension of the Van Deemter equation used to describe the efficiency of a bed of permeable (large-pore) particles. The equation is: where and is the intraparticular Péclet number. See also Resolution (chromatography) Jan van Deemter References Chromatography Equations
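As a numeric sketch of the relations reconstructed above, the snippet below evaluates the plate height H(u) = A + B/u + C·u, the optimum velocity u_opt = sqrt(B/C), and a plate count from retention time and half-height peak width. The coefficient values and chromatogram numbers are illustrative assumptions, loosely in the range of a gas chromatography column rather than measured data.

```python
# Sketch of the van Deemter relation and plate-count estimates.
# The coefficient values and chromatogram numbers are illustrative assumptions.
import math

A, B, C = 1.0e-4, 1.5e-5, 1.0e-4   # [m], [m^2/s], [s]  (made-up, GC-like values)

def plate_height(u: float) -> float:
    """HETP = A + B/u + C*u for linear velocity u [m/s]."""
    return A + B / u + C * u

u_opt = math.sqrt(B / C)                       # velocity minimising HETP
print(f"u_opt = {u_opt:.3f} m/s, H_min = {plate_height(u_opt)*1e3:.2f} mm")

# Plate count from a (Gaussian) peak: N = 5.54 * (t_R / W_half)^2
t_R, W_half = 240.0, 2.0                       # retention time and half-height width [s]
N = 5.54 * (t_R / W_half) ** 2
L = 30.0                                        # column length [m]
print(f"N ~ {N:.0f} plates, H = L/N ~ {L/N*1e3:.2f} mm")
```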
Van Deemter equation
[ "Chemistry", "Mathematics" ]
834
[ "Chromatography", "Mathematical objects", "Equations", "Separation processes" ]
2,971,303
https://en.wikipedia.org/wiki/Trudinger%27s%20theorem
In mathematical analysis, Trudinger's theorem or the Trudinger inequality (also sometimes called the Moser–Trudinger inequality) is a result of functional analysis on Sobolev spaces. It is named after Neil Trudinger (and Jürgen Moser). It provides an inequality between a certain Sobolev space norm and an Orlicz space norm of a function. The inequality is a limiting case of Sobolev imbedding and can be stated as the following theorem: Let $\Omega$ be a bounded domain in $\mathbb{R}^n$ satisfying the cone condition. Let $mp = n$ and $p > 1$. Set $A(t) := \exp\left(t^{n/(n-m)}\right) - 1$. Then there exists the embedding $W^{m,p}(\Omega) \hookrightarrow L_A(\Omega)$ where $L_A(\Omega) = \left\{ u \in M(\Omega) : \|u\|_{A,\Omega} := \inf\left\{ k > 0 : \int_\Omega A\!\left(\tfrac{|u(x)|}{k}\right) dx \leq 1 \right\} < \infty \right\}$. The space $L_A(\Omega)$ is an example of an Orlicz space. References Sobolev spaces Inequalities Theorems in analysis
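For orientation, the display below quotes Moser's sharpened form of the inequality in the first-order borderline case (m = 1, p = n), with a generic constant; this is the standard statement from the literature and is added here as a worked illustration rather than taken from the article:

\[
\sup_{\substack{u \in W_0^{1,n}(\Omega) \\ \|\nabla u\|_{L^n(\Omega)} \le 1}} \int_\Omega \exp\!\left( \alpha\, |u(x)|^{\frac{n}{n-1}} \right) dx \;\le\; C\,|\Omega|,
\]

valid for all \(\alpha\) up to a sharp threshold depending only on \(n\); this exponential integrability is the limiting-case substitute for the failed embedding of \(W^{1,n}(\Omega)\) into \(L^\infty(\Omega)\).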
Trudinger's theorem
[ "Mathematics" ]
153
[ "Theorems in mathematical analysis", "Mathematical analysis", "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems", "Mathematical theorems" ]
2,971,730
https://en.wikipedia.org/wiki/Static%20mixer
A static mixer is a device for the continuous mixing of fluid materials, without moving components. Normally the fluids to be mixed are liquid, but static mixers can also be used to mix gas streams, disperse gas into liquid or blend immiscible liquids. The energy needed for mixing comes from a loss in pressure as fluids flow through the static mixer. One design of static mixer is the plate-type mixer and another common device type consists of mixer elements contained in a cylindrical (tube) or square housing. Mixer size can vary from about 6 mm to 6 meters in diameter. Typical construction materials for static mixer components include stainless steel, polypropylene, Teflon, PVDF, PVC, CPVC and polyacetal. The latest designs involve static mixing elements made of glass-lined steel. Design Plate type In the plate type design mixing is accomplished through intense turbulence in the flow. Housed-elements design In the housed-elements design the static mixer elements consist of a series of baffles made of metal or a variety of plastics. Similarly, the mixer housing can be made of metal or plastic. The housed-elements design incorporates a method for delivering two streams of fluids into the static mixer. As the streams move through the mixer, the non-moving elements continuously blend the materials. Complete mixing depends on many variables including the fluids' properties, tube inner diameter, number of elements and their design. The housed-elements mixer's fixed, typically helical elements can simultaneously produce patterns of flow division and radial mixing: Flow division: In laminar flow, a processed material divides at the leading edge of each element of the mixer and follows the channels created by the element shape. At each succeeding element, the two channels are further divided, resulting in an exponential increase in stratification. The number of striations produced is 2^n, where n is the number of elements in the mixer. Radial mixing: In either turbulent flow or laminar flow, rotational circulation of a processed material around its own hydraulic center in each channel of the mixer causes radial mixing of the material. Processed material is intermixed to reduce or eliminate radial gradients in temperature, velocity and material composition. Applications A common application is mixing nozzles for two-component adhesives (e.g., epoxy) and sealants (see Resin casting). Other applications include wastewater treatment and chemical processing. Static mixers can be used in the refinery and oil and gas markets as well, for example in bitumen processing or for desalting crude oil. In polymer production, static mixers can be used to facilitate polymerization reactions or for the admixing of liquid additives. History The static mixer traces its origins to an invention for a mixing device filed on Nov. 29, 1965 by the Arthur D. Little Company. This device was the housed-elements type and was licensed to the Kenics Corporation and marketed as the Kenics Motionless Mixer. Today, the Kenics brand is owned by National Oilwell Varco. The plate type static mixer patent was issued on November 24, 1998, to Robert W. Glanville of Westfall Manufacturing. See also Thermal cleaning References Laboratory equipment Turbulence Piping
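As a quick numeric illustration of the flow-division behaviour described above, the snippet below shows how the 2^n striation count grows with the number of helical elements and what that implies for layer thickness in a tube of a given diameter. The tube diameter, element counts and the crude thickness estimate (diameter divided by striation count) are illustrative assumptions.

```python
# Striation growth in a housed-elements static mixer: 2**n striations after n elements.
# The tube diameter, element counts and the thickness estimate are illustrative only.
tube_diameter_mm = 12.7
for n in (4, 8, 12, 16):
    striations = 2 ** n
    # crude estimate: mean layer thickness ~ tube diameter / number of striations
    mean_striation_um = tube_diameter_mm * 1000 / striations
    print(f"{n:2d} elements -> {striations:6d} striations, "
          f"mean striation thickness ~ {mean_striation_um:.2f} um")
```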
Static mixer
[ "Chemistry", "Engineering" ]
653
[ "Turbulence", "Building engineering", "Chemical engineering", "Mechanical engineering", "Piping", "Fluid dynamics" ]
2,971,989
https://en.wikipedia.org/wiki/Pisgah%20National%20Forest
Pisgah National Forest is a National Forest in the Appalachian Mountains of western North Carolina. It is administered by the United States Forest Service, part of the United States Department of Agriculture. The Pisgah National Forest is completely contained within the state of North Carolina. The forest is managed together with the other three North Carolina National Forests (Croatan, Nantahala, and Uwharrie) from common headquarters in Asheville, North Carolina. There are local ranger district offices located in Pisgah Forest, Mars Hill, and Nebo. Name Pisgah (פִּסְגָּה) is a Biblical Hebrew word with several meanings: it can be used to describe someone’s best achievement; another meaning is the highest point of a mountain, “summit”. Some translators of the Bible book of Deuteronomy translated the word as a name of a mountain in general, usually referring to Mount Nebo. History The Pisgah National Forest was established in 1916, one of the first national forests in the eastern United States. The new preserve included approximately that had been part of the Biltmore Estate, but were sold to the federal government in 1914 by Edith Vanderbilt. Some of the forest tracts were among the first purchases by the Forest Service under the Weeks Act of 1911. While national forests had already been created in the western United States, the Weeks Act provided the authority required to create national forests in the east as well. Although tracts in the future Pisgah National Forest were among the first purchased under the Weeks Act, the very first to receive formal approval was the Gennett Purchase in northern Georgia. On March 25, 1921 Boone National Forest was added to Pisgah, and on July 10, 1936, most of Unaka National Forest was added. In 1954 the Pisgah National Forest was administratively combined with the Croatan and Nantahala national forests, collectively known as the National Forests of North Carolina. American forestry has roots in what is now the Pisgah National Forest. The Cradle of Forestry, (Biltmore Forest School), located in the southern part of the forest, was the site of the first school of forestry in the United States. It operated during the late 19th and early 20th centuries. The school was opened and operated at the direction of George Washington Vanderbilt II, builder of the Biltmore Estate in Asheville. The Forestry Education offered at Biltmore was taught by Carl Schenk. A native German, Schenk was referred to Vanderbilt when Gifford Pinchot resigned to operate the newly formed Division of Forestry. The Cradle of Forestry and the Biltmore Estate played a major role in the birth of the U.S. Forest Service. Today these lands are part of an educational and recreational area in Pisgah National Forest. Located on the forest property is the Bent Creek Campus of the Appalachian Forest Experiment Station, listed on the National Register of Historic Places in 1993. Administration The Pisgah National Forest is divided into 3 Ranger Districts: the Grandfather, Appalachian, and Pisgah districts. The Grandfather and Appalachian Ranger Districts lie in the northern mountains of North Carolina and include areas such as the Linville Gorge Wilderness, Wilson Creek, the watersheds of the Toe and Cane rivers, Roan Mountain, Mount Mitchell, Craggy Gardens, and the Big Ivy/Coleman Boundary area. The Grandfather Ranger District office is located in Nebo. 
The Appalachian Ranger District stretches along the Tennessee border from the Great Smoky Mountains National Park north to Hot Springs, with the district office located in Mars Hill. The Appalachian Trail passes through this district, as well as the Cherokee National Forest in Tennessee. The Pisgah Ranger District lies mostly south of Asheville, in parts of Henderson, Transylvania and Haywood counties, with the district office located in Pisgah Forest. Geography The Pisgah National Forest covers of mountainous terrain in the southern Appalachian Mountains, including parts of the Blue Ridge Mountains and Great Balsam Mountains. Elevations reach over and include some of the highest mountains in the eastern United States. Summit elevations include Black Balsam Knob at , Mount Hardy at , Tennant Mountain at , and Cold Mountain at . Mount Mitchell, in Mount Mitchell State Park, is the highest mountain east of the Mississippi River and lies just outside the boundary of Pisgah National Forest. The forest also includes tracts surrounding the city of Asheville, the city of Brevard and land in the French Broad River Valley. Recreation includes activities such as hiking, backpacking, and mountain biking. The land and its resources are also used for hunting, wildlife management, and timber harvesting, as well as the North Carolina Arboretum. The forest lies in parts of 12 counties in western North Carolina. In descending order they are Transylvania, McDowell, Haywood, Madison, Caldwell, Burke, Yancey, Buncombe, Avery, Mitchell, Henderson, and Watauga counties. Forests and old growth Some of old-growth forests have been identified in the Pisgah National Forest, with in Linville Gorge. Rivers and trails Bent Creek, Mills River, and Davidson River - three major streams and tributaries of the French Broad River - are located in the Pisgah Ranger District, which lies on either side of the Blue Ridge Parkway south of Asheville, along the Pisgah Ridge and Balsam Mountains. Three long-distance recreational trails - the Mountains-to-Sea Trail, the Shut-In Trail, and the Art Loeb Trail travel through this district. Also included in the Pisgah Ranger District are the Shining Rock and Middle Prong Wildernesses. The Blue Ridge Parkway transects this National Forest, and many National Forest and Parkway trails intersect. Recreation Pisgah National Forest is a popular place for many activities, such as hiking, backpacking, road biking, mountain biking, fishing, and rock climbing. Popular mountain biking trails include Sycamore Cove Trail, and Black Mountain Loop. Farlow Gap is an expert-level trail, and considered "one of the toughest mountain bike trails in Pisgah National Forest." Wilderness areas There are three officially designated wilderness areas lying within Pisgah National Forest that are part of the National Wilderness Preservation System. 
Linville Gorge Wilderness Middle Prong Wilderness Shining Rock Wilderness Gallery See also List of national forests of the United States Cherokee National Forest, the corresponding USNF across the border in Tennessee DuPont State Forest (also popular in western North Carolina for hiking and mountain biking) List of mountains in North Carolina Appalachian temperate rainforest Killing of Judy Smith References External links Pisgah National Forest Images from The Dawn of Private Forestry in America, Covering the Years 1895 to 1914, Forest History Society Library and Archives National forests of North Carolina National forests of the Appalachians Blue Ridge National Heritage Area Old-growth forests Protected areas of Transylvania County, North Carolina Protected areas of McDowell County, North Carolina Protected areas of Haywood County, North Carolina Protected areas of Madison County, North Carolina Protected areas of Caldwell County, North Carolina Protected areas of Burke County, North Carolina Protected areas of Yancey County, North Carolina Protected areas of Buncombe County, North Carolina Protected areas of Avery County, North Carolina Protected areas of Mitchell County, North Carolina Protected areas of Henderson County, North Carolina Protected areas of Watauga County, North Carolina Protected areas established in 1916 1916 establishments in North Carolina Western North Carolina Mountain biking in the United States
Pisgah National Forest
[ "Biology" ]
1,486
[ "Old-growth forests", "Ecosystems" ]
2,972,272
https://en.wikipedia.org/wiki/Radite
Radite is a trade name for an early plastic, formed of pyroxylin—a partially nitrated cellulose— manufactured by DuPont and introduced by the Sheaffer Pen Company in 1924 when plastics were first used as a material for pen manufacture. Sheaffer's Radite pens were the first commercial plastic pens, and Sheaffer marketed the material as "indestructible." Jade green in color, the pens were best sellers at the time. The material is credited with helping Sheaffer capture 25% of the market. Radite is extremely similar to other celluloid pen materials trademarked at the time, such as Permanite, Pyralin, Fiberloid, Viscoloid, and Herculoid. References Plastics
Radite
[ "Physics" ]
157
[ "Amorphous solids", "Unsolved problems in physics", "Plastics" ]
2,973,049
https://en.wikipedia.org/wiki/Linishing
Linishing is the process of using grinding or belt sanding techniques to improve the flatness, smoothness and uniformity of a surface and its finish. The process takes multiple stages, and a finer abrasive surface is typically used each time. Abrasive brushes and linishing belts are typically used, the latter being a machine similar to a belt sander used for large surfaces. Large linishing belts are used in large-scale industrial linishing processes. Hand tools similar to linishing belts but much smaller and more suitable for small surfaces are also used. References Grinding and lapping Machine tools Surface finishing
Linishing
[ "Engineering" ]
124
[ "Machine tools", "Industrial machinery" ]
2,973,053
https://en.wikipedia.org/wiki/Weathering%20steel
Weathering steel, often referred to by the genericised trademark COR-TEN steel and sometimes written without the hyphen as corten steel, is a group of steel alloys that form a stable external layer of rust that eliminates the need for painting. U.S. Steel (USS) holds the registered trademark on the name COR-TEN. The name COR-TEN refers to the two distinguishing properties of this type of steel: corrosion resistance and tensile strength. Although USS sold its discrete plate business to International Steel Group (now ArcelorMittal) in 2003, it makes COR-TEN branded material in strip mill plate and sheet forms. The original COR-TEN received the standard designation A242 (COR-TEN A) from the ASTM International standards group. Newer ASTM grades are A588 (COR-TEN B) and A606 for thin sheet. All of the alloys are in common production and use. The surface oxidation generally takes six months to develop, although surface treatments can accelerate this to as little as one hour. History The history of weathering steels began in the US in 1910s, when steels alloyed with different amounts of copper were exposed to the elements; the research continued into the 1920s and it was discovered that phosphorus content also helps with the corrosion resistance. In 1933 the United States Steel Corporation decided to commercialize the results of their studies and patented a steel with exceptional mechanical resistance, primarily for use in railroad hopper cars, for the handling of heavy bulk loads including coal, metal ores, other mineral products and grain. The controlled corrosion for which this material is now best known was a welcome benefit discovered soon after, prompting USS to apply the trademarked name Cor-Ten. Because of its inherent toughness, this steel is still used extensively for bulk transport, intermodal shipping containers and bulk storage. Railroad passenger cars were also being built with Cor-Ten, albeit painted, by Pullman-Standard for the Southern Pacific from 1936, continuing through commuter coaches for the Rock Island Line in 1949. In 1964, the Moorestown Interchange was built over New Jersey Turnpike at milepost 37.02. This overpass is believed to be the first highway structure application of weathering steel. Other states including Iowa, Ohio, and Michigan followed soon after. Those were followed by University of York Footbridge in the United Kingdom in 1967. Since then, the practice of using weathering steel in bridges has expanded to many countries. Properties Weathering refers to the chemical composition of these steels, allowing them to exhibit increased resistance to atmospheric corrosion compared to other steels. This is because the steel forms a protective layer on its surface under the influence of the weather. The corrosion-retarding effect of the protective layer is produced by the particular distribution and concentration of alloying elements in it. It is not yet clear how exactly the patina formation differs from usual rusting, but it's established that drying of the wetted surface is necessary and that copper is the most important alloying element. The layer protecting the surface develops and regenerates continuously when subjected to the influence of the weather. In other words, the steel is allowed to rust in order to form the protective coating. The mechanical properties of weathering steels depend on which alloy and how thick the material is. 
ASTM A242 The original A242 alloy has a yield strength of and ultimate tensile strength of for light-medium rolled shapes and plates up to thick. It has yield strength of and ultimate strength of for medium weight rolled shapes and plates from thick. The thickest rolled sections and plates – from thick have yield strength of and ultimate strength of . ASTM A242 is available in Type 1 and Type 2. Both have different applications based on the thickness. Type 1 is often used in housing structures, construction industry and freight cars. The Type 2 steel, which is also called Corten B, is used primarily in urban furnishing, passenger ships or cranes. ASTM A588 A588 has a yield strength of at least , and ultimate tensile strength of for all rolled shapes and plate thicknesses up to thick. Plates from have yield strength at least and ultimate tensile strength at least , and plates from thick have yield strength at least and ultimate tensile strength at least . Uses Weathering steel is popularly used in outdoor sculptures for its distressed antique appearance. One example is the large Chicago Picasso sculpture, which stands in the plaza of the Daley Center Courthouse in Chicago, which is also constructed of weathering steel. Other examples include Barnett Newman's Broken Obelisk; several of Robert Indiana's Numbers sculptures and his original Love sculpture; numerous works by Richard Serra; the Alamo sculpture in Manhattan, NY; the Barclays Center, Brooklyn, New York; the Angel of the North, Gateshead; and Ribbons, a sculpture by Pippa Hale, celebrating women in Leeds; and Broadcasting Tower at Leeds Beckett University. It is also used in bridge and other large structural applications such as the New River Gorge Bridge, the second span of the Newburgh–Beacon Bridge (1980), and the creation of the Australian Centre for Contemporary Art (ACCA) and MONA. It is very widely used in marine transportation, in the construction of intermodal containers as well as visible sheet piling along recently widened sections of London's M25 motorway. The first use of weathering steel for architectural applications was the John Deere World Headquarters in Moline, Illinois. The building was designed by architect Eero Saarinen, and completed in 1964. The main buildings of Odense University (built 1971–1976), designed by Knud Holscher and Jørgen Vesterholt, are clad in weathering steel, earning them the nickname Rustenborg (Danish for "rusty fortress"). In 1977, Robert Indiana created a Hebrew version of the Love sculpture made from weathering steel using the four-letter word ahava (אהבה, "love" in Hebrew) for the Israel Museum Art Garden in Jerusalem, Israel. In Denmark, all masts for supporting the catenary on electrified railways are made of weathering steel for aesthetic reasons. Weathering steel was used in 1971 for the Highliner electric cars built by the St. Louis Car Company for Illinois Central Railroad. The use of weathering steel was seen as a cost-cutting move in comparison with the contemporary railcar standard of stainless steel. A subsequent order in 1979 was built to similar specs, including weathering steel bodies, by Bombardier. The cars were painted, a standard practice for weathering steel railcars. The durability of weathering steel did not live up to expectations, with rust holes appearing in the railcars. 
Painting may have contributed to the problem, as painted weathering steel is no more corrosion-resistant than conventional steel, because the protective patina will not form in time to prevent corrosion over a localized area of attack such as a small paint failure. These cars were retired by 2016. Weathering steel was used to build the exterior of Barclays Center, made up of 12,000 pre-weathered steel panels engineered by ASI Limited & SHoP Construction. The New York Times says of the material: "While it can look suspiciously unfinished to the casual observer, it has many fans in the world of art and architecture." In 2015, a new building for the KTH Royal Institute of Technology School of Architecture was completed on its campus. The use of weathering steel helped the futuristic shapes of the facade fit in well with its much older surroundings and in 2015 it was awarded the Kasper Salin Prize. Disadvantages Using weathering steel in construction presents several challenges. Ensuring that weld-points weather at the same rate as the other materials may require special welding techniques or material. Weathering steel is not rustproof in itself: if water is allowed to accumulate on the surface of the steel, it will experience a higher corrosion rate, so provision for drainage must be made. According to the NTSB, lack of drainage is what ultimately led to the Collapse of the Fern Hollow Bridge. Weathering steel is sensitive to humid subtropical climates, and in such environments it is possible that the protective patina may not stabilize but instead continue to corrode. For example, the former Omni Coliseum, built in 1972 in Atlanta, never stopped rusting, and eventually large holes appeared in the structure. This was a major factor in the decision to demolish it just 25 years after construction. The same thing can happen in environments laden with sea salt. Hawaii's Aloha Stadium, built in 1975, is one example of this. Weathering steel's normal surface weathering can also lead to rust stains on nearby surfaces. The rate at which some weathering steels form the desired patina varies strongly with the presence of atmospheric pollutants which catalyze corrosion. While the process is generally successful in large urban centers, the weathering rate is much slower in more rural environments. Uris Hall, a social sciences building on Cornell University's main campus in Ithaca, a small city in Upstate New York, did not achieve the predicted surface finish on its Bethlehem Steel Mayari-R weathering steel framing within the predicted time. Rainwater runoff from the slowly rusting steel stained the numerous large windows and increased maintenance costs. Corrosion without the formation of a protective layer apparently led to the need for emergency structural reinforcement and galvanizing in 1974, less than two years after opening. The U.S. Steel Tower in Pittsburgh, Pennsylvania, was constructed by U.S. Steel in part to showcase COR-TEN steel. The initial weathering of the material resulted in a discoloration, known as "bleeding" or "runoff", of the surrounding city sidewalks and nearby buildings. A cleanup effort was orchestrated by the corporation once weathering was complete to clean the markings. A few of the nearby sidewalks were left uncleaned, and remain a rust color. This problem has been reduced in newer formulations of weathering steel. Staining can be prevented if the structure can be designed so that water does not drain from the steel onto concrete where stains would be visible. 
See also Iron pillar of Delhi and Dhar iron pillar; ancient metal monuments with some characteristics similar to weathering steel References External links Report on Weathering Steel in TxDOT Bridges from the Texas Department of Transportation (4464 KB). Contains recommended details to avoid staining. Note: wrapping of piers was later found not to be cost-effective. A Primer on Weathering Steel from STRUCTURE magazine (2005) Weathering Steel from Colorado and London (2020) Structural steel Steels Sculpture materials U.S. Steel
Weathering steel
[ "Engineering" ]
2,179
[ "Steels", "Structural engineering", "Alloys", "Structural steel" ]
2,973,295
https://en.wikipedia.org/wiki/Satellite%20galaxy
A satellite galaxy is a smaller companion galaxy that travels on bound orbits within the gravitational potential of a more massive and luminous host galaxy (also known as the primary galaxy). Satellite galaxies and their constituents are bound to their host galaxy, in the same way that planets within the Solar System are gravitationally bound to the Sun. While most satellite galaxies are dwarf galaxies, satellite galaxies of large galaxy clusters can be much more massive. The Milky Way is orbited by about fifty satellite galaxies, the largest of which is the Large Magellanic Cloud. Moreover, satellite galaxies are not the only astronomical objects that are gravitationally bound to larger host galaxies (see globular clusters). For this reason, astronomers have defined galaxies as gravitationally bound collections of stars that exhibit properties that cannot be explained by a combination of baryonic matter (i.e. ordinary matter) and Newton's laws of gravity. For example, measurements of the orbital speed of stars and gas within spiral galaxies result in a velocity curve that deviates significantly from the theoretical prediction. This observation has motivated various explanations such as the theory of dark matter and modifications to Newtonian dynamics. Therefore, despite also being satellites of host galaxies, globular clusters should not be mistaken for satellite galaxies. Satellite galaxies are not only more extended and diffuse compared to globular clusters, but are also enshrouded in massive dark matter halos that are thought to have been endowed to them during the formation process. Satellite galaxies generally lead tumultuous lives due to their chaotic interactions with both the larger host galaxy and other satellites. For example, the host galaxy is capable of disrupting the orbiting satellites via tidal and ram pressure stripping. These environmental effects can remove large amounts of cold gas from satellites (i.e. the fuel for star formation), and this can result in satellites becoming quiescent in the sense that they have ceased to form stars. Moreover, satellites can also collide with their host galaxy resulting in a minor merger (i.e. merger event between galaxies of significantly different masses). On the other hand, satellites can also merge with one another resulting in a major merger (i.e. merger event between galaxies of comparable masses). Galaxies are mostly composed of empty space, interstellar gas and dust, and therefore galaxy mergers do not necessarily involve collisions between objects from one galaxy and objects from the other, however, these events generally result in much more massive galaxies. Consequently, astronomers seek to constrain the rate at which both minor and major mergers occur to better understand the formation of gigantic structures of gravitationally bound conglomerations of galaxies such as galactic groups and clusters. History Early 20th century Prior to the 20th century, the notion that galaxies existed beyond the Milky Way was not well established. In fact, the idea was so controversial at the time that it led to what is now heralded as the "Shapley-Curtis Great Debate" aptly named after the astronomers Harlow Shapley and Heber Doust Curtis that debated the nature of "nebulae" and the size of the Milky Way at the National Academy of Sciences on April 26, 1920. 
Shapley argued that the Milky Way was the entire universe (spanning over 100,000 lightyears or 30 kiloparsec across) and that all of the observed "nebulae" (currently known as galaxies) resided within this region. On the other hand, Curtis argued that the Milky Way was much smaller and that the observed nebulae were in fact galaxies similar to the Milky Way. This debate was not settled until late 1923 when the astronomer Edwin Hubble measured the distance to M31 (currently known as the Andromeda galaxy) using Cepheid Variable stars. By measuring the period of these stars, Hubble was able to estimate their intrinsic luminosity and upon combining this with their measured apparent magnitude he estimated a distance of 300 kpc, which was an order-of-magnitude larger than the estimated size of the universe made by Shapley. This measurement verified that not only was the universe much larger than previously expected, but it also demonstrated that the observed nebulae were actually distant galaxies with a wide range of morphologies (see Hubble sequence). Modern times Despite Hubble's discovery that the universe was teeming with galaxies, a majority of the satellite galaxies of the Milky Way and the Local Group remained undetected until the advent of modern astronomical surveys such as the Sloan Digital Sky Survey (SDSS) and the Dark Energy Survey (DES). In particular, the Milky Way is currently known to host 59 satellite galaxies (see satellite galaxies of the Milky Way), of which two known as the Large Magellanic Cloud and Small Magellanic Cloud have been observable in the Southern Hemisphere with the unaided eye since ancient times. Nevertheless, modern cosmological theories of galaxy formation and evolution predict a much larger number of satellite galaxies than what is observed (see missing satellites problem). However, more recent high resolution simulations have demonstrated that the current number of observed satellites pose no threat to the prevalent theory of galaxy formation. Motivations to study satellite galaxies Spectroscopic, photometric and kinematic observations of satellite galaxies have yielded a wealth of information that has been used to study, among other things, the formation and evolution of galaxies, the environmental effects that enhance and diminish the rate of star formation within galaxies and the distribution of dark matter within the dark matter halo. As a result, satellite galaxies serve as a testing ground for prediction made by cosmological models. Classification of satellite galaxies As mentioned above, satellite galaxies are generally categorized as dwarf galaxies and therefore follow a similar Hubble classification scheme as their host with the minor addition of a lowercase "d" in front of the various standard types to designate the dwarf galaxy status. These types include dwarf irregular (dI), dwarf spheroidal (dSph), dwarf elliptical (dE) and dwarf spiral (dS). However, out of all of these types it is believed that dwarf spirals are not satellites, but rather dwarf galaxies that are only found in the field. Dwarf irregular satellite galaxies Dwarf irregular satellite galaxies are characterized by their chaotic and asymmetric appearance, low gas fractions, high star formation rate and low metallicity. Three of the closest dwarf irregular satellites of the Milky Way include the Small Magellanic Cloud, Canis Major Dwarf, and the newly discovered Antlia 2. 
Dwarf elliptical satellite galaxies Dwarf elliptical satellite galaxies are characterized by their oval appearance on the sky, disordered motion of constituent stars, moderate to low metallicity, low gas fractions and old stellar population. Dwarf elliptical satellite galaxies in the Local Group include NGC 147, NGC 185, and NGC 205, which are satellites of our neighboring Andromeda galaxy. Dwarf spheroidal satellite galaxies Dwarf spheroidal satellite galaxies are characterized by their diffuse appearance, low surface brightness, high mass-to-light ratio (i.e. dark matter dominated), low metallicity, low gas fractions and old stellar population. Moreover, dwarf spheroidals make up the largest population of known satellite galaxies of the Milky Way. A few of these satellites include Hercules, Pisces II and Leo IV, which are named after the constellation in which they are found. Transitional types As a result of minor mergers and environmental effects, some dwarf galaxies are classified as intermediate or transitional type satellite galaxies. For example, Phoenix and LGS3 are classified as intermediate types that appear to be transitioning from dwarf irregulars to dwarf spheroidals. Furthermore, the Large Magellanic Cloud is considered to be in the process of transitioning from a dwarf spiral to a dwarf irregular. Formation of satellite galaxies According to the standard model of cosmology (known as the ΛCDM model), the formation of satellite galaxies is intricately connected to the observed large-scale structure of the Universe. Specifically, the ΛCDM model is based on the premise that the observed large-scale structure is the result of a bottom-up hierarchical process that began after the recombination epoch in which electrically neutral hydrogen atoms were formed as a result of free electrons and protons binding together. As the ratio of neutral hydrogen to free protons and electrons grew, so did fluctuations in the baryonic matter density. These fluctuations rapidly grew to the point that they became comparable to dark matter density fluctuations. Moreover, the smaller mass fluctuations grew to nonlinearity, became virialized (i.e. reached gravitational equilibrium), and were then hierarchically clustered within successively larger bound systems. The gas within these bound systems condensed and rapidly cooled into cold dark matter halos that steadily increased in size by coalescing together and accumulating additional gas via a process known as accretion. The largest bound objects formed from this process are known as superclusters, such as the Virgo Supercluster, that contain smaller clusters of galaxies that are themselves surrounded by even smaller dwarf galaxies. Furthermore, in this model dwarfs galaxies are considered to be the fundamental building blocks that give rise to more massive galaxies, and the satellites that are observed around these galaxies are the dwarfs that have yet to be consumed by their host. Accumulation of mass in dark matter halos A crude yet useful method to determine how dark matter halos progressively gain mass through mergers of less massive halos can be explained using the excursion set formalism, also known as the extended Press-Schechter formalism (EPS). 
Among other things, the EPS formalism can be used to infer the fraction of mass that originated from collapsed objects of a specific mass at an earlier time by applying the statistics of Markovian random walks to the trajectories of mass elements in $(S, \delta)$-space, where $S$ and $\delta$ represent the mass variance and overdensity, respectively. In particular, the EPS formalism is founded on the ansatz that states "the fraction of trajectories with a first upcrossing of the barrier $\delta = \delta_c(t)$ at $S > S_1$ is equal to the mass fraction at time $t$ that is incorporated in halos with masses $M < M_1$". Consequently, this ansatz ensures that each trajectory will upcross the barrier given some arbitrarily large $S$, and as a result it guarantees that each mass element will ultimately become part of a halo. Furthermore, the fraction of mass that originated from collapsed objects of a specific mass at an earlier time can be used to determine the average number of progenitors at time $t_1$ within the mass interval $(M_1, M_1 + \mathrm{d}M_1)$ that have merged to produce a halo of mass $M_2$ at time $t_2$. This is accomplished by considering a spherical region of mass $M_2$ with a corresponding mass variance $S_2$ and linear overdensity $\delta_2 = \delta_c / D(t_2)$, where $D(t)$ is the linear growth rate, normalized to unity at the present time, and $\delta_c$ is the critical overdensity at which the initial spherical region has collapsed to form a virialized object. Mathematically, the progenitor mass function is expressed as $n(M_1, t_1 \mid M_2, t_2)\,\mathrm{d}M_1 = \frac{M_2}{M_1}\, f(S_1, \delta_1 \mid S_2, \delta_2)\,\left|\frac{\mathrm{d}S_1}{\mathrm{d}M_1}\right|\,\mathrm{d}M_1$, where $\delta_1 = \delta_c / D(t_1)$ and $f(S_1, \delta_1 \mid S_2, \delta_2)\,\mathrm{d}S_1 = \frac{1}{\sqrt{2\pi}}\,\frac{\delta_1 - \delta_2}{(S_1 - S_2)^{3/2}}\,\exp\!\left[-\frac{(\delta_1 - \delta_2)^2}{2(S_1 - S_2)}\right]\mathrm{d}S_1$ is the Press–Schechter multiplicity function that describes the fraction of mass associated with halos in a range $\mathrm{d}S_1$. Various comparisons of the progenitor mass function with numerical simulations have concluded that good agreement between theory and simulation is obtained only when $\delta_1 - \delta_2$ is small; otherwise the mass fraction in high-mass progenitors is significantly underestimated, which can be attributed to crude assumptions such as assuming a perfectly spherical collapse model and using a linear density field as opposed to a non-linear density field to characterize collapsed structures. Nevertheless, the utility of the EPS formalism is that it provides a computationally friendly approach for determining properties of dark matter halos. Halo merger rate Another utility of the EPS formalism is that it can be used to determine the rate at which a halo of initial mass $M$ merges to form a halo with mass between $M$ and $M + \Delta M$; this rate follows from the progenitor mass function in the limit of an infinitesimally small time step. In general the change in mass, $\Delta M$, is the sum of a multitude of minor mergers. Nevertheless, given an infinitesimally small time interval it is reasonable to consider the change in mass to be due to a single merger event in which $M$ transitions to $M + \Delta M$. Galactic cannibalism (minor mergers) Throughout their lifespan, satellite galaxies orbiting in the dark matter halo experience dynamical friction and consequently descend deeper into the gravitational potential of their host as a result of orbital decay. Throughout the course of this descent, stars in the outer region of the satellite are steadily stripped away due to tidal forces from the host galaxy. This process, which is an example of a minor merger, continues until the satellite is completely disrupted and consumed by the host galaxy. Evidence of this destructive process can be observed in stellar debris streams around distant galaxies. Orbital decay rate As satellites orbit their host and interact with each other they progressively lose small amounts of kinetic energy and angular momentum due to dynamical friction. Consequently, the distance between the host and the satellite progressively decreases in order to conserve angular momentum. 
This process continues until the satellite ultimately merges with the host galaxy. Furthermore, if we assume that the host is a singular isothermal sphere (SIS) and the satellite is a SIS that is sharply truncated at the radius at which it begins to accelerate towards the host (known as the Jacobi radius), then the time that it takes for dynamical friction to result in a minor merger can be approximated in terms of the initial orbital radius $r_i$, the velocity dispersion of the host galaxy $\sigma_h$, the velocity dispersion of the satellite $\sigma_s$, and the Coulomb logarithm $\ln\Lambda$, with $\Lambda$ expressed through the maximum impact parameter, the half-mass radius and the typical relative velocity of the encounter. Moreover, both the half-mass radius and the typical relative velocity can be rewritten in terms of the satellite's radius and velocity dispersion. Using the Faber–Jackson relation, the velocity dispersions of satellites and their host can be estimated individually from their observed luminosities. Therefore, it is possible to estimate the time that it takes for a satellite galaxy to be consumed by the host galaxy. Minor merger driven star formation In 1978, pioneering work involving the measurement of the colors of merger remnants by the astronomers Beatrice Tinsley and Richard Larson gave rise to the notion that mergers enhance star formation. Their observations showed that an anomalous blue color was associated with the merger remnants. Prior to this discovery, astronomers had already classified stars (see stellar classification) and it was known that young, massive stars were bluer due to their light radiating at shorter wavelengths. Furthermore, it was also known that these stars live short lives due to their rapid consumption of fuel to remain in hydrostatic equilibrium. Therefore, the observation that merger remnants were associated with large populations of young, massive stars suggested that mergers induced rapid star formation (see starburst galaxy). Since this discovery was made, various observations have verified that mergers do indeed induce vigorous star formation. Despite major mergers being far more effective at driving star formation than minor mergers, it is known that minor mergers are significantly more common than major mergers, so the cumulative effect of minor mergers over cosmic time is postulated to also contribute heavily to bursts of star formation. Minor mergers and the origins of thick disk components Observations of edge-on galaxies suggest the universal presence of a thin disk, thick disk and halo component of galaxies. Despite the apparent ubiquity of these components, there is still ongoing research to determine if the thick disk and thin disk are truly distinct components. Nevertheless, many theories have been proposed to explain the origin of the thick disk component, and among these theories is one that involves minor mergers. In particular, it is speculated that the preexisting thin disk component of a host galaxy is heated during a minor merger and consequently the thin disk expands to form a thicker disk component. See also References Extragalactic astronomy
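To make the extended Press–Schechter machinery above concrete, here is a minimal Python sketch of the conditional (progenitor) multiplicity function in its standard form; the function name, the illustrative values of the mass variances and barrier heights, and the use of NumPy are my own choices for illustration and are not taken from the article.

```python
import numpy as np

def eps_multiplicity(S1, delta1, S2, delta2):
    """Standard EPS conditional first-crossing distribution f(S1, d1 | S2, d2).

    Fraction of the mass of a halo identified at (S2, delta2) that was locked up
    in progenitors with mass variance in [S1, S1 + dS1] (higher S1 means lower
    progenitor mass) at an earlier time when the barrier height was delta1 > delta2.
    """
    dS = S1 - S2          # difference in mass variance, positive for progenitors
    dd = delta1 - delta2  # difference in barrier height (earlier time gives larger delta)
    return dd / (np.sqrt(2.0 * np.pi) * dS**1.5) * np.exp(-dd**2 / (2.0 * dS))

# Illustrative numbers only: a halo identified today (delta2 ~ 1.686, S2 = 1.0)
# and its progenitors at an earlier time when the barrier stood at delta1 = 2.0.
S1 = np.linspace(1.1, 5.0, 5)
print(eps_multiplicity(S1, delta1=2.0, S2=1.0, delta2=1.686))
```

The progenitor mass function quoted above is then obtained by weighting this distribution by $M_2/M_1$ and by the Jacobian $|\mathrm{d}S_1/\mathrm{d}M_1|$ of whatever mass-variance relation $S(M)$ follows from the adopted power spectrum.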
Satellite galaxy
[ "Astronomy" ]
3,191
[ "Extragalactic astronomy", "Astronomical sub-disciplines" ]
2,973,309
https://en.wikipedia.org/wiki/Smallworld
Smallworld is the brand name of a portfolio of GIS software provided by GE Digital, a division of General Electric. The software was originally created by the Smallworld company founded in Cambridge, England, in 1989 by Dick Newell and others. By 2010 Smallworld had grown to become the global market leader in GIS focused on utilities and communications, and it remains strong in this sector today. Smallworld was acquired by GE Energy in September 2000. Smallworld technology supports focused application products for the telecommunications and utility industries. Smallworld GIS Solution Portfolio Smallworld applications are based upon GE's Smallworld Geographic Information System (GIS) and primarily provide industry-focused products for: Electric Transmission and Distribution Utilities: Smallworld Electric Office, GIS Adapter Telecommunications: Smallworld Physical Network Inventory Gas Distribution and Transmission Utilities: Smallworld Gas Distribution Office, Smallworld Global Transmission Office, MAOP Water and Wastewater Utilities: Smallworld Water Office Smallworld GIS is also used by a number of customers outside of these industries to provide the basis for other applications, such as for rail and road transportation. In addition, GE provides a number of cross-industry and integration tools, including: GeoSpatial Analysis for geospatial business intelligence, with the ability to visualise data from a wide range of spatial and non-spatial data sources such as ESRI Shape files, AutoCAD DWG, Microsoft Excel, etc. Smallworld GeoSpatial Server for service-based integration and web mapping Mobile Enterprise for field enablement of workflows Google Maps integration enables access to Google Maps and Street View directly within the Smallworld applications. Integration of Safe's FME product allows import from and export to other GIS such as Esri ArcGIS, Hexagon Intergraph, Pitney Bowes MapInfo, etc. Smallworld Business Integrator enables integration with ERP systems such as SAP and IBM Maximo. Smallworld GIS Adapter provides a tightly integrated mechanism for sharing the as-built network with operational systems. Technology GE Digital's Smallworld GIS platform utilises a number of technologies: The 64-bit Java Virtual Machine, which supports development in both Java and Magik (an object-oriented programming language that supports multiple inheritance, polymorphism, multi-threading and dynamic typing). A database technology called Version Managed Data Store (VMDS) that has been designed and optimized for storing and analyzing complex spatial and topological data and supports alternative versions of the data during long transactions to manage the progression of assets through their lifecycle (plan, design, build, operate, maintain). A highly secure and reliable server layer providing web-based access and integration, based upon containers orchestrated by Kubernetes, which supports streamlining and automating enterprise deployments. The solution can be operated on-premises or deployed into the major public cloud providers such as OCI, Amazon AWS, Microsoft Azure and Google Cloud. References Companies based in Cambridge General Electric GIS software companies GIS software History of computing in the United Kingdom Science and technology in Cambridgeshire
Smallworld
[ "Technology" ]
625
[ "History of computing", "History of computing in the United Kingdom" ]
2,973,525
https://en.wikipedia.org/wiki/Confidence
Confidence is the feeling of belief or trust that a person or thing is reliable. Self-confidence is trust in oneself. Self-confidence involves a positive belief that one can generally accomplish what one wishes to do in the future. Self-confidence is not the same as self-esteem, which is an evaluation of one's worth. Self-confidence is related to self-efficacy—belief in one's ability to accomplish a specific task or goal. Confidence can be a self-fulfilling prophecy, as those without it may fail because they lack it, and those with it may succeed because they have it rather than because of an innate ability or skill. History Ideas about the causes and effects of self-confidence have appeared in English-language publications describing characteristics of a sacrilegious attitude toward God, the character of the British empire, and the culture of colonial-era American society. In 1890, the philosopher William James in his Principles of Psychology wrote, "Believe what is in the line of your needs, for only by such belief is the need fulfilled... Have faith that you can successfully make it, and your feet are nerved to its accomplishment". With World War I, psychologists praised self-confidence as greatly decreasing nervous tension, allaying fear, and ridding the battlefield of terror; they argued that soldiers who cultivated a strong and healthy body would also acquire greater self-confidence while fighting. At the height of the temperance movement of the 1920s, psychologists associated self-confidence in men with remaining at home and taking care of the family when they were not working. During the Great Depression, academics Philip Eisenberg and Paul Lazarsfeld wrote that a sudden negative change in one's circumstances, especially a loss of a job, could lead to decreased self-confidence, but more commonly if the jobless person believes the fault of his unemployment is his. They also noted how if individuals do not have a job long enough, they become apathetic and lose all self-confidence. In 1943, American psychologist Abraham Maslow argued in his paper "A Theory of Human Motivation" that an individual is only motivated to acquire self-confidence (one component of "esteem") after achieving what they need for physiological survival, safety, and love and belonging. He claimed that satisfaction with self-esteem led to feelings of self-confidence that, once attained, led to a desire for "self-actualization". As material standards of most people rapidly rose in developed countries after World War II and fulfilled their material needs, a plethora of widely cited academic research about confidence and related concepts like self-esteem and self-efficacy emerged. Research Measures One of the earliest measures of self-confidence used a 12-point scale, ranging from a minimum score characterizing someone who is "timid and self-distrustful, shy, never makes decisions, self-effacing" to a maximum score characterizing someone who is "able to make decisions, absolutely confident and sure of his own decisions and opinions". Some researchers have measured self-confidence as a simple construct divided into affective and cognitive components: anxiety as an affective aspect and self-evaluations of proficiency as a cognitive component. Other researchers have used body language proxies, rather than self-reports, to measure self-confidence by having examiners measure on a scale of 1to5 the subject's body language such as eye contact, fidgeting, posture, facial expressions, and gestures. 
Some methods measure self-esteem and self-confidence in various aspects or activities, such as speaking in public spaces, academic performance, physical appearance, romantic relationships, social interactions, and athletic ability. In sports, researchers have measured athletes' confidence about winning upcoming matches and how sensitive respondents' self-confidence is to performance and negative feedback. Abraham Maslow and others have emphasized the need to distinguish between self-confidence as a generalized personality characteristic and self-confidence concerning a specific task, ability, or challenge (i.e., self-efficacy). The term "self-confidence" typically refers to a general personality trait— in contrast, "self-efficacy" is defined by psychologist Albert Bandura as a "belief in one's ability to succeed in specific situations or accomplish a task". Factors correlated with self-confidence Various factors within and beyond an individual's control may affect their self-confidence. An individual's self-confidence can vary in different environments, such as at home or at school, and concerning different types of relationships and situations. When people attribute their success to a matter under their control, they are less likely to be confident about being successful in the future. If someone attributes their failure to a factor beyond their control, they are more likely to be confident about succeeding in the future. If a person believes they failed to achieve a goal because of a factor that was beyond their control, they are more likely to be more self-confident that they can achieve the goal in the future. One's self-confidence often increases as one satisfactorily completes particular activities. American social psychologist Leon Festinger found that self-confidence in an individual's ability may only rise or fall when that individual can compare themselves to others who are roughly similar, in a competitive environment. A person can possess self-confidence in their ability to complete a specific task (self-efficacy)—e.g. cook a good meal or write a good novel—even though they may lack general self-confidence, or conversely be self-confident though they lack the self-efficacy to achieve a particular task. These two types of self-confidence are, however, correlated with each other, and for this reason, can be easily conflated. Social psychologists have found self-confidence to be correlated with other psychological variables including saving money, influencing others, and being a responsible student. Self-confidence affects interest, enthusiasm, and self-regulation. Self-confidence is important for accomplishing goals and improving performance. Marketing researchers have found that the general self-confidence of a person is negatively correlated with their level of anxiety. Self-confidence increases a person's general well-being and one's motivation which often increases performance. It also increases one's ability to deal with stress and mental health. The more self-confident an individual is, the less likely they are to conform to the judgments of others. Higher confidence is correlated with individuals setting higher goals. When people face feelings of discontent because they do not accomplish a certain goal, people who have higher self-confidence may become even more persistent in accomplishing their goals, whereas those with low self-confidence are more prone to giving up quickly. Albert Bandura argued that a person's perceived confidence indicates capability. 
If people do not believe that they are capable of coping, they experience disruption which lowers their confidence about their performance. Salespeople who are high in self-confidence tend to set higher goals for themselves, which makes them more likely to stay employed, yield higher revenues, and generate higher customer service satisfaction. In certain fields of medical practice, patients experience a lack of self-confidence during the recovery period. This is commonly referred to as DSF or from the Latin for lack of self-confidence. This can be the case after a stroke, when the patient refrains from using a weaker lower limb due to fear of it not being strong enough. On the overconfidence effect, Martin Hilbert argues that confidence bias can be explained by a noisy conversion of objective evidence into subjective estimates, where noise is defined as the mixing of memories during the observing and remembering process. Dominic D. P. Johnson and James H. Fowler write that "overconfidence maximizes individual fitness and populations tend to become overconfident, as long as benefits from contested resources are sufficiently large compared with the cost of competition". In studies of implicit self-esteem, researchers have found that people may consciously overreport their levels of self-esteem. Inaccurate self-evaluation is commonly observed in healthy populations. In the extreme, large differences between one's self-perception and one's actual behaviour are a hallmark of several disorders that have important implications for understanding treatment-seeking and compliance. Overconfidence supports delusional thinking, such as frequently occurs in individuals with schizophrenia. Whether a person, in making a decision, seeks out additional sources of information depends on their level of self-confidence specific to that area. As the complexity of a decision increases, a person is more likely to be influenced by another person and seek out additional information. Several psychologists suggest that self-confident people are more willing to examine evidence that both supports and contradicts their attitudes. Meanwhile, people who are less self-confident and more defensive may prefer attitudinal information over information that challenges their perspectives. When individuals with low self-confidence receive feedback from others, they are averse to receiving information about their relative ability and negative informative feedback, and not averse to receiving positive feedback. If new information about an individual's performance is negative feedback, this may interact with a negative affective state (low self-confidence) causing the individual to become demoralized, which in turn induces a self-defeating attitude that increases the likelihood of failure in the future more than if they did not lack self-confidence. People may be more self-confident about what they believe if they consult sources of information that agree with their world views. People may deceive themselves about their positive qualities and the negative qualities of others so that they can display greater self-confidence than they might otherwise feel, thereby enabling them to advance socially and materially. Perceptions of self-confidence in others People with high self-confidence are more likely to impress others, as others perceive them as more knowledgeable and more likely to make correct judgments. 
Despite this, a negative correlation is sometimes found between the level of their self-confidence and the accuracy of their claims. When people are uncertain and unknowledgeable about a topic, they are more likely to believe the testimony, and follow the advice of those that seem self-confident. However, expert psychological testimony on the factors that influence eyewitness memory appears to reduce juror reliance on self-confidence. People prefer leaders with greater self-confidence over those with less self-confidence. Self-confident leaders tend to influence others through persuasion instead of resorting to coercive means. They are more likely to resolve issues by referring them to another qualified person or calling upon bureaucratic procedures, which avoid personal involvement. Others suggest that self-confidence does not affect leadership style but is only correlated with years of supervisory experience and self-perceptions of power. Variation in different groups Social scientists have discovered that self-confidence operates differently in different categories of people. Children and students In children, self-confidence emerges differently than in adults. For example, only children as a group may be more self-confident than other children. If children are self-confident, they may be more likely to sacrifice immediate recreational time for possible rewards in the future, enhancing their self-regulatory capability. Successful performance of children in music increases feelings of self-confidence, increasing motivation for study. By adolescence, youth who have little contact with friends tend to have low self-confidence. In adolescents, low self-confidence may be a predictor of loneliness. In general, students who perform well have increased confidence, which likely in turn encourages them to take greater responsibility to complete tasks. Teachers affect the self-confidence of their students depending on how they treat them. Students who perform better receive more positive evaluation reports and have greater self-confidence. Characteristically low-achieving students report less confidence, while characteristically high-performing students report higher self-confidence. Extracurricular activities in school settings can boost confidence in students at earlier ages. These include participation in games or sports, visual and performing arts, and public speaking. In a phenomenon known as stereotype threat, African American students perform more poorly on exams (relative to White American students) if they must reveal their racial identities before the exam. A similar phenomenon has been found in female students' performance (relative to male students) on math tests. The opposite has been observed in Asian Americans, whose confidence becomes tied up in expectations that they will succeed by both parents and teachers and who claim others perceive them as excelling academically more than they are. Male university students may be more confident than their female counterparts. In regards to inter-ethnic interaction and language learning, those who engage more with people of different ethnicity and language become more self-confident in interacting with them. Men and women Women who are either high or low in general self-confidence are more likely to be persuaded to change their opinion than women with medium self-confidence. However, when specific high confidence (self-efficacy) is high, generalized confidence plays less of a role. 
Men who have low generalized self-confidence are more easily persuaded than men of high generalized self-confidence. Women tend to respond less to negative feedback and be more averse to negative feedback than men. In experiments conducted by economists Muriel Niederle and Lise Vesterlund, the researchers found that male overconfidence and male preference for competition contributed to higher male participation in a competitive tournament scheme, while risk and feedback aversion played a negligible role. Some scholars partly attribute the fact of women being less likely to persist in engineering college than men to women's diminished sense of self-confidence. More self-confident women may receive high-performance evaluations but not be as well-liked as men who engage in the same behaviour. Confident women may be considered a better job candidate than both men and women who behaved modestly. Male common stock investors trade 45% more than their female counterparts, which they attribute to greater recklessness (though also self-confidence) of men, reducing men's net returns by 2.65 percentage points per year versus women's 1.72 percentage points. Women report lower self-confidence levels than men in supervising subordinates. One study found that women who viewed commercials with women in traditional gender roles appeared less self-confident in giving a speech than those who viewed commercials with women taking on more masculine roles. Such self-confidence may also be related to body image, as one study found a sample of overweight people in Australia and the US are less self-confident about their body's performance than people of average weight, and the difference is even greater for women than for men. Others found that if a newborn is separated from its mother upon delivery, the mother is less self-confident in her ability to raise that child than one who was not separated from her child. Furthermore, women who initially had low self-confidence are likely to experience a larger drop of self-confidence after separation from their children than women with relatively higher self-confidence. Heterosexual men who exhibit greater self-confidence relative to other men more easily attract single and partnered women. Athletes Self-confidence is one of the most influential factors in how well an athlete performs in a competition. In particular, "robust self-confidence beliefs" are correlated with aspects of mental toughness—the ability to cope better than one's opponents and remain focused under pressure. These traits enable athletes to "bounce back from adversity". When athletes confront stress while playing sports, their self-confidence decreases. However, feedback from their team members in the form of emotional and informational support reduces the extent to which stresses in sports reduce their self-confidence. At high levels of support, performance-related stress does not affect self-confidence. Among gymnasts, those who tend to talk to themselves in an instructional format tend to be more self-confident than those who do not. In a group, members' desire for success and confidence can also be related. Groups that had a higher desire for success did better in performance than groups with a weaker desire. The more frequently a group succeeded, the more interest they had in the activity and success. Self-confidence in different cultures The utility of self-confidence may vary by culture. Some find Asians perform better when they lack confidence, especially when compared to North Americans. 
See also References Emotions Narcissism Positive psychology
Confidence
[ "Biology" ]
3,304
[ "Behavior", "Narcissism", "Human behavior" ]
2,973,816
https://en.wikipedia.org/wiki/Stickum
Stickum is a trademark adhesive of Mueller Sports Medicine, of Prairie du Sac, Wisconsin, United States. It is available in powder, paste, and aerosol spray forms. According to the company website, the spray form helps improve grip "even in wet conditions". Suggested uses include bat handles and vaulting poles, with many vendors also promoting the product for use by weightlifters and for various other athletic applications. Stickum, along with other adhesive or "sticky" substances (such as glue, rosin/tree sap, or food substances), was used for years in the National Football League to assist players in gripping the ball. The use of adhesives such as Stickum was banned by the league in 1981, and the resulting rule became known as the "Lester Hayes rule", named after Oakland Raiders defensive back Lester Hayes, known for his frequent use of Stickum. Despite the ban, Hall of Famer Jerry Rice freely admitted to illegally using Stickum throughout his career, leading many fans to question the integrity of his receiving records. Rice's claim that "all players" in his era used Stickum was quickly denied by Hall of Fame contemporaries Cris Carter and Michael Irvin. In the National Basketball Association, Houston Rockets center Dwight Howard was caught using Stickum in a game against the Atlanta Hawks in 2016. References External links Page on the Mueller Sports Medicine website Cheating in sports Sporting goods brands
Stickum
[ "Physics" ]
291
[ "Materials stubs", "Materials", "Matter" ]
2,973,832
https://en.wikipedia.org/wiki/Dimensional%20transmutation
In particle physics, dimensional transmutation is a physical mechanism providing a linkage between a dimensionless parameter and a dimensionful parameter. In classical field theory, such as gauge theory in four-dimensional spacetime, the coupling constant is a dimensionless constant. However, upon quantization, logarithmic divergences in one-loop diagrams of perturbation theory imply that this "constant" actually depends on the typical energy scale of the processes under consideration, called the renormalization group (RG) scale $\mu$. This "running" of the coupling is specified by the beta function of the renormalization group. Consequently, the interaction may be characterised by a dimensionful parameter $\Lambda$, namely the value of the RG scale at which the coupling constant diverges. In the case of quantum chromodynamics, this energy scale is called the QCD scale, and its value of 220 MeV supplants the role of the original dimensionless coupling constant in the form of the logarithm (at one loop) of the ratio of $\mu$ and $\Lambda$. Perturbation theory, which produced this type of running formula, is only valid for a (dimensionless) coupling $g \ll 1$. In the case of QCD, the energy scale $\Lambda$ is an infrared cutoff, such that $\Lambda \ll \mu$ implies $g \ll 1$, with $\mu$ the RG scale. On the other hand, in the case of theories such as QED, $\Lambda$ is an ultraviolet cutoff, such that $\mu \ll \Lambda$ implies $g \ll 1$. This is also a way of saying that the conformal symmetry of the classical theory is anomalously broken upon quantization, thereby setting up a mass scale. See conformal anomaly. References Quantum field theory Renormalization group
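As a concrete illustration of the running just described, the following is a minimal Python sketch of the standard one-loop QCD formula that trades the dimensionless coupling for the dimensionful scale Λ; the choice of five quark flavours, the 0.22 GeV value for Λ and the sample scales are illustrative assumptions, not parameters taken from this article.

```python
import math

def alpha_s_one_loop(mu_gev, lambda_qcd_gev=0.22, n_flavors=5):
    """One-loop running strong coupling: alpha_s(mu) = 12*pi / ((33 - 2*n_f) * ln(mu^2 / Lambda^2)).

    The dimensionless coupling is entirely fixed by the dimensionful scale Lambda,
    which is the essence of dimensional transmutation. Only meaningful for mu >> Lambda.
    """
    b = 33.0 - 2.0 * n_flavors
    return 12.0 * math.pi / (b * math.log(mu_gev**2 / lambda_qcd_gev**2))

for mu in (1.0, 10.0, 91.2, 1000.0):  # GeV; 91.2 GeV is roughly the Z boson mass
    print(f"mu = {mu:7.1f} GeV  ->  alpha_s ~ {alpha_s_one_loop(mu):.3f}")
```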
Dimensional transmutation
[ "Physics" ]
342
[ "Quantum field theory", "Physical phenomena", "Critical phenomena", "Quantum mechanics", "Renormalization group", "Statistical mechanics", "Quantum physics stubs" ]
2,973,866
https://en.wikipedia.org/wiki/Momentum%20transfer
In particle physics, wave mechanics, and optics, momentum transfer is the amount of momentum that one particle gives to another particle. It is also called the scattering vector as it describes the transfer of wavevector in wave mechanics. In the simplest example of scattering of two colliding particles with initial momenta $\vec{p}_{i1}, \vec{p}_{i2}$, resulting in final momenta $\vec{p}_{f1}, \vec{p}_{f2}$, the momentum transfer is given by $\vec{q} = \vec{p}_{i1} - \vec{p}_{f1} = \vec{p}_{f2} - \vec{p}_{i2}$, where the last identity expresses momentum conservation. Momentum transfer is an important quantity because the corresponding length scale $\hbar/|\vec{q}|$ is a better measure for the typical distance resolution of the reaction than the momenta themselves. Wave mechanics and optics A wave carries a momentum $\vec{p} = \hbar\vec{k}$, which is a vectorial quantity. The difference of the momentum of the scattered wave to the incident wave is called momentum transfer. The wave number $k$ is the absolute value of the wave vector $\vec{k}$ and is related to the wavelength $\lambda$ by $k = 2\pi/\lambda$. Momentum transfer is given in wavenumber units in reciprocal space as $\vec{Q} = \vec{k}_f - \vec{k}_i$. Diffraction The momentum transfer plays an important role in the evaluation of neutron, X-ray, and electron diffraction for the investigation of condensed matter. Laue–Bragg diffraction occurs on the atomic crystal lattice, conserves the wave energy and thus is called elastic scattering, where the wave numbers of the final and incident particles, $k_f$ and $k_i$, respectively, are equal and just the direction changes by a reciprocal lattice vector $\vec{G}$, which is related to the lattice spacing $d$ by $|\vec{G}| = 2\pi/d$. As momentum is conserved, the transfer of momentum occurs to crystal momentum. The presentation in reciprocal space is generic and does not depend on the type of radiation and wavelength used but only on the sample system, which allows results obtained from many different methods to be compared. Some established communities such as powder diffraction employ the diffraction angle $2\theta$ as the independent variable, which worked fine in the early years when only a few characteristic wavelengths such as Cu-Kα were available. The relationship to $Q$-space is $Q = \frac{4\pi\sin\theta}{\lambda}$, which basically states that larger diffraction angles correspond to larger $Q$. See also References diffraction momentum neutron-related techniques synchrotron-related techniques
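As a quick numerical illustration of the relation between the diffraction angle and $Q$ given above, here is a small Python helper; the function name and the use of the Cu-Kα wavelength (about 1.5406 Å) are my own illustrative choices rather than values stated in the article.

```python
import math

def momentum_transfer_q(two_theta_deg, wavelength_angstrom):
    """Elastic momentum transfer Q = 4*pi*sin(theta)/lambda, in inverse angstroms."""
    theta = math.radians(two_theta_deg / 2.0)  # theta is half of the diffraction angle 2-theta
    return 4.0 * math.pi * math.sin(theta) / wavelength_angstrom

# Cu K-alpha, a common laboratory X-ray wavelength (~1.5406 angstrom, assumed here)
for two_theta in (10.0, 30.0, 90.0):
    q = momentum_transfer_q(two_theta, 1.5406)
    d = 2.0 * math.pi / q  # corresponding real-space spacing probed
    print(f"2theta = {two_theta:5.1f} deg  ->  Q = {q:.3f} 1/A,  d = {d:.3f} A")
```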
Momentum transfer
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
391
[ "Spectrum (physical sciences)", "Physical quantities", "Quantity", "Diffraction", "Crystallography", "Spectroscopy", "Momentum", "Moment (physics)" ]
2,973,881
https://en.wikipedia.org/wiki/Tunnel%20injection
Tunnel injection is a field electron emission effect; specifically a quantum process called Fowler–Nordheim tunneling, whereby charge carriers are injected to an electric conductor through a thin layer of an electric insulator. It is used to program NAND flash memory. The process used for erasing is called tunnel release. This injection is achieved by creating a large voltage difference between the gate and the body of the MOSFET. When VGB >> 0, electrons are injected into the floating gate. When VGB << 0, electrons are forced out of the floating gate. An alternative to tunnel injection is the spin injection. See also Hot carrier injection References Quantum mechanics Semiconductors
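What makes this programming mechanism work is the extremely steep dependence of the Fowler–Nordheim tunnelling current on the electric field across the oxide. The sketch below is a rough Python illustration of the textbook functional form J = A·E²·exp(−B/E); the numerical constants A and B are placeholders of my own choosing (they lump together barrier height and effective mass) and are not device values from this article.

```python
import math

def fowler_nordheim_current_density(e_field_v_per_cm, A=1.0e-6, B=2.5e8):
    """Rough Fowler-Nordheim form J = A * E^2 * exp(-B / E).

    A [A/V^2] and B [V/cm] are placeholder constants chosen only to show the
    shape of the curve, not to model a particular flash cell.
    """
    return A * e_field_v_per_cm**2 * math.exp(-B / e_field_v_per_cm)

# Roughly doubling the oxide field raises the injected current by many orders of
# magnitude, which is why a large gate-to-body voltage difference is needed to
# program (inject) or erase (release) the floating gate.
for e_field in (5.0e6, 1.0e7, 2.0e7):  # V/cm
    print(f"E = {e_field:.1e} V/cm  ->  J ~ {fowler_nordheim_current_density(e_field):.3e} A/cm^2")
```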
Tunnel injection
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
136
[ "Electrical resistance and conductance", "Matter", "Physical quantities", "Semiconductors", "Theoretical physics", "Quantum mechanics", "Materials", "Electronic engineering", "Condensed matter physics", "Solid state engineering", "Quantum physics stubs" ]
2,973,884
https://en.wikipedia.org/wiki/Quenched%20approximation
In lattice field theory, the quenched approximation is an approximation often used in lattice gauge theory in which the quantum loops of fermions in Feynman diagrams are neglected. Equivalently, the corresponding one-loop determinants are set to one. This approximation is often forced upon physicists because calculations with Grassmann numbers are computationally very difficult in lattice gauge theory. In particular, quenched QED is QED without dynamical electrons, and quenched QCD is QCD without dynamical quarks. Recent calculations typically avoid the quenched approximation. See also Lattice QCD References Lattice field theory
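To spell out what "setting the determinant to one" means in practice, the schematic form of the lattice partition function is written out below in standard notation; this is a generic textbook statement rather than a formula quoted from this article.

```latex
% After integrating out the fermion fields, the full lattice partition function reads
Z \;=\; \int \mathcal{D}U \,\big[\det D[U]\big]^{N_f}\, e^{-S_g[U]},
% where U are the gauge links, S_g is the gauge action, D[U] is the lattice Dirac
% operator and N_f is the number of dynamical fermion flavours.
% The quenched approximation replaces the fermion determinant by a constant,
\big[\det D[U]\big]^{N_f} \;\longrightarrow\; 1,
% so that expectation values are evaluated with the pure-gauge weight e^{-S_g[U]} alone.
```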
Quenched approximation
[ "Physics" ]
128
[ "Statistical mechanics stubs", "Theoretical physics", "Computational physics", "Theoretical physics stubs", "Statistical mechanics", "Computational physics stubs" ]
2,973,891
https://en.wikipedia.org/wiki/Duraspark
The Duraspark II is a Ford electronic ignition system. Ford Motor Company began using electronic ignitions in 1973 with the Duraspark electronic ignition system and introduced the Duraspark II system in 1976. The biggest change, apart from the control box redesign, was the large distributor cap to handle the increased spark energy. Description Ford used several models over the years. They were coded by the color of the plastic wire strain relief, or "grommet" as it is most often called, in order to make them easy to identify. In addition to the color-coding, the modules may have a keyway molded into the electrical connectors to prevent accidental use in the wrong vehicle. The system consists of a magnetic reluctor and pickup in the distributor, with a separate fender mounted ignition module to trigger the coil. Typically, Duraspark II distributors have both mechanical and vacuum advance mechanisms. Certain 1981–83 models used the EEC-III system, which uses a Dura-Spark III module (brown grommet where wires emerge) and a Dura-Spark II ignition coil. A resistance wire is used in the primary circuit. The distributors in EEC-III (and later) systems eliminate conventional mechanical and vacuum advance mechanisms. All timing is controlled by the engine computer, which is capable of firing the spark plug at any point within a 50-degree range depending on calibration. This increased spark capability requires greater separation of adjacent distributor cap electrodes to prevent cross-fire, another reason for its large-diameter distributor cap. This system is very similar to the systems used at MSD; MSD used the Duraspark during R&D. In the aftermarket The Duraspark II ignition system is a common upgrade for older Ford cars equipped with a points-type ignition. In most cases, the distributor will interchange with the older-style points distributor. The system is similar to some aftermarket systems and the control module may be easily swapped. Duraspark swaps are easy and can be run by a Duraspark box or an aftermarket box. MSD makes a harness that adapts the Duraspark mag pick up right to an aftermarket box such as the 6AL2. Re-curving a Duraspark is a way to build additional power and economy. Examples of these are available on eBay and Summit Racing. Use by AMC The Duraspark module was used by AMC starting in 1978 and continued to be used with AMC's computerized engine control. The Motorcraft Duraspark system replaced the older Prestolite system in 1978. AMC used the "blue grommet" module from 1978 and it continued on the carbureted AMC engines through the Chrysler buyout until 1991 for V8 engines and 1990 for inline 6 engines. In 1982 AMC briefly used the "yellow double grommet" module with three connectors in some passenger cars and Jeeps. References Ford Motor Company Engine technology Engines Automotive technology tradenames
Duraspark
[ "Physics", "Technology" ]
604
[ "Physical systems", "Machines", "Engine technology", "Engines" ]
2,973,937
https://en.wikipedia.org/wiki/Split%20supersymmetry
In particle physics, split supersymmetry is a proposal for physics beyond the Standard Model. History It was proposed separately in three papers. The first was by James Wells in June 2003, in a more modest form that mildly relaxed the assumption about naturalness in the Higgs potential. In May 2004 Nima Arkani-Hamed and Savas Dimopoulos argued that naturalness in the Higgs sector may not be an accurate guide for proposing new physics beyond the Standard Model, and argued that supersymmetry may be realized in a different fashion that preserved gauge coupling unification and has a dark matter candidate. In June 2004 Gian Giudice and Andrea Romanino argued from a general point of view that if one wants gauge coupling unification and a dark matter candidate, then split supersymmetry is one amongst the few theories that exist. Overview The new light (~TeV) particles in split supersymmetry (beyond the Standard Model particles) are the gauginos and Higgsinos, together with a single finely tuned Higgs boson. The Lagrangian for split supersymmetry is constrained by the existence of high-energy supersymmetry. There are five couplings in split supersymmetry: the Higgs quartic coupling and four Yukawa couplings between the Higgsinos, Higgs and gauginos. The couplings are set by one parameter, tan β, at the scale where the supersymmetric scalars decouple. Beneath the supersymmetry breaking scale, these five couplings evolve through the renormalization group equations down to the TeV scale. At a future linear collider, these couplings could be measured at the 1% level and then renormalization-group evolved up to high energies to show that the theory is supersymmetric at an exceedingly high scale. Long-lived gluinos The striking feature of split supersymmetry is that the gluino becomes a quasi-stable particle with a lifetime that could be up to 100 seconds long. A gluino that lived longer than this would disrupt Big Bang nucleosynthesis or would have been observed as an additional source of cosmic gamma rays. The gluino is long-lived because it can only decay via a squark and a quark, and because the squarks are so heavy these decays are highly suppressed. Thus, the decay rate of the gluino can roughly be estimated, in natural units, as $\Gamma \sim m_{\tilde{g}}^5 / m_{\tilde{q}}^4$, where $m_{\tilde{g}}$ is the gluino rest mass and $m_{\tilde{q}}$ the squark rest mass. For a gluino mass of the order of 1 TeV, the cosmological bound mentioned above sets an upper bound on the squark masses. The potentially long lifetime of the gluino leads to different collider signatures at the Tevatron and the Large Hadron Collider. There are three ways to see these particles: Measuring the ratio of momentum to energy or velocity in tracking chambers (dE/dx in the inner tracking chamber or p/v in the outer muon tracking chamber) Looking for excess single-jet events that arise from initial or final state radiation. Looking for gluinos that have come to rest inside the detector and later decay. Such an event may occur if the gluino hadronizes to form an exotic hadron which strongly interacts with a nucleon in the detector to create an exotic charged hadron. The latter will decelerate by electromagnetic interaction inside the detector and will eventually stop. Advantages and drawbacks Split supersymmetry allows gauge coupling unification as supersymmetry does, because the particles which have masses way beyond the TeV scale play no major role in the unification. These particles are the gravitino - which has a small coupling (of order of the gravitational interaction) to the other particles - and the scalar partners to the Standard Model fermions - namely, squarks and sleptons. 
The latter move the beta functions of all gauge couplings together, and do not influence their unification, because in the grand unification theory they form a full SU(5) multiplet, just like a complete generation of particles. Split supersymmetry also solves the gravitino cosmological problem, because the gravitino mass is much higher than TeV. The upper bounds on proton decay rate can also be satisfied because the squarks are very heavy as well. On the other hand, unlike conventional supersymmetry, split supersymmetry does not solve the hierarchy problem which has been a primary motivation for proposals for new physics beyond the Standard Model since 1979. One proposal is that the hierarchy problem is "solved" by assuming fine-tuning due to anthropic reasons. History The initial attitude of some of the high energy physics community towards split supersymmetry was illustrated by a parody called supersplit supersymmetry. Often when a new notion in physics is proposed there is a knee-jerk backlash. When naturalness in the Higgs sector was initially proposed as a motivation for new physics, the notion was not taken seriously. After the supersymmetric Standard Model was proposed, Sheldon Glashow quipped that 'half of the particles have already been discovered.' After 25 years, the notion of naturalness had become so ingrained in the community that proposing a theory that did not use naturalness as the primary motivation was ridiculed. Split supersymmetry makes predictions that are distinct from both the Standard Model and the Minimal Supersymmetric Standard Model and the ultimate nature of the naturalness in the Higgs sector will hopefully be determined at future colliders. Many of the original proponents of naturalness no longer believe that it should be an exclusive constraint on new physics. Kenneth Wilson originally advocated for it, but has recently called it one of his biggest mistakes during his career. Steven Weinberg relaxed the notion of naturalness in the cosmological constant and argued for an environmental explanation for it in 1987. Leonard Susskind, who initially proposed technicolor, is a firm advocate of the notion of a landscape and non-naturalness. Savas Dimopoulos, who initially proposed the supersymmetric Standard Model, proposed split supersymmetry. See also Minimal Supersymmetric Standard Model Supersplit supersymmetry Supersymmetry External links Implications of Supersymmetry Breaking with a Little Hierarchy between Gauginos and Scalars by James D. Wells Supersymmetric Unification Without Low Energy Supersymmetry And Signatures for Fine-Tuning at the LHC by Nima Arkani-Hamed and Savas Dimopoulos Split Supersymmetry by G.F. Giudice and A. Romanino Authority Articles on Split supersymmetry Supersymmetric quantum field theory Particle physics Physics beyond the Standard Model
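As a rough numerical companion to the gluino-lifetime estimate quoted in the Overview, the Python sketch below converts the dimensional estimate Γ ∼ m_g̃⁵/m_q̃⁴ into a lifetime in seconds and compares it with the roughly 100-second cosmological bound mentioned there; it deliberately ignores couplings, colour factors and phase space, so the numbers are order-of-magnitude illustrations only, and the sample squark masses are my own choices.

```python
HBAR_GEV_S = 6.582e-25  # hbar in GeV*s, converts a decay width in GeV into a lifetime in seconds

def gluino_lifetime_seconds(m_gluino_gev, m_squark_gev):
    """Order-of-magnitude lifetime from the dimensional estimate Gamma ~ m_gluino^5 / m_squark^4.

    Couplings, colour factors and phase space are ignored, so the result is
    illustrative only.
    """
    width_gev = m_gluino_gev**5 / m_squark_gev**4
    return HBAR_GEV_S / width_gev

m_gluino = 1.0e3  # ~1 TeV, as in the text
for m_squark in (1.0e6, 1.0e9, 1.0e12):  # GeV, illustrative values
    tau = gluino_lifetime_seconds(m_gluino, m_squark)
    verdict = "within" if tau < 100.0 else "exceeds"
    print(f"m_squark = {m_squark:.0e} GeV -> tau ~ {tau:.2e} s ({verdict} the ~100 s bound)")
```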
Split supersymmetry
[ "Physics" ]
1,379
[ "Supersymmetric quantum field theory", "Unsolved problems in physics", "Particle physics", "Physics beyond the Standard Model", "Supersymmetry", "Symmetry" ]
2,973,964
https://en.wikipedia.org/wiki/Penetron
The penetron, short for penetration tube, is a type of limited-color television used in some military applications. Unlike a conventional color television, the penetron produces a limited color gamut, typically two colors and their combination. Penetrons, and other military-only cathode ray tubes (CRTs), have been replaced by LCDs in modern designs. History Basic television A conventional black and white television (B&W) uses a tube that is uniformly coated with a phosphor on the inside face. When excited by high-speed electrons, the phosphor gives off light, typically white, but other colors are also used in certain circumstances. An electron gun at the back of the tube provides a beam of high-speed electrons, and a set of electromagnets arranged near the gun allows the beam to be moved about the display. The television signal is sent as a series of stripes, each one of which is displayed as a separate line on the display. The strength of the signal increases or decreases the current in the beam, producing bright or dark points on the display as the beam sweeps across the tube. In a color display, the uniform coating of white phosphor is replaced by dots or lines of three colored phosphors, producing red, green or blue light (RGB) when excited. These primary colors mix in the human eye to produce a single apparent color. This presents a problem for conventional electron guns, which cannot be focussed or positioned accurately enough to hit these much smaller individual patterns. A number of companies were working on various solutions to this problem in the late 1940s, using three separate tubes or a single white-output tube with colored filters placed in front of it. None of these proved practical, and this was a field of considerable development interest. Penetron The penetron was originally designed by Koller and Williams while working at General Electric (GE). It was initially developed as a novel way to build a single-gun color television with the simplicity of a conventional B&W set. Like the B&W tube, it used a uniform coating of phosphor on the display with a single electron gun at the rear. However, the phosphor coating was applied in three layers of different colors: red on the inside closest to the gun, then green, and blue on the outside closest to the front face of the tube. Colors were selected by increasing the power of the electron beam, which allowed the electrons to penetrate through the lower layers to reach the proper color. In a conventional set, voltage is used to control the brightness of the image, not its color, something that the new design also had to achieve. In the penetron, voltage is also used to select the color. To address these competing needs, the color selection was provided by an external mechanism. The gun was modulated by voltage as it would be in a B&W set, with increasing power producing a brighter spot on the screen. A set of fine wires placed behind the screen then provided the extra energy needed to select a particular color layer. Since the phosphors were relatively opaque, the system required very high accelerating voltages, between 25 and 40 kV. An improved version was introduced that used transparent phosphor layers and thin insulating layers between them, which reduced the required voltages. The dielectric ensured that stray electrons, either off-voltage electrons from the guns or secondary emission from the phosphors themselves, were stopped before they reached the screen. 
The penetron was ideally suited for use with the early CBS broadcast system, which sent color information as three separate sequential frames. CBS' experimental televisions used a mechanical filter with three color sections that spun in front of a B&W tube. The same timing signal was used in the penetron to change the voltage of the color selection grid, to the same end. The low switching rate, 144 times a second, meant that the changing high-voltage was not a major source of high-frequency noise. Unlike the mechanical CBS system, the penetron had no moving parts, could be built at any size (which was difficult to do with the disk), and had no problems with flicker. It represented a major advance in display technology. NTSC It was not long after the introduction of CBS' system that a new system was introduced by RCA that eventually won out. Unlike CBS's field-sequential system, RCA directly encoded the color for every spot on the screen, a system known as "dot-sequential". The advantage to the RCA system was that the primary component of the signal was very similar to the B&W signal used on existing sets, which meant the millions of B&W televisions would be able to receive the new signal while newer colors sets could see these in either B&W or color if that additional signal was provided. This was a huge advantage over the CBS system, and a modified version was selected by the NTSC as the new color standard in 1953. The major disadvantage was the difficulty in correctly focussing the beam on the correct color, a problem RCA solved with their shadow mask system. The shadow mask is a thin metal foil with small holes photoetched into it, positioned so the holes lie directly above one triplet of colored phosphor dots. Three separate electron guns are individually focussed on the mask, sweeping the screen as normal. When the beams pass over one of the holes, they travel through it, and since the guns are separated by a small distance from each other at the back of the tube, each beam has a slight angle as it travels through the hole. The phosphor dots are arranged on the screen such that the beams hit only their correct phosphor. To ensure the holes line up with the dots, the mask is used to create the dots using photosensitive material. The new broadcast system presented a serious problem for the penetron. The signal required the color to be selected at high speeds "on the fly" as the beam was being drawn across the screen. This meant the high voltage color selection grid had to be rapidly cycled, which presented numerous problems, notably high-frequency noise that filled the interior of the tube and interfered with the receiver electronics. Another modification was introduced to address this issue, using three separate guns, each fed with a different base voltage tuned to hit one of the layers. In this version no switching was required, eliminating the high-frequency noise. Producing such a system proved difficult in practice, and for home television use GE instead introduced their "Porta-Color" system, a dramatic improvement on RCA's shadow mask system. Other developers continued working with the basic system attempting to find ways of solving the high frequency switching issues, but none of these entered commercial production. Use in avionics For other uses, however, the advantages of the penetron remained. Although it was not well suited to the dot-sequential method of color broadcast, that was only important if one was receiving over-the-air broadcasts. 
For uses where the signal could be provided in any needed format, like in computer displays, the penetron remained useful. When a full color gamut was not needed, the complexity of the penetron was further reduced and it became very attractive. This lent it to custom applications like military avionics, where the nature of the input signal was not important and the developer was free to use any signaling style they wished. In the avionics role the penetron had other advantages as well. Its use of phosphors in layers instead of stripes meant that it had higher resolution, three times that of the RCA system. This was very useful for radar display and IFF systems, where the images were often overlaid with textual cues that required high resolution to be easily readable. Additionally, since all of the signal reached the screen in a penetron, as opposed to 15% of it in a shadow mask tube, for any given amount of power the penetron was much brighter. This was a major advantage in the avionics role where power budgets were often quite limited, yet the displays were often hit with direct sunlight and needed to be very bright. The lack of the shadow mask also meant the penetron was much more robust mechanically, and didn't suffer from color shifting under g-loads. Penetrons were used from the late 1960s to the mid-1980s, mostly for radar or IFF systems where two-color displays (green/red/yellow) were commonly used. Improvements in conventional shadow masks removed most of its advantages during this period. Better focusing allowed the size of the holes in the shadow mask to increase in proportion to the opaque area, which improved display brightness. Brightness was further improved with the introduction of newer phosphors. Problems with doming were addressed through the use of invar shadow masks that were mechanically robust and attached to the tube using a strong metal frame. Other uses Penetron displays were also offered as an option on some graphics terminals, where the high speed color-switching was not required and the penetron's limited gamut was not a concern. IDI offered such displays as an $8,000 option on their IDIgraph and IDIIOM series terminals. Tektronix, a major manufacturer of oscilloscopes, offered a limited gamut of color in some of its CRT oscilloscopes, using Penetron-type technology. Description In most versions of the penetron the tube has an inner layer of red and outer layer of green, separated by a thin dielectric layer. A complete image is produced by scanning twice, once with the gun set to a lower power that is stopped in the red layer, and then again at a higher power that travels through the red layer and into the green. Yellow can be produced by hitting the same location on both sweeps. In a display where the colors are either on or off and various brightness levels do not have to be created, the system can be further simplified by removing the color selection grid and modulating the voltage of the electron gun itself. However, this also causes problems because the electrons will reach the screen faster when accelerated with higher voltages, which means that the deflection system has to be increased in power as well to ensure the scanning creates the same screen size and line widths on both passes. Several alternative arrangements of the penetron were experimented on to address this problem. One common attempt used an electron multiplier at the tube face instead of the selection grid. 
In this system a low-energy scanning beam was used, and magnets were set to cause the electrons to strike the sides of the multipliers. A shower of higher-energy electrons would then be released and travel to the layered phosphors of a normal penetron arrangement. It was later noticed that the beams emanating from the multipliers landed in rings, which allowed a new arrangement of phosphors in concentric rings instead of layers. The main advantage to the penetron is that it lacks the mechanical focusing system of a shadow mask television, which means that all of the beam energy reaches the screen. For any given amount of power, the penetron will be much brighter, typically 85% brighter. This is a major advantage in an aircraft setting, where power supply is limited but the displays need to be bright enough to be easily read even when directly lit by sunlight. The system is guaranteed to produce the correct colors in spite of external interference or the g-forces of maneuvering – a very important quality in aviation settings. The penetron also offered higher resolutions because the phosphor was continuous, as opposed to the small spots in a shadow mask system. Additionally, the lack of the shadow mask makes the penetron much more robust mechanically. Sinclair experimented with a variant of this technology on his early pocket TV screens, but was unable to produce an RGB version. Examples of these tubes exist as prototypes. References Citations Bibliography D. N. Jarrett, "Cockpit Engineering", Ashgate Publishing, 2005 David Morton, , Johns Hopkins University Press, 2007, G. Panigrahi: PENETRON LAND COLOR DISPLAY SYSTEM, Dept. of Computer Science, University of Illinois, Urbana-Champaign, Illinois, USA (October 1973, pdf, 12MB) Thomson-CSF, Image and Display Tubes 1977 part 2 (pdf, 193MB) data sheets: OME1199E2 - p.216ff OME1269E21 - p.220ff TH8102E20 - p.159ff TH8104E21 - p.165ff Patents U.S. Patent 2,590,018, "Production of Colored Images", Louis Koller and Fred Williams/General Electric, filed 24 October 1950, issued 18 March 1952 U.S. Patent 2,958,002, "Production of Colored Images", Dominic Cusano and Frank Studer/General Electric, filed 29 October 1954, issued 25 October 1960 U.S. Patent 2,827,593, "High Purity Color Information Screen", Louis Koller/General Electric, filed 29 April 1955, issued 18 March 1958 U.S. Patent 2,992,349, "Field Enhanced Luminescent System", Dominic Cusano/General Electric, filed 24 October 1957, issued 11 July 1961 U.S. Patent 4,612,483, "Penetron color display tube with channel plate electron multiplier", Derek Washington/Philips Electronics, filed 22 September 1983, issued 16 September 1986 See also Beam-index tube Chromatron Television technology Vacuum tube displays Early color television
Penetron
[ "Technology" ]
2,810
[ "Information and communications technology", "Television technology" ]
2,973,987
https://en.wikipedia.org/wiki/Penrose%20graphical%20notation
In mathematics and physics, Penrose graphical notation or tensor diagram notation is a (usually handwritten) visual depiction of multilinear functions or tensors proposed by Roger Penrose in 1971. A diagram in the notation consists of several shapes linked together by lines. The notation widely appears in modern quantum theory, particularly in matrix product states and quantum circuits. In particular, categorical quantum mechanics (which includes ZX-calculus) is a fully comprehensive reformulation of quantum theory in terms of Penrose diagrams. The notation has been studied extensively by Predrag Cvitanović, who used it, along with Feynman's diagrams and other related notations in developing "birdtracks", a group-theoretical diagram to classify the classical Lie groups. Penrose's notation has also been generalized using representation theory to spin networks in physics, and with the presence of matrix groups to trace diagrams in linear algebra. Interpretations Multilinear algebra In the language of multilinear algebra, each shape represents a multilinear function. The lines attached to shapes represent the inputs or outputs of a function, and attaching shapes together in some way is essentially the composition of functions. Tensors In the language of tensor algebra, a particular tensor is associated with a particular shape with many lines projecting upwards and downwards, corresponding to abstract upper and lower indices of tensors respectively. Connecting lines between two shapes corresponds to contraction of indices. One advantage of this notation is that one does not have to invent new letters for new indices. This notation is also explicitly basis-independent. Matrices Each shape represents a matrix, and tensor multiplication is done horizontally, and matrix multiplication is done vertically. Representation of special tensors Metric tensor The metric tensor is represented by a U-shaped loop or an upside-down U-shaped loop, depending on the type of tensor that is used. Levi-Civita tensor The Levi-Civita antisymmetric tensor is represented by a thick horizontal bar with sticks pointing downwards or upwards, depending on the type of tensor that is used. Structure constant The structure constants () of a Lie algebra are represented by a small triangle with one line pointing upwards and two lines pointing downwards. Tensor operations Contraction of indices Contraction of indices is represented by joining the index lines together. Symmetrization Symmetrization of indices is represented by a thick zigzag or wavy bar crossing the index lines horizontally. Antisymmetrization Antisymmetrization of indices is represented by a thick straight line crossing the index lines horizontally. Determinant The determinant is formed by applying antisymmetrization to the indices. Covariant derivative The covariant derivative () is represented by a circle around the tensor(s) to be differentiated and a line joined from the circle pointing downwards to represent the lower index of the derivative. Tensor manipulation The diagrammatic notation is useful in manipulating tensor algebra. It usually involves a few simple "identities" of tensor manipulations. For example, , where n is the number of dimensions, is a common "identity". Riemann curvature tensor The Ricci and Bianchi identities given in terms of the Riemann curvature tensor illustrate the power of the notation Extensions The notation has been extended with support for spinors and twistors. 
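The index bookkeeping that the diagrams encode can also be written with einsum-style index strings, where a shared letter plays the role of a connecting line. The following NumPy sketch is only an illustration of that correspondence; the tensors are random and the names are arbitrary.

import numpy as np

n = 4  # number of dimensions

# An edge joining an upper index of A to a lower index of B is a contraction;
# the shared letter "b" below is the diagrammatic line.
A = np.random.rand(n, n)                 # a (1,1)-tensor A^a_b
B = np.random.rand(n, n)                 # a (1,1)-tensor B^b_c
joined = np.einsum("ab,bc->ac", A, B)    # contract over the shared index b

# A line that loops back to the same shape is a trace; a closed loop on the
# Kronecker delta contracts to the dimension n, which is the common identity
# referred to in the Tensor manipulation section.
delta = np.eye(n)
print(np.einsum("aa->", delta))          # prints 4.0, i.e. n

# Antisymmetrization over two indices (the thick straight bar):
T = np.random.rand(n, n)
T_antisym = 0.5 * (T - T.T)
print(np.allclose(T_antisym, -T_antisym.T))   # True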
See also Abstract index notation Angular momentum diagrams (quantum mechanics) Braided monoidal category Categorical quantum mechanics uses tensor diagram notation Matrix product state uses Penrose graphical notation Ricci calculus Spin networks Trace diagram Notes Tensors Theoretical physics Mathematical notation Diagram algebras
Penrose graphical notation
[ "Physics", "Mathematics", "Engineering" ]
716
[ "Tensors", "Theoretical physics", "nan" ]
2,974,033
https://en.wikipedia.org/wiki/Diffeomorphism%20constraint
In theoretical physics, it is often important to study theories with the diffeomorphism symmetry such as general relativity. These theories are invariant under arbitrary coordinate transformations. Equations of motion are generally derived from the requirement that the action is stationary. There are special variations that are equivalent to spatial diffeomorphisms. The invariance of the action under these variations implies non-dynamical equations of motion i.e. constraints. These equations must be satisfied or, at least, they must annihilate the physical states in a quantum version of the theory. See also Wheeler–DeWitt equation References Quantum gravity Diffeomorphisms
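As a concrete illustration (a textbook-standard sketch using common ADM conventions, not drawn from the reference above): in canonical general relativity, invariance under spatial diffeomorphisms generated by a vector field \xi^i produces the momentum constraint, and in the quantum theory that constraint annihilates physical states.

\delta h_{ij} = \mathcal{L}_{\xi} h_{ij} = D_i \xi_j + D_j \xi_i ,
\qquad
\mathcal{H}_i = -2\, D_j \pi^{j}{}_{i} \approx 0 ,
\qquad
\hat{\mathcal{H}}_i \, |\Psi\rangle = 0 ,

where h_{ij} is the spatial metric, \pi^{ij} its conjugate momentum, and D_i the spatial covariant derivative.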
Diffeomorphism constraint
[ "Physics" ]
133
[ "Unsolved problems in physics", "Quantum gravity", "Relativity stubs", "Theory of relativity", "Physics beyond the Standard Model" ]
2,974,045
https://en.wikipedia.org/wiki/Cosmological%20natural%20selection
Cosmological natural selection, also called the fecund universes, is a hypothesis proposed by Lee Smolin intended as a scientific alternative to the anthropic principle. It addresses why our universe has the particular properties that allow for complexity and life. The hypothesis suggests that a process analogous to biological natural selection applies at the grandest of scales. Smolin first proposed the idea in 1992 and summarized it in a book aimed at a lay audience called The Life of the Cosmos, published in 1997. Hypothesis Black holes have a role in natural selection. In fecund theory a collapsing black hole causes the emergence of a new universe on the "other side", whose fundamental constant parameters (masses of elementary particles, Planck constant, elementary charge, and so forth) may differ slightly from those of the universe where the black hole collapsed. Each universe thus gives rise to as many new universes as it has black holes. The theory contains the evolutionary ideas of "reproduction" and "mutation" of universes, and so is formally analogous to models of population biology. Alternatively, black holes play a role in cosmological natural selection by reshuffling only some matter affecting the distribution of elementary quark universes. The resulting population of universes can be represented as a distribution of a landscape of parameters where the height of the landscape is proportional to the numbers of black holes that a universe with those parameters will have. Applying reasoning borrowed from the study of fitness landscapes in population biology, one can conclude that the population is dominated by universes whose parameters drive the production of black holes to a local peak in the landscape. This was the first use of the notion of a landscape of parameters in physics. Leonard Susskind, who later promoted a similar string theory landscape, stated: I'm not sure why Smolin's idea didn't attract much attention. I actually think it deserved far more than it got. However, Susskind also argued that, since Smolin's theory relies on information transfer from the parent universe to the baby universe through a black hole, it ultimately makes no sense as a theory of cosmological natural selection. According to Susskind and many other physicists, the last decade of black hole physics has shown us that no information that goes into a black hole can be lost. Even Stephen Hawking, who was the largest proponent of the idea that information is lost in a black hole, later reversed his position. The implication is that information transfer from the parent universe into the baby universe through a black hole is not conceivable. Smolin has noted that the string theory landscape is not Popper-falsifiable if other universes are not observable. This is the subject of the Smolin–Susskind debate concerning Smolin's argument: "[The] Anthropic Principle cannot yield any falsifiable predictions, and therefore cannot be a part of science." There are then only two ways out: traversable wormholes connecting the different parallel universes, and "signal nonlocality", as described by Antony Valentini, a scientist at the Perimeter Institute. In a critical review of The Life of the Cosmos, astrophysicist Joe Silk suggested that our universe falls short by about four orders of magnitude from being maximal for the production of black holes. 
In his book Questions of Truth, particle physicist John Polkinghorne puts forward another difficulty with Smolin's thesis: one cannot impose the consistent multiversal time required to make the evolutionary dynamics work, since short-lived universes with few descendants would then dominate long-lived universes with many descendants. Smolin responded to these criticisms in Life of the Cosmos and later scientific papers. When Smolin published the theory in 1992, he proposed as a prediction of his theory that no neutron star should exist with a mass of more than 1.6 times the mass of the sun. Later this figure was raised to two solar masses following more precise modeling of neutron star interiors by nuclear astrophysicists. If a more massive neutron star were ever observed, it would show that our universe's natural laws were not tuned for maximal black hole production, because the mass of the strange quark could be retuned to lower the mass threshold for production of a black hole. A 1.97-solar-mass pulsar was discovered in 2010. In 2019, the neutron star PSR J0740+6620 was found to have a mass of 2.08 ± 0.07 solar masses. In 1992 Smolin also predicted that inflation, if true, must only be in its simplest form, governed by a single field and parameter. This idea was further studied by Nikodem Poplawski. See also Black hole cosmology Biocosm Anthropic principle Quantum gravity General relativity Quantum mechanics Lee Smolin Fine-tuned universe References External links Cosmological Natural Selection — underscores the coincidence of the constants being tuned for biological life as well as for black holes, and challenges the notion of "coincidence" in this context. Scientific Alternatives to the Anthropic Principle Cosmic natural selection - Leonard Susskind's criticism of this idea Physical cosmology
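The selection dynamics described above can be caricatured with a toy simulation: universes reproduce in proportion to a "black hole count" that peaks at some parameter value, and offspring inherit slightly mutated parameters. The fitness function, mutation size, and all numbers below are arbitrary assumptions chosen for illustration, not anything derived from Smolin's model.

import random

def black_hole_count(p, peak=0.7, width=0.1):
    """Assumed fitness landscape: black hole production peaks at p = peak."""
    return max(0.0, 1.0 - ((p - peak) / width) ** 2)

def evolve(generations=60, pop_size=200, mutation=0.02):
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        # Each universe spawns offspring in proportion to its black hole count.
        weights = [black_hole_count(p) + 1e-6 for p in population]
        parents = random.choices(population, weights=weights, k=pop_size)
        # Offspring inherit the parent's parameter with a small mutation.
        population = [p + random.gauss(0.0, mutation) for p in parents]
    return population

final = evolve()
print("mean parameter:", sum(final) / len(final))   # drifts towards the peak near 0.7

After a few dozen generations the population clusters near the local peak of the assumed landscape, which is the qualitative claim made above.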
Cosmological natural selection
[ "Physics", "Astronomy" ]
1,056
[ "Astronomical sub-disciplines", "Theoretical physics", "Physical cosmology", "Astrophysics" ]
2,974,121
https://en.wikipedia.org/wiki/Higgs%20phase
In theoretical physics, it is often important to consider gauge theory that admits many physical phenomena and "phases", connected by phase transitions, in which the vacuum may be found. Global symmetries in a gauge theory may be broken by the Higgs mechanism. In more general theories such as those relevant in string theory, there are often many Higgs fields that transform in different representations of the gauge group. If they transform in the adjoint representation or a similar representation, the original gauge symmetry is typically broken to a product of U(1) factors. Because U(1) describes electromagnetism including the Coulomb field, the corresponding phase is called a Coulomb phase. If the Higgs fields that induce the spontaneous symmetry breaking transform in other representations, the Higgs mechanism often breaks the gauge group completely and no U(1) factors are left. In this case, the corresponding vacuum expectation values describe a Higgs phase. Using the representation of a gauge theory in terms of a D-brane, for example D4-brane combined with D0-branes, the Coulomb phase describes D0-branes that have left the D4-branes and carry their own independent U(1) symmetries. The Higgs phase describes D0-branes dissolved in the D4-branes as instantons. References Gauge theories
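A textbook example, stated here only as an illustration with conventions and normalizations assumed: an SU(2) gauge theory broken by an adjoint scalar versus a fundamental (doublet) scalar.

\langle \Phi_{\text{adj}} \rangle = v\,\sigma_3
\;\Rightarrow\; SU(2) \to U(1)
\quad (\text{Coulomb phase: one massless, photon-like gauge boson survives}),

\langle \phi_{\text{fund}} \rangle = \begin{pmatrix} 0 \\ v \end{pmatrix}
\;\Rightarrow\; SU(2) \text{ completely broken}
\quad (\text{Higgs phase: all three gauge bosons acquire masses } m \sim g v).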
Higgs phase
[ "Physics" ]
282
[ "Quantum mechanics", "Quantum physics stubs" ]
2,974,133
https://en.wikipedia.org/wiki/Toxinology
Toxinology is a subfield of toxicology dedicated to toxic substances produced by or occurring in living organisms. References Toxicology
Toxinology
[ "Environmental_science" ]
26
[ "Toxicology" ]
2,974,322
https://en.wikipedia.org/wiki/Dioxolane
Dioxolane is a heterocyclic acetal with the chemical formula (CH2)2O2CH2. It is related to tetrahydrofuran (THF) by replacement of the methylene group (CH2) at the 2-position with an oxygen atom. The corresponding saturated 6-membered C4O2 rings are called dioxanes. The isomeric 1,2-dioxolane (wherein the two oxygen centers are adjacent) is a peroxide. 1,3-dioxolane is used as a solvent and as a comonomer in polyacetals. As a class of compounds Dioxolanes are a group of organic compounds containing the dioxolane ring. Dioxolanes can be prepared by acetalization of aldehydes and ketalization of ketones with ethylene glycol. (+)-cis-Dioxolane is the trivial name of a muscarinic acetylcholine receptor agonist. Protecting groups Organic compounds containing carbonyl groups sometimes need protection so that they do not undergo reactions during transformations of other functional groups that may be present. A variety of approaches to protection and deprotection of carbonyls, including as dioxolanes, are known. For example, consider the compound methyl cyclohexanone-4-carboxylate, where lithium aluminium hydride reduction will produce 4-hydroxymethylcyclohexanol. The ester functional group can be reduced without affecting the ketone by protecting the ketone as a ketal. The ketal is produced by acid catalysed reaction with ethylene glycol, the reduction reaction carried out, and the protecting group removed by hydrolysis to produce 4-hydroxymethylcyclohexanone. NaBArF4 can also be used for deprotection of acetal- or ketal-protected carbonyl compounds. For example, deprotection of 2-phenyl-1,3-dioxolane to benzaldehyde can be achieved in water in five minutes at 30 °C with NaBArF4 as the catalyst: PhCH(OCH2)2 + H2O → PhCHO + HOCH2CH2OH. Natural products Neosporol is a natural product that includes a 1,3-dioxolane moiety, and is an isomer of sporol, which has a 1,3-dioxane ring. The total synthesis of both compounds has been reported, and each includes a step in which a dioxolane system is formed using trifluoroperacetic acid (TFPAA), prepared by the hydrogen peroxide–urea method. This method involves no water, so it gives a completely anhydrous peracid, necessary in this case as the presence of water would lead to unwanted side reactions. In the case of neosporol, a Prilezhaev reaction with trifluoroperacetic acid is used to convert a suitable allyl alcohol precursor to an epoxide, which then undergoes a ring-expansion reaction with a proximate carbonyl functional group to form the dioxolane ring. A similar approach is used in the total synthesis of sporol, with the dioxolane ring later expanded to a dioxane system. See also Dioxane References External links environmental and toxicological data Muscarinic agonists Solvents Protecting groups Formals
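The protect–reduce–deprotect sequence described above can be followed at the level of structures. The SMILES strings below are illustrative renderings of the intermediates written for this sketch and should be verified before reuse; RDKit is assumed to be available and is used only to confirm that they parse.

# Illustrative sketch of the protection sequence for methyl cyclohexanone-4-carboxylate.
from rdkit import Chem

steps = {
    "methyl cyclohexanone-4-carboxylate": "COC(=O)C1CCC(=O)CC1",
    "ketone protected as 1,3-dioxolane":  "COC(=O)C1CCC2(OCCO2)CC1",
    "ester reduced with LiAlH4":          "OCC1CCC2(OCCO2)CC1",
    "ketal hydrolysed (deprotected)":     "OCC1CCC(=O)CC1",
}

for name, smiles in steps.items():
    mol = Chem.MolFromSmiles(smiles)          # parse the hand-written SMILES
    print(f"{name}: {Chem.MolToSmiles(mol)}")  # print the canonical form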
Dioxolane
[ "Chemistry" ]
747
[ "Protecting groups", "Reagents for organic chemistry", "Functional groups", "Formals" ]
2,974,577
https://en.wikipedia.org/wiki/Added%20mass
In fluid mechanics, added mass or virtual mass is the inertia added to a system because an accelerating or decelerating body must move (or deflect) some volume of surrounding fluid as it moves through it. Added mass is a common issue because the object and surrounding fluid cannot occupy the same physical space simultaneously. For simplicity this can be modeled as some volume of fluid moving with the object, though in reality "all" the fluid will be accelerated, to various degrees. The dimensionless added mass coefficient is the added mass divided by the displaced fluid mass – i.e. divided by the fluid density times the volume of the body. In general, the added mass is a second-order tensor, relating the fluid acceleration vector to the resulting force vector on the body. Background Friedrich Wilhelm Bessel proposed the concept of added mass in 1828 to describe the motion of a pendulum in a fluid. The period of such a pendulum increased relative to its period in a vacuum (even after accounting for buoyancy effects), indicating that the surrounding fluid increased the effective mass of the system. The concept of added mass is arguably the first example of renormalization in physics. The concept can also be thought of as a classical physics analogue of the quantum mechanical concept of quasiparticles. It is, however, not to be confused with relativistic mass increase. It is often erroneously stated that the added mass is determined by the momentum of the fluid. That this is not the case becomes clear when considering the case of the fluid in a large box, where the fluid momentum is exactly zero at every moment of time. The added mass is actually determined by the quasi-momentum: the added mass times the body acceleration is equal to the time derivative of the fluid quasi-momentum. Virtual mass force Unsteady forces due to a change of the relative velocity of a body submerged in a fluid can be divided into two parts: the virtual mass effect and the Basset force. The origin of the force is that the fluid will gain kinetic energy at the expense of the work done by an accelerating submerged body. It can be shown that the virtual mass force, for a spherical particle submerged in an inviscid, incompressible fluid, is F_v = (ρ_c V_p / 2) (Du/Dt − dv/dt), where u is the fluid flow velocity, v is the spherical particle velocity (both vectors), ρ_c is the mass density of the fluid (continuous phase), V_p is the volume of the particle, and D/Dt denotes the material derivative. The origin of the notion "virtual mass" becomes evident when we take a look at the momentum equation for the particle, m_p dv/dt = F_other + (ρ_c V_p / 2) (Du/Dt − dv/dt), where F_other is the sum of all other force terms on the particle, such as gravity, pressure gradient, drag, lift, Basset force, etc. Moving the derivative of the particle velocity from the right hand side of the equation to the left we get (m_p + ρ_c V_p / 2) dv/dt = F_other + (ρ_c V_p / 2) Du/Dt, so the particle is accelerated as if it had an added mass of half the fluid it displaces, and there is also an additional force contribution on the right hand side due to acceleration of the fluid. Applications The added mass can be incorporated into most physics equations by considering an effective mass as the sum of the mass and added mass. This sum is commonly known as the "virtual mass". A simple formulation of the added mass for a spherical body permits Newton's classical second law F = m a to be written in the form F = (m + m_added) a. One can show that the added mass for a sphere (of radius r) is m_added = (2/3) π r³ ρ, which is half the volume of the sphere times the density of the fluid.
For a general body, the added mass becomes a tensor (referred to as the induced mass tensor), with components depending on the direction of motion of the body. Not all elements in the added mass tensor will have dimension mass, some will be mass × length and some will be mass × length2. All bodies accelerating in a fluid will be affected by added mass, but since the added mass is dependent on the density of the fluid, the effect is often neglected for dense bodies falling in much less dense fluids. For situations where the density of the fluid is comparable to or greater than the density of the body, the added mass can often be greater than the mass of the body and neglecting it can introduce significant errors into a calculation. For example, a spherical air bubble rising in water has a mass of but an added mass of Since water is approximately 800 times denser than air (at RTP), the added mass in this case is approximately 400 times the mass of the bubble. Naval architecture These principles also apply to ships, submarines, and offshore platforms. In the marine industry, added mass is referred to as hydrodynamic added mass. In ship design, the energy required to accelerate the added mass must be taken into account when performing a sea keeping analysis. For ships, the added mass can easily reach one fourth or one third of the mass of the ship and therefore represents a significant inertia, in addition to frictional and wavemaking drag forces. For certain geometries freely sinking through a column of water, hydrodynamic added mass associated with the sinking body can be much larger than the mass of the object. This situation can occur, for instance, when the sinking body has a large flat surface with its normal vector pointed in the direction of motion (downward). A substantial amount of kinetic energy is released when such an object is abruptly decelerated (e.g., due to an impact with the seabed). In the offshore industry hydrodynamic added mass of different geometries are the subject of considerable investigation. These studies typically are required as input to subsea dropped object risk assessments (studies focused on quantifying risk of dropped object impacts to subsea infrastructure). As hydrodynamic added mass can make up a significant proportion of a sinking object's total mass at the instant of impact, it significantly influences the design resistance considered for subsea protection structures. Proximity to a boundary (or another object) can influence the quantity of hydrodynamic added mass. This means that added mass depends on both the object geometry and its proximity to a boundary. For floating bodies (e.g., ships/vessels) this means that the response of the floating body (i.e., due to wave action) is altered in finite water depths (the effect is virtually nonexistent in deep water). The specific depth (or proximity to a boundary) at which the hydrodynamic added mass is affected depends on the body's geometry and location and shape of a boundary (e.g., a dock, seawall, bulkhead, or the seabed). The hydrodynamic added mass associated with a freely sinking object near a boundary is similar to that of a floating body. In general, hydrodynamic added mass increases as the distance between a boundary and a body decreases. This characteristic is important when planning subsea installations or predicting the motion of a floating body in shallow water conditions. 
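The sphere formula and the air-bubble comparison above translate directly into a few lines of arithmetic. The densities and bubble radius below are rough round numbers used only for illustration.

import math

def sphere_added_mass(radius_m, fluid_density):
    """Added mass of a sphere: half the displaced fluid mass, (2/3)*pi*r^3*rho."""
    return (2.0 / 3.0) * math.pi * radius_m ** 3 * fluid_density

rho_water = 1000.0   # kg/m^3, approximate
rho_air = 1.2        # kg/m^3, approximate at roughly room conditions
r = 0.001            # a 1 mm radius air bubble

bubble_mass = (4.0 / 3.0) * math.pi * r ** 3 * rho_air
m_added = sphere_added_mass(r, rho_water)

print(f"bubble mass    : {bubble_mass:.3e} kg")
print(f"added mass     : {m_added:.3e} kg")
print(f"added / bubble : {m_added / bubble_mass:.0f}")   # roughly 400, as stated above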
Aeronautics In aircraft (other than lighter-than-air balloons and blimps), the added mass is not usually taken into account because the density of the air is so small. Hydraulic structures Hydraulic structures like weirs or locks often contain moveable steel structures like valves or gates, which are submerged under water. These steel structures are often constructed with thin steel plates mounted on girders. When the steel structures are accelerated or decelerated, substantial amounts of water are moved, too. This added mass must e.g. be taken into account when designing the drive systems for these steel structures. See also Basset force for describing the effect of the body's relative motion history on the viscous forces in a Stokes flow Basset–Boussinesq–Oseen equation for the description of the motion of – and forces on – a particle moving in an unsteady flow at low Reynolds numbers Darwin drift for the relation between added mass and the Darwin drift volume Keulegan–Carpenter number for a dimensionless parameter giving the relative importance of the drag force to inertia in wave loading Morison equation for an empirical force model in wave loading, involving added mass and drag Response Amplitude Operator for the use of added mass in ship design References External links MIT OpenCourse Ware Naval Civil Engineering Laboratory Det Norske Veritas DNV-RP-H103 Modelling And Analysis Of Marine Operations Fluid dynamics
Added mass
[ "Chemistry", "Engineering" ]
1,704
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
2,974,623
https://en.wikipedia.org/wiki/Jiro%20%28software%29
Jiro is the registered name used by Sun Microsystems for an extension to Java and Jini. Jiro as an industry initiative, together with an EMC initiative called "Wide Sky", was one of the catalysts in the late nineties for a common interface to storage devices, leading to the Bluefin specification, subsequently donated to the SNIA for the foundation of the SMI-S industry standard. Jiro was established by Sun in 1998 subsequent to acquiring a small company called Redcape Policy Software. Initially known by the moniker "StoreX," this technology was targeted at storage management. Jiro in many ways was a management-oriented extension to Jini, leveraging many of Jini's ideas and capabilities for automatic detection of elements to be managed. Jiro was a management framework infrastructure based on a distributed runtime environment. It was standardized as JSR 9 by the Java Community Process. Jiro never gained the broad industry support necessary for success, because every device had to have a custom adapter (or Management Facade), and it was withdrawn from the market in 2001. Though never gaining commercial or industry acceptance, Jiro was one of the precursors to the development of the Storage Networking Industry Association's (SNIA) Storage Management Initiative (SMI), which has been seen as successful in promoting the use of open standards for storage management. Mark Carlson (one of the first employees at Redcape) led this effort based on his experience at Sun Microsystems as a Jiro developer and evangelist. By 2005, most large storage systems providers had announced adoption of SNIA's SMI specifications within their storage management products. The SNIA has now embarked on a project to standardize management frameworks along the lines of the earlier Jiro project, using web services to communicate between standard service components. Overview Jiro implements an infrastructure for creating integrated and automated management software in a distributed, cross-platform environment. Jiro makes use of Jini technology to allow services to come and go on the network. Jiro introduces a middle tier of management between the client/GUI and other Java-based agent technologies such as JMX and JDMK. This middle tier is where the automation of management takes place. Jiro divides a management environment into domains. Each domain has a single shared management server (a Java Virtual Machine running Jiro services) that represents the domain as a whole. Other, private management servers can host management services that are specific to their host. References Java platform
Jiro (software)
[ "Technology" ]
503
[ "Computing platforms", "Java platform" ]
2,974,863
https://en.wikipedia.org/wiki/Idempotent%20matrix
In linear algebra, an idempotent matrix is a matrix which, when multiplied by itself, yields itself. That is, the matrix is idempotent if and only if . For this product to be defined, must necessarily be a square matrix. Viewed this way, idempotent matrices are idempotent elements of matrix rings. Example Examples of idempotent matrices are: Examples of idempotent matrices are: Real 2 × 2 case If a matrix is idempotent, then implying so or implying so or Thus, a necessary condition for a matrix to be idempotent is that either it is diagonal or its trace equals 1. For idempotent diagonal matrices, and must be either 1 or 0. If , the matrix will be idempotent provided so a satisfies the quadratic equation or which is a circle with center (1/2, 0) and radius 1/2. In terms of an angle θ, is idempotent. However, is not a necessary condition: any matrix with is idempotent. Properties Singularity and regularity The only non-singular idempotent matrix is the identity matrix; that is, if a non-identity matrix is idempotent, its number of independent rows (and columns) is less than its number of rows (and columns). This can be seen from writing , assuming that has full rank (is non-singular), and pre-multiplying by to obtain . When an idempotent matrix is subtracted from the identity matrix, the result is also idempotent. This holds since If a matrix is idempotent then for all positive integers n, . This can be shown using proof by induction. Clearly we have the result for , as . Suppose that . Then, , since is idempotent. Hence by the principle of induction, the result follows. Eigenvalues An idempotent matrix is always diagonalizable. Its eigenvalues are either 0 or 1: if is a non-zero eigenvector of some idempotent matrix and its associated eigenvalue, then which implies This further implies that the determinant of an idempotent matrix is always 0 or 1. As stated above, if the determinant is equal to one, the matrix is invertible and is therefore the identity matrix. Trace The trace of an idempotent matrix — the sum of the elements on its main diagonal — equals the rank of the matrix and thus is always an integer. This provides an easy way of computing the rank, or alternatively an easy way of determining the trace of a matrix whose elements are not specifically known (which is helpful in statistics, for example, in establishing the degree of bias in using a sample variance as an estimate of a population variance). Relationships between idempotent matrices In regression analysis, the matrix is known to produce the residuals from the regression of the vector of dependent variables on the matrix of covariates . (See the section on Applications.) Now, let be a matrix formed from a subset of the columns of , and let . It is easy to show that both and are idempotent, but a somewhat surprising fact is that . This is because , or in other words, the residuals from the regression of the columns of on are 0 since can be perfectly interpolated as it is a subset of (by direct substitution it is also straightforward to show that ). This leads to two other important results: one is that is symmetric and idempotent, and the other is that , i.e., is orthogonal to . These results play a key role, for example, in the derivation of the F test. Any similar matrices of an idempotent matrix are also idempotent. Idempotency is conserved under a change of basis. This can be shown through multiplication of the transformed matrix with being idempotent: . 
Applications Idempotent matrices arise frequently in regression analysis and econometrics. For example, in ordinary least squares, the regression problem is to choose a vector of coefficient estimates so as to minimize the sum of squared residuals (mispredictions) ei: in matrix form, Minimize where is a vector of dependent variable observations, and is a matrix each of whose columns is a column of observations on one of the independent variables. The resulting estimator is where superscript T indicates a transpose, and the vector of residuals is Here both and (the latter being known as the hat matrix) are idempotent and symmetric matrices, a fact which allows simplification when the sum of squared residuals is computed: The idempotency of plays a role in other calculations as well, such as in determining the variance of the estimator . An idempotent linear operator is a projection operator on the range space along its null space . is an orthogonal projection operator if and only if it is idempotent and symmetric. See also Idempotence Nilpotent Projection (linear algebra) Hat matrix References Linear algebra Regression analysis Matrices
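The hat-matrix construction above is easy to check numerically. The following NumPy sketch uses random data purely for illustration; it verifies idempotency, the {0, 1} eigenvalues, the trace-equals-rank property, and the orthogonality of the two projections.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))         # 20 observations, 3 covariates

H = X @ np.linalg.inv(X.T @ X) @ X.T     # hat matrix
M = np.eye(20) - H                       # residual-maker matrix, also idempotent

for name, A in [("H", H), ("M", M)]:
    eig = np.linalg.eigvalsh(A)          # both matrices are symmetric
    print(name,
          "idempotent:", np.allclose(A @ A, A),
          "| eigenvalues are 0 or 1:", np.allclose(eig, np.round(eig)),
          "| trace:", round(np.trace(A)),
          "| rank:", np.linalg.matrix_rank(A))

print("H @ M is the zero matrix:", np.allclose(H @ M, 0))   # the two projections are orthogonal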
Idempotent matrix
[ "Mathematics" ]
1,062
[ "Matrices (mathematics)", "Linear algebra", "Mathematical objects", "Algebra" ]
2,974,898
https://en.wikipedia.org/wiki/Spiritual%20successor
A spiritual successor (sometimes called a spiritual sequel) is a product or fictional work that is similar to, or directly inspired by, another previous product or work, but (unlike a traditional prequel or sequel) does not explicitly continue the product line or media franchise of its predecessor, and is thus only a successor "in spirit". Spiritual successors often have similar themes and styles to their preceding material, but are generally a distinct intellectual property. In fiction, the term generally refers to a work by a creator that shares similarities to one of their earlier works, but is set in a different continuity, and features distinct characters and settings. Such works may arise when licensing issues prevent a creator from releasing a direct sequel using the same copyrighted characters and names as the original. The term is also used more broadly to describe a pastiche work that intentionally evokes similarities to pay homage to other influential works, but is also distinct enough to avoid copyright infringement. In literature Arthur Conan Doyle's Sherlock Holmes stories, published between 1887 and 1927, drew a large number of pastiches from other authors as early as the 1900s to capture the same mystery and spirit as Doyle's writings. Subsequently, Doyle and his publishers, and since then Doyle's estate, had aggressive enforced copyright on the Holmes character, often requiring authors that were publishing stories to change any use of Holmes' name to something else. The name "Herlock Sholmes" became one of the more common variations on this, notably in Maurice Leblanc's Arsène Lupin versus Herlock Sholmes, with the Sholmes character having a personality similar, but not quite exactly like Holmes to further distance potential copyright issues. In and around the 1950s, the character Solar Pons, a pastiche of Holmes, appeared in several books not authorized by the estate of Conan Doyle. These copyright issues have continued into contemporary times: in the case Klinger v. Conan Doyle Estate, Ltd. (2014), it was determined that the characters of Holmes and Watson were in the public domain. However, certain story elements were under copyright until 2023. In films and television In films and television shows, spiritual successor often describes similar works by the same creator, or starring the same cast. For example, the show Parks and Recreation is a spiritual successor to The Office. Both are workplace mockumentaries developed by Greg Daniels, featuring satirical humor and characters being filmed by an in-universe documentary film crew. The film 10 Cloverfield Lane was not originally scripted with any connection to Cloverfield. When the film was acquired by Bad Robot, producer J. J. Abrams recognized a common element of a giant monster attack between the two films, and chose to market 10 Cloverfield Lane as a spiritual successor to Cloverfield to help bring interest to the newer film, which allowed him to establish a franchise he could build upon in the future. Spiritual successors are common in Indian film industries, particularly Bollywood, where films marketed as sequels do not share continuity with their predecessors. The 2006 film Superman Returns was created as a spiritual sequel to Superman: The Movie and Superman II, with no references to Superman III or Superman IV: The Quest for Peace, though the Arrowverse's Crisis on Infinite Earths would later confirm that the latter two sequels had occurred within the timeline established in the 2006 film. 
The 2022 film Chip 'n Dale: Rescue Rangers was created as a spiritual sequel to the 1988 film Who Framed Roger Rabbit; both films showcases worlds where cartoon characters coexist with humans. The 2022 miniseries We Own This City was described as a spiritual successor to the 2002–08 series The Wire in that both are street-level crime dramas set in Baltimore and both are produced by David Simon for HBO. In video games Games by the same studio Spiritual successor games are sometimes made by the same studio as the original, but with a new title due to licensing issues. Some examples of these include: The Dark Souls series by FromSoftware was inspired by the studio's earlier game, Demon's Souls, an exclusive title for the PlayStation 3. Because Sony Interactive Entertainment held the rights to Demon's Souls, the studio was unable to produce a direct sequel on other platforms, leading them to create a new property with similar gameplay mechanics. Demon's Souls itself was a spiritual successor to King's Field. Irrational Games' BioShock is a spiritual successor to their earlier System Shock 2. While System Shock 2 was met with critical acclaim, it was considered a commercial failure, and publisher Electronic Arts would not allow a third title in the series. After several years and other projects at Irrational, as well as being acquired by a new publisher 2K Games, the studio developed BioShock, with a similar free-form narrative structure. Shadow of the Colossus was considered a spiritual successor to Ico by Fumito Ueda, who directed both games as leader of Team Ico. Ueda expressed that he did not necessarily want a direct canonical connection between the games, but that both had similar narrative themes and elements that he wanted players to interpret on their own. Created by Facepunch Studios, Sandbox (stylized as s&box) is an upcoming spiritual successor to Garry's Mod. Unlike the latter being a sandbox mod of the Source engine, s&box is a game developoment platform built on top of Source 2. Games by the same staff Alternatively, a successor may be developed by some of the staff who worked on the preceding game, under a new studio name. Examples of these include: Yooka-Laylee is a spiritual successor evoking the style and gameplay of Rare's Banjo-Kazooie. It was developed by Playtonic Games, which consisted of many former Rare staff members, including composer Grant Kirkhope. Yooka and Laylee, the game's animal protagonists, serve as direct stand-ins for the original game's Banjo and Kazooie. Mighty No. 9 closely resembles the gameplay and character design of the Mega Man series, which project lead Keiji Inafune worked on before leaving Capcom, and is considered a spiritual successor. Bloodstained: Ritual of the Night is considered a spiritual successor to the Castlevania series, created by Koji Igarashi who had led development of several Castlevania games before leaving Konami. A number of games from Bullfrog Productions have spawned spiritual successors in the years after the studio was closed by Electronic Arts in 2001, with these projects typically led by former staff from Bullfrog having found their own studios. These include Godus by Peter Molyneux's studio 22cans, succeeding Populous; 5 Lives Studios' Satellite Reign, succeeding Syndicate Wars; and Two Point Hospital by Mark Webley and Gary Carr's Two Point Studios, succeeding Theme Hospital. 
P.N.03 has been called the spiritual predecessor of Bayonetta for its "combat...with stylish dance-inspired movements" and "flashy, energetic, intense" gameplay and character design. P.N.03 director Shinji Mikami later co-founded PlatinumGames, the studio that developed Bayonetta, and Bayonetta director and PlatinumGames co-founder Hideki Kamiya also directed Resident Evil 2, Devil May Cry, and Viewtiful Joe, the last of which was part of the Capcom Five with P.N.03. Common themes only The term is also more broadly applied to video games developed by a different studio with no connection to the original, and simply inspired by the gameplay, aesthetics or other elements of the preceding work. Examples of such games include: The game Cities: Skylines (along with other city-builder games) is considered a spiritual successor to the SimCity series, both focusing on constructing and managing a simulated city. Axiom Verge is a side-scrolling Metroidvania game that succeeds the Metroid series. The Mother series (known as EarthBound outside Japan) has directly inspired a number of pixel-art, role-playing indie games featuring children in playable character roles as spiritual successors to the series. These include Undertale and Citizens of Earth. War for the Overworld (succeeding Dungeon Keeper) crossed through several of these categories over the course of the development. Originating as a fan-made direct sequel to Dungeon Keeper 2, the game then became a spiritual successor with only thematic connection after moving away from the Dungeon Keeper IP. Finally, the hiring of returning voice actor Richard Ridings presented a direct staff connection to the original. In sports In sports, the Ravens–Steelers rivalry is considered the spiritual successor to the older Browns–Steelers rivalry due to the original Cleveland Browns relocation to Baltimore, as well as the reactivated Browns having a 6–30 record against the Steelers since returning to the league in 1999. In other industries The Honda CR-Z is regarded as the spiritual successor to the second generation Honda CR-X in both name and exterior design, despite a nearly two decade time difference in production. The Toyota Fortuner SUV is a spiritual successor to the Toyota 4Runner SUV mainly because they both share the same platform as the Hilux pickup truck. The Canon Cat computer was Jef Raskin's spiritual successor to the Apple Macintosh. See also Canon (fiction) Continuation novel Phoenix club (sports) Reboot (fiction) Remake Revisionism (fictional) Sequel Spin-off (media) Gaiden Digression References Spiritual successor Sequel, spiritual Film and video terminology Video game terminology
Spiritual successor
[ "Technology" ]
1,917
[ "Computing terminology", "Video game terminology" ]
2,975,044
https://en.wikipedia.org/wiki/Polish%20units%20of%20measurement
The traditional Polish units of measurement included two uniform yet distinct systems of weights and measures, as well as a number of related systems borrowed from neighbouring states. The first attempt at standardisation came with the introduction of the Old Polish measurement [system], also dubbed the Warsaw system, introduced by a royal decree of December 6, 1764. The system was later replaced by the New Polish measurement [system] introduced on January 1, 1819. The traditional Polish systems of weights and measures were later replaced with those of surrounding nations (due to the Partitions of Poland), only to be replaced with metric system by the end of the 19th century (between 1872 and 1876). History Historic weights and measures The first recorded weights and measures used in Poland were related to dimensions of human body, hence the most basic measures in use were sążeń (fathom), łokieć (ell), piędź (span), stopa (foot) and skok (jump). With time trade relations with the neighbouring nations brought to use additional units, with names often borrowed from German, Arabic or Czech. From Middle Ages until the 18th century, there was no single system of measurement used in all of Poland. Traditional units like stopa (foot) or łokieć (ell) were used throughout the country, but their meaning differed from region to region. Most major cities in the area used their own systems of measurement, which were used in the surrounding areas as well. Among the commonly used systems were Austrian, Galician, Danzig, Kraków, Prussian, Russian and Breslau. The matter was further complicated by the fact that Austrian or German systems were hardly uniform either and differed from town to town. Furthermore, the systems tended to evolve over time: in the 13th century the Kraków's ell was equivalent to 64.66 centimetres, a century later it was equivalent to 62.5 cm, then in the 16th century it shrunk to 58.6 cm and finally was equalled to standard "old Polish ell" of 59.6 cm only in 1836. To add to the confusion, various goods were traditionally measured with different units, often incompatible or difficult to convert. For instance, beer was sold in units named achtel (0.5 of barrel, that is 62 Kraków gallons of 2.75 litres each). However honey and mead were recorded for tax purposes in units named rączka (slightly more than 10 Kraków gallons). As the weights and measures were important in everyday life of merchants, in 1420 the royal decree allowed each voivode to create and maintain a single system used in his voivodeship. This law was later confirmed by a Sejm act of 1565. Steel or copper rods used as local standard of ell (basic unit of length) were created in a voivode's capital and then dispatched to all nearby towns, where they were further duplicated for everyday use. One bar was to be stored in the town hall for comparison, while additional rods were stored in the gatehouses or toll points to be borrowed by merchants as needed. Damaging or losing a rod was punishable by law. Measuring time Outside of this set of systems was the measurement of time. As clock towers only started to appear in late Middle Ages, and their usability was limited to within a small radius, some basic substitutes for modern minutes and hours were developed, based on Christian prayers. The pacierz (or paternoster) was a non-standard unit of time comprising some 25 seconds, that is enough time to recite the Lord's Prayer. 
Similarly, zdrowaśka (from Zdrowaś Mario, the first words of the Hail Mary) was used, as was the Rosary (różaniec) that is the time needed to recite Hail Mary 50 times (roughly 16 minutes). Those units were never strictly defined, but is used in rural areas of Poland even today. Early attempts at standardisation While this system introduced some level of standardisation throughout the country, the systems used in various voivodeships still differed from one another. To counter this problem the Kraków ell and Poznań ell were made equal in 1507. The same applied to ells used in Lwów and Lublin, which however were different from those in Kraków and Poznań. In 1532 the Płock ell was aligned with the Kraków ell, which in 1565 was declared an official ell to be used in all of the Crown of Poland. The system used by Warsaw was adopted in Płock and all of Masovia in 1569. In 1613 additional systems were created for Vilnius and Kaunas. The standardisation of other units of measurement also made some progress since the 15th century, but at a different pace. In the end this created even more confusion, as two towns could use the same units of length, but two different units of weight, although using the same terms. 1764 reform - the Old Polish system As until then not only different units varied from town to town but also their relation to one another, in 1764 a major overhaul of the measurement system was prepared. By a royal decree of December 6, 1764 all units of measurement were to be converted to a new system, common to all of Poland and its dependencies. The system relied on previously used units, but introduced a common, unified system of relations between them. It had no official name and it was not until the 19th century when it started to be called the Old Polish system (miary staropolskie, or Old-Polish measures), in contrast to the new system introduced then. The basic unit of length - the ell or łokieć in Polish - was set to 0.5955 metres. For trade and everyday use it was further subdivided into the foot (stopa, ≈29.78 centimetres); sztych (≈19.86cm); quarter (ćwierć, ≈14.89cm); palm (dłoń, ≈7.44cm); and inch (cal, ≈2.48 centimetres), or gathered into the fathom (sążeń, 3 ells or 1.787 metres in length), such that:1 ell = 2 feet = 3 sztychs = 4 quarters = 8 palms = 24 inches ( = ⅓ of a fathom ).A different system of units, although complementary and interchangeable, was used in measuring lengths for agrarian purposes. The basic unit was a step (krok), equalling 3.75 of standard ell, or 2.2333 metres. Two steps made a rod (pręt, 4.4665 metres), 2 rods made a stick (laska), and five sticks were equal to a cable (sznur of 44.665 metres). Finally 3 cables made up a furlong (staje) of roughly 134 metres. In measuring the distance between cities, the basic unit was staje, although it was different from the staje mentioned before and had the length of roughly 893 metres. Eight staje made up a Polish mile of 7144 metres. The weights were based on the (funt of 0.4052 kg) composed of two grzywnas, each in turn comprising 16 lots (łut of 0.0127 kg). For heavier goods the basic units were a stone (kamień, 32 pounds or 12.976 kg) and Hundredweight (cetnar, five stones or 64.80 kg). There were two sets of units of volume: one for fluids and the other for dry goods. Both used the gallon (garniec) of 3.7689 litres as the basic unit. This was subdivided into 4 quarts () of 0.9422 L or 16 . 
For dry goods four gallons comprised a measure (), 2 measures comprised a quarter (), 4 quarters comprised a bushel () of 120.6 L, and 30 bushels comprised a last () of 3618 L. For fluids, 5 gallons comprised a konew of 18.8445 L and 14.4 konew made up a barrel of 271.36 L. Current use Though the traditional systems were officially abandoned in the 19th century, traces of their use, especially in rural areas, were found by ethnographers as late as 1969. Length Krok (:pl:Krok (miara)) Ławka (:pl:Ławka (jednostka długości)) Łokieć (:pl:Łokieć (miara)) Piędź (:pl:Piędź) Staje (:pl:Staje) Stopa (:pl:Stopa (miara)) Area Łan (:pl:Łan (miara powierzchni)) Morga (:pl:Morga) Staje (:pl:Staje) Włóka (:pl:Włóka (miara powierzchni)) Źreb (:pl:Źreb) Volume Garniec (:pl:Garniec) Korzec (:pl:Korzec) Łaszt (:pl:Łaszt) Mass and monetary units Grzywna (:pl:Grzywna (ekonomia); :pl:Grzywna (jednostka miar)) Kamień (:pl:Kamień (miara)) Kwarta (:pl:Kwarta (jednostka wagowa)) Kwartnik (:pl:Kwartnik) Łut (:pl:Łut) Skojec (:pl:Skojec) Wiardunek (:pl:Wiardunek) Time Pacierz (:pl:Pacierz) Zdrowaśka (:pl:Zdrowaśka) References Polish 1760s establishments in the Polish–Lithuanian Commonwealth Science and technology in Poland Culture of Poland Polish Poland 1810s establishments in Poland
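The 1764 length relationships quoted above lend themselves to a small conversion table. The dictionary below simply restates the metric equivalents for length given in the text (inch 2.48 cm, foot 29.78 cm, ell 0.5955 m, fathom 1.787 m, rod 4.4665 m, cable 44.665 m, mile 7144 m); the helper function is a hypothetical convenience written for this sketch, not a historical tool.

# Old Polish (1764 "Warsaw system") length units, restated from the values above.
LENGTH_IN_METRES = {
    "cal (inch)":     0.0248,
    "stopa (foot)":   0.2978,
    "lokiec (ell)":   0.5955,
    "sazen (fathom)": 1.787,
    "pret (rod)":     4.4665,
    "sznur (cable)":  44.665,
    "mila (mile)":    7144.0,
}

def to_metres(value, unit):
    """Convert a quantity in an Old Polish length unit to metres."""
    return value * LENGTH_IN_METRES[unit]

# 1 ell = 24 inches and 3 ells = 1 fathom, as stated above:
print(round(to_metres(24, "cal (inch)"), 3))    # ~0.595 m, one ell
print(round(to_metres(3, "lokiec (ell)"), 3))   # ~1.787 m, one fathom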
Polish units of measurement
[ "Mathematics" ]
2,069
[ "Obsolete units of measurement", "Systems of units", "Units of measurement by country", "Quantity", "Units of measurement" ]
2,975,045
https://en.wikipedia.org/wiki/Belling-Lee%20connector
The Belling-Lee connector (also type 9,52, but largely only in the context of its specification, IEC 61169, Part 2: Radio-frequency coaxial connector of type 9,52) is commonly used in Europe, parts of Southeast Asia, and Australia, to connect coaxial cables with each other and with terrestrial VHF/UHF roof antennas, antenna signal amplifiers, CATV distribution equipment, TV sets, and FM and DAB radio receivers. In these countries, it is known colloquially as an aerial connector, IEC connector, PAL connector, or simply as a TV aerial connector. It is one of the oldest coaxial connectors still commonly used in consumer devices. For television signals, the convention is that the source has a male connector and the receptor has a female connector. For FM radio signals, the convention is that the source has a female connector and the receptor has a male connector. This is more or less universally adopted with TV signals, while it is not uncommon for FM radio receivers to deviate from this, especially FM radio receivers from companies not based in the areas that use this kind of connector. It was invented at Belling & Lee Ltd in Enfield, United Kingdom around 1922 at the time of the first BBC broadcasts. Originally intended for use only at MF frequencies (up to 1.6 MHz) when adopted for television they were used for frequencies as high as 957 MHz. Belling Lee Limited still exists as a wholly owned subsidiary of Dialight, since 1992. In type 9,52, the 9,52, in French SI style, refers to the male external and female internal connector body diameter. In their most common form the connectors just slide together. There is, however, also a screw-coupled variant which is specified to have an M14×1 thread. There is also a miniature Belling-Lee connector which was used for internal connections inside some equipment (including BBC RC5/3 Band II receiver and the STC AF101 Radio Telephone). The miniature version is only about in diameter. See also List of RF connector types F connector RF connector References External links How to wire a Belling-Lee connector (TV aerial plug). Make your own fly-lead. RF connectors Television technology Television terminology Socket
Belling-Lee connector
[ "Technology", "Engineering" ]
464
[ "Information and communications technology", "Antennas", "Telecommunications engineering", "Television technology" ]
2,975,066
https://en.wikipedia.org/wiki/Neo-creationism
Neo-creationism is a pseudoscientific movement which aims to restate creationism in terms more likely to be well received by the public, by policy makers, by educators and by the scientific community. It aims to re-frame the debate over the origins of life in non-religious terms and without appeals to scripture. In the United States, this comes in response to the 1987 ruling by the Supreme Court in Edwards v. Aguillard that creationism is an inherently religious concept and that advocating it as correct or accurate in public-school curricula violates the Establishment Clause of the First Amendment. One of the principal claims of neo-creationism propounds that ostensibly objective orthodox science, with a foundation in naturalism, is actually a dogmatically atheistic religion. Its proponents argue that the scientific method excludes certain explanations of phenomena, particularly where they point towards supernatural elements, thus effectively excluding religious insight from contributing to understanding the universe. This leads to an open and often hostile opposition to what neo-creationists term "Darwinism", which they generally mean to refer to evolution, but which they may extend to include such concepts as abiogenesis, stellar evolution and the Big Bang theory. Notable neo-creationist organizations include the Discovery Institute and its Center for Science and Culture. Neo-creationists have yet to establish a recognized line of legitimate scientific research and lack scientific and academic legitimacy, even among many academics of evangelical Christian colleges. Eugenie C. Scott and other critics regard neo-creationism as the most successful form of irrationalism. The main form of neo-creationism is intelligent design. A second form, abrupt appearance theory, which claims that the first life and the universe appeared abruptly and that plants and animals appeared abruptly in complex form, has occasionally been postulated. Motivations The neo-creationist movement is motivated by the fear that religion is under attack by the study of evolution. An argument common to neo-creationist justifications is that society has suffered "devastating cultural consequences" from adopting materialism and that science is the cause of this decay into materialism since science seeks only natural explanations. They believe that the theory of evolution implies that humans have no spiritual nature, no moral purpose, and no intrinsic meaning, and thus that acceptance of evolution devalues human life directly leading to the atrocities committed by Hitler's Nazi regime, for example. The movement's proponents seek to "defeat [the] materialist world view" represented by the theory of evolution in favor of "a science consonant with Christian and theistic convictions". Phillip E. Johnson, 'father' of the intelligent design movement, states the movement's goal is to "affirm the reality of God". Tactics Much of the effort of neo-creationists in response to science consists of polemics highlighting gaps in understanding or minor inconsistencies in the literature of biology, then making statements about what can and cannot happen in biological systems. Critics of neo-creationism suggest that neo-creationist science consists of quote-mining the biological literature (including outdated literature) for minor slips, inconsistencies or polemically promising examples of internal arguments. 
These internal disagreements, fundamental to the working of all natural science, are then presented dramatically to lay audiences as evidence of the fraudulence and impending collapse of "Darwinism". Critics suggest that neo-creationists routinely employ this method to exploit the technical issues within biology and evolutionary theory to their advantage, relying on a public that is not sufficiently scientifically literate to follow the complex and sometimes difficult details. Robert T. Pennock argues that intelligent design proponents are "manufacturing dissent" in order to explain the absence of scientific debate of their claims: "The 'scientific' claims of such neo-creationists as Johnson, Denton, and Behe rely, in part, on the notion that these issues [surrounding evolution] are the subject of suppressed debate among biologists.... According to neo-creationists, the apparent absence of this discussion and the nearly universal rejection of neo-creationist claims must be due to the conspiracy among professional biologists instead of a lack of scientific merit." Eugenie Scott describes neo-creationism as "a mixed bag of antievolution strategies brought about by legal decisions against equal time laws". Those legal decisions, McLean v. Arkansas and Edwards v. Aguillard, doomed the teaching of creation science as an alternative to evolution in public school science classes. Scott considers intelligent design, and the various strategies of design proponents like Teach the Controversy and Critical Analysis of Evolution, as leading examples of neo-creationism. Neo-creationists generally reject the term "neo-creation", alleging it is a pejorative term. Any linkage of their views to creationism would undermine their goal of being viewed as advocating a new form of science. Instead, they identify themselves to their non-scientific audience as conducting valid science, sometimes by redefining science to suit their needs. This is rejected by the vast majority of actual science practitioners. Nevertheless, neo-creationists profess to present and conduct valid science which is equal, or superior to, the theory of evolution, but have yet to produce recognized scientific research and testing that supports their claims. Instead, the preponderance of neo-creationist works are publications aimed at the general public and lawmakers and policymakers. Much of that published work is polemical in nature, disputing and controverting what they see as a "scientific orthodoxy" which shields and protects "Darwinism" while attacking and ridiculing alleged alternatives like intelligent design. Examples of neo-creationist polemics include the Discovery Institute's Wedge Document, the book Darwin on Trial by Phillip E. Johnson, and the book From Darwin to Hitler by Richard Weikart. Research for Weikart's book was funded by the Discovery Institute, and is promoted through the institute. Both Johnson and Weikart are affiliated with the Discovery Institute; Johnson is program advisor, and Weikart is a fellow. Criticism All of the following names make explicit the connections between traditional creationism, neo-creationism and intelligent design. Not all critics of neo-creationism are on the evolution side of the debate. Henry M. Morris, a notable young earth creationist, accepted the term but opposed the logic of neo-creationism for the very reason that it does not embrace the Bible. 
The Baptist Center for Ethics calls for "Baptists to recommit themselves to the separation of church and state, which will keep public schools free from coercive pressure to promote sectarian faith, such as state-written school prayers and the teaching of neo-creationism..." Barbara Forrest, co-author of Creationism's Trojan Horse: The Wedge of Intelligent Design Georgetown University theologian John Haught Journalist Chris Mooney, author of The Republican War on Science Massimo Pigliucci Eugenie C. Scott Robert T. Pennock See also Creation–evolution controversy Creation myth Creation science Intelligent design movement Social implications of the theory of evolution Theistic realism References External links Neo-Creo, New York Times, by William Safire Creationism Pseudoscience
Neo-creationism
[ "Biology" ]
1,465
[ "Creationism", "Biology theories", "Obsolete biology theories" ]
2,975,094
https://en.wikipedia.org/wiki/Chlorambucil
Chlorambucil, sold under the brand name Leukeran among others, is a chemotherapy medication used to treat chronic lymphocytic leukemia (CLL), Hodgkin lymphoma, and non-Hodgkin lymphoma. For CLL it is a preferred treatment. It is given by mouth. Common side effects include bone marrow suppression. Other serious side effects include an increased long-term risk of further cancer, infertility, and allergic reactions. Use during pregnancy often results in harm to the baby. Chlorambucil is in the alkylating agent family of medications. It works by blocking the formation of DNA and RNA. Chlorambucil was approved for medical use in the United States in 1957. It is on the World Health Organization's List of Essential Medicines. It was originally made from nitrogen mustard. Medical uses Chlorambucil's current use is mainly in chronic lymphocytic leukemia, as it is well tolerated by most patients, though chlorambucil has been largely replaced by fludarabine as first-line treatment in younger patients. It can be used for treating some types of non-Hodgkin lymphoma, Waldenström macroglobulinemia, polycythemia vera, trophoblastic neoplasms, and ovarian carcinoma. Moreover, it also has been used as an immunosuppressive drug for various autoimmune and inflammatory conditions, such as nephrotic syndrome. Side effects Bone marrow suppression (anemia, neutropenia, thrombocytopenia) is the most commonly occurring side effect of chlorambucil. Once the drug is withdrawn, this side effect is typically reversible. Like many alkylating agents, chlorambucil has been associated with the development of other forms of cancer. Less commonly occurring side effects include: Gastrointestinal distress (nausea, vomiting, diarrhea, and oral ulcerations). Central nervous system: seizures, tremors, muscular twitching, confusion, agitation, ataxia, and hallucinations. Skin reactions Hepatotoxicity Infertility Hair loss Pharmacology Mechanism of action Chlorambucil produces its anti-cancer effects by interfering with DNA replication and damaging the DNA in a cell. The DNA damage induces cell cycle arrest and cellular apoptosis via the accumulation of cytosolic p53 and subsequent activation of Bcl-2-associated X protein, an apoptosis promoter. Chlorambucil alkylates and cross-links DNA during all phases of the cell cycle, inducing DNA damage via three different methods of covalent adduct generation with double-helical DNA: Attachment of alkyl groups to DNA bases, resulting in the DNA being fragmented by repair enzymes in their attempts to replace the alkylated bases, preventing DNA synthesis and RNA transcription from the affected DNA. DNA damage via the formation of cross-links which prevents DNA from being separated for synthesis or transcription. Induction of mispairing of the nucleotides leading to mutations. The precise mechanisms by which chlorambucil acts to kill tumor cells are not yet completely understood. Limitations to bioavailability A recent study has shown chlorambucil to be detoxified by human glutathione transferase Pi (GST P1-1), an enzyme that is often found over-expressed in cancer tissues. This is important since chlorambucil, as an electrophile, is made less reactive by conjugation with glutathione, thereby making the drug less toxic to the cell. Chlorambucil reacts with glutathione, as catalyzed by hGSTA 1-1, leading to the formation of the monoglutathionyl derivative of chlorambucil. Chemistry Chlorambucil is a white to pale beige crystalline or granular powder with a slight odor.
When heated to decomposition it emits very toxic fumes of hydrogen chloride and nitrogen oxides. History Nitrogen mustards arose from the derivatization of sulphur mustard gas after military personnel exposed to it during World War I were observed to have decreased white blood cell counts. Since the sulphur mustard gas was too toxic to be used in humans, Gilman hypothesized that by reducing the electrophilicity of the agent, which made it highly chemically reactive towards electron-rich groups, less toxic drugs could be obtained. To this end, he made analogues that were less electrophilic by exchanging the sulphur with a nitrogen, leading to the nitrogen mustards. With an acceptable therapeutic index in humans, nitrogen mustards were first introduced in the clinic in 1946. Aliphatic mustards were developed first, such as mechlorethamine hydrochloride (mustine hydrochloride), which is still used in the clinic today. In the 1950s, aromatic mustards like chlorambucil were introduced as less toxic alkylating agents than the aliphatic nitrogen mustards, proving to be less electrophilic and to react with DNA more slowly. Additionally, these agents can be administered orally, a significant advantage. Chlorambucil was first synthesized by Everett et al. References External links Leukeran (manufacturer's website) IARC Group 1 carcinogens Anilines Organochlorides Nitrogen mustards Chloroethyl compounds World Health Organization essential medicines Wikipedia medicine articles ready to translate Carboxylic acids
Chlorambucil
[ "Chemistry" ]
1,171
[ "Carboxylic acids", "Functional groups" ]
2,975,098
https://en.wikipedia.org/wiki/Glycogen%20storage%20disease%20type%20IV
Glycogen storage disease type IV (GSD IV), or Andersen's Disease, is a form of glycogen storage disease, which is caused by an inborn error of metabolism. It is the result of a mutation in the GBE1 gene, which causes a defect in the glycogen branching enzyme. Therefore, glycogen is not made properly and abnormal glycogen molecules accumulate in cells; most severely in cardiac and muscle cells. The severity of this disease varies with the amount of enzyme produced. GSD IV is autosomal recessive, which means each parent carries one mutant copy of the gene but shows no symptoms of the disease. Because of this autosomal recessive inheritance pattern, males and females are equally likely to be affected by Andersen's disease. Classic Andersen's disease typically becomes apparent during the first few months after the patient is born. Approximately 1 in 20,000 to 25,000 newborns have a glycogen storage disease. Andersen's disease affects 1 in 800,000 individuals worldwide, with 3% of all GSDs being type IV. The disease was described and studied first by Dorothy Hansine Andersen. Human pathology It is a result of the absence of the glycogen branching enzyme, which is critical in the production of glycogen. This leads to very long unbranched glucose chains being stored in glycogen. The long unbranched molecules have low solubility, leading to glycogen precipitation in the liver. These deposits subsequently build up in the body tissue, especially the heart and liver. The inability to break down glycogen in muscle cells causes muscle weakness. The probable result is cirrhosis and death within five years. In adults, the activity of the enzyme is higher and symptoms do not appear until later in life. Variant types Fatal perinatal neuromuscular type Excess fluid builds up around and in the body of the fetus Fetuses exhibit fetal akinesia deformation sequence Causes decrease in fetal movement and stiffness of joints after birth Infants have low muscle tone and muscle wasting Do not survive past the newborn stage due to weakened heart and lungs Congenital muscular type Develops in early infancy Babies have dilated cardiomyopathy, preventing the heart from pumping efficiently Only survive a few months Progressive hepatic type Infants have difficulty gaining weight Develop enlarged liver and cirrhosis that is irreversible High blood pressure in the hepatic portal vein and buildup of fluid in the abdominal cavity Die of liver failure in early childhood Non-progressive hepatic type Same as progressive, but liver disease is not so severe Do not usually develop cirrhosis Usually show muscle weakness and hypotonia Survive into adulthood Life expectancy varies with symptom severity Childhood neuromuscular type Develops in late childhood Has myopathy and dilated cardiomyopathy Varies greatly Some have mild muscle weakness Some have severe cardiomyopathy and die in early adulthood Diagnosis An assay of amylo-1,4 → 1,6 glucan transferase (which removes a block of 6 glucose residues from the 1,4 position and attaches it to the 1,6 position of the same chain) Alternative names and related disease Alternative names in medical literature for the disease include: Andersen's triad Glycogenosis type IV Glycogen branching enzyme deficiency Polyglucosan body disease Amylopectinosis Mutations in GBE1 can also cause a milder disease in adults that is called adult polyglucosan body disease. In other mammals The form in horses is known as glycogen branching enzyme deficiency. It has been reported in American Quarter Horses and related breeds.
The disease has been reported in the Norwegian Forest Cat, where it causes skeletal muscle, heart, and CNS degeneration in animals older than five months. It has not been associated with cirrhosis or liver failure. References External links Inborn errors of carbohydrate metabolism Hepatology
Glycogen storage disease type IV
[ "Chemistry" ]
823
[ "Inborn errors of carbohydrate metabolism", "Carbohydrate metabolism" ]
2,975,155
https://en.wikipedia.org/wiki/Dependence%20relation
In mathematics, a dependence relation is a binary relation which generalizes the relation of linear dependence. Let X be a set. A (binary) relation ⊲ between an element a of X and a subset S of X is called a dependence relation, written a ⊲ S, if it satisfies the following properties: if a ∈ S, then a ⊲ S; if a ⊲ S, then there is a finite subset S0 of S, such that a ⊲ S0; if T is a subset of X such that b ∈ S implies b ⊲ T, then a ⊲ S implies a ⊲ T; if a ⊲ S but a ⋪ S − {b} for some b ∈ S, then b ⊲ (S − {b}) ∪ {a}. Given a dependence relation ⊲ on X, a subset S of X is said to be independent if a ⋪ S − {a} for all a ∈ S. If S ⊆ T, then S is said to span T if t ⊲ S for every t ∈ T. S is said to be a basis of T if S is independent and S spans T. If X is a non-empty set with a dependence relation ⊲, then X always has a basis with respect to ⊲. Furthermore, any two bases of X have the same cardinality. If a ⊲ S and S ⊆ T, then a ⊲ T, using properties 3 and 1. Examples Let V be a vector space over a field F. The relation ⊲, defined by v ⊲ S if v is in the subspace spanned by S, is a dependence relation. This is equivalent to the definition of linear dependence. Let K be a field extension of F. Define ⊲ by α ⊲ S if α is algebraic over F(S). Then ⊲ is a dependence relation. This is equivalent to the definition of algebraic dependence. See also matroid Linear algebra Binary relations
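The linear-dependence example can be made concrete numerically. The following sketch (assuming NumPy, with depends_on as an illustrative helper name) treats v ⊲ S as "v lies in the span of the vectors in S" and spot-checks two of the defining properties:

import numpy as np

def depends_on(v, S, tol=1e-10):
    # v ⊲ S in the linear-dependence sense: v lies in the span of S.
    if len(S) == 0:                       # the span of the empty set is {0}
        return bool(np.allclose(v, 0, atol=tol))
    A = np.column_stack(S)
    return np.linalg.matrix_rank(np.column_stack([A, v]), tol=tol) == np.linalg.matrix_rank(A, tol=tol)

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
v = np.array([2.0, 3.0])

# Property 1: every element of S depends on S.
assert depends_on(e1, [e1, e2])

# Property 4 (exchange): v ⊲ {e1, e2} but not v ⊲ {e1}, hence e2 ⊲ {e1, v}.
assert depends_on(v, [e1, e2]) and not depends_on(v, [e1])
assert depends_on(e2, [e1, v])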
Dependence relation
[ "Mathematics" ]
249
[ "Linear algebra", "Mathematical relations", "Binary relations", "Algebra" ]
2,975,185
https://en.wikipedia.org/wiki/Kolmogorov%27s%20inequality
In probability theory, Kolmogorov's inequality is a so-called "maximal inequality" that gives a bound on the probability that the partial sums of a finite collection of independent random variables exceed some specified bound. Statement of the inequality Let X1, ..., Xn : Ω → R be independent random variables defined on a common probability space (Ω, F, Pr), with expected value E[Xk] = 0 and variance Var[Xk] < +∞ for k = 1, ..., n. Then, for each λ > 0, Pr( max1≤k≤n |Sk| ≥ λ ) ≤ (1/λ²) Var[Sn] = (1/λ²) (Var[X1] + ... + Var[Xn]), where Sk = X1 + ... + Xk. The convenience of this result is that we can bound the worst-case deviation of a random walk at any point of time using its value at the end of the time interval. Proof The following argument employs discrete martingales. As argued in the discussion of Doob's martingale inequality, the sequence S1, S2, ..., Sn is a martingale. Define Z0, Z1, ..., Zn as follows. Let Z0 = 0, Z1 = S1, and for k ≥ 1 let Zk+1 = Sk+1 if max1≤j≤k |Sj| < λ and Zk+1 = Zk otherwise. Then Z0, Z1, ..., Zn is also a martingale. For any martingale Mk with M0 = 0, we have that E[Mn²] = ∑k=1..n E[(Mk − Mk−1)²]. Applying this result to the martingales Zk and Sk, and noting that each increment Zk − Zk−1 is either the corresponding increment Sk − Sk−1 or zero, we obtain E[Zn²] ≤ E[Sn²], so that Pr( max1≤k≤n |Sk| ≥ λ ) = Pr( |Zn| ≥ λ ) ≤ (1/λ²) E[Zn²] ≤ (1/λ²) E[Sn²] = (1/λ²) (Var[X1] + ... + Var[Xn]), where the first inequality follows by Chebyshev's inequality. This inequality was generalized by Hájek and Rényi in 1955. See also Chebyshev's inequality Etemadi's inequality Landau–Kolmogorov inequality Markov's inequality Bernstein inequalities (probability theory) References (Theorem 22.4) Stochastic processes Probabilistic inequalities Articles containing proofs
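The statement can be checked empirically by Monte Carlo simulation. The sketch below (assuming NumPy; the sample size and threshold λ are arbitrary choices) compares the observed probability of a large maximal partial sum against the bound (Var[X1] + ... + Var[Xn])/λ²:

import numpy as np

rng = np.random.default_rng(0)
n, trials, lam = 50, 100_000, 15.0

X = rng.standard_normal((trials, n))        # independent, E[Xk] = 0, Var[Xk] = 1
S = np.cumsum(X, axis=1)                    # partial sums S1, ..., Sn
prob = np.mean(np.max(np.abs(S), axis=1) >= lam)
bound = n / lam**2                          # (Var[X1] + ... + Var[Xn]) / λ²

print(f"P(max |Sk| >= {lam}) ~ {prob:.4f}  <=  Kolmogorov bound {bound:.4f}")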
Kolmogorov's inequality
[ "Mathematics" ]
326
[ "Theorems in probability theory", "Probabilistic inequalities", "Articles containing proofs", "Inequalities (mathematics)" ]
2,975,188
https://en.wikipedia.org/wiki/Venice%20Biennale%20of%20Architecture
Venice Biennale of Architecture (in Italian Mostra di Architettura di Venezia) is an international exhibition of architecture from nations around the world, held in Venice, Italy, every other year. It was held on even years until 2018, but 2020 was postponed to 2021 due to the COVID-19 pandemic, shifting the calendar to uneven years. It is the architecture section under the overall Venice Biennale and was officially established in 1980, even though architecture had been a part of the Venice Art Biennale since 1968. The main agenda of the Architecture Biennale is to propose and showcase architectural solutions to contemporary societal, humanistic, and technological issues. Although leaning towards the academic side of architecture, the Biennale also provides an opportunity for local architects around the world to present new projects. The Biennale is separated into two main sections: The permanent, national pavilions in the Biennale Gardens as well as the Arsenale, which hosts projects from numerous nations under one roof. Exhibitions 2023 The 18th Venice Architecture Biennale is curated by Lesley Lokko. The international architecture exhibition is entitled The Laboratory of the Future. Awards: Golden Lion for Lifetime Achievement: Demas Nwoko. Golden Lion for Best National Participation: Brazil. Golden Lion for the best participant in The Laboratory of the Future: DAAR – Alessandro Petti and Sandi Hilal. 2021 Curated by Hashim Sarkis, The 17th Venice Architecture Biennale was entitled How will we live together? Due to the COVID-19 pandemic, it took place in 2021 instead of 2020. National pavilions, contributions and curators (selection) Brazil: utopias of common life. Curated by Arquitetos Associados (Alexandre Brasil, André Luiz Prado, Bruno Santa Cecília, Carlos Alberto Maciel, Paula Zasnicoff) + Henrique Penha Canada: Impostor Cities. Curated by Thomas Balaban, David Theodore, and Jennifer Thorogood. Germany: 2038. Curated by 2038. Great Britain: The Garden of Privatised Delights. Curated by Madeleine Kessler and Manijeh Verghese. Italy: Resilient Communities. Curated by Alessandro Melis. Poland: Trouble in Paradise. Curated by PROLOG +1. Russian Federation: Open. Curated by Ippolito Pestellini Laparelli. Scotland: What if...?/. Curated by 7N Architects and Architecture and Design Scotland Spain: Uncertainty. Curated by Domingo J. González, Sofía Piñero, Andrzej Gwizdala, Fernando Herrera Turkey: Architecture as Measure. Curated by Neyran Turan. United States of America: American Framing. Curated by Paul Andersen and Paul Preissner. Awards Golden Lion for Best National Participation: United Arab Emirates, with Wetland. Special mention as National Participation: Russia, with Open! Special mention as National Participation: Philippines, with Structures of Mutual Support. Golden Lion for best participant in the exhibition How will we live together?: raumlaborberlin (Berlin, Germany), with Instances of Urban Practice. Silver Lion for promising young participant in the exhibition How will we live together?: Foundation for Achieving Seamless Territory (FAST) (Amsterdam, the Netherlands; New York, USA), with Watermelons, Sardines, Crabs, Sands, and Sediments: Border Ecologies and the Gaza Strip. Special Mention to the participant in the exhibition How will we live together?: cave_bureau (Nairobi, Kenya), with The Anthropocene Museum: Exhibit 3.0 Obsidian Rain. 
Special Golden Lion for Lifetime Achievement: Lina Bo Bardi 2018 The 16th International Architecture Exhibition was titled FREESPACE and was curated by Yvonne Farrell and Shelley McNamara. National pavilions, contributions and curators (selection) Australia: Repair. Curated by Baracco+Wright Architects in collaboration with Linda Tegg. Austria: Thoughts Form Matter. Curated by Verena Konrad. China: Building a Future Countryside. Curated by Li Xiangning. Croatia: Cloud Pergola / The Architecture of Hospitality. Curated by Bruno Juričić. Egypt: Robabecciah the informal city. Curated by Islam Elmashtooly and Mouaz Abouzaid. Indonesia: Sunyata: The Poetics of Emptiness. Curated by Ary Indrajanto, David Hutama, Adwitya Dimas Satria, Ardy Hartono, Jonathan Aditya Gahari, Johanes Adika Gahari. Italy: Arcipelago Italia. Curated by Mario_Cucinella. Korea (Republic of): Spectres of the State Avant-garde. Curated by Seongtae Park. Lebanon: The Place That Remains. Curated by Hala Younes. Lithuania: The Swamp School. Curated by Nomeda & Gediminas Urbonas. Mexico: Echoes of a Land. Curated by Gabriela Etchegaray. Romania: Mnemonics. Curated by Romeo Cuc.   Awards: Golden Lion for Best National Participation: Switzerland, with Svizzera 240, House Tour. Commissioners: Swiss Arts Council Pro Helvetia: Marianne Burki, Sandi Paucic, Rachele Giudici Legittimo. Curators & Exhibitors: Alessandro Bosshard, Li Tavor, Matthew van der Ploeg, Ani Vihervaara Special Mention for Best National Participation: Great Britain, with Island. Commissioner: Sarah Mann; Architecture Design Fashion British Council. Curators: Caruso St John Architects, Marcus Taylor Golden Lion for the best participant: Souto Moura - Arquitectos; Eduardo Souto de Moura (Porto, Portugal). Silver Lion for a promising young participant: Architecten de vylder vinck taillieu. Jan de Vylder, Inge Vinck, Jo Taillieu (Ghent, Belgium). Special Mentions: Andramatin; Andra Matin (Jakarta, Indonesia) and RMA Architects; Rahul Mehrotra (Mumbai, India; Boston, USA) Golden Lion for Lifetime Achievement: Kenneth Frampton (Great Britain) 2016 The 15th International Architecture Exhibition, entitled Reporting from the Front was directed by Alejandro Aravena 28 May – 27 November. In his curation of the exhibition, Aravena foregrounded social housing, incremental housing, rural-urban relationships, the balance between technology and natural materials, and an attentiveness to manual labor and handicraft. Aravena invited, among others, Raphael Zuber, Herzog & de Meuron, Tadao Ando, Peter Zumthor, David Chipperfield, SANAA and Francis Kéré. National pavilions, contributions and curators (selection) Italy: Taking Care. Curated by Studio Tamassociati Scotland: Prospect North. Curated by Lateral North, Dualchas Architects and Soluis and Architecture and Design Scotland Serbia: HEROIC: Free Shipping. Exhibitors: Stefan Vasic, Ana Šulkic and Igor Sjeverac. Switzerland: Incidental Space. Curated by Sandra Oehy. Exhibitor Christian Kerez. Awards: Golden Lion for Lifetime Achievement: Paulo Mendes da Rocha. Golden Lion for Best National Pavilion: Spain with Unfinished, curated by Iñaqui Carnicero and Carlos Quintáns. Golden Lion for Best Participation in the International Exhibition: Breaking The Siege, curated by Gabinete de Arquitectura (Solano Benitez and Gloria Cabral). 2014 The 14th International Architecture Exhibition: Fundamentals. Directed by Rem Koolhaas. 7 June – 23 November 2014. 
National pavilions, contributions and curators (selection) Scotland: Critical Dialogues. Curated by Jonathan Charley, Judith Winter, Lottie Gerrard and Architecture and Design Scotland Awards: Golden Lion for Lifetime Achievement: Phyllis Lambert. Golden Lion for Best National Participation: Korea, with "Crow's Eye View" curated by Minsuk Cho together with Hyungmin Pai and Changmo Ahn. Silver Lion for Best Research Project of the Monditalia section: Andrés Jaque and his Office for Political Innovation, with the project "Sales Oddity. Milano 2 and the Politics of Home to Home TV Urbanisms". Silver Lion for Best National Participation: Chile, with "Monolith Controversies" curated by Pedro Alonso and Hugo Palmarola. Special Mentions to National Participations: Canada, France, Russia Special Mentions to research projects of the Monditalia section: "Radical Pedagogies: ACTION-REACTION-INTERACTION" by Beatriz Colomina, Britt Eversole, Ignacio G. Galán, Evangelos Kotsioris, Anna-Maria Meister, Federica Vannucchi, Amunátegui Valdés Architects, Smog.tv; "Intermundia" by Ana Dana Beroš; "Italian Limes" by Folder 2012 The 13th International Architecture Exhibition: Common Ground. Directed by David Chipperfield. 29 August – 25 November 2012. National pavilions, contributions and curators (selection) Scotland: Past + Future. Curated by Rem Koolhaas and Architecture and Design Scotland Awards: Golden Lion for lifetime achievement: Alvaro Siza. Golden Lion for Best National Participation: Japan, "Architecture, possible here? Home-for-All" curated by Toyo Ito, with the participation of Kumiko Inui, Sou Fujimoto, Akihisa Hirata and Naoya Hatakeyama. Golden Lion for Best Project of the Common Ground Exhibition: Urban-Think Tank (Alfredo Brillembourg, Hubert Klumpner), Justin McGuirk and Iwan Baan Silver Lion for a promising practice of the Common Ground Exhibition: Grafton Architects (Yvonne Farrell and Shelley McNamara) Special Mentions: Poland, commissioner Hanna Wróblewska; United States of America, commissioner Cathy Lang Ho; Russia, commissioner Grigory Revzin; Cino Zucchi. 2010 The 12th International Architecture Exhibition: People meet in architecture. Directed by Kazuyo Sejima. 29 August – 21 November 2010. Awards: Golden Lion for Lifetime Achievement: Rem Koolhaas Golden Lion for the Best National Participation: Kingdom of Bahrain Golden Lion for the Best Project in the People meet in architecture exhibition: junya.ishigami+associates Golden Lion in memoriam: Kazuo Shinohara Silver Lion for a promising young participant in the People meet in architecture exhibition: OFFICE Kersten Geers David Van Severen + Bas Princen Special Mentions: Amateur Architecture Studio, Studio Mumbai, Piet Oudolf 2008 The 11th International Architecture Exhibition: Out There: Architecture Beyond Building. Directed by Aaron Betsky. 14 September – 23 November 2008. National pavilions, contributions and curators (selection) Germany: Updating Germany - 100 Projects for a Better Future. Curated by Friedrich von Borries and Matthias Böttger United States: Into the Open: Positioning Practice: 16 architectural groups focus on the increasing interest in civic engagement in American architectural practice, and examines the means by which a new generation is reclaiming a role in shaping community and the built environment. Curated by William Menking, Aaron Levy, and Andrew Sturm. Awards: Golden Lion for Lifetime Achievement: Frank Gehry Golden Lion for Best National Participant: Poland ("Hotel Polonia. The Afterlife of Buildings"). 
A project by Nicolas Grospierre and Kobas Laksa Golden Lion for the Best Installation Project in the International Exhibition: Greg Lynn Form ("Recycled Toys Forniture") Special Golden Lion for lifetime achievement to a historian of Architecture: James S. Ackerman Silver Lion for a Promising Young Architect in the International Exhibition: Chilean Group Elemental 2006 The 10th International Architecture Exhibition: Cities, architecture and society. Directed by Ricky Burdett. 10 September – 19 November 2006. The collateral section City-Port was held in Palermo until January 14, 2007. The exhibition attracted over 130,000 visitors. Awards: Golden Lion for Lifetime Achievement: Richard Rogers Golden Lion for Best National Participation: Denmark for "CO-EVOLUTION, Danish/Chinese collaboration on sustainable urban development in China". Curated by Henrik Valeur and UiD. Projects by Danish architectural offices and Chinese universities CEBRA + Tsinghua, COBE + CQU, Effekt + Tongji and Transform + XAUAT Golden Lion for the City: Bogotá, Colombia Golden Lion for Best Urban Projects: Javier Sanchez/ Higuera + Sanchez for the housing project "Brazil 44" in Mexico City Special prize for Best Architecture School: Facoltà di Architettura Politecnico di Torino for a project for Mumbai Mentions for three significant national exhibitions: Japan, Iceland and Macedonia Seven Stone Lions, Città di Pietra -Sensi Contemporanei section: Bari, group leader arch. Adolfo Natalini; Crotone, arch. Carlo Moccia; Pantelleria, group leader arch. Gabriella Giuntoli; Bari, group leader arch. Guido Canella; Bari, group leader arch. Antonio Riondino; Bari, group leader arch. Vitangelo Ardito; Pantelleria, group leader arch. Marino Narpozzi Prize for Architecture Portus, Città – Porto - Sensi Contemporanei section: "Il parco della Blanda" of Region Basilicata. Area: Maratea, Piana di Castrocucco (Potenza). Project by: Gustavo Matassa, with Vincenzo De Biase, Silvia Marano, Rosa Nave Premio Manfredo Tafuri, appointed by the Padiglione Italia: Vittorio Gregotti Giancarlo De Carlo prize, appointed by the Padiglione Italia: Andrea Stipa Ernesto Nathan Rogers prize, appointed by the Padiglione Italia: Luca Molinari National pavilions, contributions and curators (selection) United States Pavilion: After the Flood: Building on Higher Ground: Architectural responses to the August 2005 devastation in New Orleans and the Gulf Coast wrought by Hurricane Katrina. Curated by Christian Ditlev Bruun. Projects by Anderson+Anderson Architects and Eight Inc. were among the included projects. Photography for the exhibition by Michael Goodman. Graphic Design by Paula Kelly Design NYC. The exhibition traveled to Bangkok (2007), Panama City (2007), and Los Angeles (2008). The exhibition also marked the beginning of the international symposium series Sustainable Dialogues, which connected architects, city planners, and environmentalist from Southeast Asia, Central and South America with American architects in each region to exchange ideas and knowledge and propose solutions to issues of ecological disasters, global climate change, and sustainable architectural strategies. Collaborators included Global Green and Make it Right (founded by Brad Pitt). 2004 The 9th International Architecture Exhibition: METAMORPH. Directed by Kurt W. Forster. 12 September – 7 November 2004. The exhibition attracted over 115,000 visitors. 
Awards: Golden Lion for Lifetime Achievement: Peter Eisenman Golden Lion for best installation presented by a country: Belgium Pavilion ("Kinshasa, the Imaginary City") Golden Lion for most significant work of the Metamorph exhibition: Studio SANAA by Kazuyo Sejima + Ryue Nishizawa for the project for the Museum of 21st century for contemporary art (Kanazawa, Japan) and for the enlargement of the Istituto Valenciano de Arte Moderna (Valencia, Spain) Special Prize for best work in the Concert Halls section: Studio Plot of Julien De Smedt and Bjarke Ingels for the Concert House project (Stavanger, Norvegia) Special Prize for best work in the Episodes section: German photographer Armin Linke and Italian architect Piero Zanini for the Alpi installation Special Prize for best work in the Transformations section: Austrian architect Günther Domenig for the Documentation Centre at the Party Rally Grounds of Norimberg, Germany Special Prize for best work in the Topography section: Studio Foreign Office Architects Ltd for the Novartis Car Park (Basilea, Switzerland) Special Prize for best work in the Surface section: Japanese architect Shuhei Endo for the Springtecture project (Singu-cho, Hyogo, Japan) Special Prize for best work in the Atmosphere section: Australian studio PTW Architects pty Ltd and Chinese partner studio CSCEC + Design for the National Swimming Center project (Pechino Olympic Green, China) Special Prize for best work in the Hyper-Project section: Martinez Lapeña- Torres Arquitectos for the Esplanada Fòrum (Barcelona, Spain) Special Prize for best work in the Morphing Lights, Floating Shadows sections (photography): images of Mars shot by NASA in cooperation with JPL and Cornell University National pavilions, contributions and curators (selection) United States Pavilion: Transcending Type. Featuring six U.S. architecture firms in the vanguard of contemporary design. Each explore new forms and uses for iconic modern building types. Commissioner: Architectural Record. Curated by Christian Ditlev Bruun. Projects exhibited: George Yu Architects, Los Angeles: SHOPPING CENTER. *Kolatan/MacDonald Studio, New York: RESIDENTIAL HIGH RISE (Resi-Rise) *Studio/Gang/Architects, Chicago: URBAN SPORTS ARENA. *Lewis.Tsurumaki.Lewis, New York: PARKING GARAGE. *Predock_Frane, Los Angeles: SPIRITUAL SPACE. *Reiser + Umemoto, New York: HIGHWAY INTERCHANGE 2002 The 8th International Architecture Exhibition: NEXT. Directed by Deyan Sudjic. 8 September – 3 November 2002. The exhibition attracted over 100,000 visitors. Awards: Golden Lion for Lifetime Achievement: Toyo Ito Golden Lion for best project of the International Exhibition: Iberê Camargo Foundation di Porto Alegre (Brazil) designed by Alvaro Siza Vieira Special prize for best National Participant: Dutch Pavilion Special prize for best architectural works patron: Zhang Xin Special prize for best governative sponsorship : Barcelona Special mention to Next Mexico City: The Lakes Project National pavilions, contributions and curators (selection) United States Pavilion *World Trade Center. Two Perspectives: The Aftermath & Before. Photographs by Joel Meyerowitz. *A New World Trade Center Design Proposals. The Max Protetch Gallery. Commissioner: Robert Ivy, Chief Executive Officer of the American Institute of Architects (AIA) 2000 The 7th International Architecture Exhibition: Less Aesthetics, More Ethics. Directed by Massimiliano Fuksas. 18 June – 29 October 2000. Awards: Golden Lion for Lifetime Achievement: Renzo Piano, Paolo Soleri and Jørn Utzon. 
Golden Lion for best interpretation of the exhibition: Jean Nouvel Special prize for best National Participant: Spain Special "Bruno Zevi" prize for best architecture professor: Joseph Rykwert Special prize for best architectural works patron: Thomas Krens Special prize for best architecture editor: Eduardo Luis Rodriguez, editor of Arquitectura Cuba Special prize for best architecture photographer: Ilya Utkin National pavilions, contributions and curators (selection) United States Pavilion: ARCHitecture LABoratories with Columbia University and UCLA. Greg Lynn and Hani Rashid, respectively, transformed the U.S. Pavilion into a research laboratory designed to investigate, produce, and present a broad scope of new architectural schemes. A central theme of the studio program was new technology and its application to contemporary housing and other building archetypes. Organized by: The Solomon R. Guggenheim Foundation. Commissioner: Max Hollein 1996 The 6th International Architecture Exhibition: Sensing the Future—The Architect as Seismograph. Directed by Hans Hollein. Awards: Golden Lion for best National Participant: Japan Golden Lion for best interpretation of exhibition: Odile Decq-Benoît Cornette, Juha Kaakko, Ilkka Laine, Kimmo Liimatainen, Jari Tirkkonen, Enric Miralles Moya Special Osella for an extraordinary initiative in contemporary architecture: Pascal Maragall, Mayor of Barcelona Special Osella for media exposure in the field of contemporary architecture: Wim Wenders Special Osella for best architecture photographer: Gabriele Basilico National pavilions, contributions and curators (selection) United States Pavilion: Building a Dream: The Art of Disney Architecture. The Walt Disney Company has inspired and commissioned the work of many of the leading architects of our day for its hotels, productions, facilities, office buildings, sports facilities and housing developments. Organized by: Disney Imagineering, and The Solomon R Guggenheim Foundation, New York. Commissioner: Thomas Krens 1991 The 5th International Architecture Exhibition. Directed by Francesco Dal Co. 8 September – 6 October 1991. Awards: Winner of the International contest for the new Palazzo del Cinema 1990: Rafael Moneo Winner of the International contest "Una Porta per Venezia" for the restoration of Piazzale Roma: Jeremy Dixon and Edward Jones National pavilions, contributions and curators (selection) United States Pavilion: "Peter Eisenman and Frank Gehry". The similarities and differences between the work of architects Peter Eisenman and Frank Gehry. Organized by: The Solomon R Guggenheim Foundation, New York. Commissioner: Philip Johnson 1986 The 4th International Architecture Exhibition: Hendrik Petrus Berlage—Drawings. Directed by Aldo Rossi. 18 July – 28 September 1986. Villa Farsetti, Santa Maria di Sala. 1985 The 3rd International Architecture Exhibition: Progetto Venezia (international competition). Directed by Aldo Rossi. 20 July – 29 September 1985. 
Awards: Stone Lion: Robert Venturi, Manuel Pascal Schupp, COPRAT, Franco Purini (Accademia Bridge) Stone Lion: Raimund Abraham, Raimund Fein, Peter Nigst, Giangiacomo D’Ardia (Ca’ Venier dei Leoni) Stone Lion: Alberto Ferlenga (Piazza di Este) Stone Lion: Daniel Libeskind & Cranbrook Graduate Students, Three Lessons in Architecture (Piazza di Palmanova) Stone Lion: Laura Foster Nicholson (Villa Farsetti at Santa Maria di Sala) Stone Lion: Maria Grazia Sironi and Peter Eisenman (Castelli di Romeo and Juliet at Montecchio Maggiore) 1981–82 The 2nd International Architecture Exhibition: Architecture in Islamic Countries. Directed by Paolo Portoghesi. 20 November 1981 – 6 January 1982. 1980 The 1st International Architecture Exhibition: The presence of the Past. Directed by Paolo Portoghesi. 27 July – 20 October 1980. Included the Strada Novissima exhibition at the Corderie dell'Arsenale, and exhibitions on Antonio Basile, the architect; The Banal Object. An Exhibition of Critics. An Exhibition of Young Architects. Homage to Gardella, Ridolfi and Johnson. 1979 Theatre of the World. The Dogana at the end of the Zattere, created by Aldo Rossi for the Architecture and Theatre Sections of the Biennale in occasion of the exhibition Venice and the Stage (winter 1979–80). 1978 Utopia and the Crisis of Anti-Nature. Architectural Intentions in Italy. Magazzini del Sale, Zattere. Director: Vittorio Gregotti. 1976 Werkbund 1907. The Origins of Design; Rationalism and Architecture in Italy during the fascist period; Europe-America, old city centre, suburbia; Ettore Sottsass, an Italian designer. Ca' Pesaro, San Lorenzo, Magazzini del Sale, Cini Foundation. Director. Vittorio Gregotti. 1975 On the subject of the Stucky Mill. Magazzini del Sale at the Zattere. Curated by the Visual Arts and Architecture Section of the Biennale, directed by Vittorio Gregotti. See also List of architecture prizes Asplund Pavilion References Further reading External links Venice Biennale website Satellite map of national pavilions Culture in Venice Exhibitions in Italy Architecture festivals Festivals established in 1980 Arts festivals in Italy Festivals established in 1968 Architecture
Venice Biennale of Architecture
[ "Engineering" ]
4,853
[ "Architecture festivals", "Architecture" ]
2,975,225
https://en.wikipedia.org/wiki/Nilpotent%20cone
In mathematics, the nilpotent cone N of a finite-dimensional semisimple Lie algebra g is the set of elements that act nilpotently in all representations of g. In other words, N = {a ∈ g : ρ(a) is nilpotent for every representation ρ of g}. The nilpotent cone is an irreducible subvariety of g (considered as a vector space). Example The nilpotent cone of sl2, the Lie algebra of 2×2 matrices with vanishing trace, is the variety of all 2×2 traceless matrices with rank less than or equal to one. References Lie algebras
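For the sl2 example, membership in the nilpotent cone can be tested directly: a traceless 2×2 matrix has rank at most one exactly when it squares to zero, by the Cayley–Hamilton theorem. A small sketch, assuming NumPy and using the standard basis e, f, h of sl2:

import numpy as np

def in_nilpotent_cone(A, tol=1e-10):
    # For sl2: a traceless 2x2 matrix lies in the nilpotent cone iff rank(A) <= 1,
    # which (trace 0 and det 0) forces A @ A = 0 by Cayley-Hamilton.
    assert abs(np.trace(A)) < tol
    return np.linalg.matrix_rank(A, tol=tol) <= 1

e = np.array([[0.0, 1.0], [0.0, 0.0]])
f = np.array([[0.0, 0.0], [1.0, 0.0]])
h = np.array([[1.0, 0.0], [0.0, -1.0]])

for name, A in (("e", e), ("f", f), ("h", h)):
    print(name, in_nilpotent_cone(A), np.allclose(A @ A, 0))
# e and f are nilpotent (True True); h is semisimple (False False)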
Nilpotent cone
[ "Mathematics" ]
109
[ "Algebra stubs", "Algebra" ]
2,975,398
https://en.wikipedia.org/wiki/Glycogen%20storage%20disease%20type%20III
Glycogen storage disease type III (GSD III) is an autosomal recessive metabolic disorder and inborn error of metabolism (specifically of carbohydrates) characterized by a deficiency in glycogen debranching enzymes. It is also known as Cori's disease in honor of the 1947 Nobel laureates Carl Cori and Gerty Cori. Other names include Forbes disease in honor of clinician Gilbert Burnett Forbes (1915–2003), an American physician who further described the features of the disorder, or limit dextrinosis, due to the limit dextrin-like structures in cytosol. Limit dextrin is the remaining polymer produced after hydrolysis of glycogen. Without glycogen debranching enzymes to further convert these branched glycogen polymers to glucose, limit dextrin abnormally accumulates in the cytoplasm. Glycogen is a molecule the body uses to store carbohydrate energy. Symptoms of GSD-III are caused by a deficiency of the enzyme amylo-1,6 glucosidase, or debrancher enzyme. This causes excess amounts of an abnormal glycogen to be deposited in the liver, muscles and, in some cases, the heart. Signs and symptoms Glycogen storage disease type III presents during infancy with hypoglycemia and failure to thrive. Clinical examination usually reveals hepatomegaly. Muscular disease, including hypotonia and cardiomyopathy, usually occurs later. The liver pathology typically regresses as the individual enters adolescence, as does splenomegaly, should the individual develop it. Genetics In regard to genetics, glycogen storage disease type III is inherited in an autosomal recessive pattern (which means both parents need to be carriers), and occurs in about 1 of every 100,000 live births. The highest incidence of glycogen storage disease type III is in the Faroe Islands, where it occurs in 1 out of every 3,600 births, probably due to a founder effect. There seem to be two mutations in exon 3, c.17_18delAG being one of them, which are linked to the subtype IIIb. The amylo-alpha-1,6-glucosidase, 4-alpha-glucanotransferase gene, and mutations to it, are at the root of this condition. The gene is responsible for creating glycogen debranching enzyme, which in turn helps in glycogen decomposition. Diagnosis In terms of the diagnosis for glycogen storage disease type III, the following tests/exams are carried out to determine if the individual has the condition: Biopsy (muscle or liver) CBC Ultrasound DNA mutation analysis (helps ascertain GSD III subtype) Differential diagnosis The differential diagnosis of glycogen storage disease type III includes GSD I, GSD IX and GSD VI. This however does not mean other glycogen storage diseases should not be distinguished as well. Classification Clinical manifestations of glycogen storage disease type III are divided into four classes: GSD IIIa is the most common (along with GSD IIIb) and clinically includes muscle and liver involvement GSD IIIb, which clinically has liver involvement but no muscle involvement GSD IIIc, which clinically affects liver and muscle GSD IIId, which affects the liver only (not muscle) Treatment Treatment for glycogen storage disease type III may involve a high-protein diet, in order to facilitate gluconeogenesis. Additionally, the individual may need: IV glucose (if oral route is inadvisable) Nutritional specialist Vitamin D (for osteoporosis/secondary complication) Hepatic transplant (if complication occurs) References Further reading External links Autosomal recessive disorders Hepatology Inborn errors of carbohydrate metabolism
Glycogen storage disease type III
[ "Chemistry" ]
816
[ "Inborn errors of carbohydrate metabolism", "Carbohydrate metabolism" ]
2,975,468
https://en.wikipedia.org/wiki/Glycogen%20storage%20disease%20type%200
Glycogen storage disease type 0 is a disease characterized by a deficiency in the glycogen synthase enzyme (GSY). Although glycogen synthase deficiency does not result in storage of extra glycogen in the liver, it is often classified as a glycogen storage disease because it is another defect of glycogen storage and can cause similar problems. There are two isoforms (types) of glycogen synthase enzyme; GSY1 in muscle and GSY2 in liver, each with a corresponding form of the disease. Mutations in the liver isoform (GSY2), causes fasting hypoglycemia, high blood ketones, increased free fatty acids and low levels of alanine and lactate. Conversely, feeding in these patients results in hyperglycemia and hyperlactatemia. Signs and symptoms The most common clinical history in patients with glycogen-storage disease type 0 (GSD-0) is that of an infant or child with symptomatic hypoglycemia or seizures that occur before breakfast or after an inadvertent fast. In affected infants, this event typically begins after they outgrow their nighttime feeds. In children, this event may occur during acute GI illness or periods of poor enteral intake. Mild hypoglycemic episodes may be clinically unrecognized, or they may cause symptoms such as drowsiness, sweating, lack of attention, or pallor. Uncoordinated eye movements, disorientation, seizures, and coma may accompany severe episodes. Glycogen-storage disease type 0 affects only the liver. Growth delay may be evident with height and weight percentiles below average. Abdominal examination findings may be normal or reveal only mild hepatomegaly. Signs of acute hypoglycemia may be present, including the following: Causes Glycogen-storage disease type 0 is caused by genetic defects in the gene that codes for liver glycogen synthetase (GYS2), which is located on chromosome band 12p12.2. Glycogen synthetase catalyzes the rate-limiting reaction for glycogen synthesis in the liver by transferring glucose units from uridine 5'-diphosphate (UDP)-glucose to a glycogen primer. Its action is highly regulated by a mechanism of phosphorylation and dephosphorylation and modulated by counter-regulatory hormones including insulin, epinephrine, and glucagon. Mutations in the gene for liver glycogen synthetase (GYS2, 138571) result in decreased or absent activity of liver glycogen synthetase and moderately decreased amounts of structurally normal glycogen in the liver. Mutational studies of patients with glycogen-storage disease type 0 do not demonstrate correlations between genotype and phenotype. [3] A different gene (GYS1, 138570) encodes muscle glycogen synthetase, which has normal activity in patients with glycogen-storage disease type 0A. Pathophysiology In the early stages of fasting, the liver provides a steady source of glucose from glycogen breakdown (or glycogenolysis). With prolonged fasting, glucose is generated in the liver from noncarbohydrate precursors through gluconeogenesis. Such precursors include alanine (derived from the breakdown of proteins in skeletal muscle) and glycerol (derived from the breakdown of triacylglycerols in fat cells). In patients with glycogen-storage disease type 0, fasting hypoglycemia occurs within a few hours after a meal because of the limited stores of hepatic glycogen and inadequate gluconeogenesis to maintain normoglycemia. 
Feeding characteristically results in postprandial hyperglycemia and glucosuria, in addition to increased blood lactate levels, because glycogen synthesis is limited, and excess glucose is preferentially converted to lactate by means of the glycolytic pathway. Diagnostic Important clinical criteria to consider in the evaluation of a child with hypoglycemia and suspected glycogen-storage disease type 0 (GSD-0) include (1) the presence or absence of hepatomegaly; (2) the characteristic schedule of hypoglycemia, including unpredictable, postprandial, short fast, long fast, or precipitating factors; (3) the presence or absence of lactic acidosis; (4) any associated hyperketosis or hypoketosis; and (5) any associated liver failure or cirrhosis. The differential diagnosis also includes ketotic hypoglycemia. Patients with ketotic hypoglycemia have a normal response to glucagon in the fed state. Patients with glycogen-storage disease type 0 have normal-to-increased response to glucagon in the fed state, with hyperglycemia and lactic acidemia. Laboratory Studies Serum glucose levels are measured to document the degree of hypoglycemia. Serum electrolytes calculate the anion gap to determine presence of metabolic acidosis; typically, patients with glycogen-storage disease type 0 (GSD-0) have an anion gap in the reference range and no acidosis. See the Anion Gap calculator. Serum lipids (including triglyceride and total cholesterol) may be measured. In patients with glycogen-storage disease type 0, hyperlipidemia is absent or mild and proportional to the degree of fasting. Urine (first voided specimen with dipstick test for ketones and reducing substances) may be analyzed. In patients with glycogen-storage disease type 0, urine ketones findings are positive, and urine-reducing substance findings are negative. However, urine-reducing substance findings are positive (fructosuria) in those with fructose 1-phosphate aldolase deficiency (fructose intolerance). Serum lactate is in reference ranges in fasting patients with glycogen-storage disease type 0. Liver function studies provide evidence of mild hepatocellular damage in patients with mild elevations of aspartate aminotransferase (AST) and alanine aminotransferase (ALT) levels.Plasma amino-acid analysis shows plasma alanine levels as in reference ranges during a fast. Imaging Studies Skeletal radiography may reveal osteopenia. Other Tests Evaluation of a patient with suspected glycogen-storage disease type 0 requires monitored assessment of fasting adaptation in an inpatient setting. Patients typically have hypoglycemia and ketosis, with lactate and alanine levels in the low or normal part of the reference range approximately 5–7 hours after fasting. A glucagon tolerance test may be needed if the fast fails to elicit the expected rise in plasma glucose. Lactate and alanine levels are in the reference range. By contrast, a glucagon challenge test after a meal causes hyperglycemia, with increased levels of plasma lactate and alanine. Oral loading of glucose, galactose, or fructose results in a marked rise in blood lactate levels. Procedures Liver biopsy for microscopic analysis and enzyme assay is required for definitive diagnosis. Diagnosis may include linkage analysis in families with affected members and sequencing of the entire coding region of the GSY2 gene for mutations. Histologic Findings Histologic analysis of liver tissue demonstrates moderately decreased amounts of periodic acid-Schiff (PAS)–positive, diastase-sensitive glycogen stores. 
Evidence of increased fat accumulation in the liver may be observed, as in other glycogen-storage diseases. Electron microscopic analysis of liver sections shows normal glycogen structure. Muscle glycogen stores are normal. Differential Diagnoses Acute Hypoglycemia Fructose 1-Phosphate Aldolase Deficiency (Hereditary fructose intolerance). Types There are two types of glycogen storage disease type 0 to be considered, they are: Glycogen storage disease due to liver glycogen synthase deficiency Glycogen storage disease due to muscle and heart glycogen synthase deficiency Treatment The goal for treatment of Glycogen-storage disease type 0 is to avoid hypoglycemia. This is accomplished by avoiding fasting by eating every 1–2 hours during the day. At night, uncooked corn starch can be given because it is a complex glucose polymer. This will be acted on slowly by pancreatic amylase and glucose will be absorbed over a 6-hour period. Epidemiology The overall frequency of glycogen-storage disease is approximately 1 case per 20,000–25,000 people. Glycogen-storage disease type 0 is a rare form, representing less than 1% of all cases. The identification of asymptomatic and oligosymptomatic siblings in several glycogen-storage disease type 0 families has suggested that glycogen-storage disease type 0 is underdiagnosed. Mortality/Morbidity The major morbidity is a risk of fasting hypoglycemia, which can vary in severity and frequency. Major long-term concerns include growth delay, osteopenia, and neurologic damage resulting in developmental delay, intellectual deficits, and personality changes. Sex No sexual predilection is observed because the deficiency of glycogen synthetase activity is inherited as an autosomal recessive trait. Age Glycogen-storage disease type 0 is most commonly diagnosed during infancy and early childhood. References External links Inborn errors of carbohydrate metabolism Hepatology Gastroenterology
Glycogen storage disease type 0
[ "Chemistry" ]
2,088
[ "Inborn errors of carbohydrate metabolism", "Carbohydrate metabolism" ]
2,975,553
https://en.wikipedia.org/wiki/Superradiance
In physics, superradiance is the radiation enhancement effects in several contexts including quantum mechanics, astrophysics and relativity. Quantum optics In quantum optics, superradiance is a phenomenon that occurs when a group of N emitters, such as excited atoms, interact with a common light field. If the wavelength of the light is much greater than the separation of the emitters, then the emitters interact with the light in a collective and coherent fashion. This causes the group to emit light as a high-intensity pulse (with rate proportional to N2). This is a surprising result, drastically different from the expected exponential decay (with rate proportional to N) of a group of independent atoms (see spontaneous emission). Superradiance has since been demonstrated in a wide variety of physical and chemical systems, such as quantum dot arrays and J-aggregates. This effect has been used to produce a superradiant laser. Rotational superradiance Rotational superradiance is associated with the acceleration or motion of a nearby body (which supplies the energy and momentum for the effect). It is also sometimes described as the consequence of an "effective" field differential around the body (e.g. the effect of tidal forces). This allows a body with a concentration of angular or linear momentum to move towards a lower energy state, even when there is no obvious classical mechanism for this to happen. In this sense, the effect has some similarities with quantum tunnelling (e.g. the tendency of waves and particles to "find a way" to exploit the existence of an energy potential, despite the absence of an obvious classical mechanism for this to happen). In classical physics, the motion or rotation of a body in a particulate medium will normally be expected to result in momentum and energy being transferred to the surrounding particles, and there is then an increased statistical likelihood of particles being discovered following trajectories that imply removal of momentum from the body. In quantum mechanics, this principle is extended to the case of bodies moving, accelerating or rotating in a vacuum – in the quantum case, quantum fluctuations with appropriate vectors are said to be stretched and distorted and provided with energy and momentum by the nearby body's motion, with this selective amplification generating real physical radiation around the body. Where a classical description of a rotating isolated weightless sphere in a vacuum will tend to say that the sphere will continue to rotate indefinitely, due to the lack of frictional effects or any other form of obvious coupling with its smooth empty environment, under quantum mechanics the surrounding region of vacuum is not entirely smooth, and the sphere's field can couple with quantum fluctuations and accelerate them to produce real radiation. Hypothetical virtual wavefronts with appropriate paths around the body are stimulated and amplified into real physical wavefronts by the coupling process. Descriptions sometimes refer to these fluctuations "tickling" the field to produce the effect. In theoretical studies of black holes, the effect is also sometimes described as the consequence of the gravitational tidal forces around a strongly gravitating body pulling apart virtual particle pairs that would otherwise quickly mutually annihilate, to produce a population of real particles in the region outside the horizon. 
The black hole bomb is an exponentially growing instability in the interaction between a massive bosonic field and a rotating black hole. Astrophysics and relativity In astrophysics, a potential example of superradiance is Zeldovich radiation. It was Yakov Zeldovich who first described this effect in 1971; Igor Novikov at the University of Moscow further developed the theory. Zeldovich picked the case under quantum electrodynamics (QED) where the region around the equator of a spinning metal sphere is expected to throw off electromagnetic radiation tangentially, and suggested that the case of a spinning gravitational mass, such as a Kerr black hole, ought to produce similar coupling effects, and ought to radiate in an analogous way. This was followed by arguments from Stephen Hawking and others that an accelerated observer near a black hole (e.g. an observer carefully lowered towards the horizon at the end of a rope) ought to see the region inhabited by "real" radiation, whereas for a distant observer this radiation would be said to be "virtual". If the accelerated observer near the event horizon traps a nearby particle and throws it out to the distant observer for capture and study, then for the distant observer, the appearance of the particle can be explained by saying that the physical acceleration of the particle has turned it from a virtual particle into a "real" particle (see Hawking radiation). Similar arguments apply for the cases of observers in accelerated frames (Unruh radiation). Cherenkov radiation, electromagnetic radiation emitted by charged particles travelling through a particulate medium at more than the nominal speed of light in that medium, has also been described as "inertial motion superradiance". Additional examples of superradiance in astrophysical environments include the study of radiation flares in maser-hosting regions and fast radio bursts. Evidence of superradiance in these settings suggests the existence of intense emissions from entangled quantum mechanical states, involving a very large number of molecules, ubiquitously present across the universe and spanning large distances (e.g. from a few kilometres in the interstellar medium to possibly over several billion kilometres). Instruments Instruments that use superradiant emission: Free Electron Laser (FEL) Far Infrared (FIR) Laser An undulator allows superradiant emission to be obtained. See also Quantum optics Spontaneous emission Superradiant phase transition Dicke model Hawking radiation Unruh effect Cherenkov radiation Black hole bomb Superradiance in semiconductor optics Dicke state References Special relativity Quantum optics
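The N² scaling described in the quantum-optics section can be illustrated with a toy calculation. The sketch below (assuming NumPy and the idealized, fully symmetric Dicke model, in which the emission rate with m excitations remaining is γ·m·(N − m + 1)) compares the peak collective emission rate with the N·γ peak expected from independent atoms:

import numpy as np

def peak_rates(N, gamma=1.0):
    m = np.arange(1, N + 1)                         # excitations remaining
    dicke_peak = gamma * np.max(m * (N - m + 1))    # ~ gamma * N**2 / 4 near m ~ N/2
    independent_peak = gamma * N                    # uncorrelated atoms
    return dicke_peak, independent_peak

for N in (10, 100, 1000):
    d, i = peak_rates(N)
    print(f"N = {N:4d}: Dicke peak ~ {d:10.0f}, independent ~ {i:6.0f}, ratio ~ {d / i:.1f}")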
Superradiance
[ "Physics" ]
1,149
[ "Special relativity", "Quantum optics", "Quantum mechanics", "Theory of relativity" ]
2,975,616
https://en.wikipedia.org/wiki/Penrose%20process
The Penrose process (also called Penrose mechanism) is theorised by Sir Roger Penrose as a means whereby energy can be extracted from a rotating black hole. The process takes advantage of the ergosphere – a region of spacetime around the black hole dragged by its rotation faster than the speed of light, meaning that from the point of view of an outside observer any matter inside is forced to move in the direction of the rotation of the black hole. In the process, a working body falls (black thick line in the figure) into the ergosphere (gray region). At its lowest point (red dot) the body fires a propellant backwards; however, to a faraway observer both seem to continue to move forward due to frame-dragging (albeit at different speeds). The propellant, being slowed, falls (thin gray line) to the event horizon of the black hole (black disk). The remains of the body, being sped up, fly away (thin black line) with an excess of energy (that more than offsets the loss of the propellant and the energy used to shoot it). The maximum amount of energy gain possible for a single particle decay via the original (or classical) Penrose process is 20.7% of its mass in the case of an uncharged black hole (assuming the best case of maximal rotation of the black hole). The energy is taken from the rotation of the black hole, so there is a limit on how much energy one can extract by Penrose process and similar strategies (for an uncharged black hole no more than 29% of its original mass; larger efficiencies are possible for charged rotating black holes). Details of the ergosphere The outer surface of the ergosphere is the surface at which light that moves in the direction opposite to the rotation of the black hole remains at a fixed angular coordinate, according to an external observer. Since massive particles necessarily travel slower than light, massive particles will necessarily move along with the black hole's rotation. The inner boundary of the ergosphere is the event horizon, the spatial perimeter beyond which light cannot escape. Inside the ergosphere even light cannot keep up with the rotation of the black hole, as the trajectories of stationary (from the outside perspective) objects become space-like, rather than time-like (that normal matter would have), or light-like. Mathematically, the component of the metric changes its sign inside the ergosphere. That allows matter to have negative energy inside of the ergosphere as long as it moves counter the black hole's rotation fast enough (or, from outside perspective, resists being dragged along to a sufficient degree). Penrose mechanism exploits that by diving into the ergosphere, dumping an object that was given negative energy, and returning with more energy than before. In this way, rotational energy is extracted from the black hole, resulting in the black hole being spun down to a lower rotational speed. The maximum amount of energy (per mass of the thrown in object) is extracted if the black hole is rotating at the maximal rate, the object just grazes the event horizon and decays into forwards and backwards moving packets of light (the first escapes the black hole, the second falls inside). In an adjunct process, a black hole can be spun up (its rotational speed increased) by sending in particles that do not split up, but instead give their entire angular momentum to the black hole. However, this is not a reverse of the Penrose process, as both increase the entropy of the black hole by throwing material into it. 
See also High Life, a 2018 science-fiction film that includes a mission to harness the process References Further reading Black holes Energy sources Hypothetical technology
Penrose process
[ "Physics", "Astronomy" ]
778
[ "Black holes", "Physical phenomena", "Physical quantities", "Unsolved problems in physics", "Astrophysics", "Density", "Stellar phenomena", "Astronomical objects" ]
2,975,856
https://en.wikipedia.org/wiki/Przemys%C5%82aw%20Prusinkiewicz
Przemysław (Przemek) Prusinkiewicz is a Polish computer scientist who advanced the idea that Fibonacci numbers in nature can be in part understood as the expression of certain algebraic constraints on free groups, specifically as certain Lindenmayer grammars. Prusinkiewicz's main work is on the modeling of plant growth through such grammars. Early life and education Prusinkiewicz received his PhD from the Warsaw University of Technology in 1978. Career As of 2008 he was a professor of Computer Science at the University of Calgary. Awards Prusinkiewicz received the 1997 SIGGRAPH Computer Graphics Achievement Award for his work. Influences In 2006, Michael Hensel examined the work of Prusinkiewicz and his collaborators - the Calgary team - in an article published in Architectural Design. Hensel argued that the Calgary team's computational plant models or "virtual plants", which culminated in software they developed capable of modeling various plant characteristics, could provide important lessons for architectural design. Architects would learn from "the self-organisation processes underlying the growth of living organisms", and the Calgary team's work uncovered some of that potential. Their computational models allowed for a "quantitative understanding of developmental mechanisms" and had the potential to "lead to a synthetic understanding of the interplay between various aspects of development." Prusinkiewicz's work was informed by that of the Hungarian biologist Aristid Lindenmayer, who developed the theory of L-systems in 1968. Lindenmayer used L-systems to describe the behaviour of plant cells and to model growth processes, plant development and the branching architecture of plants. Publications References External links Biography of Przemysław Prusinkiewicz from the University of Calgary Laboratory website at the University of Calgary Warsaw University of Technology alumni Polish mathematicians Living people Computer graphics professionals Computer graphics researchers Fibonacci numbers Polish computer scientists Year of birth missing (living people) Academic staff of the University of Calgary
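As an illustration of the link between L-systems and Fibonacci numbers mentioned above, the following short sketch (our own example, not Prusinkiewicz's software) implements Lindenmayer's original "algae" grammar, whose word lengths grow as the Fibonacci sequence:

```python
# Lindenmayer's "algae" L-system: A -> AB, B -> A.
# The length of the string after n rewriting steps is a Fibonacci number.
RULES = {"A": "AB", "B": "A"}

def rewrite(word: str) -> str:
    """Apply the production rules to every symbol in parallel."""
    return "".join(RULES.get(symbol, symbol) for symbol in word)

word = "A"          # axiom
lengths = []
for _ in range(10):
    lengths.append(len(word))
    word = rewrite(word)

print(lengths)      # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```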
Przemysław Prusinkiewicz
[ "Mathematics" ]
401
[ "Fibonacci numbers", "Mathematical relations", "Golden ratio", "Recurrence relations" ]
2,976,061
https://en.wikipedia.org/wiki/Harnack%27s%20inequality
In mathematics, Harnack's inequality is an inequality relating the values of a positive harmonic function at two points, introduced by A. Harnack (1887). Harnack's inequality is used to prove Harnack's theorem about the convergence of sequences of harmonic functions. The inequality was later generalized to solutions of elliptic or parabolic partial differential equations. Such results can be used to show the interior regularity of weak solutions. Perelman's solution of the Poincaré conjecture uses a version of the Harnack inequality, found by Richard S. Hamilton, for the Ricci flow. The statement Harnack's inequality applies to a non-negative function f defined on a closed ball in Rn with radius R and centre x0. It states that, if f is continuous on the closed ball and harmonic on its interior, then for every point x with |x − x0| = r < R,
$$\frac{1-r/R}{(1+r/R)^{n-1}}\,f(x_0)\;\le\; f(x)\;\le\;\frac{1+r/R}{(1-r/R)^{n-1}}\,f(x_0).$$
In the plane R2 (n = 2) the inequality can be written:
$$\frac{R-r}{R+r}\,f(x_0)\;\le\; f(x)\;\le\;\frac{R+r}{R-r}\,f(x_0).$$
For general domains in $\mathbb{R}^n$ the inequality can be stated as follows: If $\Omega_1$ is a bounded domain with $\overline{\Omega_1}\subset\Omega$, then there is a constant $C$ such that
$$\sup_{\Omega_1} f \;\le\; C\,\inf_{\Omega_1} f$$
for every twice differentiable, harmonic and nonnegative function $f$ on $\Omega$. The constant $C$ is independent of $f$; it depends only on the domains $\Omega$ and $\Omega_1$. Proof of Harnack's inequality in a ball By Poisson's formula
$$f(x)=\frac{R^2-r^2}{R\,\omega_{n-1}}\int_{|y-x_0|=R}\frac{f(y)}{|x-y|^{n}}\,dS(y),$$
where ωn − 1 is the area of the unit sphere in Rn and r = |x − x0|. Since the kernel in the integrand satisfies
$$\frac{R-r}{R\,(R+r)^{n-1}}\;\le\;\frac{R^2-r^2}{R\,|x-y|^{n}}\;\le\;\frac{R+r}{R\,(R-r)^{n-1}},$$
Harnack's inequality follows by substituting this inequality in the above integral and using the fact that the average of a harmonic function over a sphere equals its value at the center of the sphere:
$$f(x_0)=\frac{1}{\omega_{n-1}R^{n-1}}\int_{|y-x_0|=R}f(y)\,dS(y).$$
Elliptic partial differential equations For elliptic partial differential equations, Harnack's inequality states that the supremum of a positive solution in some connected open region is bounded by some constant times the infimum, possibly with an added term containing a functional norm of the data:
$$\sup u \;\le\; C\,\bigl(\inf u + \lVert f\rVert\bigr),$$
where $f$ denotes the data (the inhomogeneous term of the equation) and the norm is taken in a suitable function space. The constant depends on the ellipticity of the equation and the connected open region. Parabolic partial differential equations There is a version of Harnack's inequality for linear parabolic PDEs such as the heat equation. Let $\mathcal{M}$ be a smooth (bounded) domain in $\mathbb{R}^n$ and consider the linear elliptic operator
$$\mathcal{L}u=\sum_{i,j=1}^{n} a_{ij}(t,x)\,\frac{\partial^2 u}{\partial x_i\,\partial x_j}+\sum_{i=1}^{n} b_{i}(t,x)\,\frac{\partial u}{\partial x_i}+c(t,x)\,u$$
with smooth and bounded coefficients and a positive definite matrix $(a_{ij})$. Suppose that $u(t,x)$ is a solution of
$$\frac{\partial u}{\partial t}-\mathcal{L}u=0 \quad\text{in } (0,T)\times\mathcal{M}$$
such that $u(t,x)\ge 0$ in $(0,T)\times\mathcal{M}$. Let $K$ be compactly contained in $\mathcal{M}$ and choose $\tau\in(0,T)$. Then there exists a constant C > 0 (depending only on K, $\tau$, $T$ and the coefficients of $\mathcal{L}$) such that, for each $t\in(\tau,T)$,
$$\sup_{K} u(t-\tau,\cdot)\;\le\; C\,\inf_{K} u(t,\cdot).$$
See also Harnack's theorem References Kassmann, Moritz (2007), "Harnack Inequalities: An Introduction" Boundary Value Problems 2007:081415, doi: 10.1155/2007/81415, MR 2291922 L. C. Evans (1998), Partial differential equations. American Mathematical Society, USA. For elliptic PDEs see Theorem 5, p. 334 and for parabolic PDEs see Theorem 10, p. 370. Harmonic functions Inequalities
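A quick numerical sanity check of the planar (n = 2) bound above, using an example of our own choosing (the positive harmonic function f(x, y) = 2 + x on the unit disk); this is only an illustration, not part of the proof:

```python
import math

# f is harmonic (it is linear) and positive on the closed unit disk.
def f(x, y):
    return 2.0 + x

R = 1.0
f0 = f(0.0, 0.0)          # value at the centre x0 = (0, 0)

for r in (0.3, 0.6, 0.9):
    lower = (R - r) / (R + r) * f0
    upper = (R + r) / (R - r) * f0
    values = [f(r * math.cos(t), r * math.sin(t))
              for t in (2 * math.pi * k / 720 for k in range(720))]
    assert lower <= min(values) and max(values) <= upper
    print(f"r = {r}: Harnack bounds [{lower:.3f}, {upper:.3f}], "
          f"actual range [{min(values):.3f}, {max(values):.3f}]")
```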
Harnack's inequality
[ "Mathematics" ]
621
[ "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems", "Mathematical theorems" ]
2,976,291
https://en.wikipedia.org/wiki/M1%20mortar
The M1 mortar is an American 81 millimeter caliber mortar. It was based on the French Brandt mortar. The M1 mortar was used from before World War II until the 1950s when it was replaced by the lighter and longer ranged M29 mortar. General data Weight: Tube 44.5 lb (20 kg) Mount 46.5 lb (21 kg) Base plate 45 lb (20 kg) Total Ammunition M43A1 light HE: 7.05 lb (3.20 kg); HE filling 1.22 lb (0.55 kg); range min 100 yd (91 m); range max 3300 yd (zone 7, 3018 m); 80% frag radius 25 yd (23 m) (compared favorably with the 75 mm howitzer). M52 superquick fuze (explode on surface). M43A1 light training: an empty version of the M43A1 light HE with an inert fuze. It was used as a training shell until it was replaced by the M68 training practice shell. M45 heavy HE: 15.10 lb (6.85 kg); HE filling 4.48 lb (2.03 kg); range max 1275 yd (zone 5, 1166 m); bursting radius comparable to the 105 mm howitzer. Equipped with M45 (super quick/delay action selective) or M53 (delay action only) P.D. fuze. M56 heavy HE: 10.77 lb (4.86 kg); HE filling 4.31 lb (1.96 kg); range max 2655 yd (zone 5, 2428 m), standard for issue and manufacture shell replacing M45. It used the M53 fuze back in 1944, but it was at some point replaced by the M77 Timed Super Quick (TSQ) fuze. M57 WP (white phosphorus) "bursting smoke": 10.74 lb (4.87 kg); range max 2470 yd (2260 m); designed to lay down screening smoke, but had definite anti-personnel and incendiary applications. M57 FS (a solution of sulfur trioxide in chlorosulfonic acid) chemical smoke: 10.74 lb (4.87 kg), range max 2470 yd (2260 m); laid down dense white fog consisting of small droplets of hydrochloric and sulfuric acids. In moderate concentrations, it is highly irritating to the eyes, nose, and skin. M68 training practice: 9.50 lb to 10.10 lb. An inert teardrop-shaped cast iron shell without provision for a fuze well that was used to simulate the M43 light HE shell. The casing on early models was painted black but post-World War 2 versions are painted blue. It came in 9 different weights (engraved on the shell) to allow it to simulate shell firing with and without booster charges. Weight zone one (9.5 lbs.) simulated a shell with the maximum of 8 booster charges and weight zone nine (10.10 lbs.) simulated the shell being fired without booster charges. M301 illuminating shell: range max 2200 yd (2012 m); attached to parachute; burned brightly (275,000 candelas) for about 60 seconds, illuminating an area of about 150 yards (137 m) diameter. It used the M84 time fuze, which was adjustable from 5 to 25 seconds before priming charge detonated, releasing the illuminator and chute. Fuzes The M1 mortar's shells sometimes used the same fuzes as the shells for the M2 60 mm mortar. An adapter collar was added to the smaller fuzes to allow them to fit the larger shells. M43 mechanical timing (MT) fuze: clockwork timed delay fuze. Models M43A5. M45 point detonating (PD) fuze: selective fuze that could be set for time delay or super-quick (less than a second) detonation on impact. Replaced by the M52 and M53 fuzes. M48 point detonating (PD) fuze: selective powder train burning fuze that can be set to super quick or delay ignition on impact. The factory pre-set delay time was stamped on the shell body. If the super-quick flash ignition failed, the delay fuse kicked in. If set on delay, the super-quick flash igniter mechanism was immobilized to prevent premature ignition. Models: M48, M48A1, M48A2 (either 0.05 or 0.15 second Delay), & M48A3 (0.05 second delay). 
M51 point detonating (PD) fuze: selective powder train burning fuze that can be set to super quick or delay ignition after impact. It is a modification of the M48 fuze with the addition of a booster charge. Models: M51A4, M51A5 (M48A3 Fuze with M21A4 booster). M52 point detonating super-quick (PDSQ) fuze: super-quick fuze that activates less than a second after impact. The pre-war M52 was made of aluminum, the M52B1 model was made of Bakelite, and the M52B2 model had a Bakelite body and an aluminum head; the suffix would be added to the shell designation. M53 point detonating delay (PDD) fuze: delay fuze that activates after impact. M54 time and super-quick (TSQ) fuze: powder train burning fuze that can be set for time delay (slow burn) or super-quick (flash ignition) detonation on impact. M77 time and super quick (TSQ) fuze: powder train burning fuze that can be set for time delay (slow burn) or super-quick (flash ignition) detonation on impact. M78 concrete penetrating (CP) fuze: delay fuze that was set off after the shell had impacted and buried itself to increase the damage done. M84 mechanical timing (MT) fuze: clockwork fuze that can be set from 0 to 25 seconds in 1-second intervals; seconds were indicated by vertical lines and 5-second intervals were indicated by metal bosses to allow it to be set in low-light or night-time conditions. M84A1 mechanical timing (MT) fuze: clockwork fuze that can be set from 0 to 50 seconds in 2-second intervals. Users It may be found in nearly all the non-Communist countries, including: : used on M21 mortar motor carriage : made under license :M-43 : The Armed Forces were equipped with 386 M1s before the Korean War, and 822 were in service with the Army by the end of the war. These began to be replaced by the M29A1 or KM29A1 in the 1970s. See also M2 Mortar List of U.S. Army weapons by supply catalog designation SNL A-33 M3 Half-track Weapons of comparable role, performance and era Ordnance ML 3 inch Mortar British equivalent 8 cm Granatwerfer 34 German equivalent References FM 23-90 TM 9-1260 SNL A-33 External links 90th Infantry Division Preservation Group - page on 81 mm mortars and equipment Popular Science, August 1943, Pill Boxes Destroyer article on M1 81mm mortar Infantry mortars World War II infantry weapons of the United States World War II mortars Mortars of the United States Chemical weapons of the United States Chemical weapon delivery systems 81mm mortars Military equipment introduced in the 1930s
M1 mortar
[ "Chemistry" ]
1,573
[ "Chemical weapon delivery systems", "Chemical weapons" ]
2,976,342
https://en.wikipedia.org/wiki/Equivariant%20cohomology
In mathematics, equivariant cohomology (or Borel cohomology) is a cohomology theory from algebraic topology which applies to topological spaces with a group action. It can be viewed as a common generalization of group cohomology and an ordinary cohomology theory. Specifically, the equivariant cohomology ring of a space $X$ with action of a topological group $G$ is defined as the ordinary cohomology ring with coefficient ring $\Lambda$ of the homotopy quotient $EG\times_G X$:
$$H_G^*(X;\Lambda)=H^*(EG\times_G X;\Lambda).$$
If $G$ is the trivial group, this is the ordinary cohomology ring of $X$, whereas if $X$ is contractible, it reduces to the cohomology ring of the classifying space $BG$ (that is, the group cohomology of $G$ when G is finite.) If G acts freely on X, then the canonical map $EG\times_G X\to X/G$ is a homotopy equivalence and so one gets:
$$H_G^*(X;\Lambda)=H^*(X/G;\Lambda).$$
Definitions It is also possible to define the equivariant cohomology of $X$ with coefficients in a $G$-module A; these are abelian groups. This construction is the analogue of cohomology with local coefficients. If X is a manifold, G a compact Lie group and $\Lambda$ is the field of real numbers or the field of complex numbers (the most typical situation), then the above cohomology may be computed using the so-called Cartan model (see equivariant differential forms.) The construction should not be confused with other cohomology theories, such as Bredon cohomology or the cohomology of invariant differential forms: if G is a compact Lie group, then, by the averaging argument, any form may be made invariant; thus, cohomology of invariant differential forms does not yield new information. Koszul duality is known to hold between equivariant cohomology and ordinary cohomology. Relation with groupoid cohomology For a Lie groupoid, the equivariant cohomology of a smooth manifold is a special example of the groupoid cohomology of a Lie groupoid. This is because given a $G$-space $X$ for a compact Lie group $G$, there is an associated groupoid (the action groupoid $G\times X \rightrightarrows X$) whose equivariant cohomology groups can be computed using the Cartan complex, which is the totalization of the de-Rham double complex of the groupoid. The terms in the Cartan complex are
$$\Omega_G^{n}(X)=\bigoplus_{2k+i=n}\bigl(\operatorname{Sym}^{k}(\mathfrak{g}^{\vee})\otimes\Omega^{i}(X)\bigr)^{G},$$
where $\operatorname{Sym}(\mathfrak{g}^{\vee})$ is the symmetric algebra of the dual Lie algebra $\mathfrak{g}^{\vee}$ from the Lie group $G$, and $(-)^{G}$ corresponds to the $G$-invariant forms. This is a particularly useful tool for computing the cohomology of the classifying space $BG$ for a compact Lie group $G$, since this can be computed as the cohomology of a point, where the action is trivial. Then
$$H^*(BG)\cong H_G^*(\mathrm{pt})\cong\operatorname{Sym}(\mathfrak{g}^{\vee})^{G}.$$
For example,
$$H_{S^1}^*(\mathrm{pt})\cong\mathbb{R}[t]\quad(\deg t = 2),$$
since the $S^1$-action on the dual Lie algebra is trivial. Homotopy quotient The homotopy quotient, also called homotopy orbit space or Borel construction, is a "homotopically correct" version of the orbit space (the quotient of $X$ by its $G$-action) in which $X$ is first replaced by a larger but homotopy equivalent space so that the action is guaranteed to be free. To this end, construct the universal bundle EG → BG for G and recall that EG admits a free G-action. Then the product EG × X —which is homotopy equivalent to X since EG is contractible—admits a "diagonal" G-action defined by (e,x).g = (eg,g−1x): moreover, this diagonal action is free since it is free on EG. So we define the homotopy quotient XG to be the orbit space (EG × X)/G of this free G-action. In other words, the homotopy quotient is the associated X-bundle over BG obtained from the action of G on a space X and the principal bundle EG → BG. This bundle X → XG → BG is called the Borel fibration. An example of a homotopy quotient The following example is Proposition 1 of . Let X be a complex projective algebraic curve.
We identify X as a topological space with the set of the complex points , which is a compact Riemann surface. Let G be a complex simply connected semisimple Lie group. Then any principal G-bundle on X is isomorphic to a trivial bundle, since the classifying space is 2-connected and X has real dimension 2. Fix some smooth G-bundle on X. Then any principal G-bundle on is isomorphic to . In other words, the set of all isomorphism classes of pairs consisting of a principal G-bundle on X and a complex-analytic structure on it can be identified with the set of complex-analytic structures on or equivalently the set of holomorphic connections on X (since connections are integrable for dimension reason). is an infinite-dimensional complex affine space and is therefore contractible. Let be the group of all automorphisms of (i.e., gauge group.) Then the homotopy quotient of by classifies complex-analytic (or equivalently algebraic) principal G-bundles on X; i.e., it is precisely the classifying space of the discrete group . One can define the moduli stack of principal bundles as the quotient stack and then the homotopy quotient is, by definition, the homotopy type of . Equivariant characteristic classes Let E be an equivariant vector bundle on a G-manifold M. It gives rise to a vector bundle on the homotopy quotient so that it pulls-back to the bundle over . An equivariant characteristic class of E is then an ordinary characteristic class of , which is an element of the completion of the cohomology ring . (In order to apply Chern–Weil theory, one uses a finite-dimensional approximation of EG.) Alternatively, one can first define an equivariant Chern class and then define other characteristic classes as invariant polynomials of Chern classes as in the ordinary case; for example, the equivariant Todd class of an equivariant line bundle is the Todd function evaluated at the equivariant first Chern class of the bundle. (An equivariant Todd class of a line bundle is a power series (not a polynomial as in the non-equivariant case) in the equivariant first Chern class; hence, it belongs to the completion of the equivariant cohomology ring.) In the non-equivariant case, the first Chern class can be viewed as a bijection between the set of all isomorphism classes of complex line bundles on a manifold M and In the equivariant case, this translates to: the equivariant first Chern gives a bijection between the set of all isomorphism classes of equivariant complex line bundles and . Localization theorem The localization theorem is one of the most powerful tools in equivariant cohomology. See also Equivariant differential form Kirwan map Localization formula for equivariant cohomology GKM variety Bredon cohomology Notes References Relation to stacks PDF page 10 has the main result with examples. Further reading External links — Excellent survey article describing the basics of the theory and the main important theorems What is the equivariant cohomology of a group acting on itself by conjugation? Algebraic topology Homotopy theory Symplectic topology Group actions (mathematics)
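As a concrete illustration of the free-action statement above (a standard example of our own choosing, not taken from this article): the circle group $S^1$ acts freely on the odd-dimensional sphere $S^{2n+1}$, and the Borel construction then reduces to the ordinary cohomology of the quotient,
$$H^{*}_{S^1}(S^{2n+1})\;\cong\;H^{*}(S^{2n+1}/S^{1})\;=\;H^{*}(\mathbb{CP}^{n}),$$
while for a point (trivial action) one obtains the full polynomial ring $H^{*}_{S^1}(\mathrm{pt})\cong H^{*}(\mathbb{CP}^{\infty})\cong\mathbb{Z}[u]$ with $u$ in degree 2.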
Equivariant cohomology
[ "Physics", "Mathematics" ]
1,577
[ "Group actions", "Algebraic topology", "Fields of abstract algebra", "Topology", "Symmetry" ]
2,976,531
https://en.wikipedia.org/wiki/Tholobate
A tholobate, drum or tambour is the upright part of a building on which a dome is raised. It is generally in the shape of a cylinder or a polygonal prism. The name derives from the tholos, the Greek term for a round building with a roof and a circular wall. Another architectural meaning of "drum" is a circular section of a column shaft. Examples In earlier Byzantine church architecture the dome rested directly on the pendentives and the windows were pierced in the dome itself; in later examples, between the pendentive and the dome an intervening circular wall was built in which the windows were pierced. This is the type which was universally employed by the architects of the Renaissance, of whose works the best-known example is St. Peter's Basilica at Rome. Other examples of churches of this type are St Paul's Cathedral in London and the churches of Les Invalides, the Val-de-Grâce, and the Sorbonne in Paris. There are also secular buildings with tholobates: the United States Capitol dome in Washington, D.C. is set on a drum, a feature imitated in numerous American state capitols. The Panthéon in Paris is another secular building featuring a dome on a drum. St Paul's Cathedral and the Panthéon were the two inspirations for the U.S. Capitol. In contrast, the dome of the Reichstag building in Berlin before its post-war restoration was a quadrilateral, so its tholobate was square and not round. Gallery See also Cupola - a smaller tholobate with a dome Roof lanterns are sometimes placed above a dome References Architectural elements
Tholobate
[ "Technology", "Engineering" ]
340
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
2,976,760
https://en.wikipedia.org/wiki/Baluster
A baluster is an upright support, often a vertical moulded shaft, square, or lathe-turned form found in stairways, parapets, and other architectural features. In furniture construction it is known as a spindle. Common materials used in its construction are wood, stone, and less frequently metal and ceramic. A group of balusters supporting a handrail, coping, or ornamental detail is known as a balustrade. The term baluster shaft is used to describe forms such as a candlestick, upright furniture support, and the stem of a brass chandelier. The term banister (also bannister) refers to a baluster or to the system of balusters and handrail of a stairway. It may be used to include its supporting structures, such as a supporting newel post. In the UK, there are different height requirements for domestic and commercial balustrades, as outlined in Approved Document K. Etymology According to the Oxford English Dictionary, "baluster" is derived through the French balustre, from the Italian balaustro, from balaustra, "pomegranate flower" [from a resemblance to the swelling form of the half-open flower], from Latin balaustrium, from Greek βαλαύστριον (balaustrion). History The earliest examples of balusters are those shown in the bas-reliefs representing the Assyrian palaces, where they were employed as functional window balustrades and apparently had Ionic capitals. As an architectural element alone the balustrade did not seem to have been known to either the Greeks or the Romans, but baluster forms are familiar in the legs of chairs and tables represented in Roman bas-reliefs, where the original legs or the models for cast bronze ones were shaped on the lathe, or in Antique marble candelabra, formed as a series of stacked bulbous and disc-shaped elements, both kinds of sources familiar to Quattrocento designers. The application to architecture was a feature of early Renaissance architecture: late fifteenth-century examples are found in the balconies of palaces at Venice and Verona. These quattrocento balustrades are likely to be following yet-unidentified Gothic precedents. They form balustrades of colonettes as an alternative to miniature arcading. Rudolf Wittkower withheld judgement as to the inventor of the baluster (H. Siebenhüner, in tracing the baluster's career, found its origin in the profile of the round base of Donatello's Judith and Holofernes, c. 1460; Siebenhüner, "Docke", in Reallexikon zur Deutschen Kunstgeschichte, vol. 4, 1988: 102-107) and credited Giuliano da Sangallo with using it consistently as early as the balustrade on the terrace and stairs at the Medici villa at Poggio a Caiano (c. 1480); Sangallo also used balustrades in his reconstructions of antique structures. Sangallo passed the motif to Bramante (his Tempietto, 1502) and Michelangelo, through whom balustrades gained wide currency in the 16th century. Wittkower distinguished two types, one symmetrical in profile that inverted one bulbous vase-shape over another, separating them with a cushionlike torus or a concave ring, and the other a simple vase shape, whose employment by Michelangelo at the Campidoglio steps (c. 1546), noted by Wittkower, was preceded by very early vasiform balusters in a balustrade round the drum of Santa Maria delle Grazie (c. 1482), and railings in the cathedrals of Aquileia (c. 1495) and Parma, in the cortile of San Damaso, Vatican, and Antonio da Sangallo's crowning balustrade on the Santa Casa at Loreto installed in 1535, and liberally in his model for the Basilica of Saint Peter.
Because of its low center of gravity, this "vase-baluster" may be given the modern term "dropped baluster". Materials used Balusters may be made of carved stone, cast stone, plaster, polymer, polyurethane/polystyrene, polyvinyl chloride (PVC), precast concrete, wood, or wrought iron. Cast-stone balusters were a development of the 18th century in Great Britain (see Coade stone), and cast iron balusters a development largely of the 1840s. As balusters and balustrades have evolved, they can now be made from various materials with a few popular choices being timber, glass and stainless steel. Profiles and style changes The baluster, being a turned structure, tends to follow design precedents that were set in woodworking and ceramic practices, where the turner's lathe and the potter's wheel are ancient tools. The profile a baluster takes is often diagnostic of a particular style of architecture or furniture, and may offer a rough guide to date of a design, though not of a particular example. Some complicated Mannerist baluster forms can be read as a vase set upon another vase. The high shoulders and bold, rhythmic shapes of the Baroque vase and baluster forms are distinctly different from the sober baluster forms of Neoclassicism, which look to other precedents, like Greek amphoras. The distinctive twist-turned designs of balusters in oak and walnut English and Dutch seventeenth-century furniture, which took as their prototype the Solomonic column that was given prominence by Bernini, fell out of style after the 1710s. Once it had been taken from the lathe, a turned wood baluster could be split and applied to an architectural surface, or to one in which architectonic themes were more freely treated, as on cabinets made in Italy, Spain and Northern Europe from the sixteenth through the seventeenth centuries. Modern baluster design is also in use for example in designs influenced by the Arts and Crafts movement in a 1905 row of houses in Etchingham Park Road Finchley London England. Outside Europe, the baluster column appeared as a new motif in Mughal architecture, introduced in Shah Jahan's interventions in two of the three great fortress-palaces, the Red Fort of Agra and Delhi, in the early seventeenth century. Foliate baluster columns with naturalistic foliate capitals, unexampled in previous Indo-Islamic architecture according to Ebba Koch, rapidly became one of the most widely used forms of supporting shaft in Northern and Central India in the eighteenth and nineteenth centuries. The modern term baluster shaft is applied to the shaft dividing a window in Saxon architecture. In the south transept of the Abbey in St Albans, England, are some of these shafts, supposed to have been taken from the old Saxon church. Norman bases and capitals have been added, together with plain cylindrical Norman shafts. Balusters are normally separated by at least the same measurement as the size of the square bottom section. Placing balusters too far apart diminishes their aesthetic appeal, and the structural integrity of the balustrade they form. Balustrades normally terminate in heavy newel posts, columns, and building walls for structural support. Balusters may be formed in several ways. Wood and stone can be shaped on the lathe, wood can be cut from square or rectangular section boards, while concrete, plaster, iron, and plastics are usually formed by molding and casting. Turned patterns or old examples are used for the molds. 
Gallery See also Bollard Guard rail Handrail Citations General and cited references (Links are to the 1983 American edition.) External links Architectural elements Garden features Stairways Pedestrian infrastructure Architectural history Ironmongery
Baluster
[ "Technology", "Engineering" ]
1,629
[ "Architectural history", "Building engineering", "Architectural elements", "Components", "Architecture" ]
2,977,079
https://en.wikipedia.org/wiki/Paris%20Kanellakis%20Award
The Paris Kanellakis Theory and Practice Award is granted yearly by the Association for Computing Machinery (ACM) to honor "specific theoretical accomplishments that have had a significant and demonstrable effect on the practice of computing". It was instituted in 1996, in memory of Paris C. Kanellakis, a computer scientist who died with his immediate family in an airplane crash in South America in 1995 (American Airlines Flight 965). The award is accompanied by a prize of $10,000 and is endowed by contributions from Kanellakis's parents, with additional financial support provided by four ACM Special Interest Groups (SIGACT, SIGDA, SIGMOD, and SIGPLAN), the ACM SIG Projects Fund, and individual contributions. Winners See also List of computer science awards References External links Paris Kanellakis Theory and Practice Award on the ACM website. The Paris Kanellakis Theory and Practice Award Committee on the ACM website. Awards of the Association for Computing Machinery Computer science awards
Paris Kanellakis Award
[ "Technology" ]
208
[ "Science and technology awards", "Computer science", "Computer science awards" ]
6,947,000
https://en.wikipedia.org/wiki/Peter%20Zoller
Peter Zoller (born 16 September 1952) is a theoretical physicist from Austria. He is professor at the University of Innsbruck and works on quantum optics and quantum information and is best known for his pioneering research on quantum computing and quantum communication and for bridging quantum optics and solid state physics. Biography Peter Zoller studied physics at the University of Innsbruck, obtained his doctorate there in February 1977, and became a lecturer at their Institute of Theoretical Physics. For 1978/79, he was granted a Max Kade stipend to research with Peter Lambropoulos at the University of Southern California. In 1980, he stayed at the University of Waikato in Hamilton, New Zealand, as a researcher with the group around Dan Walls. In 1981, Peter Zoller handed in his book "Über die lichtstatistische Abhängigkeit resonanter Multiphoton-Prozesse" at the University of Innsbruck to qualify as a professor by receiving the "venia docendi". He spent 1981/82 and 1988 as visiting fellow at the Joint Institute for Laboratory Astrophysics (JILA) of the University of Colorado, Boulder, and 1986 as guest professor at the Université de Paris-Sud 11, Orsay. In 1991, Peter Zoller was appointed Professor of Physics and JILA Fellow at JILA and at the Physics Department of the University of Colorado, Boulder. At the end of 1994, he accepted a chair at the University of Innsbruck, where he has worked ever since. From 1995 to 1999, he headed the Institute of Theoretical Physics, from 2001 to 2004, he was vice-dean of studies. Peter Zoller continues to keep in close touch with JILA as Adjoint Fellow. Numerous guest professorships have taken him to all major centers of physics throughout the world. He was Loeb lecturer in Harvard, Boston, MA (2004) and Yan Jici chair professor at the University of Science and Technology of China, Hefei, chair professor at Tsinghua University, Beijing (2004), Lorentz professor at the University of Leiden in the Netherlands (2005), Distinguished Lecturer at the Technion in Haifa (2007), Moore Distinguished Scholar at Caltech (2008/2010) and Arnold Sommerfeld Lecturer at LMU München (2010). In 2012/13 he was "Distinguished Fellow" at the Max Planck Institute of Quantum Optics in Garching, Munich. In 2014 he has been elected as an "External Scientific Member" at the Max Planck Institute of Quantum Optics. In 2015 he held the International Jacques Solvay Chair in Physics at the University of Brussels . Since 2003, Peter Zoller has also held the position of Scientific Director at the Institute for Quantum Optics and Quantum Information (IQOQI) of the Austrian Academy of Sciences. In 2018, Peter Zoller co-founded Alpine Quantum Technologies, a quantum computing hardware company. Research As a theoretician, Peter Zoller has written major works on the interaction of laser light and atoms. In addition to fundamental developments in quantum optics he has succeeded in bridging quantum information and solid state physics. The model of a quantum computer, suggested by him and Ignacio Cirac in 1995, is based on the interaction of lasers with cold ions confined in an electromagnetic trap. The principles of this idea have been implemented in experiments over recent years and it is considered one of the most promising concepts for the development of a scalable quantum computer. Zoller and his researcher colleagues have also managed to link quantum physics with solid state physics. 
One of his suggestions has been to build a quantum simulator with cold atoms and use it to research hitherto unexplained phenomena in high temperature superconductors. Zoller's ideas and concepts attract widespread interest within the scientific community and his works are highly cited. Books Peter Zoller and Crispin Gardiner have jointly written the books C W Gardiner and Peter Zoller: Quantum Noise; Springer, Berlin Heidelberg, 2nd ed. 1999, 3rd ed. 2004 Crispin Gardiner and Peter Zoller: The Quantum World of Ultra-Cold Atoms and Light Book I: Foundations of Quantum Optics, Imperial College Press, London and Singapore 2014. Crispin Gardiner and Peter Zoller: The Quantum World of Ultra-Cold Atoms and Light Book II: Physics of Quantum Optical Devices, Imperial College Press, London and Singapore 2015. Crispin Gardiner and Peter Zoller: The Quantum World of Ultra-Cold Atoms and Light Book III: Ultra-Cold Atoms, World Scientific, London and Singapore 2014. Awards Peter Zoller has received numerous awards for his achievements in the field of quantum optics and quantum information and especially for his pioneering work on quantum computers and quantum communication. These include: the John Stewart Bell Prize (2019) the Norman F. Ramsey Prize (2018) of the American Physical Society the Micius Quantum Prize (2018) the Willis E. Lamb Award for Laser Science and Quantum Optics (2018) the Herbert Walther Award from the OSA (2016) the Wolf Prize in Physics (with Juan Ignacio Cirac) (2013) the Hamburg Prize for Theoretical Physics (2011) the Blaise Pascal Medal in Physics of the European Academy of Sciences (2011) the Benjamin Franklin Medal in Physics (2010) of the Franklin Institute (with Juan Ignacio Cirac and David Wineland) the BBVA Foundation Frontiers of Knowledge Award (2008), in the Basic Sciences category (ex aequo with Ignacio Cirac) the Dirac Medal of the ICTP (2006) the 6th International Quantum Communication Award (2006) the UNESCO Niels Bohr Medal (2005) the Max Planck Medal (2005) of the Deutsche Physikalische Gesellschaft the Humboldt Research Award (2000) the Schrödinger Prize (1998) of the Austrian Academy of Sciences the Max Born Award (1998) of the Optical Society of America the Wittgenstein Award (1998), Austria's highest scientific accolade the Ludwig Boltzmann Prize (1983) of the Austrian Physical Society. In 2001, Peter Zoller became a full member of the Austrian Academy of Sciences. In 2008 he was elected to the United States National Academy of Sciences and the Royal Netherlands Academy of Arts and Sciences, in 2009 to the Spanish Royal Academy of Sciences, in 2010 to the German Academy of Sciences Leopoldina, in 2012 to the European Academy of Sciences, in 2013 to the Academia Europaea, and in 2023 to the Accademia Nazionale dei Lincei. He received honorary doctorates from the University of Concepción (2024), the University of Colorado Boulder (2019) and the University of Amsterdam (2012).
See also Open quantum system Quantum jump method References External links Biography Peter Zoller Peter Zoller at the Institute of Quantum Optics and Quantum Information (IQOQI) Quantum Optics Theory Group, University of Innsbruck Peter Zoller’s Thomson Reuters RESEARCHERID 1952 births Living people Austrian physicists Quantum physicists Optical physicists University of Innsbruck alumni University of Southern California alumni Harvard University staff University of Colorado Boulder faculty Academic staff of Paris-Sud University Academic staff of the University of Innsbruck Members of the Austrian Academy of Sciences Members of the Royal Netherlands Academy of Arts and Sciences Foreign associates of the National Academy of Sciences Members of Academia Europaea UNESCO Niels Bohr Medal recipients Wolf Prize in Physics laureates Members of the German National Academy of Sciences Leopoldina Quantum information scientists Winners of the Max Planck Medal Fellows of the American Physical Society Benjamin Franklin Medal (Franklin Institute) laureates
Peter Zoller
[ "Physics" ]
1,527
[ "Quantum physicists", "Quantum mechanics" ]
6,947,162
https://en.wikipedia.org/wiki/Spiritual%20transformation
Spiritual transformation involves a fundamental change in a person's sacred or spiritual life. Psychologists examine spiritual transformation within the context of an individual's meaning system, especially in relation to concepts of the sacred or of ultimate concern. Two of the fuller treatments of the concept in psychology come from Kenneth Pargament and from Raymond Paloutzian. Pargament holds that "at its heart, spiritual transformation refers to a fundamental change in the place of the sacred or the character of the sacred in the life of the individual. Spiritual transformation can be understood in terms of new configurations of strivings" (p. 18). Paloutzian suggests that "spiritual transformation constitutes a change in the meaning system that a person holds as a basis for self-definition, the interpretation of life, and overarching purposes and ultimate concerns" (p. 334). One school of thought emphasises the importance of "rigorous self-discipline" in spiritual transformation. Research The Metanexus Institute (founded 1997) in New York has sponsored scientific research on spiritual transformation. Terminology Occurrences of the phrase "spiritual transformation" in Google Books suggest a surge in the popularity of the concept from the late-20th century. See also Aurobindo Integral transformative practice Meditation Sivananda Spiritual evolution Supermind Transpersonal psychology Shriram Sharma Acharya Mahdi References External links The Spiritual Transformation Scientific Research Program The University of Philosophical Research Transformational Psychology program Article about Spiritual Transformation for Christians On the Spiritual Path of The Fourth Way and Advaita: Teachers of No-Thing & Nothing Spiritual evolution
Spiritual transformation
[ "Biology" ]
318
[ "Spiritual evolution", "Non-Darwinian evolution", "Biology theories" ]
6,948,407
https://en.wikipedia.org/wiki/BioModels
BioModels, created in 2006, is a free and open-source repository for storing, exchanging and retrieving quantitative models of biological interest. All the models in the curated section of BioModels Database have been described in peer-reviewed scientific literature. The models stored in BioModels' curated branch are compliant with MIRIAM, the standard for model curation and annotation. The models have been simulated by curators to check that, when run in simulations, they provide the same results as described in the publication. Model components are annotated, so that users can conveniently identify each model element and retrieve further information from other resources. Modellers can submit the models in SBML and CellML. Models can subsequently be downloaded in SBML, VCML, XPP, SciLab, Octave, BioPAX and RDF/XML. The reaction networks of models are presented in several graphic formats, such as PNG, SVG and a graphic Java applet, with some networks presented following the Systems Biology Graphical Notation. A human-readable summary of each model is also available in PDF. Content BioModels is composed of several branches. The curated branch hosts models that are well curated and annotated. The non-curated branch provides models that are not yet curated, are non-curatable (spatial models, steady-state models etc.), or are too large to be curated. Non-curated models can later be moved into the curated branch. The repository also hosts models which were automatically generated from pathway databases. All the models are freely available under the Creative Commons CC0 Public Domain Dedication, and can be easily accessed via the website or Web Services. One can also download archives of all the models from the EBI FTP server. BioModels announced its 31st release on June 26, 2017. It now publicly provides 144,710 models. This corresponds to 1,640 models published in the literature and 143,070 models automatically generated from pathway resources. Deposition of models in BioModels is advocated by many scientific journals, including Molecular Systems Biology, all the journals of the Public Library of Science, all the journals of BioMed Central and all the journals published by the Royal Society of Chemistry. Development BioModels is developed by the BioModels.net Team at the EMBL-EBI, UK, the Le Novère lab at the Babraham Institute, UK, and the SBML Team at Caltech, USA. Funding BioModels development has benefited from the funds of the European Molecular Biology Laboratory, the Biotechnology and Biological Sciences Research Council, the Innovative Medicines Initiative, the Seventh Framework Programme (FP7), the National Institute of General Medical Sciences, DARPA, and the National Center for Research Resources.
BioModels
[ "Chemistry", "Biology" ]
607
[ "Biochemistry methods", "Protein methods", "Protein biochemistry", "Bioinformatics", "Molecular biology", "Biochemistry", "Biological databases", "Systems biology" ]
6,948,409
https://en.wikipedia.org/wiki/Binary%20erasure%20channel
In coding theory and information theory, a binary erasure channel (BEC) is a communications channel model. A transmitter sends a bit (a zero or a one), and the receiver either receives the bit correctly, or with some probability $P_e$ receives a message that the bit was not received ("erased"). Definition A binary erasure channel with erasure probability $P_e$ is a channel with binary input, ternary output, and probability of erasure $P_e$. That is, let $X$ be the transmitted random variable with alphabet $\{0,1\}$. Let $Y$ be the received variable with alphabet $\{0,1,e\}$, where $e$ is the erasure symbol. Then, the channel is characterized by the conditional probabilities:
$$\Pr[Y=0\mid X=0]=1-P_e,\quad \Pr[Y=e\mid X=0]=P_e,\quad \Pr[Y=1\mid X=0]=0,$$
$$\Pr[Y=0\mid X=1]=0,\quad \Pr[Y=e\mid X=1]=P_e,\quad \Pr[Y=1\mid X=1]=1-P_e.$$
Capacity The channel capacity of a BEC is $1-P_e$, attained with a uniform distribution for $X$ (i.e. half of the inputs should be 0 and half should be 1). Proof: By symmetry of the input values, the optimal input distribution is $\Pr[X=0]=\Pr[X=1]=\tfrac12$. The channel capacity is:
$$C=\max I(X;Y)=H(X)-H(X\mid Y)=1-P_e.$$
Observe that, for the binary entropy function $H_b$ (which has value 1 for input $\tfrac12$), $H(X)=H_b(\tfrac12)=1$, and $H(X\mid Y)=P_e$: $x$ is known from (and equal to) $y$ unless $y=e$, which has probability $P_e$. By definition $I(X;Y)=H(X)-H(X\mid Y)$, so $C=1-P_e$. If the sender is notified when a bit is erased, they can repeatedly transmit each bit until it is correctly received, attaining the capacity $1-P_e$. However, by the noisy-channel coding theorem, the capacity of $1-P_e$ can be obtained even without such feedback. Related channels If bits are flipped rather than erased, the channel is a binary symmetric channel (BSC), which has capacity $1-H_b(p)$ (for the binary entropy function $H_b$), which is less than the capacity of the BEC for $0<p<1/2$. If bits are erased but the receiver is not notified (i.e. does not receive the output $e$) then the channel is a deletion channel, and its capacity is an open problem. History The BEC was introduced by Peter Elias of MIT in 1955 as a toy example. See also Erasure code Packet erasure channel Notes References Coding theory
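A small Monte Carlo sketch (our own illustration, not part of the article) that simulates a BEC with uniform input and checks that the empirical mutual information approaches the capacity 1 − Pe:

```python
import math
import random
from collections import Counter

def empirical_mutual_information(p_e: float, n: int, seed: int = 0) -> float:
    """Send n uniform bits through a BEC with erasure probability p_e and
    estimate I(X;Y) from the empirical joint distribution of (X, Y)."""
    rng = random.Random(seed)
    joint = Counter()
    for _ in range(n):
        x = rng.getrandbits(1)
        y = "e" if rng.random() < p_e else x
        joint[(x, y)] += 1

    px, py = Counter(), Counter()
    for (x, y), c in joint.items():
        px[x] += c
        py[y] += c

    info = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        # I(X;Y) = sum p(x,y) log2[ p(x,y) / (p(x) p(y)) ]
        info += p_xy * math.log2(p_xy * n * n / (px[x] * py[y]))
    return info

for p_e in (0.1, 0.3, 0.5):
    print(f"Pe={p_e}: empirical I(X;Y) ~ {empirical_mutual_information(p_e, 200_000):.3f}, "
          f"capacity 1-Pe = {1 - p_e:.3f}")
```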
Binary erasure channel
[ "Mathematics" ]
423
[ "Discrete mathematics", "Coding theory" ]
6,948,446
https://en.wikipedia.org/wiki/Jaroslav%20Ne%C5%A1et%C5%99il
Jaroslav "Jarik" Nešetřil (; born March 13, 1946) is a Czech mathematician, working at Charles University in Prague. His research areas include combinatorics (structural combinatorics, Ramsey theory), graph theory (coloring problems, sparse structures), algebra (representation of structures, categories, homomorphisms), posets (diagram and dimension problems), computer science (complexity, NP-completeness). Education and career Nešetřil received his Ph.D. from Charles University in 1973 under the supervision of Aleš Pultr and Gert Sabidussi. He is responsible for more than 300 publications. Since 2006, he is chairman of the Committee of Mathematics of Czech Republic (the Czech partner of IMU). Jaroslav Nešetřil is Editor in Chief of Computer Science Review and INTEGERS: the Electronic Journal of Combinatorial Number Theory. He is also honorary editor of Electronic Journal of Graph Theory and Applications. Since 2008, Jaroslav Nešetřil belongs to the Advisory Board of the Academia Sinica. Awards and honors He was awarded the state prize (1985 jointly with Vojtěch Rödl) for a collection of papers in Ramsey theory. The book Sparsity - Graphs, Structures, and Algorithms he co-authored with Patrice Ossona de Mendez was included in ACM Computing Reviews list of Notable Books and Articles of 2012. Nešetřil is a corresponding member of the German Academy of Sciences since 1996 and has been declared Doctor Honoris Causa of the University of Alaska (Fairbanks) in 2002. He has also been declared Doctor Honoris Causa of the University of Bordeaux 1 in 2009; the speech he made in French at this occasion attracted a great deal of attention. He received in 2010 the Medal of Merit of Czech Republic and the Gold medal of Faculty of Mathematics and Physics, Charles University in 2011. In 2012, he has been elected to the Academia Europaea. Also, he has been elected honorary member of the Hungarian Academy of Sciences in 2013. He was an invited speaker of the European Congress of Mathematics, in Amsterdam, 2008, and invited speaker (by both the Logic and Foundations and Combinatorics sections) at the Combinatorics session of the International Congress of Mathematicians, in Hyderabad, 2010. In 2018, on the occasion of the 670th anniversary of the establishment of Charles University, Nešetřil has received from the rector of Charles university the Donatio Universitatis Carolinae prize “for his contribution to mathematics and for his leading role in establishing a world-renowned group in discrete mathematics at Charles University”. Books 2008 2nd edition (hbk); 2009 2nd edition (pbk) 2012 pbk reprint References External links 1946 births Living people Czechoslovak mathematicians 20th-century Czech mathematicians 21st-century Czech mathematicians Combinatorialists Graph theorists Recipients of Medal of Merit (Czech Republic) Members of Academia Europaea Academic staff of Charles University People from Brno
Jaroslav Nešetřil
[ "Mathematics" ]
598
[ "Graph theory", "Combinatorics", "Combinatorialists", "Mathematical relations", "Graph theorists" ]
6,948,635
https://en.wikipedia.org/wiki/Birkenhead%20dock%20disaster
The Birkenhead dock disaster was a tragedy that happened when a temporary dam collapsed during construction of the Vittoria Dock in Birkenhead, Wirral Peninsula, England, on 6 March 1909. It left 14 workers (or "navvies") dead and three injured. The disaster led to a huge public outpouring of sympathy and grief in the local area. However, the Government refused to hold a public inquiry and the cause of the disaster was never definitively established. Very little evidence or documentation surrounding the event now exists. Building the Vittoria Dock The £206,000 contract to build a dock on the Vittoria Wharf area of Birkenhead was awarded by the Mersey Docks and Harbour Board in 1905 to John Scott of Darlington. Scott was the son of Sir Walter Scott (1826-1910), one of the greatest regional civil engineering contractors of his era, and had recently built an extension to the docks in Middlesbrough. The Vittoria Dock - sited at the northern end of Vittoria Street - was to serve as an accessible, organised berthing facility for vessels, which were increasing in size. Work began in 1905 and was due to be finished by the end of 1909. However, by March 1909 it was nine months ahead of schedule. The whole project was merely a few hours from completion when the disaster occurred. Disaster strikes Just after midnight on 6 March 1909, during a blinding snowstorm, disaster struck. A gang of navvies were working in an excavation which formed the entrance channel to the new dock. They were clearing away rubble and timber, which was hauled up to the dockside by a crane which straddled the excavation. The waters of the neighbouring East Float were held back from the entrance channel by a temporary coffer dam, formed from pilings rammed with mud and cement, which had been built in 1907. There was only a small amount of work left to do, and the whole four-year dock project would be finished by the following evening. High tide in the River Mersey had been at about 11:15 pm and the East Float was full of water. At around 12:25 am the foundation of the coffer dam gave way without warning; the fifteen workers were overwhelmed by water and debris. A platform carrying the crane, engine and boiler collapsed into the excavation, and, it is believed, trapped the men underwater. Fourteen men were killed, but one survived by clinging to the dock wall until he was rescued. The engine-driver and a boy acting as a signaller were swept into the water but were rescued; the boy, trapped between baulks of timber, later had his leg amputated. The disaster widowed seven women and left 13 children fatherless. It took a month for divers to recover all the bodies, and the victims were buried in three mass graves in Flaybrick Hill Cemetery, Birkenhead, now known as Flaybrick Memorial Gardens. Aftermath At the ensuing inquest, John Scott's chief engineer claimed that the disaster was probably caused when the base of the coffer dam shifted after pilings from the old dock wall were removed, and this event could not have been foreseen. However, this explanation was never independently tested or verified. One man—John Jones, the operator of the piledriving machine used to build the dam—bravely spoke out at the inquest, claiming there had been shoddy workmanship and rotten building materials had been used on the project. But his evidence was disregarded and the jury, heavily influenced by the coroner's summing-up, returned a verdict that no one was to blame.
The Vittoria Dock opened for business four months after the disaster and is still in operation today. References Birkenhead Birkenhead docks Engineering failures 1909 disasters in the United Kingdom 1909 in England Disasters in Cheshire History of Merseyside March 1909
Birkenhead dock disaster
[ "Technology", "Engineering" ]
792
[ "Systems engineering", "Reliability engineering", "Technological failures", "Engineering failures", "Civil engineering" ]
6,948,785
https://en.wikipedia.org/wiki/Nuclear%20protein
A nuclear protein is a protein found in the cell nucleus. Proteins are transported into the nucleus with the help of the nuclear pore complex, which spans the nuclear membrane and acts as a barrier between the cytoplasm and the nucleus. The import and export of proteins through the nuclear pore complex plays a fundamental role in gene regulation and other biological functions. References External links http://npd.hgu.mrc.ac.uk/user/about Cell nucleus
Nuclear protein
[ "Chemistry" ]
92
[ "Biochemistry stubs", "Protein stubs" ]
6,949,803
https://en.wikipedia.org/wiki/Concept%20processing
Concept processing is a technology that uses an artificial intelligence engine to provide flexible user interfaces. This technology is used in some electronic medical record (EMR) software applications, as an alternative to the more rigid template-based technology. Some methods of data entry in electronic medical records The most widespread methods of data entry into an EMR are templates, voice recognition, transcription, and concept processing. Templates The physician selects either a general, symptom-based or diagnosis-based template pre-fabricated for the type of case at that moment, making it specific through use of forms, pick-lists, check-boxes and free-text boxes. This method became predominant especially in emergency medicine during the late 1990s. Voice recognition The physician dictates into a computer voice recognition device that enters the data directly into a free-text area of the EMR. Transcription The physician dictates the case into a recording device, which is then sent to a transcriptionist for entry into the EMR, usually into free text areas. Concept processing Based on artificial intelligence technology and Boolean logic, concept processing attempts to mirror the mind of each physician by recalling elements from past cases that are the same or similar to the case being seen at that moment. How concept processing works For every physician, a bell-shaped curve effect is found, representing a frequency distribution of case types. Some cases are so rare that physicians will have never handled them before. The majority of other cases become repetitive, and are found on top of this bell-shaped curve. A concept processor brings forward the closest previous encounter in relation to the one being seen at that moment, putting that case in front of the physician for fine-tuning. There are only three possibilities for a case: the closest encounter could be identical to the current encounter (not an impossible event), it could be similar to the current note, or it could be a rare new case. If the closest encounter is identical to the present one, the physician has effectively completed charting. A concept processor will pull through all the related information needed. If the encounter is similar but not identical, the physician modifies the differences from the closest case using hand-writing recognition, voice recognition, or keyboard. A concept processor then memorizes all the changes, so that when the next encounter falls between two similar cases, the editing is cut in half, then to a quarter for the next case, then to an eighth, and so on. In fact, the more a concept processor is used, the faster and smarter it becomes. Concept processing can also be used for rare cases. These are usually combinations of SOAP note elements, which in themselves are not rare. If the text of each element is saved for a given type of case, there will be elements available to use with other cases, even though the other cases may not be similar overall. The role of a concept processor is simply to reflect that thinking process accurately in a doctor's own words. See also Electronic health record Electronic medical record Health informatics Medical record Health informatics Electronic health record software Electronic health records Medical software
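The retrieval step described above, finding the closest previous encounter, can be pictured with a very simple sketch. This is a hypothetical illustration of one possible matching strategy (bag-of-words cosine similarity); it is not a description of how any particular concept-processing product actually works:

```python
import math
from collections import Counter

def similarity(note_a: str, note_b: str) -> float:
    """Cosine similarity between two notes using bag-of-words counts."""
    a, b = Counter(note_a.lower().split()), Counter(note_b.lower().split())
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def closest_previous_encounter(current_note: str, past_notes: list[str]) -> str:
    """Return the past note most similar to the current one."""
    return max(past_notes, key=lambda past: similarity(current_note, past))

# Hypothetical example notes.
past = [
    "cough and fever for three days, lungs clear, viral URI, rest and fluids",
    "ankle sprain after fall, swelling, RICE protocol, follow up in two weeks",
]
print(closest_previous_encounter("fever and productive cough, two days", past))
```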
Concept processing
[ "Technology", "Biology" ]
626
[ "Health informatics", "Electronic health records", "Information technology", "Medical software", "Medical technology" ]
6,950,441
https://en.wikipedia.org/wiki/Hydroxamic%20acid
In organic chemistry, hydroxamic acids are a class of organic compounds having the general formula R−C(=O)−N(OH)−R', bearing the functional group −C(=O)−N(OH)−, where R and R' are typically organyl groups (e.g., alkyl or aryl) or hydrogen. They are amides (R−C(=O)−NR'2) wherein the nitrogen atom has a hydroxyl (−OH) substituent. They are often used as metal chelators. A common example of a hydroxamic acid is aceto-N-methylhydroxamic acid (CH3−C(=O)−N(OH)−CH3). Some uncommon examples of hydroxamic acids are formo-N-chlorohydroxamic acid (H−C(=O)−N(OH)−Cl) and chloroformo-N-methylhydroxamic acid (Cl−C(=O)−N(OH)−CH3). Synthesis and reactions Hydroxamic acids are usually prepared from either esters or acid chlorides by a reaction with hydroxylamine salts. For the synthesis of benzohydroxamic acid (PhC(=O)NHOH, where Ph is the phenyl group), the overall equation from the methyl ester is:
PhCO2CH3 + NH2OH → PhC(=O)NHOH + CH3OH
Hydroxamic acids can also be synthesized from aldehydes and N-sulfonylhydroxylamine via the Angeli-Rimini reaction. Alternatively, molybdenum oxide diperoxide oxidizes trimethylsilylated amides to hydroxamic acids, although yields are only about 50%. In a variation on the Nef reaction, primary nitro compounds kept in an acidic solution (to minimize the nitronate tautomer) hydrolyze to a hydroxamic acid. A well-known reaction of hydroxamic acid esters is the Lossen rearrangement. Coordination chemistry and biochemistry The conjugate base of a hydroxamic acid is called a hydroxamate. Deprotonation occurs at the hydroxyl group, with its hydrogen atom being removed, resulting in a hydroxamate anion, R−C(=O)−N(O−)−R'. The resulting conjugate base presents the metal with an anionic, conjugated O,O chelating ligand. Many hydroxamic acids and many iron hydroxamates have been isolated from natural sources. They function as ligands, usually for iron. Nature has evolved families of hydroxamic acids to function as iron-binding compounds (siderophores) in bacteria. They extract iron(III) from otherwise insoluble sources (rust, minerals, etc.). The resulting complexes are transported into the cell, where the iron is extracted and utilized metabolically. Ligands derived from hydroxamic acid and thiohydroxamic acid (a hydroxamic acid where one or both oxygens in the functional group are replaced by sulfur) also form strong complexes with lead(II). Other uses and occurrences Hydroxamic acids are used extensively in flotation of rare earth minerals during the concentration and extraction of ores to be subjected to further processing. Some hydroxamic acids (e.g. vorinostat, belinostat, panobinostat, and trichostatin A) are HDAC inhibitors with anti-cancer properties. Fosmidomycin is a natural hydroxamic acid inhibitor of 1-deoxy-D-xylulose-5-phosphate reductoisomerase (DXP reductoisomerase). Hydroxamic acids have also been investigated for reprocessing of irradiated fuel. References Further reading Functional groups
Hydroxamic acid
[ "Chemistry" ]
694
[ "Organic compounds", "Functional groups", "Hydroxamic acids" ]
6,950,454
https://en.wikipedia.org/wiki/Tetanolysin
Tetanolysin is a toxin produced by Clostridium tetani bacteria. Its function is unknown, but it is believed to contribute to the pathogenesis of tetanus. The other C. tetani toxin, tetanospasmin, is more definitively linked to tetanus. It is sensitive to oxygen. Tetanolysin belongs to a family of protein toxins known as thiol-activated cytolysins, which bind to cholesterol. It is related to streptolysin O and the θ-toxin of Clostridium perfringens. Cytolysins form pores in the cytoplasmic membrane that allows for the passage of ions and other molecules into the cell. The molecular weight of tetanolysin is around 55,000 daltons. References Further reading Alouf, J. (1997) pp 7–10 in Guidebook to Protein Toxins and Their Use in Cell Biology, Ed. Rappuoli, R. and Montecucco, C. (Oxford University Press). Ahnert-Hilger, G., Pahner, I., and Höltje, M. (1999) Pore-forming Toxins as Cell Biological and Pharmacological Tools. In press. Conti, A., Brando, C., DeBell, K.E., Alava, M.A., Hoffman, T., Bonvini, E. (1993) J. Biol. Chem. 268, 783-791. Raya, S.A., Trembovler, V., Shohami, E. and Lazarovici, P. (1993) Nat. Toxins 1, 263-70. Bacterial toxins Tetanus
Tetanolysin
[ "Chemistry", "Biology" ]
377
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
6,950,605
https://en.wikipedia.org/wiki/Blocking%20%28construction%29
Blocking (dwang, nog, noggin, and nogging) is the use of short pieces of dimensional lumber in wood framed construction to brace longer members or to provide grounds for fixings. Uses The primary purpose of blocking is to brace longer frame members to help resist buckling under vertical compression. The intervals for the blocks are specified in the building code or as calculated by a structural engineer. Blocking also resists the rotational movement, or twisting, of floor joists as they deflect under load. This may take the form of diagonal cross bracing, or herringbone, bracing between floor joists. When solid blocks are used instead of diagonals it is called bridging, block bridging, solid bridging or solid strutting. The illustration, right, shows solid blocking. Note how they are displaced alternately to allow nailing through their ends. Blocking may also provide spacers or attachment points between adjoining stud walls, for example, where an interior and exterior wall meets, or at a corner where techniques such as the "three-stud corner with blocking" are used. When correctly placed, blocking also provides grounds (also backing or back blocking) for supporting the cut ends of wall claddings and linings or for attaching items such as cabinets, shelving, handrails, vanity tops and backsplashes, towel bars, decorative mouldings, etc. Properly placed grounds make the second fixings easier once the walls are lined and they distribute the weight of heavy items across structural members. However, the locations required for use as grounds are dictated by the needs of the fittings and these often do not coincide with the locations required by the engineering specifications for use as bracing, consequently, the two forms may be present in the wall acting independently. When used only as grounds rather than as bracing, they are typically shallower. Blocking is typically made from short off-cuts or to make use of defective, warped, pieces unsuited for use in longer lengths. References Building engineering Carpentry
Blocking (construction)
[ "Engineering" ]
414
[ "Building engineering", "Civil engineering", "Architecture" ]
6,950,643
https://en.wikipedia.org/wiki/Optical%20bistability
In optics, optical bistability is an attribute of certain optical devices where two resonant transmission states are possible and stable, dependent on the input. Optical devices with a feedback mechanism, e.g. a laser, provide two methods of achieving bistability. Absorptive bistability utilizes an absorber to block light inversely dependent on the intensity of the source light. The first bistable state resides at a given intensity where no absorber is used. The second state resides at the point where the light intensity overcomes the absorber's ability to block light. Refractive bistability utilizes an optical mechanism that changes its refractive index inversely dependent on the intensity of the source light. The first bistable state resides at a given intensity where no optical mechanism is used. The second state resides at the point where a certain light intensity causes the light to resonate at the corresponding refractive index. This effect is caused by two factors: Nonlinear atom-field interaction Feedback effect of the mirror Important cases that might be regarded are: Atomic detuning Cooperativity factor Cavity mistuning Applications of this phenomenon include its use in optical transmitters, memory elements and pulse shapers. Optical bistability was first observed in sodium vapor in 1974. Intrinsic bistability When the feedback mechanism is provided by an internal process (not by an external element such as the mirrors of an interferometer), the effect is known as intrinsic optical bistability. This behavior can be seen in nonlinear media containing nanoparticles, in which surface plasmon resonance can occur. References bistability
Optical bistability
[ "Physics", "Chemistry" ]
330
[ "Applied and interdisciplinary physics", "Optics", " molecular", "Atomic", " and optical physics" ]
6,950,659
https://en.wikipedia.org/wiki/Arf%20invariant
In mathematics, the Arf invariant of a nonsingular quadratic form over a field of characteristic 2 was defined by Turkish mathematician when he started the systematic study of quadratic forms over arbitrary fields of characteristic 2. The Arf invariant is the substitute, in characteristic 2, for the discriminant for quadratic forms in characteristic not 2. Arf used his invariant, among others, in his endeavor to classify quadratic forms in characteristic 2. In the special case of the 2-element field F2 the Arf invariant can be described as the element of F2 that occurs most often among the values of the form. Two nonsingular quadratic forms over F2 are isomorphic if and only if they have the same dimension and the same Arf invariant. This fact was essentially known to , even for any finite field of characteristic 2, and Arf proved it for an arbitrary perfect field. The Arf invariant is particularly applied in geometric topology, where it is primarily used to define an invariant of -dimensional manifolds (singly even-dimensional manifolds: surfaces (2-manifolds), 6-manifolds, 10-manifolds, etc.) with certain additional structure called a framing, and thus the Arf–Kervaire invariant and the Arf invariant of a knot. The Arf invariant is analogous to the signature of a manifold, which is defined for 4k-dimensional manifolds (doubly even-dimensional); this 4-fold periodicity corresponds to the 4-fold periodicity of L-theory. The Arf invariant can also be defined more generally for certain 2k-dimensional manifolds. Definitions The Arf invariant is defined for a quadratic form q over a field K of characteristic 2 such that q is nonsingular, in the sense that the associated bilinear form is nondegenerate. The form is alternating since K has characteristic 2; it follows that a nonsingular quadratic form in characteristic 2 must have even dimension. Any binary (2-dimensional) nonsingular quadratic form over K is equivalent to a form with in K. The Arf invariant is defined to be the product . If the form is equivalent to , then the products and differ by an element of the form with in K. These elements form an additive subgroup U of K. Hence the coset of modulo U is an invariant of , which means that it is not changed when is replaced by an equivalent form. Every nonsingular quadratic form over K is equivalent to a direct sum of nonsingular binary forms. This was shown by Arf, but it had been earlier observed by Dickson in the case of finite fields of characteristic 2. The Arf invariant Arf() is defined to be the sum of the Arf invariants of the . By definition, this is a coset of K modulo U. Arf showed that indeed does not change if is replaced by an equivalent quadratic form, which is to say that it is an invariant of . The Arf invariant is additive; in other words, the Arf invariant of an orthogonal sum of two quadratic forms is the sum of their Arf invariants. For a field K of characteristic 2, Artin–Schreier theory identifies the quotient group of K by the subgroup U above with the Galois cohomology group H1(K, F2). In other words, the nonzero elements of K/U are in one-to-one correspondence with the separable quadratic extension fields of K. So the Arf invariant of a nonsingular quadratic form over K is either zero or it describes a separable quadratic extension field of K. This is analogous to the discriminant of a nonsingular quadratic form over a field F of characteristic not 2. In that case, the discriminant takes values in F*/(F*)2, which can be identified with H1(F, F2) by Kummer theory. 
Arf's main results If the field K is perfect, then every nonsingular quadratic form over K is uniquely determined (up to equivalence) by its dimension and its Arf invariant. In particular, this holds over the field F2. In this case, the subgroup U above is zero, and hence the Arf invariant is an element of the base field F2; it is either 0 or 1. If the field K of characteristic 2 is not perfect (that is, K is different from its subfield K2 of squares), then the Clifford algebra is another important invariant of a quadratic form. A corrected version of Arf's original statement is that if the degree [K: K2] is at most 2, then every quadratic form over K is completely characterized by its dimension, its Arf invariant and its Clifford algebra. Examples of such fields are function fields (or power series fields) of one variable over perfect base fields. Quadratic forms over F2 Over F2, the Arf invariant is 0 if the quadratic form is equivalent to a direct sum of copies of the binary form , and it is 1 if the form is a direct sum of with a number of copies of . William Browder has called the Arf invariant the democratic invariant because it is the value which is assumed most often by the quadratic form. Another characterization: q has Arf invariant 0 if and only if the underlying 2k-dimensional vector space over the field F2 has a k-dimensional subspace on which q is identically 0 – that is, a totally isotropic subspace of half the dimension. In other words, a nonsingular quadratic form of dimension 2k has Arf invariant 0 if and only if its isotropy index is k (this is the maximum dimension of a totally isotropic subspace of a nonsingular form). The Arf invariant in topology Let M be a compact, connected 2k-dimensional manifold with a boundary such that the induced morphisms in -coefficient homology are both zero (e.g. if is closed). The intersection form is non-singular. (Topologists usually write F2 as .) A quadratic refinement for is a function which satisfies Let be any 2-dimensional subspace of , such that . Then there are two possibilities. Either all of are 1, or else just one of them is 1, and the other two are 0. Call the first case , and the second case . Since every form is equivalent to a symplectic form, we can always find subspaces with x and y being -dual. We can therefore split into a direct sum of subspaces isomorphic to either or . Furthermore, by a clever change of basis, We therefore define the Arf invariant Examples Let be a compact, connected, oriented 2-dimensional manifold, i.e. a surface, of genus such that the boundary is either empty or is connected. Embed in , where . Choose a framing of M, that is a trivialization of the normal (m − 2)-plane vector bundle. (This is possible for , so is certainly possible for ). Choose a symplectic basis for . Each basis element is represented by an embedded circle . The normal (m − 1)-plane vector bundle of has two trivializations, one determined by a standard framing of a standard embedding and one determined by the framing of M, which differ by a map i.e. an element of for . This can also be viewed as the framed cobordism class of with this framing in the 1-dimensional framed cobordism group , which is generated by the circle with the Lie group framing. The isomorphism here is via the Pontrjagin-Thom construction. Define to be this element. The Arf invariant of the framed surface is now defined Note that so we had to stabilise, taking to be at least 4, in order to get an element of . 
The case is also admissible as long as we take the residue modulo 2 of the framing. The Arf invariant of a framed surface detects whether there is a 3-manifold whose boundary is the given surface which extends the given framing. This is because does not bound. represents a torus with a trivialisation on both generators of which twists an odd number of times. The key fact is that up to homotopy there are two choices of trivialisation of a trivial 3-plane bundle over a circle, corresponding to the two elements of . An odd number of twists, known as the Lie group framing, does not extend across a disc, whilst an even number of twists does. (Note that this corresponds to putting a spin structure on our surface.) Pontrjagin used the Arf invariant of framed surfaces to compute the 2-dimensional framed cobordism group , which is generated by the torus with the Lie group framing. The isomorphism here is via the Pontrjagin-Thom construction. Let be a Seifert surface for a knot, , which can be represented as a disc with bands attached. The bands will typically be twisted and knotted. Each band corresponds to a generator . can be represented by a circle which traverses one of the bands. Define to be the number of full twists in the band modulo 2. Suppose we let bound , and push the Seifert surface into , so that its boundary still resides in . Around any generator , we now have a trivial normal 3-plane vector bundle. Trivialise it using the trivial framing of the normal bundle to the embedding for 2 of the sections required. For the third, choose a section which remains normal to , whilst always remaining tangent to . This trivialisation again determines an element of , which we take to be . Note that this coincides with the previous definition of . The Arf invariant of a knot is defined via its Seifert surface. It is independent of the choice of Seifert surface (The basic surgery change of S-equivalence, adding/removing a tube, adds/deletes a direct summand), and so is a knot invariant. It is additive under connected sum, and vanishes on slice knots, so is a knot concordance invariant. The intersection form on the -dimensional -coefficient homology of a framed -dimensional manifold M has a quadratic refinement , which depends on the framing. For and represented by an embedding the value is 0 or 1, according as to the normal bundle of is trivial or not. The Kervaire invariant of the framed -dimensional manifold M is the Arf invariant of the quadratic refinement on . The Kervaire invariant is a homomorphism on the -dimensional stable homotopy group of spheres. The Kervaire invariant can also be defined for a -dimensional manifold M which is framed except at a point. In surgery theory, for any -dimensional normal map there is defined a nonsingular quadratic form on the -coefficient homology kernel refining the homological intersection form . The Arf invariant of this form is the Kervaire invariant of (f,b). In the special case this is the Kervaire invariant of M. The Kervaire invariant features in the classification of exotic spheres by Michel Kervaire and John Milnor, and more generally in the classification of manifolds by surgery theory. William Browder defined using functional Steenrod squares, and C. T. C. Wall defined using framed immersions. The quadratic enhancement crucially provides more information than : it is possible to kill x by surgery if and only if . The corresponding Kervaire invariant detects the surgery obstruction of in the L-group . 
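For the two-element field, the "democratic" characterization given earlier lends itself to a direct computation. The following Python sketch is an illustration added here, not part of the original article: it evaluates a quadratic form on every vector of the F2 vector space and returns the value taken most often.

from itertools import product

def arf_invariant_f2(q, n):
    """Arf invariant of a nonsingular quadratic form q on F_2^n (n even):
    the element of F_2 that q assumes most often."""
    counts = [0, 0]
    for v in product((0, 1), repeat=n):
        counts[q(v) % 2] += 1
    return 0 if counts[0] > counts[1] else 1

# xy takes the value 0 three times and 1 once, so its Arf invariant is 0;
# x^2 + xy + y^2 (which equals x + xy + y over F_2) takes the value 1 three times, so its Arf invariant is 1.
print(arf_invariant_f2(lambda v: v[0] * v[1], 2))                # prints 0
print(arf_invariant_f2(lambda v: v[0] + v[0] * v[1] + v[1], 2))  # prints 1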
See also de Rham invariant, a mod 2 invariant of -dimensional manifolds Notes References See Lickorish (1997) for the relation between the Arf invariant and the Jones polynomial. See Chapter 3 of Carter's book for another equivalent definition of the Arf invariant in terms of self-intersections of discs in 4-dimensional space. Glen Bredon: Topology and Geometry, 1993, . J. Scott Carter: How Surfaces Intersect in Space, Series on Knots and Everything, 1993, . W. B. Raymond Lickorish, An Introduction to Knot Theory, Graduate Texts in Mathematics, Springer, 1997, Lev Pontryagin, Smooth manifolds and their applications in homotopy theory American Mathematical Society Translations, Ser. 2, Vol. 11, pp. 1–114 (1959) Further reading Quadratic forms Surgery theory
Arf invariant
[ "Mathematics" ]
2,576
[ "Quadratic forms", "Number theory" ]
6,950,713
https://en.wikipedia.org/wiki/Financial%20Management%20Standard
The Financial Management Standard 1997 (also known as the FMS) is a state law of the Queensland Government empowered by the Financial Administration and Audit Act 1977 (Qld). Its primary purpose is to provide the policies and principles to be observed in financial management, including planning, performance management, internal control, and corporate management within the Queensland Government. This is achieved by stating the functions of each accountable officer and statutory body in relation to corporate management. A key aspect of the FMS is its directives associated with the management of information and communications technology (ICT), including the requirement for detailed ICT planning to ensure appropriate acquisition processes and ongoing management. Unlike the United States federal law known as the Information Technology Management Reform Act (also known as the Clinger-Cohen Act), the FMS contains explicit references to the use of Enterprise Architecture. In particular, the FMS links the planning of ICT directly to the Government Enterprise Architecture. Penalties, including imprisonment, exist for failure of accountable officers to comply with the requirements of the Financial Administration and Audit Act 1977 (Qld) and associated regulations and the FMS. External links Queensland Government Queensland Treasury Government of Queensland Information technology management Financial management
Financial Management Standard
[ "Technology" ]
235
[ "Information technology", "Information technology management" ]
6,950,790
https://en.wikipedia.org/wiki/Angeli%E2%80%93Rimini%20reaction
The Angeli–Rimini reaction is an organic reaction between an aldehyde and N-hydroxybenzenesulfonamide in the presence of a base, forming a hydroxamic acid. The other reaction product is a sulfinic acid. The reaction was discovered by the two Italian chemists Angelo Angeli and Enrico Rimini (1874–1917), and was published in 1896. Chemical test The reaction is used in a chemical test for the detection of aldehydes in combination with ferric chloride. In this test a few drops of the aldehyde-containing specimen are dissolved in ethanol, the sulfonamide is added together with some sodium hydroxide solution, and then the solution is acidified to Congo red. An added drop of ferric chloride will turn the solution an intense red when an aldehyde is present. The sulfonamide can be prepared by reaction of hydroxylamine and benzenesulfonyl chloride in ethanol with potassium metal. Reaction mechanism The mechanism of this reaction is not clear, and several potential pathways exist. The N-hydroxybenzenesulfonamide 1, or its deprotonated form 2, acts as a nucleophile in the reaction with the aldehyde 3, giving intermediate 4. After intramolecular proton exchange to 5, a sulfinic acid anion is split off and the hydroxamic acid 8 results through nitroso compound 6 and intermediate 7. Alternatively, the aziridine intermediate 9 directly forms the end product. The formation of the nitrene intermediate 10 is ruled out given the lack of reactivity of the chemical mixture towards simple alkenes. Scope The Angeli–Rimini reaction has recently been applied in solid-phase synthesis with the sulfonamide covalently linked to a polystyrene solid support. References Organic redox reactions Chemical tests Name reactions
Angeli–Rimini reaction
[ "Chemistry" ]
376
[ "Name reactions", "Organic redox reactions", "Chemical tests", "Organic reactions" ]
6,951,047
https://en.wikipedia.org/wiki/Enzyme%20potentiated%20desensitization
Enzyme potentiated desensitization (EPD), is a treatment for allergies developed in the 1960s by Dr. Leonard M. McEwen in the United Kingdom. EPD uses much lower doses of antigens than conventional desensitization treatment paired with the enzyme β-glucuronidase. EPD is approved in the United Kingdom for the treatment of hay fever, food allergy and intolerance and environmental allergies. EPD was developed for the treatment of autoimmune disease by the United Kingdom company Epidyme which was owned by Dr. McEwen and had been granted a United Kingdom patent. Despite encouraging results in an experimental model of rheumatoid arthritis, the company was placed into liquidation in April 2010. United States use EPD was available in the United States until 2001, when the Food and Drug Administration revoked approval for an investigative study which it had previously sanctioned. That study had allowed EPD to be imported into the United States without being licensed. The approval was revoked because the EPD treatments included complex mixtures of allergens that were not allowed under FDA rules. Since 2001, the FDA has banned importation of EPD for the following reasons: EPD is not licensed. the labeling of the medicine does not contain adequate directions for use. (EPD is only supplied to doctors who have been through a one-week training course, and instructions supplied with the medicine would not be adequate) A related treatment, Low Dose Allergens (LDA), was developed in the US by Dr. Shrader, which, being a compounding rather than a drug, is not regulated by the FDA. In addition, LDA uses a different allergen mix for the US environment. However, LDA is considered by many in the field to be a repackaging of EPD that circumvents the FDA guidelines that caused EPD to be revoked. EPD treatment The enzyme beta glucuronidase appears to potentiate the desensitizing effect of a small dose of allergen. The quantities of both are smaller than those occurring naturally in the body, but not so small that they can be regarded as homeopathic. Intradermal injections are used. The treatment takes 3–4 weeks before any effect is seen. For food and environmental allergies and intolerances treatments are typically given at two monthly intervals at first, but the interval between treatments is gradually lengthened. Hay fever is treated with two shots of EPD outside the pollen season. Mechanism for EPD The treatment uses dilutions of allergen and enzyme to which T-regulatory lymphocytes are believed to respond by favouring desensitization, or down-regulation, rather than sensitization. Once activated these lymphocytes travel to lymph nodes and reproduce or stimulate similar T-lymphocytes. Evidence for the Effectiveness of EPD EPD is considered experimental by some doctors and allergists. However, there is evidence for the efficacy of EPD in the treatment of hay fever and other conditions as a result of nine placebo-controlled, double-blind trials involving 271 patients. These trials showed a significant improvement in the symptoms with probabilities of 0.001 to 0.01 (a chance of one in a thousand to one in a hundred that the results of the trial would be seen by chance alone assuming EPD had no effect). However, one trial involving 183 patients published in the British Medical Journal showed no overall effect. 
Dr Len McEwen, inventor of EPD, speculated that the reason for the failure might have been that the beta glucuronidase enzyme preparation was inadvertently heated or frozen during storage in the hospital pharmacy, as it is sensitive to the storage temperature and enzyme from the same manufactured batch had been used to treat a number of patients successfully. However, there is no evidence available after the event to test this theory as the remaining trial materials were destroyed immediately after the trial ended. Safety of EPD While the efficacy of EPD is sometimes the subject of controversy among the medical community, the safety of EPD is demonstrated in one study under the control of an Institutional Review Board and reported by the American EPD Society. 5,400 patients received at least 3 doses of EPD with no severe reactions reported. Comparison of EPD with conventional escalating-dose immunotherapy (hyposensitization) By contrast, uncontrolled use of conventional (escalating-dose) immunotherapy (hyposensitization, not EPD) for general allergic conditions was believed to be responsible for at least 29 deaths in the UK, and is now banned in the United Kingdom except in hospital under close observation. A working party of the British Society for Allergy and Clinical Immunology reviewed the role of conventional high-dose specific allergen immunotherapy (not EPD) in the treatment of allergic disease and recommends high-dose specific allergen immunotherapy for treating summer hay fever uncontrolled by conventional medication and for wasp and bee venom hypersensitivity. For the recommended indications the risk:benefit ratio was found to be acceptable for conventional immunotherapy provided patients are carefully selected; in particular, patients with asthma should be excluded, injections should be given only by allergists experienced in this form of treatment in a clinic where resuscitative facilities are available, and patients remain symptom-free for an observation period after injection which is sufficient to detect all serious adverse reactions. Conventional escalating-dose immunotherapy (not EPD) has been used to treat tens of millions of people in the United States with appropriate medical supervision, with a death rate of less than one in one million according to the American Academy of Allergy, Asthma, and Immunology. Restrictions on EPD EPD has not been developed for treatment of allergy to insect stings (for which conventional immunotherapy is recommended), nor for contact dermatitis and allergy to drugs. It is not FDA approved. References Immunology
Enzyme potentiated desensitization
[ "Biology" ]
1,266
[ "Immunology" ]
6,951,435
https://en.wikipedia.org/wiki/Th%C3%A9ophile%20de%20Donder
Théophile Ernest de Donder (19 August 1872 – 11 May 1957) was a Belgian mathematician, physicist and chemist famous for his work (published in 1923) in developing correlations between the Newtonian concept of chemical affinity and the Gibbsian concept of free energy. Education He received his doctorate in physics and mathematics from the Université Libre de Bruxelles in 1899, for a thesis entitled Sur la Théorie des Invariants Intégraux (On the Theory of Integral Invariants). Career He was a professor at the Université Libre de Bruxelles between 1911 and 1942. Initially he continued the work of Henri Poincaré and Élie Cartan. From 1914 on, he was influenced by the work of Albert Einstein and was an enthusiastic proponent of the theory of relativity. He gained a significant reputation in 1923, when he developed his definition of chemical affinity. He pointed out a connection between chemical affinity and the Gibbs free energy. He is considered the father of the thermodynamics of irreversible processes. De Donder's work was later developed further by Ilya Prigogine. De Donder was an associate and friend of Albert Einstein. In 1927 he was one of the participants in the fifth Solvay Conference on Physics, which took place at the International Solvay Institute for Physics in Belgium. Books by De Donder Thermodynamic Theory of Affinity: A Book of Principles. Oxford, England: Oxford University Press (1936) The Mathematical Theory of Relativity. Cambridge, MA: MIT (1927) Sur la théorie des invariants intégraux (thesis) (1899). Théorie du champ électromagnétique de Maxwell-Lorentz et du champ gravifique d'Einstein (1917) La gravifique Einsteinienne (1921) Introduction à la gravifique einsteinienne (1925) Théorie mathématique de l'électricité (1925) Théorie des champs gravifiques (1926) Application de la gravifique einsteinienne (1930) Théorie invariantive du calcul des variations (1931) See also Klein–Gordon equation Schrödinger equation References External links Theophile de Donder - Science World at Wolfram.com Prigogine on de Donder De Donder's math genealogy De Donder's academic tree 1872 births 1957 deaths Belgian physicists Belgian mathematicians Belgian chemists Thermodynamicists Free University of Brussels (1834–1969) alumni
Théophile de Donder
[ "Physics", "Chemistry" ]
510
[ "Thermodynamics", "Thermodynamicists" ]
6,952,283
https://en.wikipedia.org/wiki/Omnilingual
"Omnilingual" is a science fiction short story by American writer H. Beam Piper. Originally published in the February 1957 issue of Astounding Science Fiction, it focuses on the problem of archaeology on an alien culture. Synopsis An expedition from Earth to Mars discovers a deserted city, the remains of an advanced civilization that died out 50,000 years before. The human scientists recover books and documents left behind, and are puzzled by their contents. Earnest young archeologist Martha Dane deciphers a few words, but the real breakthrough comes when the team explores what appears to have been a university in which the last few civilized Martians made their last stand. Inside, they find a "Rosetta Stone": the periodic table of the elements. The story builds tension from the skepticism of the rest of the team, mostly male, as well as from Dr. Dane's competitive, spotlight-seeking teammate, Tony Lattimer. Reception Jo Walton stated that Omnilingual was "influential" and "the classic SF short story, the one everyone ought to read if they’re only going to read one", and noted that the story "raises a question that everyone who has dealt with the subject [when writing science fiction] since has had to either accept or find a way around", namely "If scientific truths are true for everyone, will we therefore be able to communicate with all scientifically literate cultures using science?" Walton also commended the story's use of gender equality and multicultural characters, with "the only thing that made [her] raise [her] eyebrows" being the constant use of alcohol and tobacco. it focuses on the problem of archaeology on an alien culture. James Nicoll questioned the basic premise of scientific language being necessarily decipherable — "what if Martian didn't use letters and a numbering system which sounds very akin to ours?" — but overall concluded that the story was "well worth reading." The Routledge Companion to Science Fiction similarly faulted this "ideological sleight-of-hand", emphasizing that the "extinct Martian civilization closely resembles the [then-]contemporary US: language is recorded in a linear written form divided into words; the title pages of printed magazines feature the title, month of publication, issue number, and table of contents; Martians live in cities with universities; universities are divided into disciplinary departments — and classrooms — more or less identical to terrestrial ones; and on the wall of the material sciences lab hangs a periodic table of elements, organizing information which might apply universally but which in no way demands graphic representation or public display." John W. Cowan, inventor of Lojban, praised Omnilingual as "one of the best, science fiction stories in which the science is linguistic archaeology", and published a modernized version to his website in 2009. Stylometric study In 2018, Tomi S. Melka and Michal Místecký carried out a complex quantitative analysis of the novelette's style. Publication history Omnilingual has been reprinted several times since its original publication. Prologue to Analog (1962, Doubleday) Analog Anthology (1965, Dobson) Great Science Fiction Stories About Mars (1966, Fredrick Fell) Apeman, Spaceman (1968, Doubleday) Mars, We Love You (1971, Doubleday) - Also published under the title The Book of Mars Where Do We Go from Here? (1971, Doubleday) The Days After Tomorrow (1971, Little Brown) Tomorrow, and Tomorrow, and Tomorrow... 
(1974, Rinehart & Winston) Science Fiction Novellas (1975, Scribner) Federation (1981, Ace) Isaac Asimov Presents The Great SF Stories 19 (1957) (1989, DAW) The World Turned Upside Down (2005, Baen) See also Michael Moorcock Isaac Asimov A. E. van Vogt Robert Silverberg Xenoarchaeology Xenolinguistics References External links Short stories set on Mars 1957 short stories Science fiction short stories Short stories by H. Beam Piper Works originally published in Analog Science Fiction and Fact Fiction set on desert planets Periodic table in popular culture Extraterrestrial life in popular culture
Omnilingual
[ "Chemistry" ]
853
[ "Periodic table", "Periodic table in popular culture" ]
6,952,389
https://en.wikipedia.org/wiki/Standardized%20mortality%20ratio
In epidemiology, the standardized mortality ratio or SMR is a quantity, expressed as either a ratio or a percentage, quantifying the increase or decrease in mortality of a study cohort with respect to the general population. Standardized mortality ratio The standardized mortality ratio is the ratio of observed deaths in the study group to expected deaths in the general population. This ratio can be expressed as a percentage simply by multiplying by 100. The SMR may be quoted as either a ratio or a percentage. If the SMR is quoted as a ratio and is equal to 1.0, then this means the number of observed deaths equals the number of expected deaths. If higher than 1.0, then there is a higher number of deaths than is expected. SMR constitutes an indirect form of standardization. It has an advantage over the direct method of standardization since age-adjustment is permitted in situations where age stratification may not be available for the cohort being studied or where strata-specific data are subject to excessive random variability. Definition The requirements for calculating SMR for a cohort are: The number of persons in each age group in the population being studied The age-specific death rates of the general population in the same age groups of the study population The observed deaths in the study population Expected deaths would then be calculated simply by multiplying the death rates of the general population by the total number of participants in the study group in the corresponding age group and summing up all the values for each age group to arrive at the number of expected deaths. The study groups are weighted based on their particular distribution (for example, age), as opposed to the general population's distribution. This is a fundamental distinction between an indirect method of standardization such as the SMR and direct standardization techniques. The SMR may be quoted with an indication of the uncertainty associated with its estimation, such as a confidence interval (CI) or p value, which allows it to be interpreted in terms of statistical significance. Example An example might be a cohort study into cumulative exposure to arsenic from drinking water, whereby the mortality rates due to a number of cancers in a highly exposed group (which drinks water with a mean arsenic concentration of, say, 10 mg/L) are compared with those in the general population. An SMR for bladder cancer of 1.70 in the exposed group would mean that there are (1.70 - 1) × 100 = 70% more deaths due to bladder cancer in the cohort than in the reference population (in this case the national population, which is generally considered not to exhibit cumulative exposure to high arsenic levels). Standardized mortality rate The standardized mortality rate tells how many persons, per thousand of the population, will die in a given year and what the causes of death will be. Such statistics have many uses: Life insurance companies periodically update their premiums based on the mortality rate, adjusted for age. Medical researchers can track disease-related deaths and shift focus and funding to address increasing or decreasing risks. Organizations, both non- and for-profit, can utilize such statistics to justify their missions. Regarding occupational uses: Mortality tables are also often used when numbers of deaths for each age-specific stratum are not available.
It is also used to study mortality rate in an occupationally exposed population: Do people who work in a certain industry, such as mining or construction, have a higher mortality than people of the same age in the general population? Is an additional risk associated with that occupation? To answer the question of whether a population of miners has a higher mortality than we would expect in a similar population that is not engaged in mining, the age-specific rates for such a known population, such as all men of the same age, are applied to each age group in the population of interest. This will yield the number of deaths expected in each age group in the population of interest, if this population had had the mortality experience of the known population. Thus, for each age group, the number of deaths expected is calculated, and these numbers are totaled. The numbers of deaths that were actually observed in that population are also calculated and totaled. The ratio of the total number of deaths actually observed to the total number of deaths expected, if the population of interest had had the mortality experience of the known population, is then calculated. This ratio is called the standardized mortality ratio (SMR). The SMR is defined as follows: SMR = (Observed no. of deaths per year)/(Expected no. of deaths per year). See also Age-specific mortality rate Crude death rate Vulnerability index References External links PAMCOMP Person-Years Analysis and Computation Program for calculating SMRs Epidemiology Biostatistics Medical statistics Statistical ratios
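A short numerical illustration of the indirect standardization just described, in Python; the age strata, reference rates, cohort sizes, and observed deaths are hypothetical figures chosen for the example, not data from any real study.

# Hypothetical figures for illustration only.
reference_rates = {"40-49": 0.002, "50-59": 0.006, "60-69": 0.015}  # general-population deaths per person-year
cohort_sizes = {"40-49": 4000, "50-59": 3000, "60-69": 1000}        # persons in the study cohort per age group
observed_deaths = 60                                                 # deaths actually observed in the cohort

# Expected deaths = sum over age strata of (reference rate x cohort size in that stratum)
expected_deaths = sum(reference_rates[age] * cohort_sizes[age] for age in cohort_sizes)
smr = observed_deaths / expected_deaths
print(f"Expected deaths: {expected_deaths:.1f}")   # 41.0
print(f"SMR: {smr:.2f} ({smr * 100:.0f}%)")        # 1.46 (146%)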
Standardized mortality ratio
[ "Environmental_science" ]
956
[ "Epidemiology", "Environmental social science" ]
6,952,874
https://en.wikipedia.org/wiki/National%20Vaccine%20Injury%20Compensation%20Program
The Office of Special Masters of the U.S. Court of Federal Claims, popularly known as "vaccine court", administers a no-fault system for litigating vaccine injury claims. These claims against vaccine manufacturers cannot normally be filed in state or federal civil courts, but instead must be heard in the U.S. Court of Federal Claims, sitting without a jury. The National Vaccine Injury Compensation Program (VICP or NVICP) was established by the 1986 National Childhood Vaccine Injury Act (NCVIA), passed by the United States Congress in response to a threat to the vaccine supply due to a 1980s scare over the DPT vaccine. Despite the belief of most public health officials that claims of side effects were unfounded, large jury awards had been given to some plaintiffs, most DPT vaccine makers had ceased production, and officials feared the loss of herd immunity. Between its inception in 1986 and May 2023, it has awarded a total of $4.6 billion, with the average award amount between 2006 and 2020 being $450,000, and the award rate (which varies by vaccine) being 1.2 awards per million doses administered. The Health Resources and Services Administration reported in July 2022 that "approximately 60 percent of all compensation awarded by the VICP comes as result of a negotiated settlement between the parties in which HHS has not concluded, based upon review of the evidence, that the alleged vaccine(s) caused the alleged injury". Cases are settled to minimize the risk of loss for both parties, to minimize the time and expense of litigation, and to resolve petitions quickly. National Childhood Vaccine Injury Act The U.S. Department of Health and Human Services set up the National Vaccine Injury Compensation Program (VICP) in 1988 to compensate individuals and families of individuals injured by covered childhood vaccines. The VICP was adopted in response to concerns over the pertussis portion of the DPT vaccine. Several U.S. lawsuits against vaccine makers won substantial awards. Most makers ceased production, and the last remaining major manufacturer threatened to do so. The VICP uses a no-fault system for resolving vaccine injury claims. Compensation covers medical and legal expenses, loss of future earning capacity, and up to $250,000 for pain and suffering; a death benefit of up to $250,000 is available. If certain minimal requirements are met, legal expenses are compensated even for unsuccessful claims. Since 1988, the program has been funded by an excise tax of 75 cents on every purchased dose of covered vaccine. To win an award, a claimant must have experienced an injury that is named as a vaccine injury in a table included in the law within the required time period or show a causal connection. The burden of proof is the civil law preponderance-of-the-evidence standard, in other words a showing that causation was more likely than not. Denied claims can be pursued in civil courts, though this is rare. The VICP covers all vaccines listed on the Vaccine Injury Table maintained by the Secretary of Health and Human Services; in 2007 the list included vaccines against diphtheria, tetanus, pertussis (whooping cough), measles, mumps, rubella (German measles), polio, hepatitis B, varicella (chicken pox), Haemophilus influenzae type b, rotavirus, and pneumonia. From 1988 until January 8, 2008, 5,263 claims relating to autism, and 2,865 non-autism claims, were made to the VICP. 
Of these claims, 925 (see previous rulings), were compensated, with 1,158 non-autism and 350 autism claims dismissed, and one autism-like claim compensated; awards (including attorney's fees) totaled $847 million. The VICP also applies to claims for injuries suffered before 1988; there were 4,264 of these claims of which 1,189 were compensated with awards totaling $903 million. As of October 2019, $4.2 billion in compensation (not including attorneys fees and costs) has been awarded. , filing a claim with the Court of Federal Claims requires a $402.00 filing fee, which can be waived for those unable to pay. Medical records such as prenatal, birth, pre-vaccination, vaccination, and post-vaccination records are strongly suggested, as medical review and claim processing may be delayed without them. Because this is a legal process most people use a lawyer, though this is not required. By 1999 the average claim took two years to resolve, and 42% of resolved claims were awarded compensation, as compared with 23% for medical malpractice claims through the tort system. There is a three-year statute of limitations for filing a claim, timed from the first manifestation of the medical problem. Autism claims More than 5,300 petitions alleging autism caused by vaccines have been filed in the vaccine court. In 2002, the court instituted the Omnibus Autism Proceeding in which plaintiffs were allowed to proceed with the three cases they considered to be the strongest before a panel of special masters. In each of the cases, the panel found that the plaintiffs had failed to demonstrate a causal effect between the MMR vaccine and autism. Following this determination, the vaccine court has routinely dismissed such suits, finding no causal effect between the MMR vaccine and autism. Many studies have failed to conclude that there is a causal link between autism spectrum disorders and vaccines, and the current scientific consensus is that routine childhood vaccines are not linked to the development of autism. Several claimants have attempted to bypass the VICP process with claims that thimerosal in vaccines had caused autism, but these were ultimately not successful. They have demanded medical monitoring for vaccinated children who do not show signs of autism and have filed class-action suits on behalf of parents. In March 2006, the U.S. Fifth Circuit Court of Appeals ruled that plaintiffs suing three manufacturers of thimerosal could bypass the vaccine court and litigate in either state or federal court using the ordinary channels for recovery in tort. This was the first instance where a federal appeals court has held that a suit of this nature may bypass the vaccine court. The argument was that thimerosal is a preservative, not a vaccine, so it does not fall under the provisions of the vaccine act. The claims that vaccines (or thimerosal in vaccines) caused autism eventually had to be filed in the vaccine court as part of the Omnibus Autism Proceeding. The scientific consensus, developed from substantial medical and scientific research, states that there is no evidence supporting these claims, and the rate of autism continues to climb despite elimination of thimerosal from most routine early childhood vaccines. Major scientific and medical bodies such as the Institute of Medicine and World Health Organization, as well as governmental agencies such as the Food and Drug Administration and the CDC reject any role for thimerosal in autism or other neurodevelopmental disorders. 
Compensation awards As of May 2023, nearly $4.6 billion in compensation and $450 million in attorneys’ fees have been awarded. The following table shows the awards by main classes of vaccines made to victims in the years 2006-2017. This shows that on average 1.2 awards were made per million vaccine doses. It also shows that multiple vaccines such as MMR do not have an abnormal award rate. * This covers the vaccinations known by the abbreviations DT, DTaP, DTaP-HIB, DTaP-IPV, DTap-IPV-HIB, Td, Tdap Attorneys fees and costs Self representation is permitted, although the NVICP also pays attorneys fees out of the fund, separate from any compensation given to the petitioner. This is "to ensure that vaccine claimants have readily available a competent bar to prosecute their claims". Homeland Security Act The Homeland Security Act of 2002 provides another exception to the exclusive jurisdiction of the vaccine court. If smallpox vaccine were to be widely administered by public health authorities in response to a terrorist or other biological warfare attack, persons administering or producing the vaccine would be deemed federal employees and claims would be subject to the Federal Tort Claims Act, in which case claimants would sue the U.S. Government in the U.S. district courts, and would have the burden of proving the defendants' negligence, a much more difficult standard. Petitioner's burden of proof Notably, the Health Resources and Services Administration reported in July 2022 that "approximately 60 percent of all compensation awarded by the VICP comes as result of a negotiated settlement between the parties in which HHS has not concluded, based upon review of the evidence, that the alleged vaccine(s) caused the alleged injury". Cases are settled to minimize the risk of loss for both parties, to minimize the time and expense of litigation, and to resolve petitions quickly. Of the remaining cases, in the vaccine court, as in civil tort cases, the burden of proof is a preponderance of evidence, but while in tort cases this is met by expert testimony based on epidemiology or rigorous scientific studies showing both general and specific causation, in the vaccine court, the burden is met with a three prong test established in Althen, a 2005 United States Court of Appeals for the Federal Circuit ruling. Althen held that an award should be granted if a petitioner either establishes a "Tabled Injury" or proves "causation in fact" by proving three prongs: a medical theory causally connecting the vaccination and the injury; a logical sequence of cause and effect showing that the vaccination was the reason for the injury; and a showing of a proximate temporal relationship between vaccination and injury. This ruling held that tetanus vaccine caused a particular case of optic neuritis, even though no scientific evidence supported the petitioner's claim. Other rulings have allowed petitioners to gain awards for claims that the MMR vaccine causes fibromyalgia, that the Hib vaccine causes transverse myelitis, and that the hepatitis B vaccine causes Guillain–Barré syndrome, chronic demyelinating polyneuropathy, and multiple sclerosis. In the most extreme of these cases, a 2006 petitioner successfully claimed that a hepatitis B vaccine caused her multiple sclerosis despite several studies showing that the vaccine neither causes nor worsens the disease, and despite a conclusion by the Institute of Medicine that evidence favors rejection of a causal relationship. 
In 2008, the federal government settled a case brought to the vaccine court by the family of Hannah Poling, a girl who developed autistic-like symptoms after receiving a series of vaccines in a single day. The vaccines given were DTaP, Hib, MMR, varicella, and inactivated polio. Poling was diagnosed months later with encephalopathy (brain disease) caused by a mitochondrial enzyme deficit, a mitochondrial disorder; it is not unusual for children with such deficits to develop neurologic signs between their first and second years. There is little scientific research in the area: no scientific studies show whether childhood vaccines can cause or contribute to mitochondrial disease, and there is no scientific evidence that vaccinations damage the brains of children with mitochondrial disorders. Although many parents view this ruling as confirming that vaccines cause regressive autism, most children with autism do not seem to have mitochondrial disorders, and the case was settled without proof of causation. With the commencement of hearings in the case of Cedillo v. Secretary of Health and Human Services (Case #98-916V), the argument over whether autism is a vaccine injury moved into the vaccine court. A panel of three special masters began hearing the first cases of the historic Omnibus Autism Proceedings in June 2007. There were six test cases in all, and the entire record of the cases is publicly available. The lead petitioners, the parents of Michelle Cedillo, claimed that Michelle's autism was caused by a vaccine. Theresa and Michael Cedillo contended that thimerosal seriously weakened Michelle's immune system and prevented her body from clearing the measles virus after her vaccination at the age of fifteen months. At the outset Special Master George Hastings, Jr. said "Clearly the story of Michelle's life is a tragic one," while pledging to listen carefully to the evidence. On February 12, 2009, the court ruled in three test cases that the combination of the MMR vaccine and thimerosal-containing vaccines were not to blame for autism. Hastings concluded in his decision, "Unfortunately, the Cedillos have been misled by physicians who are guilty, in my view, of gross medical misjudgment." The ruling was appealed to the U.S. Court of Appeals, and upheld. On March 13, 2010, the court ruled in three test cases that thimerosal-containing vaccines do not cause autism. Special Master Hastings concluded, "The overall weight of the evidence is overwhelmingly contrary to the petitioners' causation theories." See also Vaccine Damage Payment National Childhood Vaccine Injury Act Countermeasures Injury Compensation Program References External links National Vaccine Injury Compensation Program (VICP) Vaccine Program / Office of Special Masters United States federal health legislation Vaccination-related organizations Drug safety Court, Vaccine United States Court of Federal Claims Vaccination in the United States
National Vaccine Injury Compensation Program
[ "Chemistry", "Biology" ]
2,748
[ "Vaccination", "Drug safety", "Vaccine controversies" ]
6,953,458
https://en.wikipedia.org/wiki/Hadley%20cell
The Hadley cell, also known as the Hadley circulation, is a global-scale tropical atmospheric circulation that features air rising near the equator, flowing poleward near the tropopause at a height of above the Earth's surface, cooling and descending in the subtropics at around 25 degrees latitude, and then returning equatorward near the surface. It is a thermally direct circulation within the troposphere that emerges due to differences in insolation and heating between the tropics and the subtropics. On a yearly average, the circulation is characterized by a circulation cell on each side of the equator. The Southern Hemisphere Hadley cell is slightly stronger on average than its northern counterpart, extending slightly beyond the equator into the Northern Hemisphere. During the summer and winter months, the Hadley circulation is dominated by a single, cross-equatorial cell with air rising in the summer hemisphere and sinking in the winter hemisphere. Analogous circulations may occur in extraterrestrial atmospheres, such as on Venus and Mars. Global climate is greatly influenced by the structure and behavior of the Hadley circulation. The prevailing trade winds are a manifestation of the lower branches of the Hadley circulation, converging air and moisture in the tropics to form the Intertropical Convergence Zone (ITCZ) where the Earth's heaviest rains are located. Shifts in the ITCZ associated with the seasonal variability of the Hadley circulation cause monsoons. The sinking branches of the Hadley cells give rise to the oceanic subtropical ridges and suppress rainfall; many of the Earth's deserts and arid regions are located in the subtropics coincident with the position of the sinking branches. The Hadley circulation is also a key mechanism for the meridional transport of heat, angular momentum, and moisture, contributing to the subtropical jet stream, the moist tropics, and maintaining a global thermal equilibrium. The Hadley circulation is named after George Hadley, who in 1735 postulated the existence of hemisphere-spanning circulation cells driven by differences in heating to explain the trade winds. Other scientists later developed similar arguments or critiqued Hadley's qualitative theory, providing more rigorous explanations and formalism. The existence of a broad meridional circulation of the type suggested by Hadley was confirmed in the mid-20th century once routine observations of the upper troposphere became available via radiosondes. Observations and climate modelling indicate that the Hadley circulation has expanded poleward since at least the 1980s as a result of climate change, with an accompanying but less certain intensification of the circulation; these changes have been associated with trends in regional weather patterns. Model projections suggest that the circulation will widen and weaken throughout the 21st century due to climate change. Mechanism and characteristics The Hadley circulation describes the broad, thermally direct, and meridional overturning of air within the troposphere over the low latitudes. Within the global atmospheric circulation, the meridional flow of air averaged along lines of latitude is organized into circulations of rising and sinking motions coupled with the equatorward or poleward movement of air called meridional cells. These include the prominent "Hadley cells" centered over the tropics and the weaker "Ferrel cells" centered over the mid-latitudes.
The Hadley cells result from the contrast of insolation between the warm equatorial regions and the cooler subtropical regions. The uneven heating of Earth's surface results in regions of rising and descending air. Over the course of a year, the equatorial regions absorb more radiation from the Sun than they radiate away. At higher latitudes, the Earth emits more radiation than it receives from the Sun. Without a mechanism to exchange heat meridionally, the equatorial regions would warm and the higher latitudes would cool progressively in disequilibrium. The broad ascent and descent of air results in a pressure gradient force that drives the Hadley circulation and other large-scale flows in both the atmosphere and the ocean, distributing heat and maintaining a global long-term and subseasonal thermal equilibrium. The Hadley circulation covers almost half of the Earth's surface area, spanning from roughly the Tropic of Cancer to the Tropic of Capricorn. Vertically, the circulation occupies the entire depth of the troposphere. The Hadley cells comprising the circulation consist of air carried equatorward by the trade winds in the lower troposphere that ascends when heated near the equator, along with air moving poleward in the upper troposphere. Air that is moved into the subtropics cools and then sinks before returning equatorward to the tropics; the position of the sinking air associated with the Hadley cell is often used as a measure of the meridional width of the global tropics. The equatorward return of air and the strong influence of heating make the Hadley cell a thermally-driven and enclosed circulation. Due to the buoyant rise of air near the equator and the sinking of air at higher latitudes, a pressure gradient develops near the surface with lower pressures near the equator and higher pressures in the subtropics; this provides the motive force for the equatorward flow in the lower troposphere. However, the release of latent heat associated with condensation in the tropics also relaxes the decrease in pressure with height, resulting in higher pressures aloft in the tropics compared to the subtropics for a given height in the upper troposphere; this pressure gradient is stronger than its near-surface counterpart and provides the motive force for the poleward flow in the upper troposphere. Hadley cells are most commonly identified using the mass-weighted, zonally-averaged stream function of meridional winds, but they can also be identified by other measurable or derivable physical parameters such as velocity potential or the vertical component of wind at a particular pressure level. Given the latitude φ and the pressure level p, the Stokes stream function Ψ characterizing the Hadley circulation is given by Ψ(φ, p) = (2π a cos φ / g) ∫_0^p v̄(φ, p') dp', where a is the radius of Earth, g is the acceleration due to the gravity of Earth, and v̄ is the zonally averaged meridional wind at the prescribed latitude and pressure level. The value of Ψ(φ, p) gives the integrated meridional mass flux between the specified pressure level and the top of the Earth's atmosphere, with positive values indicating northward mass transport. The strength of the Hadley cells can be quantified based on Ψ, including the maximum and minimum values of the stream function both overall and at various pressure levels. Hadley cell intensity can also be assessed using other physical quantities such as the velocity potential, vertical component of wind, transport of water vapor, or total energy of the circulation.
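A brief numerical sketch of how this mass stream function can be evaluated from gridded zonal-mean wind data, in Python; the grid layout, variable names, and use of NumPy are assumptions made for illustration and are not part of the article.

import numpy as np

a = 6.371e6   # Earth's radius in metres
g = 9.81      # gravitational acceleration in m/s^2

def mass_stream_function(v_bar, lats_deg, p_levels):
    """v_bar: zonally averaged meridional wind (m/s), array of shape (n_levels, n_lats);
    p_levels: pressure levels in Pa, ordered from the top of the atmosphere downward.
    Returns Psi in kg/s; positive values indicate northward mass transport."""
    phi = np.deg2rad(np.asarray(lats_deg))
    psi = np.zeros_like(v_bar)
    dp = np.diff(p_levels)
    for j in range(len(phi)):
        # trapezoidal integration of v_bar in pressure from the topmost level down to each level
        layer_means = 0.5 * (v_bar[1:, j] + v_bar[:-1, j])
        integral = np.concatenate(([0.0], np.cumsum(layer_means * dp)))
        psi[:, j] = (2.0 * np.pi * a * np.cos(phi[j]) / g) * integral
    return psi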
Structure and components The structure of the Hadley circulation and its components can be inferred by graphing zonal and temporal averages of global winds throughout the troposphere. At shorter timescales, individual weather systems perturb wind flow. Although the structure of the Hadley circulation varies seasonally, when winds are averaged annually (from an Eulerian perspective) the Hadley circulation is roughly symmetric and composed of two similar Hadley cells with one in each of the northern and southern hemispheres, sharing a common region of ascending air near the equator; however, the Southern Hemisphere Hadley cell is stronger. The winds associated with the annually-averaged Hadley circulation are relatively weak, on the order of a few metres per second. However, when averaging the motions of air parcels as opposed to the winds at fixed locations (a Lagrangian perspective), the Hadley circulation manifests as a broader circulation that extends farther poleward. Each Hadley cell can be described by four primary branches of airflow within the tropics: (1) an equatorward, lower branch within the planetary boundary layer; (2) an ascending branch near the equator; (3) a poleward, upper branch in the upper troposphere; and (4) a descending branch in the subtropics. The trade winds in the low-latitudes of both Earth's northern and southern hemispheres converge air towards the equator, producing a belt of low atmospheric pressure exhibiting abundant storms and heavy rainfall known as the Intertropical Convergence Zone (ITCZ). This equatorward movement of air near the Earth's surface constitutes the lower branch of the Hadley cell. The position of the ITCZ is influenced by the warmth of sea surface temperatures (SST) near the equator and the strength of cross-equatorial pressure gradients. In general, the ITCZ is located near the equator or is offset towards the summer hemisphere where the warmest SSTs are located. On an annual average, the rising branch of the Hadley circulation is slightly offset towards the Northern Hemisphere, away from the equator. Due to the Coriolis force, the trade winds deflect opposite the direction of Earth's rotation, blowing partially westward rather than directly equatorward in both hemispheres. The lower branch accrues moisture resulting from evaporation across Earth's tropical oceans. A warmer environment and converging winds force the moistened air to ascend near the equator, resulting in the rising branch of the Hadley cell. The upward motion is further enhanced by the release of latent heat as the uplift of moist air results in an equatorial band of condensation and precipitation. The Hadley circulation's upward branch largely occurs in thunderstorms occupying only around one percent of the surface area of the tropics. The transport of heat in the Hadley circulation's ascending branch is accomplished most efficiently by hot towers: cumulonimbus clouds bearing strong updrafts that do not mix in the drier air commonly found in the middle troposphere and thus allow the movement of air from the highly moist tropical lower troposphere into the upper troposphere. Approximately 1,500–5,000 hot towers daily near the ITCZ region are required to sustain the vertical heat transport exhibited by the Hadley circulation. The ascending air rises into the upper troposphere to near the tropopause, after which air diverges outward from the ITCZ and towards the poles. The top of the Hadley cell is set by the height of the tropopause as the stable stratosphere above prevents the continued ascent of air. 
Air arising from the low latitudes has higher absolute angular momentum about Earth's axis of rotation. The distance between the atmosphere and Earth's axis decreases poleward; to conserve angular momentum, poleward-moving air parcels must accelerate eastward. The Coriolis effect limits the poleward extent of the Hadley circulation, accelerating air in the direction of the Earth's rotation and forming a jet stream directed zonally rather than continuing the poleward flow of air at each Hadley cell's poleward boundary. Considering only the conservation of angular momentum, a parcel of air at rest along the equator would accelerate to a zonal speed of roughly 134 m/s by the time it reached 30° latitude. However, small-scale turbulence along the parcel's poleward trek and large-scale eddies in the mid-latitudes dissipate angular momentum. The jet associated with the Southern Hemisphere Hadley cell is stronger than its northern counterpart due to the stronger intensity of the Southern Hemisphere cell. The cooler environment of the higher latitudes causes poleward-moving air parcels to cool and eventually descend. When the movement of air is averaged annually, the descending branch of the Hadley cell is located roughly over the 25th parallel north and the 25th parallel south. The moisture in the subtropics is then partly advected poleward by eddies and partly advected equatorward by the lower branch of the Hadley cell, where it is later brought towards the ITCZ. Although the zonally-averaged Hadley cell is organized into four main branches, these branches are aggregations of more concentrated air flows and regions of mass transport. Several theories and physical models have attempted to explain the latitudinal width of the Hadley cell. The Held–Hou model provides one theoretical constraint on the meridional extent of the Hadley cells. By assuming a simplified atmosphere composed of a lower layer subject to friction from the Earth's surface and an upper layer free from friction, the model predicts that the Hadley circulation would be restricted to a limited band of latitudes about the equator if parcels do not have any net heating within the circulation. According to the Held–Hou model, the latitude φH of the Hadley cell's poleward edge scales according to φH ∝ √( g H Δθ / (Ω² a² θ₀) ), where Δθ is the difference in potential temperature between the equator and the pole in radiative equilibrium, H is the height of the tropopause, Ω is the Earth's rotation rate, and θ₀ is a reference potential temperature. Other compatible models posit that the width of the Hadley cell may scale with other physical parameters such as the vertically-averaged Brunt–Väisälä frequency in the troposphere or the growth rate of baroclinic waves shed by the cell. Seasonality and variability The Hadley circulation varies considerably with seasonal changes. Around the equinox during the spring and autumn for either the northern or southern hemisphere, the Hadley circulation takes the form of two relatively weaker Hadley cells in both hemispheres, sharing a common region of ascent over the ITCZ and moving air aloft towards each cell's respective hemisphere. However, closer to the solstices, the Hadley circulation transitions into a more singular and stronger cross-equatorial Hadley cell with air rising in the summer hemisphere and broadly descending in the winter hemisphere. The transition between the two-cell and single-cell configuration is abrupt, and during most of the year the Hadley circulation is characterized by a single dominant Hadley cell that transports air across the equator. 
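As a concrete illustration of the angular-momentum and Held–Hou arguments above, the short Python sketch below computes the zonal wind an angular-momentum-conserving parcel would reach at 30° latitude and a Held–Hou estimate of the cell edge. The thermal contrast and tropopause height used in the second calculation are assumed, illustrative values, and the numerical prefactor follows the standard Held–Hou result.

```python
# Sketch: angular-momentum-conserving zonal wind and a Held-Hou width estimate.
# The thermal contrast (d_theta) and tropopause height (H) are illustrative assumptions.
import numpy as np

omega = 7.292e-5      # Earth's rotation rate, s^-1
a     = 6.371e6       # Earth radius, m
g     = 9.81          # m s^-2

# Zonal wind of a parcel starting at rest on the equator, conserving absolute
# angular momentum M = (omega*a*cos(phi) + u) * a*cos(phi):
def u_conserving(lat_deg):
    phi = np.deg2rad(lat_deg)
    return omega * a * np.sin(phi) ** 2 / np.cos(phi)

print("u at 30 deg latitude: %.0f m/s" % u_conserving(30.0))   # ~134 m/s

# Held-Hou scaling for the poleward edge of the Hadley cell,
# phi_H ~ sqrt(5*g*H*d_theta / (3*omega^2*a^2*theta0)):
H, d_theta, theta0 = 15e3, 40.0, 300.0      # assumed values
phi_h = np.sqrt(5 * g * H * d_theta / (3 * omega ** 2 * a ** 2 * theta0))
print("Held-Hou cell edge: ~%.0f deg latitude" % np.degrees(phi_h))
```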
In this configuration, the ascending branch is located in the tropical latitudes of the warmer summer hemisphere and the descending branch is positioned in the subtropics of the cooler winter hemisphere. Two cells are still present in each hemisphere, though the winter hemisphere's cell becomes much more prominent while the summer hemisphere's cell becomes displaced poleward. The intensification of the winter hemisphere's cell is associated with a steepening of gradients in geopotential height, leading to an acceleration of trade winds and stronger meridional flows. The presence of continents relaxes temperature gradients in the summer hemisphere, accentuating the contrast between the hemispheric Hadley cells. Reanalysis data from 1979–2001 indicated that the dominant Hadley cell in boreal summer extended from 13°S to 31°N on average. In both boreal and austral winters, the Indian Ocean and the western Pacific Ocean contribute most to the rising and sinking motions in the zonally-averaged Hadley circulation. However, vertical flows over Africa and the Americas are more marked in boreal winter. At longer interannual timescales, variations in the Hadley circulation are associated with variations in the El Niño–Southern Oscillation (ENSO), which impacts the positioning of the ascending branch; the response of the circulation to ENSO is non-linear, with a more marked response to El Niño events than La Niña events. During El Niño, the Hadley circulation strengthens due to the increased warmth of the upper troposphere over the tropical Pacific and the resultant intensification of poleward flow. However, these changes are not uniform; during the same events, the Hadley cells over the western Pacific and the Atlantic are weakened. During the Atlantic Niño, the circulation over the Atlantic is intensified. The Atlantic circulation is also enhanced during periods when the North Atlantic oscillation is strongly positive. The variation in the seasonally-averaged and annually-averaged Hadley circulation from year to year is largely accounted for by two superimposed modes of oscillation: an equatorially asymmetric mode characterized by a single cell straddling the equator and an equatorially symmetric mode characterized by two cells on either side of the equator. Energetics and transport The Hadley cell is an important mechanism by which moisture and energy are transported both between the tropics and subtropics and between the northern and southern hemispheres. However, it is not an efficient transporter of energy due to the opposing flows of the lower and upper branch, with the lower branch transporting sensible and latent heat equatorward and the upper branch transporting potential energy poleward. The resulting net energy transport poleward represents around 10 percent of the overall energy transport involved in the Hadley cell. The descending branch of the Hadley cell generates clear skies and a surplus of evaporation relative to precipitation in the subtropics. The lower branch of the Hadley circulation accomplishes most of the transport of the excess water vapor accumulated in the subtropical atmosphere towards the equatorial region. The strong Southern Hemisphere Hadley cell relative to its northern counterpart leads to a small net energy transport from the northern to the southern hemisphere; as a result, the transport of energy at the equator is directed southward on average, with an annual net transport of around 0.1 PW. 
In contrast to the higher latitudes where eddies are the dominant mechanism for transporting energy poleward, the meridional flows imposed by the Hadley circulation are the primary mechanism for poleward energy transport in the tropics. As a thermally direct circulation, the Hadley circulation converts available potential energy to the kinetic energy of horizontal winds. Based on data from January 1979 and December 2010, the Hadley circulation has an average power output of 198 TW, with maxima in January and August and minima in May and October. Although the stability of the tropopause largely limits the movement of air from the troposphere to the stratosphere, some tropospheric air penetrates into the stratosphere via the Hadley cells. The Hadley circulation may be idealized as a heat engine converting heat energy into mechanical energy. As air moves towards the equator near the Earth's surface, it accumulates entropy from the surface either by direct heating or the flux of sensible or latent heat. In the ascending branch of a Hadley cell, the ascent of air is approximately an adiabatic process with respect to the surrounding environment. However, as parcels of air move equatorward in the cell's upper branch, they lose entropy by radiating heat to space at infrared wavelengths and descend in response. This radiative cooling occurs at a rate of at least 60  W m−2 and may exceed 100 W m−2 in winter. The heat accumulated during the equatorward branch of the circulation is greater than the heat lost in the upper poleward branch; the excess heat is converted into the mechanical energy that drives the movement of air. This difference in heating also results in the Hadley circulation transporting heat poleward as the air supplying the Hadley cell's upper branch has greater moist static energy than the air supplying the cell's lower branch. Within the Earth's atmosphere, the timescale at which air parcels lose heat due to radiative cooling and the timescale at which air moves along the Hadley circulation are at similar orders of magnitude, allowing the Hadley circulation to transport heat despite cooling in the circulation's upper branch. Air with high potential temperature is ultimately moved poleward in the upper troposphere while air with lower potential temperature is brought equatorward near the surface. As a result, the Hadley circulation is one mechanism by which the disequilibrium produced by uneven heating of the Earth is brought towards equilibrium. When considered as a heat engine, the thermodynamic efficiency of the Hadley circulation averaged around 2.6 percent between 1979–2010, with small seasonal variability. The Hadley circulation also transports planetary angular momentum poleward due to Earth's rotation. Because the trade winds are directed opposite the Earth's rotation, eastward angular momentum is transferred to the atmosphere via frictional interaction between the winds and topography. The Hadley cell then transfers this angular momentum through its upward and poleward branches. The poleward branch accelerates and is deflected east in both the northern and southern hemispheres due to the Coriolis force and the conservation of angular momentum, resulting in a zonal jet stream above the descending branch of the Hadley cell. The formation of such a jet implies the existence of a thermal wind balance supported by the amplification of temperature gradients in the jet's vicinity resulting from the Hadley circulation's poleward heat advection. 
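A back-of-envelope illustration of the heat-engine picture described above is sketched below; the power output and efficiency are the figures quoted in the text, while the uptake and rejection temperatures used for the Carnot comparison are illustrative assumptions rather than measured values.

```python
# Back-of-envelope numbers for the Hadley circulation viewed as a heat engine.
# The 198 TW power output and 2.6% efficiency are the figures quoted above;
# the uptake/rejection temperatures are illustrative assumptions.
work_rate  = 198e12          # W, average mechanical power generated by the circulation
efficiency = 0.026           # quoted average thermodynamic efficiency

heat_input = work_rate / efficiency
print("implied heat uptake: %.1f PW" % (heat_input / 1e15))     # ~7.6 PW

# Carnot limit for assumed near-surface uptake and upper-tropospheric rejection
# temperatures, for comparison with the actual efficiency above:
t_warm, t_cold = 300.0, 255.0    # K, assumed
print("Carnot limit: %.1f%%" % (100 * (1 - t_cold / t_warm)))
```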
The subtropical jet in the upper troposphere coincides with where the Hadley cell meets the Ferrel cell. The strong wind shear accompanying the jet presents a significant source of baroclinic instability from which waves grow; the growth of these waves transfers heat and momentum polewards. Atmospheric eddies extract westerly angular momentum from the Hadley cell and transport it downward, resulting in the mid-latitude westerly winds. Formulation and discovery The broad structure and mechanism of the Hadley circulation (comprising convective cells moving air due to temperature differences in a manner influenced by the Earth's rotation) was first proposed by Edmund Halley in 1685 and George Hadley in 1735. Hadley had sought to explain the physical mechanism for the trade winds and the westerlies; the Hadley circulation and the Hadley cells are named in honor of his pioneering work. Although Hadley's ideas invoked physical concepts that would not be formalized until well after his death, his model was largely qualitative and without mathematical rigor. Hadley's formulation was recognized by most meteorologists by the 1920s to be a simplification of more complicated atmospheric processes. Hadley's model may have been the first attempt to explain the global distribution of winds in Earth's atmosphere using physical processes. However, Hadley's hypothesis could not be verified without observations of winds in the upper atmosphere. Data collected by routine radiosondes beginning in the mid-20th century confirmed the existence of the Hadley circulation. Early explanations of the trade winds In the 15th and 16th centuries, observations of maritime weather conditions were of considerable importance to maritime transport. Compilations of these observations showed consistent weather conditions from year to year and significant seasonal variability. The prevalence of dry conditions and weak winds at around 30° latitude and the equatorward trade winds closer to the equator, mirrored in the northern and southern hemispheres, was apparent by 1600. Early efforts by scientists to explain aspects of global wind patterns often focused on the trade winds as the steadiness of the winds was assumed to portend a simple physical mechanism. Galileo Galilei proposed that the trade winds resulted from the atmosphere lagging behind the Earth's faster tangential rotation speed in the low latitudes, resulting in the westward trades directed opposite to Earth's rotation. In 1685, English polymath Edmund Halley proposed at a debate organized by the Royal Society that the trade winds resulted from east to west temperature differences produced over the course of a day within the tropics. In Halley's model, as the Earth rotated, the location of maximum heating from the Sun moved west across the Earth's surface. This would cause air to rise, and by conservation of mass, Halley argued that air would be moved to the region of evacuated air, generating the trade winds. Halley's hypothesis was criticized by his friends, who noted that his model would lead to changing wind directions throughout the course of a day rather than the steady trade winds. Halley conceded in personal correspondence with John Wallis that "Your questioning my hypothesis for solving the Trade Winds makes me less confident of the truth thereof". Nonetheless, Halley's formulation was incorporated into Chambers's Encyclopaedia and La Grande Encyclopédie, becoming the most widely-known explanation for the trade winds until the early 19th century. 
Though his explanation of the trade winds was incorrect, Halley correctly predicted that the surface trade winds should be accompanied by an opposing flow aloft following mass conservation. George Hadley's explanation Unsatisfied with preceding explanations for the trade winds, George Hadley proposed an alternate mechanism in 1735. Hadley's hypothesis was published in the paper "On the Cause of the General Trade Winds" in Philosophical Transactions of the Royal Society. Like Halley's, Hadley's explanation viewed the trade winds as a manifestation of air moving to take the place of rising warm air. However, the region of rising air prompting this flow lay along the lower latitudes. Understanding that the tangential rotation speed of the Earth was fastest at the equator and slowed farther poleward, Hadley conjectured that as air with lower momentum from higher latitudes moved equatorward to replace the rising air, it would conserve its momentum and thus curve west. By the same token, the rising air with higher momentum would spread poleward, curving east and then sinking as it cooled to produce westerlies in the mid-latitudes. Hadley's explanation implied the existence of hemisphere-spanning circulation cells in the northern and southern hemispheres extending from the equator to the poles, though he relied on an idealization of Earth's atmosphere that lacked seasonality or the asymmetries of the oceans and continents. His model also predicted unrealistically rapid easterly trade winds, though he argued that the action of surface friction over the course of a few days slowed the air to the observed wind speeds. Colin Maclaurin extended Hadley's model to the ocean in 1740, asserting that meridional ocean currents were subject to similar westward or eastward deflections. Hadley was not widely associated with his theory due to conflation with his older brother, John Hadley, and Halley; his theory failed to gain much traction in the scientific community for over a century due to its unintuitive explanation and the lack of validating observations. Several other natural philosophers independently forwarded explanations for the global distribution of winds soon after Hadley's 1735 proposal. In 1746, Jean le Rond d'Alembert provided a mathematical formulation for global winds, but disregarded solar heating and attributed the winds to the gravitational effects of the Sun and Moon. Immanuel Kant, also unsatisfied with Halley's explanation for the trade winds, published an explanation for the trade winds and westerlies in 1756 with reasoning similar to Hadley's. In the latter part of the 18th century, Pierre-Simon Laplace developed a set of equations establishing a direct influence of Earth's rotation on wind direction. Swiss scientist Jean-André Deluc published an explanation of the trade winds in 1787 similar to Hadley's hypothesis, connecting differential heating and the Earth's rotation with the direction of the winds. English chemist John Dalton was the first to clearly credit Hadley's explanation of the trade winds to George Hadley, mentioning Hadley's work in his 1793 book Meteorological Observations and Essays. In 1837, Philosophical Magazine published a new theory of wind currents developed by Heinrich Wilhelm Dove without reference to Hadley but similarly explaining the direction of the trade winds as being influenced by the Earth's rotation. In response, Dalton later wrote a letter to the editor of the journal promoting Hadley's work. 
Dove subsequently credited Hadley so frequently that the overarching theory became known as the "Hadley–Dove principle", popularizing Hadley's explanation for the trade winds in Germany and Great Britain. Critique of Hadley's explanation The work of Gustave Coriolis, William Ferrel, Jean Bernard Foucault, and Henrik Mohn in the 19th century helped establish the Coriolis force as the mechanism for the deflection of winds due to Earth's rotation, emphasizing the conservation of angular momentum in directing flows rather than the conservation of linear momentum as Hadley suggested; Hadley's assumption led to an underestimation of the deflection by a factor of two. The acceptance of the Coriolis force in shaping global winds led to debate among German atmospheric scientists beginning in the 1870s over the completeness and validity of Hadley's explanation, which narrowly explained the behavior of initially meridional motions. Hadley's use of surface friction to explain why the trade winds were much slower than his theory would predict was seen as a key weakness in his ideas. The southwesterly motions observed in cirrus clouds at around 30°N further discounted Hadley's theory as their movement was far slower than the theory would predict when accounting for the conservation of angular momentum. In 1899, William Morris Davis, a professor of physical geography at Harvard University, gave a speech at the Royal Meteorological Society criticizing Hadley's theory for its failure to account for the transition of an initially unbalanced flow to geostrophic balance. Davis and other meteorologists in the 20th century recognized that the movement of air parcels along Hadley's envisaged circulation was sustained by a constant interplay between the pressure gradient and Coriolis forces rather than the conservation of angular momentum alone. Ultimately, while the atmospheric science community considered the general ideas of Hadley's principle valid, his explanation was viewed as a simplification of more complex physical processes. Hadley's model of the global atmospheric circulation being characterized by hemisphere-wide circulation cells was also challenged by weather observations showing a zone of high pressure in the subtropics and a belt of low pressure at around 60° latitude. This pressure distribution would imply a poleward flow near the surface in the mid-latitudes rather than an equatorward flow implied by Hadley's envisioned cells. Ferrel and James Thomson later reconciled the pressure pattern with Hadley's model by proposing a circulation cell limited to lower altitudes in the mid-latitudes and nestled within the broader, hemisphere-wide Hadley cells. Carl-Gustaf Rossby proposed in 1947 that the Hadley circulation was limited to the tropics, forming one part of a dynamically-driven and multi-celled meridional flow. Rossby's model resembled a similar three-celled model developed by Ferrel in 1860. Direct observation The three-celled model of the global atmospheric circulation (with Hadley's conceived circulation forming its tropical component) had been widely accepted by the meteorological community by the early 20th century. However, the Hadley cell's existence was only validated by weather observations near the surface, and its predictions of winds in the upper troposphere remained untested. The routine sampling of the upper troposphere by radiosondes that emerged in the mid-20th century confirmed the existence of meridional overturning cells in the atmosphere. 
Influence on climate The Hadley circulation is one of the most important influences on global climate and planetary habitability, as well as an important transporter of angular momentum, heat, and water vapor. Hadley cells flatten the temperature gradient between the equator and the poles, making the extratropics milder. The global precipitation pattern of high precipitation in the tropics and a lack of precipitation at higher latitudes is a consequence of the positioning of the rising and sinking branches of Hadley cells, respectively. Near the equator, the ascent of humid air results in the heaviest precipitation on Earth. The periodic movement of the ITCZ and thus the seasonal variation of the Hadley circulation's rising branches produces the world's monsoons. The descending motion of air associating with the sinking branch produces surface divergence consistent with the prominence of subtropical high-pressure areas. These semipermanent regions of high pressure lie primarily over the ocean between 20° and 40° latitude. Arid conditions are associated with the descending branches of the Hadley circulation, with many of the Earth's deserts and semiarid or arid regions underlying the sinking branches of the Hadley circulation. The cloudy marine boundary layer common in the subtropics may be seeded by cloud condensation nuclei exported out of the tropics by the Hadley circulation. Effects of climate change Natural variability Paleoclimate reconstructions of trade winds and rainfall patterns suggest that the Hadley circulation changed in response to natural climate variability. During Heinrich events within the last 100,000 years, the Northern Hemisphere Hadley cell strengthened while the Southern Hemisphere Hadley cell weakened. Variation in insolation during the mid- to late-Holocene resulted in a southward migration of the Northern Hemisphere Hadley cell's ascending and descending branches closer to their present-day positions. Tree rings from the mid-latitudes of the Northern Hemisphere suggest that the historical position of the Hadley cell branches have also shifted in response to shorter oscillations, with the Northern Hemisphere descending branch moving southward during positive phases of the El Niño–Southern Oscillation and Pacific decadal oscillation and northward during the corresponding negative phases. The Hadley cells were displaced southward between 1400–1850, concurrent with drought in parts of the Northern Hemisphere. Hadley cell expansion and intensity changes Observed trends According to the IPCC Sixth Assessment Report (AR6), the Hadley circulation has likely expanded since at least the 1980s in response to climate change, with medium confidence in an accompanying intensification of the circulation. An expansion of the overall circulation poleward by about 0.1°–0.5° latitude per decade since the 1980s is largely accounted for by the poleward shift of the Northern Hemisphere Hadley cell, which in atmospheric reanalysis has shown a more marked expansion since 1992. However, the AR6 also reported medium confidence in the expansion of the Northern Hemisphere Hadley cell being within the range of internal variability. In contrast, the AR6 assessed that it was likely that the Southern Hemisphere Hadley cell's poleward expansion was due to anthropogenic influence; this finding was based on CMIP5 and CMIP6 climate models. 
Studies have produced a large range of estimates for the rate of widening of the tropics due to the use of different metrics; estimates based on upper-tropospheric properties tend to yield a wider range of values. The degree to which the circulation has expanded varies by season, with trends in summer and autumn being larger and statistically significant in both hemispheres. The widening of the Hadley circulation has also resulted in a likely widening of the ITCZ since the 1970s. Reanalyses also suggest that the summer and autumn Hadley cells in both hemispheres have widened and that the global Hadley circulation has intensified since 1979, with a more pronounced intensification in the Northern Hemisphere. Between 1979–2010, the power generated by the global Hadley circulation increased by an average of 0.54 TW per year, consistent with an increased input of energy into the circulation by warming SSTs over the tropical oceans. (For comparison, the Hadley circulation's overall power ranges from 0.5 TW to 218 TW throughout the year in the Northern Hemisphere and from 32 to 204 TW in the Southern.) In contrast to reanalyses, CMIP5 climate models depict a weakening of the Hadley circulation since 1979. The magnitude of long-term changes in the circulation strength are thus uncertain due to the influence of large interannual variability and the poor representation of the distribution of latent heat release in reanalyses. The expansion of the Hadley circulation due to climate change is consistent with the Held–Hou model, which predicts that the latitudinal extent of the circulation is proportional to the square root of the height of the tropopause. Warming of the troposphere raises the tropopause height, enabling the upper poleward branch of the Hadley cells to extend farther and leading to an expansion of the cells. Results from climate models suggest that the impact of internal variability (such as from the Pacific decadal oscillation) and the anthropogenic influence on the expansion of the Hadley circulation since the 1980s have been comparable. Human influence is most evident in the expansion of the Southern Hemisphere Hadley cell; the AR6 assessed medium confidence in associating the expansion of the Hadley circulation in both hemispheres with the added radiative forcing of greenhouse gasses. Physical mechanisms and projected changes The physical processes by which the Hadley circulation expands by human influence are unclear but may be linked to the increased warming of the subtropics relative to other latitudes in both the Northern and Southern hemispheres. The enhanced subtropical warmth could enable expansion of the circulation poleward by displacing the subtropical jet and baroclinic eddies poleward. Poleward expansion of the Southern Hemisphere Hadley cell in the austral summer was attributed by the IPCC Fifth Assessment Report (AR5) to stratospheric ozone depletion based on CMIP5 model simulations, while CMIP6 simulations have not shown as clear of a signal. Ozone depletion could plausibly affect the Hadley circulation through the increase of radiative cooling in the lower stratosphere; this would increase the phase speed of baroclinic eddies and displace them poleward, leading to expansion of Hadley cells. Other eddy-driven mechanisms for expanding Hadley cells have been proposed, involving changes in baroclinicity, wave breaking, and other releases of instability. 
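The square-root dependence on tropopause height noted above implies that fractional changes in cell width are roughly half the fractional change in tropopause height. The tiny calculation below makes that explicit; the assumed tropopause rise is illustrative only, not an observed value.

```python
# Illustrative arithmetic: a Held-Hou-type scaling phi_H ∝ sqrt(H) implies
# d(phi_H)/phi_H ≈ 0.5 * dH/H.  The assumed tropopause rise is not an observed value.
phi_edge = 30.0         # deg, nominal latitude of the cell edge
H0, dH = 15.0e3, 150.0  # m, nominal tropopause height and an assumed 1% rise

widening = phi_edge * 0.5 * dH / H0
print("edge shift ~ %.2f deg latitude" % widening)   # ~0.15 deg for a 1% height rise
```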
In the extratropics of the Northern Hemisphere, increasing concentrations of black carbon and tropospheric ozone may be a major forcing on that hemisphere's Hadley cell expansion in boreal summer. Projections from climate models indicate that a continued increase in the concentration of greenhouse gas would result in continued widening of the Hadley circulation. However, simulations using historical data suggest that forcing from greenhouse gasses may account for about 0.1° per decade of expansion of the tropics. Although the widening of the Hadley cells due to climate change has occurred concurrent with an increase in their intensity based on atmospheric reanalyses, climate model projections generally depict a weakening circulation in tandem with a widening circulation by the end of the 21st century. A longer term increase in the concentration of carbon dioxide may lead to a weakening of the Hadley circulation as a result of the reduction of radiative cooling in the troposphere near the circulation's sinking branches. However, changes in the oceanic circulation within the tropics may attenuate changes in the intensity and width of the Hadley cells by reducing thermal contrasts. Changes to weather patterns The expansion of the Hadley circulation due to climate change is connected to changes in regional and global weather patterns. A widening of the tropics could displace the tropical rain belt, expand subtropical deserts, and exacerbate wildfires and drought. The documented shift and expansion of subtropical ridges are associated with changes in the Hadley circulation, including a westward extension of the subtropical high over the northwestern Pacific, changes in the intensity and position of the Azores High, and the poleward displacement and intensification of the subtropical high pressure belt in the Southern Hemisphere. These changes have influenced regional precipitation amounts and variability, including drying trends over southern Australia, northeastern China, and northern South Asia. The AR6 assessed limited evidence that the expansion of the Northern Hemisphere Hadley cell may have led in part to drier conditions in the subtropics and a poleward expansion of aridity during boreal summer. Precipitation changes induced by Hadley circulation changes may lead to changes in regional soil moisture, with modelling showing the most significant declines in the Mediterranean Sea, South Africa, and the Southwestern United States. However, the concurrent effects of changing surface temperature patterns over land lead to uncertainties over the influence of Hadley cell broadening on drying over subtropical land areas. Climate modelling suggests that the shift in the position of the subtropical highs induced by Hadley cell broadening may reduce oceanic upwelling at low latitudes and enhance oceanic upwelling at high latitudes. The expansion of subtropical highs in tandem with the circulation's expansion may also entail a widening of oceanic regions of high salinity and low marine primary production. A decline in extratropical cyclones in the storm track regions in model projections is partly influenced by Hadley cell expansion. Poleward shifts in the Hadley circulation are associated with shifts in the paths of tropical cyclones in the Northern and Southern hemispheres, including a poleward trend in the locations where storms attained their peak intensity. 
Extraterrestrial Hadley circulations Outside of Earth, any thermally direct circulation that circulates air meridionally across planetary-scale gradients of insolation may be described as a Hadley circulation. A terrestrial atmosphere subject to excess equatorial heating tends to maintain an axisymmetric Hadley circulation with rising motions near the equator and sinking at higher latitudes. Differential heating is hypothesized to result in Hadley circulations analogous to Earth's on other atmospheres in the Solar System, such as on Venus, Mars, and Titan. As with Earth's atmosphere, the Hadley circulation would be the dominant meridional circulation for these extraterrestrial atmospheres. Though less understood, Hadley circulations may also be present on the gas giants of the Solar System and should in principle materialize in exoplanetary atmospheres. The spatial extent of a Hadley cell on any atmosphere may be dependent on the rotation rate of the planet or moon, with a faster rotation rate leading to more contracted Hadley cells (with a more restrictive poleward extent) and a more cellular global meridional circulation. A slower rotation rate reduces the Coriolis effect, reducing the meridional temperature gradient needed to sustain a jet at the Hadley cell's poleward boundary and allowing the Hadley cell to extend farther poleward. Venus, which rotates slowly, may have Hadley cells that extend farther poleward than Earth's, spanning from the equator to high latitudes in each of the northern and southern hemispheres. Its broad Hadley circulation would efficiently maintain the nearly isothermal temperature distribution between the planet's pole and equator, with only slow vertical velocities. Observations of chemical tracers such as carbon monoxide provide indirect evidence for the existence of the Venusian Hadley circulation. Poleward winds observed near the level of the upper cloud deck are typically understood to be associated with the upper branch of a Hadley cell located well above the Venusian surface. The slow vertical velocities associated with the Hadley circulation have not been measured, though they may have contributed to the vertical velocities measured by the Vega and Venera missions. The Hadley cells may extend to around 60° latitude, equatorward of a mid-latitude jet stream demarcating the boundary between the hypothesized Hadley cell and the polar vortex. The planet's atmosphere may exhibit two Hadley circulations, with one near the surface and the other at the level of the upper cloud deck. The Venusian Hadley circulation may contribute to the superrotation of the planet's atmosphere. Simulations suggest that a Hadley circulation is also present in Mars' atmosphere, exhibiting a stronger seasonality compared to Earth's Hadley circulation. This greater seasonality results from the diminished thermal inertia associated with the lack of an ocean and the planet's thinner atmosphere. Additionally, Mars' orbital eccentricity leads to a stronger and wider Hadley cell during its northern winter compared to its southern winter. During most of the Martian year, when a single Hadley cell prevails, its rising and sinking branches are located at 30° and 60° latitude, respectively, in global climate modelling. The tops of the Hadley cells on Mars may reach higher altitudes than on Earth and be less well defined, due to the lack of a strong tropopause on Mars. 
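The rotation-rate dependence discussed above can be illustrated with the same Held–Hou scaling used earlier for Earth. In the sketch below the tropopause heights and fractional thermal contrasts are illustrative assumptions, not measured values, and real circulations depend on far more than this scaling; Venus is omitted because its strongly superrotating atmosphere is poorly described by the planetary rotation rate alone.

```python
# Sketch: how a Held-Hou width estimate responds to a body's rotation rate and radius.
# The heights (H) and fractional thermal contrasts (d_theta/theta) are assumed values.
import numpy as np

def held_hou_edge_deg(omega, radius, g, height, dtheta_over_theta):
    """Poleward edge latitude (degrees) from the Held-Hou scaling, capped at the pole."""
    phi = np.sqrt(5 * g * height * dtheta_over_theta / (3 * omega**2 * radius**2))
    return np.degrees(min(phi, np.pi / 2))

bodies = {
    #        omega (s^-1), radius (m), g (m/s^2), H (m), d_theta/theta (assumed)
    "Earth": (7.29e-5, 6.37e6, 9.81, 15e3, 0.13),
    "Mars":  (7.09e-5, 3.39e6, 3.71, 40e3, 0.20),
    "Titan": (4.56e-6, 2.58e6, 1.35, 40e3, 0.10),
}
for name, args in bodies.items():
    print(f"{name:5s}: Hadley edge ~ {held_hou_edge_deg(*args):.0f} deg")
# Slow rotators such as Titan come out pole-to-pole, consistent with the discussion above.
```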
While latent heating from phase changes associated with water drives much of the ascending motion in Earth's Hadley circulation, ascent in Mars' Hadley circulation may be driven by radiative heating of lofted dust and intensified by the condensation of carbon dioxide near the polar ice cap of Mars' wintertime hemisphere, steepening pressure gradients. Over the course of the Martian year, the mass flux of the Hadley circulation ranges between 10⁹ kg s⁻¹ during the equinoxes and 10¹⁰ kg s⁻¹ at the solstices. A Hadley circulation may also be present in the atmosphere of Saturn's moon Titan. As on Venus, the slow rotation rate of Titan may support a spatially broad Hadley circulation. General circulation modeling of Titan's atmosphere suggests the presence of a cross-equatorial Hadley cell. This configuration is consistent with the meridional winds observed by the Huygens spacecraft when it landed near Titan's equator. During Titan's solstices, its Hadley circulation may take the form of a single Hadley cell that extends from pole to pole, with warm gas rising in the summer hemisphere and sinking in the winter hemisphere. A two-celled configuration with ascent near the equator is present in modelling during a limited transitional period near the equinoxes. The distribution of convective methane clouds on Titan and observations from the Huygens spacecraft suggest that the rising branch of its Hadley circulation occurs in the mid-latitudes of its summer hemisphere. Frequent cloud formation occurs at 40° latitude in Titan's summer hemisphere from ascent analogous to Earth's ITCZ. See also Polar vortex – a broad semi-permanent region of cold, cyclonically-rotating air encircling Earth's poles Brewer–Dobson circulation – a circulation between the tropical troposphere and the stratosphere Atlantic meridional overturning circulation – a broad oceanic circulation important for energy exchange across a wide range of latitudes Notes References Sources Tropical meteorology Oceanography Atmospheric circulation
Hadley cell
[ "Physics", "Environmental_science" ]
9,117
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
6,954,175
https://en.wikipedia.org/wiki/Koenigs%E2%80%93Knorr%20reaction
The Koenigs–Knorr reaction in organic chemistry is the substitution reaction of a glycosyl halide with an alcohol to give a glycoside. It is one of the oldest glycosylation reactions. It is named after Wilhelm Koenigs (1851–1906), a student of von Baeyer and a fellow student of Hermann Emil Fischer, and Edward Knorr, a student of Koenigs. In its original form, Koenigs and Knorr treated acetobromoglucose with alcohols in the presence of silver carbonate. Shortly afterwards Fischer and Armstrong reported very similar findings. In the above example, the stereochemical outcome is determined by the presence of the neighboring group at C2 that lends anchimeric assistance, resulting in the formation of a 1,2-trans stereochemical arrangement. Esters (e.g. acetyl, benzoyl, pivalyl) generally provide good anchimeric assistance, whereas ethers (e.g. benzyl, methyl etc.) do not, leading to mixtures of stereoisomers. Mechanism In the first step of the mechanism, the glycosyl bromide reacts with silver carbonate, eliminating silver bromide and the carbonate anion to form the oxocarbenium ion. From this structure a dioxolanium ring is formed, which is attacked by methanol via an SN2 mechanism at the anomeric carbon atom. This attack leads to the inversion. After deprotonation of the intermediate oxonium, the product glycoside is formed. The reaction can also be applied to carbohydrates with other protecting groups. In oligosaccharide synthesis, other carbohydrates are used in place of the methanol; these are modified with protecting groups in such a way that only one hydroxyl group is accessible. History The method was later extended by Emil Fischer and Burckhardt Helferich to chloro-substituted purines, thus producing synthetic nucleosides for the first time. It was later improved and modified by numerous chemists. Alternative reactions Generally, the Koenigs–Knorr reaction refers to the use of glycosyl chlorides, bromides and more recently iodides as glycosyl donors. The Koenigs–Knorr reaction can be performed with alternative promoters such as various heavy metal salts including mercuric bromide/mercuric oxide, mercuric cyanide and silver triflate. When mercury salts are used, the reaction is normally called the Helferich method. Other glycosidation methods are Fischer glycosidation, use of glycosyl acetates, thioglycosides, glycosyl trichloroacetimidates, glycosyl fluorides or n-pentenyl glycosides as glycosyl donors, or intramolecular aglycon delivery. References Carbohydrate chemistry Substitution reactions Name reactions
Koenigs–Knorr reaction
[ "Chemistry" ]
634
[ "Name reactions", "Carbohydrate chemistry", "nan", "Chemical synthesis", "Glycobiology" ]
6,954,327
https://en.wikipedia.org/wiki/Globoidnan%20A
Globoidnan A is a lignan found in Eucalyptus globoidea, a tree native to Australia. The molecule has been found to weakly inhibit the action of HIV integrase (IC50 = 0.64 μM) in vitro. HIV integrase is an enzyme which is responsible for the introduction of HIV viral DNA into a host's cellular DNA. It is not known whether globoidnan A inhibits the action of other retroviral integrases. References Lignans Naphthalenes Catechols Carboxylate esters Propionic acids
Globoidnan A
[ "Chemistry" ]
126
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
6,954,595
https://en.wikipedia.org/wiki/Shear%20band
In solid mechanics, a shear band (or, more generally, a strain localization) is a narrow zone of intense strain due to shearing, usually of plastic nature, developing during severe deformation of ductile materials. As an example, a soil (overconsolidated silty-clay) specimen is shown in Fig. 1, after an axisymmetric compression test. Initially the sample was cylindrical in shape and, since an effort was made to preserve symmetry during the test, the cylindrical shape was maintained for a while and the deformation was homogeneous, but at extreme loading two X-shaped shear bands had formed and the subsequent deformation was strongly localized (see also the sketch on the right of Fig. 1). Materials in which shear bands are observed Although not observable in brittle materials (for instance glass at room temperature), shear bands or, more generally, ‘localized deformations’ usually develop within a broad range of ductile materials (alloys, metals, granular materials, plastics, polymers, and soils) and even in quasi-brittle materials (concrete, ice, rock, and some ceramics). The relevance of the shear banding phenomena is that they precede failure, since extreme deformations occurring within shear bands lead to intense damage and fracture. Therefore, the formation of shear bands is the key to the understanding of failure in ductile materials, a research topic of great importance for the design of new materials and for the exploitation of existing materials under extreme conditions. As a consequence, localization of deformation has been the focus of intense research activity since the middle of the 20th century. Mathematical modeling Shear band formation is an example of a material instability, corresponding to an abrupt loss of homogeneity of deformation occurring in a solid sample subject to a loading path compatible with continued uniform deformation. In this sense, it may be interpreted as a deformation mechanism ‘alternative’ to a trivial one and therefore a bifurcation or loss of uniqueness of a ‘perfect’ equilibrium path. The distinctive character of this bifurcation is that it may occur even in an infinite body (or under the extreme constraint of smooth contact with a rigid constraint). Consider an infinite body made up of a nonlinear material, quasi-statically deformed in a way that stress and strain may remain homogeneous. The incremental response of this nonlinear material is assumed for simplicity to be linear, so that it can be expressed as a relation between a stress increment σ̇ and a strain increment ε̇, through a fourth-order constitutive tensor ℂ as σ̇ = ℂ ε̇ (1), where the fourth-order constitutive tensor ℂ depends on the current state, i.e. the current stress, the current strain and, possibly, other constitutive parameters (for instance, hardening variables for metals, or density for granular materials). Conditions are sought for the emergence of a surface of discontinuity (of unit normal vector n) in the incremental stress and strain. These conditions are identified with the conditions for the occurrence of localization of deformation. In particular, incremental equilibrium requires that the incremental tractions (not the stresses!) remain continuous, (σ̇⁺ − σ̇⁻) n = 0 (2), where + and − denote the two sides of the surface, and geometrical compatibility imposes a strain compatibility restriction on the form of incremental strain: ε̇⁺ − ε̇⁻ = ½ (g ⊗ n + n ⊗ g) (3), where the symbol ⊗ denotes the tensor product and g is a vector defining the deformation discontinuity mode (orthogonal to n for incompressible materials). 
A substitution of the incremental constitutive law (1) and of the strain compatibility (3) into the continuity of incremental tractions (2) yields the necessary condition for strain localization: ℂ (g ⊗ n) n = 0 (4). Since the second-order tensor A(n), defined for every vector g as A(n) g = ℂ (g ⊗ n) n, is the so-called ‘acoustic tensor’, defining the condition of propagation of acceleration waves, we can conclude that the condition for strain localization coincides with the condition of singularity (propagation at null speed) of an acceleration wave. This condition represents the so-called ‘loss of ellipticity’ of the differential equations governing the rate equilibrium. State-of-the-art The state-of-the-art of the research on shear bands is that the phenomenon is well understood from the theoretical and experimental point of view and available constitutive models give good qualitative predictions, although quantitative predictions are often poor. Moreover, great progress has been made in numerical simulations, so that shear band nucleation and propagation in relatively complex situations can be traced numerically with finite element models, although still at the cost of great computational effort. Of further interest are simulations that reveal the crystallographic orientation dependence of shear banding in single crystals and polycrystals. These simulations show that certain orientations are much more prone to undergo shear localization than others. Shear banding and crystallographic texture Most polycrystalline metals and alloys usually deform via shear caused by dislocations, twins, and / or shear bands. This leads to pronounced plastic anisotropy at the grain scale and to preferred grain orientation distributions, i.e. crystallographic textures. Cold rolling textures of most face centered cubic metals and alloys for instance range between two types, i.e. the brass-type texture and the copper-type texture. The stacking fault energy (SFE) plays an important role in the prevailing mechanisms of plastic deformation and the resultant textures. For aluminum and other fcc materials with high SFE, dislocation glide is the main mechanism during cold rolling and the {112}<111> (copper) and {123}<634> (S) texture components (copper-type textures) are developed. In contrast, in Cu–30 wt.% Zn (alpha-brass) and related metals and alloys with low SFE, mechanical twinning and shear banding occur together with dislocation glide as main deformation carriers, particularly at large plastic deformations. The resulting rolling textures are characterized by the {011}<211> (brass) and {011}<100> (Goss) texture components (brass-type texture). In either case non-crystallographic shear banding plays an essential role for the specific type of deformation texture evolved. A perturbative approach to analyze shear band emergence Closed-form solutions disclosing the shear band emergence can be obtained through the perturbative approach, consisting of the superimposition of a perturbation field upon an unperturbed deformed state. In particular, an infinite, incompressible, nonlinear elastic material, homogeneously deformed under the plane strain condition can be perturbed through superposition of concentrated forces or by the presence of cracks or rigid line inclusions. It has been shown that, when the unperturbed state is taken close to the localization condition (4), the perturbed fields self-arrange in the form of localized fields, taking extreme values in the neighbourhood of the introduced perturbation and focussed along the shear band directions. 
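As a numerical aside on the localization condition (4) introduced above, the Python sketch below builds the acoustic tensor A_ik(n) = C_ijkl n_j n_l for an illustrative elastoplastic tangent (isotropic elasticity reduced by a rank-one plastic term with slight softening) and scans band normals in a plane for the orientation minimizing det A(n). The Lamé constants, flow direction, hardening modulus, and the specific tangent form are all illustrative assumptions, not taken from the text.

```python
# Sketch: checking the loss-of-ellipticity / strain-localization condition
# det A(n) = 0, with A_ik(n) = C_ijkl n_j n_l, for an illustrative tangent tensor.
import numpy as np

# Isotropic elastic stiffness (Lame constants are arbitrary illustrative values).
lam, mu = 60.0e9, 26.0e9
I = np.eye(3)
C_e = (lam * np.einsum('ij,kl->ijkl', I, I)
       + mu * (np.einsum('ik,jl->ijkl', I, I) + np.einsum('il,jk->ijkl', I, I)))

# Illustrative elastoplastic tangent C = C_e - (C_e N)x(N C_e) / (h + N:C_e:N),
# with flow direction N (pure shear on the 1-2 plane) and a slightly negative
# hardening modulus h representing softening.
N = np.zeros((3, 3)); N[0, 1] = N[1, 0] = 1.0 / np.sqrt(2.0)
CeN = np.einsum('ijkl,kl->ij', C_e, N)
h = -0.5e9
C = C_e - np.einsum('ij,kl->ijkl', CeN, CeN) / (h + np.einsum('ij,ijkl,kl->', N, C_e, N))

# Scan unit normals n in the 1-2 plane and monitor det A(n).
angles = np.linspace(0.0, np.pi, 721)
dets = []
for th in angles:
    n = np.array([np.cos(th), np.sin(th), 0.0])
    A = np.einsum('ijkl,j,l->ik', C, n, n)      # acoustic tensor
    dets.append(np.linalg.det(A))
dets = np.array(dets)

i = dets.argmin()
print("min det A(n) = %.3e at %.1f deg" % (dets[i], np.degrees(angles[i])))
# A non-positive minimum signals that condition (4) can be satisfied, i.e. a shear
# band with that normal n becomes admissible for this tangent.
```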
In particular, in the case of cracks and rigid line inclusions such shear bands emerge from the linear inclusion tips. Within the perturbative approach, an incremental model for a shear band of finite length has been introduced prescribing the following conditions along its surface: null incremental nominal shearing tractions; continuity of the incremental nominal normal traction; continuity of normal incremental displacement. Employing this model, the following main features of shear banding have been demonstrated: similarly to fracture mechanics, a square-root singularity in the stress/deformation fields develops at the shear band tips; in the presence of a shear band, the strain field is localized and strongly focussed in the direction aligned parallel to the shear band; since the energy release rate associated with the shear band growth blows up to infinity near the localization condition (4), shear bands represent preferential failure modes. See also Amorphous metal Deformation (engineering) Triaxial shear test Adiabatic shear band References External links Ames Laboratory, US DOE, video of shear band formation. Laboratory for Physical Modeling of Structures and Photoelasticity (University of Trento, Italy) Materials science Polymers
Shear band
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,672
[ "Applied and interdisciplinary physics", "Materials science", "nan", "Polymer chemistry", "Polymers" ]
6,954,902
https://en.wikipedia.org/wiki/Subgame%20perfect%20equilibrium
In game theory, a subgame perfect equilibrium (or subgame perfect Nash equilibrium) is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Informally, this means that at any point in the game, the players' behavior from that point onward should represent a Nash equilibrium of the continuation game (i.e. of the subgame), no matter what happened before. Every finite extensive game with perfect recall has a subgame perfect equilibrium. Perfect recall is a term introduced by Harold W. Kuhn in 1953 and "equivalent to the assertion that each player is allowed by the rules of the game to remember everything he knew at previous moves and all of his choices at those moves". A common method for determining subgame perfect equilibria in the case of a finite game is backward induction. Here one first considers the last actions of the game and determines which actions the final mover should take in each possible circumstance to maximize his/her utility. One then supposes that the last actor will do these actions, and considers the second to last actions, again choosing those that maximize that actor's utility. This process continues until one reaches the first move of the game. The strategies which remain are the set of all subgame perfect equilibria for finite-horizon extensive games of perfect information. However, backward induction cannot be applied to games of imperfect or incomplete information because this entails cutting through non-singleton information sets. A subgame perfect equilibrium necessarily satisfies the one-shot deviation principle. The set of subgame perfect equilibria for a given game is always a subset of the set of Nash equilibria for that game. In some cases the sets can be identical. The ultimatum game provides an intuitive example of a game with fewer subgame perfect equilibria than Nash equilibria. Example Determining the subgame perfect equilibrium by using backward induction is shown below in Figure 1. Strategies for Player 1 are given by {Up, Uq, Dp, Dq}, whereas Player 2 has the strategies among {TL, TR, BL, BR}. There are 4 subgames in this example, with 3 proper subgames. Using the backward induction, the players will take the following actions for each subgame: Subgame for actions p and q: Player 1 will take action p with payoff (3, 3) to maximize Player 1's payoff, so the payoff for action L becomes (3,3). Subgame for actions L and R: Player 2 will take action L for 3 > 2, so the payoff for action D becomes (3, 3). Subgame for actions T and B: Player 2 will take action T to maximize Player 2's payoff, so the payoff for action U becomes (1, 4). Subgame for actions U and D: Player 1 will take action D to maximize Player 1's payoff. Thus, the subgame perfect equilibrium is {Dp, TL} with the payoff (3, 3). An extensive-form game with incomplete information is presented below in Figure 2. Note that the node for Player 1 with actions A and B, and all succeeding actions is a subgame. Player 2's nodes are not a subgame as they are part of the same information set. The first normal-form game is the normal form representation of the whole extensive-form game. Based on the provided information, (UA, X), (DA, Y), and (DB, Y) are all Nash equilibria for the entire game. The second normal-form game is the normal form representation of the subgame starting from Player 1's second node with actions A and B. 
For the second normal-form game, the Nash equilibrium of the subgame is (A, X). For the entire game Nash equilibria (DA, Y) and (DB, Y) are not subgame perfect equilibria because the move of Player 2 does not constitute a Nash equilibrium. The Nash equilibrium (UA, X) is subgame perfect because it incorporates the subgame Nash equilibrium (A, X) as part of its strategy. To solve this game, first find the Nash Equilibria by mutual best response of Subgame 1. Then use backwards induction and plug in (A,X) → (3,4) so that (3,4) become the payoffs for Subgame 2. The dashed line indicates that player 2 does not know whether player 1 will play A or B in a simultaneous game. Player 1 chooses U rather than D because 3 > 2 for Player 1's payoff. The resulting equilibrium is (A, X) → (3,4). Thus, the subgame perfect equilibrium through backwards induction is (UA, X) with the payoff (3, 4). Repeated games For finitely repeated games, if a stage game has only one unique Nash equilibrium, the subgame perfect equilibrium is to play without considering past actions, treating the current subgame as a one-shot game. An example of this is a finitely repeated Prisoner's dilemma game. The Prisoner's dilemma gets its name from a situation that contains two guilty culprits. When they are interrogated, they have the option to stay quiet or defect. If both culprits stay quiet, they both serve a short sentence. If both defect, they both serve a moderate sentence. If they choose opposite options, then the culprit that defects is free and the culprit who stays quiet serves a long sentence. Ultimately, using backward induction, the last subgame in a finitely repeated Prisoner's dilemma requires players to play the unique Nash equilibrium (both players defecting). Because of this, all games prior to the last subgame will also play the Nash equilibrium to maximize their single-period payoffs. If a stage-game in a finitely repeated game has multiple Nash equilibria, subgame perfect equilibria can be constructed to play non-stage-game Nash equilibrium actions, through a "carrot and stick" structure. One player can use the one stage-game Nash equilibrium to incentivize playing the non-Nash equilibrium action, while using a stage-game Nash equilibrium with lower payoff to the other player if they choose to defect. Finding subgame-perfect equilibria Reinhard Selten proved that any game which can be broken into "sub-games" containing a sub-set of all the available choices in the main game will have a subgame perfect Nash Equilibrium strategy (possibly as a mixed strategy giving non-deterministic sub-game decisions). Subgame perfection is only used with games of complete information. Subgame perfection can be used with extensive form games of complete but imperfect information. The subgame-perfect Nash equilibrium is normally deduced by "backward induction" from the various ultimate outcomes of the game, eliminating branches which would involve any player making a move that is not credible (because it is not optimal) from that node. One game in which the backward induction solution is well known is tic-tac-toe, but in theory even Go has such an optimum strategy for all players. 
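Backward induction as described above is straightforward to implement for finite perfect-information game trees. The Python sketch below is a generic solver; the example tree at the bottom is an invented two-player game for illustration, not Figure 1 or Figure 2 from the text, and ties are broken arbitrarily.

```python
# Sketch: backward induction on a finite perfect-information game tree.
# Decision nodes list the player to move and their children; leaves carry payoff tuples.

def backward_induction(node, name="root"):
    """Return (payoffs on the equilibrium path, strategy prescribing an action at every node)."""
    if "payoffs" in node:                                  # terminal node
        return node["payoffs"], {}
    strategy, results = {}, {}
    for action, child in node["children"].items():
        payoffs, child_strategy = backward_induction(child, f"{name}/{action}")
        strategy.update(child_strategy)                    # keep prescriptions in every subgame
        results[action] = payoffs
    player = node["player"]
    best = max(results, key=lambda a: results[a][player])  # ties broken by iteration order
    strategy[name] = best
    return results[best], strategy

# An invented game: player 0 moves first (U or D), then player 1 moves (L or R).
game = {"player": 0, "children": {
    "U": {"player": 1, "children": {"L": {"payoffs": (1, 4)},
                                    "R": {"payoffs": (0, 0)}}},
    "D": {"player": 1, "children": {"L": {"payoffs": (3, 3)},
                                    "R": {"payoffs": (2, 5)}}}}}

outcome, strategy = backward_induction(game)
print("equilibrium path payoffs:", outcome)     # (2, 5)
print("subgame perfect strategy:", strategy)    # action at the root and at both subgames
```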
The problem of the relationship between subgame perfection and backward induction was settled by Kaminski (2019), who proved that a generalized procedure of backward induction produces all subgame perfect equilibria in games that may have infinite length, infinite actions as each information set, and imperfect information if a condition of final support is satisfied. The interesting aspect of the word "credible" in the preceding paragraph is that taken as a whole (disregarding the irreversibility of reaching sub-games) strategies exist which are superior to subgame perfect strategies, but which are not credible in the sense that a threat to carry them out will harm the player making the threat and prevent that combination of strategies. For instance in the game of "chicken" if one player has the option of ripping the steering wheel from their car they should always take it because it leads to a "sub game" in which their rational opponent is precluded from doing the same thing (and killing them both). The wheel-ripper will always win the game (making his opponent swerve away), and the opponent's threat to suicidally follow suit is not credible. See also Centipede game Dynamic inconsistency Glossary of game theory Minimax theorem Retrograde analysis Solution concept Bellman's principle of optimality References External links Selten, R. (1965). Spieltheoretische behandlung eines oligopolmodells mit nachfrageträgheit. Zeitschrift für die gesamte Staatswissenschaft/Journal of Institutional and Theoretical Economics, (H. 2), 301-324, 667-689. [in German - part 1, part 2] Example of Extensive Form Games with imperfect information Java applet to find a subgame perfect Nash Equilibrium solution for an extensive form game from gametheory.net. Java applet to find a subgame perfect Nash Equilibrium solution for an extensive form game from gametheory.net. Kaminski, M.M. Generalized Backward Induction: Justification for a Folk Algorithm. Games 2019, 10, 34. Game theory equilibrium concepts
Subgame perfect equilibrium
[ "Mathematics" ]
1,949
[ "Game theory", "Game theory equilibrium concepts" ]
6,955,552
https://en.wikipedia.org/wiki/Fetal%20hydantoin%20syndrome
Fetal hydantoin syndrome, also called fetal dilantin syndrome, is a group of defects caused to the developing fetus by exposure to teratogenic effects of phenytoin. Dilantin is the brand name of the drug phenytoin sodium in the United States, commonly used in the treatment of epilepsy. It may also be called congenital hydantoin syndrome, fetal hydantoin syndrome, dilantin embryopathy, or phenytoin embryopathy. Association with EPHX1 has been suggested. Signs and symptoms About one third of children whose mothers are taking this drug during pregnancy typically have intrauterine growth restriction with a small head and develop minor dysmorphic craniofacial features (microcephaly and intellectual disability) and limb defects including hypoplastic nails and distal phalanges (birth defects). Heart defects including ventricular septal defect, atrial septal defect, patent ductus arteriosus and coarctation of the aorta may occur in these children. A smaller population will have growth problems and developmental delay, or intellectual disability. Heart defects and cleft lip may also be featured. Diagnosis There is no diagnostic testing that can identify fetal hydantoin syndrome. A diagnosis is made clinically based upon identification of characteristic symptoms in an affected infant in conjunction with a history of phenytoin exposure during gestation. It is important to note that the majority of infants born to women who take phenytoin during pregnancy will not develop fetal hydantoin syndrome. Treatment The treatment of fetal hydantoin syndrome is directed toward the specific symptoms that are apparent in each individual. Treatment may require the coordinated efforts of a team of specialists. Pediatricians, oral surgeons, plastic surgeons, neurologists, psychologists, and other healthcare professionals may need to systematically and comprehensively plan an affected child's treatment. Infants with fetal hydantoin syndrome can benefit from early developmental intervention to ensure that affected children reach their potential. Affected children may benefit from occupational, physical and speech therapy. Various methods of rehabilitative and behavioral therapy may be beneficial. Additional medical, social and/or vocational services may be necessary. Psychosocial support for the entire family is essential as well. When cleft lip and/or palate are present, the coordinated efforts of a team of specialists may be used to plan an affected child's treatment and rehabilitation. Cleft lip may be surgically corrected. Generally surgeons repair the lip when the child is still an infant. A second surgery is sometimes necessary for cosmetic purposes when the child is older. Cleft palate may be repaired by surgery or covered by an artificial device (prosthesis) that closes or blocks the opening. Surgical repair can be carried out in stages or in a single operation, according to the nature and severity of the defect. The first palate surgery is usually scheduled during the toddler period. References External links Congenital malformation due to exogenous toxicity Syndromes Syndromes with intellectual disability Rare syndromes
Fetal hydantoin syndrome
[ "Environmental_science" ]
624
[ "Toxicology", "Congenital malformation due to exogenous toxicity" ]
6,956,310
https://en.wikipedia.org/wiki/Spektr
Spektr (; ) (TKM-O, 77KSO, 11F77O) was the fifth module of the Mir Space Station. The module was designed for remote observation of Earth's environment and contained atmospheric and surface research equipment. Spektr also had four solar arrays which generated about half of the station's electrical power. Development The Spektr module was originally developed as part of a top-secret military program code-named "Oktant". It was planned to carry experiments with space-borne surveillance and test antimissile defense. The surveillance instruments were mounted on the exterior of the module opposite the docking port. Also in this location were two launchers for artificial targets. The heart of the Spektr payload was an experimental optical telescope code-named "Pion" (Peony). Instrument list: 286K binocular radiometer Astra 2 – monitored atmospheric trace constituents, Mir environment Balkan 1 lidar – measured upper cloud altitude. Used a 5320-angstrom laser source, provided 4.5 m resolution EFO 2 photometer KOMZA – interstellar gas detector MIRAS absorption spectrometer – was intended to measure neutral atmospheric composition, but could not operate due to a failure Phaza spectrometer – surface studies. Examined wavelengths between 0.340 and 285 micrometers, and provided 200 km resolution Taurus/Grif – monitored Mir's induced X/gamma-ray background VRIZ UV spectroradiometer These experiments would have been a continuation of the research aboard a top-secret TKS-M module, which docked to Salyut 7 in 1985. However, with the end of the Cold War and the shrinking of Russia's space budget, the module was stuck on the ground. In the mid-1990s, with the return of US-Russian cooperation in space, NASA agreed to provide funds to complete the Spektr and Priroda modules in exchange for having 600 to 700 kg of US experiments installed. The Oktava military component was replaced with a conical mounting area for two additional solar arrays. The airlock for the Oktava targets was to be used instead to expose experiments to the vacuum of space. Once in orbit, Spektr served as the living quarters for American astronauts until the collision in late June 1997. Collision On June 25, 1997, the Progress M-34 spacecraft crashed into Spektr while doing an experimental docking maneuver with the Kvant-1 module. The collision damaged one of Spektr's solar arrays and punctured the hull, causing a relatively slow leak. The crew had enough time to install a hatch cover and seal the module off to prevent depressurization of the entire Mir station. To seal the module, the crew had to remove the cables that were routed through the (open) hatchway, including the power cables from Spektr's solar panels. An internal spacewalk in the Spektr module in August 1997 by cosmonauts Anatoly Solovyov and Pavel Vinogradov, from Soyuz TM-26, succeeded in restoring these power connections by installing a modified hatch cover to allow the power cables to pass through the hatch when it was in the closed position. In a second internal spacewalk in October, they connected two of the panels to a computer system to allow the panels to be controlled remotely and aligned with the Sun. These modifications allowed power generation to return to approximately 70% of the pre-collision generation capability. Spektr was left depressurized and isolated from the remainder of the Mir complex. 
Gallery References External links Spektr module (77KSO) on russianspaceweb.com, containing diagrams, pictures, and background information Spektr on Encyclopedia Astronautica, with design history and equipment information Take a Tour of Mir > Spektr on PBS NOVA Online Gunter's Space Page – information on Spektr Mir Spacecraft launched in 1995 Satellite collisions
Spektr
[ "Technology" ]
806
[ "Satellite collisions", "Space debris" ]
6,956,314
https://en.wikipedia.org/wiki/Kristall
The Kristall () (77KST, TsM-T, 11F77T) module was the fourth module and the third major addition to Mir. As with previous modules, its configuration was based on the 77K (TKS) module, and was originally named "Kvant 3". It was launched on May 31, 1990 on Proton-K. It docked to Mir autonomously on June 10, 1990. Description Kristall had several materials processing furnaces. They were called Krater 5, Optizon 1, Zona 2, and Zona 3. It also had a biotechnology experiment called the Aniur electrophoresis unit. These experiments were capable of generating 100 kg of raw materials for use on Earth. Located in the docking node was the Priroda 5 camera which was used for Earth resources experiments. Kristall also had several astronomy and astrophysics experiments which were designed to augment experiments that were already located in Kvant-1. Kristall's solar panels were also different from others on Mir. They were designed to be "collapsible" which means that they could be deployed and retracted several times. One of Kristall's solar panels was removed and re-deployed on Kvant-1 in 1995. That solar panel was later disposed of in November, 1997. Kristall also carried six gyrodines for attitude control and to augment those already on the station. The control system of Kristall was developed by the JSC "Khartron" (Kharkiv, Ukraine). List of experiments: Ainur electrophoresis unit Krater 5, Optizon 1, and CSK-1/Kristallizator semiconductor materials processing furnaces Zona 2/3 materials processing furnaces Buket gamma-ray spectrometer Glazar 2 UV telescope - cosmic radiation studies Granar astrophysics spectrometer Marina gamma ray telescope Mariya magnetic spectrometer Priroda 5 Earth resources camera system - consists of 2 KFA-1000 film cameras Svet plant cultivation unit Relation to Buran and Space Shuttle programs The most notable feature of Kristall was its relation to the Soviet Buran program. Kristall carried two APAS-89 designed to be compatible with the Buran shuttle. One unit was located axially and the other was located radially. After the cancellation of the Buran program in 1993, the lateral docking port found use for the Shuttle-Mir Program. The radial port was never used. The axial port was tested by the modified Soyuz TM-16 spacecraft in 1993 in preparation for Shuttle dockings. On May 26, 1995, Kristall was moved from the -Y port on the Mir base block to the -X port. It was then moved on May 30 to -Z port in preparation for the arrival of the Spektr module. On June 10, Kristall was moved back to -X port to prepare for the upcoming Shuttle docking. The first Space Shuttle docking occurred in 1995 during STS-71 by the . On July 17, 1995, Kristall was moved one last time to its permanent position at the -Z port. For Buran dockings, the entire procedure of moving Kristall would have to be used. On STS-74, the next Shuttle docking, Atlantis carried a docking module that was attached to Kristall. This allowed future Shuttle dockings to be carried out without the module rearrangement that had needed previously. References External links Russian Space Web Encyclopedia Astronautica Gunter's Space Page - information on Kristall Mir 1990 in the Soviet Union Buran program Crewed space observatories Spacecraft which reentered in 2001 Spacecraft launched in 1990
Kristall
[ "Astronomy" ]
768
[ "Space telescopes", "Crewed space observatories" ]
6,956,352
https://en.wikipedia.org/wiki/Outline%20of%20air%20pollution%20dispersion
The following outline is provided as an overview of and topical guide to air pollution dispersion: In environmental science, air pollution dispersion is the distribution of air pollution into the atmosphere. Air pollution is the introduction of particulates, biological molecules, or other harmful materials into Earth's atmosphere, causing disease, death to humans, damage to other living organisms such as food crops, and the natural or built environment. Air pollution may come from anthropogenic or natural sources. Dispersion refers to what happens to the pollution during and after its introduction; understanding this may help in identifying and controlling it. Air pollution dispersion has become the focus of environmental conservationists and governmental environmental protection agencies (local, state, province and national) of many countries (which have adopted and used much of the terminology of this field in their laws and regulations) regarding air pollution control. Air pollution emission plumes Air pollution emission plume – flow of pollutant in the form of vapor or smoke released into the air. Plumes are of considerable importance in the atmospheric dispersion modelling of air pollution. There are three primary types of air pollution emission plumes: Buoyant plumes – Plumes which are lighter than air because they are at a higher temperature and lower density than the ambient air which surrounds them, or because they are at about the same temperature as the ambient air but have a lower molecular weight and hence lower density than the ambient air. For example, the emissions from the flue gas stacks of industrial furnaces are buoyant because they are considerably warmer and less dense than the ambient air. As another example, an emission plume of methane gas at ambient air temperatures is buoyant because methane has a lower molecular weight than the ambient air. Dense gas plumes – Plumes which are heavier than air because they have a higher density than the surrounding ambient air. A plume may have a higher density than air because it has a higher molecular weight than air (for example, a plume of carbon dioxide). A plume may also have a higher density than air if the plume is at a much lower temperature than the air. For example, a plume of evaporated gaseous methane from an accidental release of liquefied natural gas (LNG) may be as cold as . Passive or neutral plumes – Plumes which are neither lighter or heavier than air. Air pollution dispersion models There are five types of air pollution dispersion models, as well as some hybrids of the five types: Box model – The box model is the simplest of the model types. It assumes the airshed (i.e., a given volume of atmospheric air in a geographical region) is in the shape of a box. It also assumes that the air pollutants inside the box are homogeneously distributed and uses that assumption to estimate the average pollutant concentrations anywhere within the airshed. Although useful, this model is very limited in its ability to accurately predict dispersion of air pollutants over an airshed because the assumption of homogeneous pollutant distribution is much too simple. Gaussian model – The Gaussian model is perhaps the oldest (circa 1936) and perhaps the most commonly used model type. It assumes that the air pollutant dispersion has a Gaussian distribution, meaning that the pollutant distribution has a normal probability distribution. 
Gaussian models are most often used for predicting the dispersion of continuous, buoyant air pollution plumes originating from ground-level or elevated sources. Gaussian models may also be used for predicting the dispersion of non-continuous air pollution plumes (called puff models). The primary algorithm used in Gaussian modeling is the Generalized Dispersion Equation For A Continuous Point-Source Plume. Lagrangian model – a Lagrangian dispersion model mathematically follows pollution plume parcels (also called particles) as the parcels move in the atmosphere and they model the motion of the parcels as a random walk process. The Lagrangian model then calculates the air pollution dispersion by computing the statistics of the trajectories of a large number of the pollution plume parcels. A Lagrangian model uses a moving frame of reference as the parcels move from their initial location. It is said that an observer of a Lagrangian model follows along with the plume. Eulerian model – an Eulerian dispersion model is similar to a Lagrangian model in that it also tracks the movement of a large number of pollution plume parcels as they move from their initial location. The most important difference between the two models is that the Eulerian model uses a fixed three-dimensional Cartesian grid as a frame of reference rather than a moving frame of reference. It is said that an observer of an Eulerian model watches the plume go by. Dense gas model – Dense gas models are models that simulate the dispersion of dense gas pollution plumes (i.e., pollution plumes that are heavier than air). The three most commonly used dense gas models are: The DEGADIS model developed by Dr. Jerry Havens and Dr. Tom Spicer at the University of Arkansas under commission by the US Coast Guard and US EPA. The SLAB model developed by the Lawrence Livermore National Laboratory funded by the US Department of Energy, the US Air Force and the American Petroleum Institute. The HEGADAS model developed by Shell Oil's research division. Air pollutant emission Types of air pollutant emission sources – named for their characteristics Sources, by shape – there are four basic shapes which an emission source may have. They are: Point source – single, identifiable source of air pollutant emissions (for example, the emissions from a combustion furnace flue gas stack). Point sources are also characterized as being either elevated or at ground-level. A point source has no geometric dimensions. Line source – one-dimensional source of air pollutant emissions (for example, the emissions from the vehicular traffic on a roadway). Area source – two-dimensional source of diffuse air pollutant emissions (for example, the emissions from a forest fire, a landfill or the evaporated vapors from a large spill of volatile liquid). Volume source – three-dimensional source of diffuse air pollutant emissions. Essentially, it is an area source with a third (height) dimension (for example, the fugitive gaseous emissions from piping flanges, valves and other equipment at various heights within industrial facilities such as oil refineries and petrochemical plants). Another example would be the emissions from an automobile paint shop with multiple roof vents or multiple open windows. 
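As a small numerical illustration of the Gaussian point-source plume model listed earlier in this outline, the sketch below evaluates the textbook ground-level form of the continuous point-source equation with total reflection at the ground. The power-law coefficients used for the dispersion parameters sigma_y and sigma_z are placeholders standing in for the curves normally read from Pasquill-class tables, so the specific numbers (and the function name) are assumptions for illustration only.

```python
import math

def gaussian_plume_ground(Q, u, x, y, H, a=0.08, b=0.06):
    """Ground-level concentration (g/m^3) downwind of a continuous point source.

    Q : emission rate (g/s)        u : wind speed at stack height (m/s)
    x : downwind distance (m)      y : crosswind distance (m)
    H : effective stack height (m)
    a, b : placeholder coefficients for simple power-law dispersion fits
           sigma_y = a * x**0.9 and sigma_z = b * x**0.85 (illustrative only).
    """
    sigma_y = a * x ** 0.9
    sigma_z = b * x ** 0.85
    # Point-source Gaussian plume with total reflection at the ground (z = 0):
    # C = Q / (pi * u * sy * sz) * exp(-y^2 / (2 sy^2)) * exp(-H^2 / (2 sz^2))
    return (Q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-y ** 2 / (2 * sigma_y ** 2))
            * math.exp(-H ** 2 / (2 * sigma_z ** 2)))

# Concentration 1 km directly downwind of a 50 m stack emitting 100 g/s in a 5 m/s wind
print(gaussian_plume_ground(Q=100.0, u=5.0, x=1000.0, y=0.0, H=50.0))
```

In practice the dispersion parameters, plume rise and averaging times come from the modelling systems named elsewhere in this outline rather than from hand-picked constants.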
Sources, by motion Stationary source – flue gas stacks are examples of stationary sources Mobile source – buses are examples of mobile sources Sources, by urbanization level – whether the source is within a city or not is relevant in that urban areas constitute a so-called heat island and the heat rising from an urban area causes the atmosphere above an urban area to be more turbulent than the atmosphere above a rural area Urban source – emission is in an urban area Rural source – emission is in a rural area Sources, by elevation Surface or ground-level source Near surface source Elevated source Sources, by duration Puff or intermittent source – short term sources (for example, many accidental emission releases are short term puffs) Continuous source – long term source (for example, most flue gas stack emissions are continuous) Characterization of atmospheric turbulence Effect of turbulence on dispersion – turbulence increases the entrainment and mixing of unpolluted air into the plume and thereby acts to reduce the concentration of pollutants in the plume (i.e., enhances the plume dispersion). It is therefore important to categorize the amount of atmospheric turbulence present at any given time. This type of dispersion is scale dependent: for flows where the cloud of pollutant is smaller than the largest eddies present, there will be mixing, and because there is no limit on the size of mixing motions in the atmosphere, bigger clouds will experience larger and stronger mixing motions. The Pasquill atmospheric stability classes Pasquill atmospheric stability classes – the oldest and, for a great many years, the most commonly used method of categorizing the amount of atmospheric turbulence present was the method developed by Pasquill in 1961. He categorized the atmospheric turbulence into six stability classes named A, B, C, D, E and F, with class A being the most unstable or most turbulent class, and class F the most stable or least turbulent class. Table 1 lists the six classes and Table 2 provides the meteorological conditions that define each class. The stability classes demonstrate a few key ideas. Solar radiation increases atmospheric instability through warming of the Earth's surface so that warm air is below cooler (and therefore denser) air, promoting vertical mixing. Clear nights push conditions toward stable as the ground cools faster, establishing more stable conditions and inversions. Wind increases vertical mixing, breaking down any type of stratification and pushing the stability class towards neutral (D). Table 1: The Pasquill stability classes Table 2: Meteorological conditions that define the Pasquill stability classes Incoming solar radiation is based on the following: strong (> 700 W m−2), moderate (350–700 W m−2), slight (< 350 W m−2) Other parameters that can define the stability class The stability class can also be defined by using the: Temperature gradient Fluctuations in wind direction Richardson number Bulk Richardson number Monin–Obukhov length Advanced methods of categorizing atmospheric turbulence Advanced air pollution dispersion models – they do not categorize atmospheric turbulence by using the simple meteorological parameters commonly used in defining the six Pasquill classes as shown in Table 2 above. The more advanced models use some form of Monin–Obukhov similarity theory. 
Some examples include: AERMOD – US EPA's most advanced model, no longer uses the Pasquill stability classes to categorize atmospheric turbulence. Instead, it uses the surface roughness length and the Monin–Obukhov length. ADMS 4 – United Kingdom's most advanced model, uses the Monin-Obukhov length, the boundary layer height and the windspeed to categorize the atmospheric turbulence. Miscellaneous other terminology (Work on this section is continuously in progress) Building effects or downwash: When an air pollution plume flows over nearby buildings or other structures, turbulent eddies are formed in the downwind side of the building. Those eddies cause a plume from a stack source located within about five times the height of a nearby building or structure to be forced down to the ground much sooner than it would if a building or structure were not present. The effect can greatly increase the resulting near-by ground-level pollutant concentrations downstream of the building or structure. If the pollutants in the plume are subject to depletion by contact with the ground (particulates, for example), the concentration increase just downstream of the building or structure will decrease the concentrations further downstream. Deposition of the pollution plume components to the underlying surface can be defined as either dry or wet deposition: Dry deposition is the removal of gaseous or particulate material from the pollution plume by contact with the ground surface or vegetation (or even water surfaces) through transfer processes such as absorption and gravitational sedimentation. This may be calculated by means of a deposition velocity, which is related to the resistance of the underlying surface to the transfer. Wet deposition is the removal of pollution plume components by the action of rain. The wet deposition of radionuclides in a pollution plume by a burst of rain often forms so called hot spots of radioactivity on the underlying surface. Inversion layers: Normally, the air near the Earth's surface is warmer than the air above it because the atmosphere is heated from below as solar radiation warms the Earth's surface, which in turn then warms the layer of the atmosphere directly above it. Thus, the atmospheric temperature normally decreases with increasing altitude. However, under certain meteorological conditions, atmospheric layers may form in which the temperature increases with increasing altitude. Such layers are called inversion layers. When such a layer forms at the Earth's surface, it is called a surface inversion. When an inversion layer forms at some distance above the earth, it is called an inversion aloft (sometimes referred to as a capping inversion). The air within an inversion aloft is very stable with very little vertical motion. Any rising parcel of air within the inversion soon expands, thereby adiabatically cooling to a lower temperature than the surrounding air and the parcel stops rising. Any sinking parcel soon compresses adiabatically to a higher temperature than the surrounding air and the parcel stops sinking. Thus, any air pollution plume that enters an inversion aloft will undergo very little vertical mixing unless it has sufficient momentum to completely pass through the inversion aloft. That is one reason why an inversion aloft is sometimes called a capping inversion. 
Mixing height: When an inversion aloft is formed, the atmospheric layer between the Earth's surface and the bottom of the inversion aloft is known as the mixing layer and the distance between the Earth's surface and the bottom of inversion aloft is known as the mixing height. Any air pollution plume dispersing beneath an inversion aloft will be limited in vertical mixing to that which occurs beneath the bottom of the inversion aloft (sometimes called the lid). Even if the pollution plume penetrates the inversion, it will not undergo any further significant vertical mixing. As for a pollution plume passing completely through an inversion layer aloft, that rarely occurs unless the pollution plume's source stack is very tall and the inversion lid is fairly low. See also Air pollution dispersion models ADMS 3 (Atmospheric Dispersion Modelling System) – advanced atmospheric pollution dispersion model for calculating concentrations of atmospheric pollutants emitted both continuously from point, line, volume and area sources, or intermittently from point sources. AUSTAL AERMOD CANARY (By Quest) CALPUFF DISPERSION21 FLACS ISC3 MERCURE NAME (dispersion model) Panache PHAST PUFF-PLUME SIRANE Others Bibliography of atmospheric dispersion modeling AP 42 Compilation of Air Pollutant Emission Factors Atmospheric dispersion modeling Roadway air dispersion modeling Useful conversions and formulas for air dispersion modeling List of atmospheric dispersion models Yamartino method Air pollution forecasting References Further reading www.crcpress.com External links Air Quality Models (on the US EPA's website) The Model Documententation System (MDS) of the European Topic Centre on Air and Climate Change (part of the European Environment Agency) Atmospheric dispersion modeling Air pollution Industrial emissions control Environmental engineering Air pollution
Outline of air pollution dispersion
[ "Chemistry", "Engineering", "Environmental_science" ]
3,043
[ "Industrial emissions control", "Chemical engineering", "Atmospheric dispersion modeling", "Civil engineering", "Environmental engineering", "Chemical process engineering", "Environmental modelling" ]
13,236,221
https://en.wikipedia.org/wiki/Retinoic%20acid-inducible%20orphan%20G%20protein-coupled%20receptor
The Retinoic Acid-Inducible orphan G-protein-coupled receptors (RAIG) are a group of four closely related G protein-coupled receptors whose expression is induced by retinoic acid. The exact function of these proteins has not been determined but they may provide a mechanism by which retinoic acid can influence G protein signal transduction cascades. In addition, RAIG receptors interact with members of the frizzled class of G protein-coupled receptors and appear to activate the Wnt signaling pathway. References External links G protein-coupled receptors
Retinoic acid-inducible orphan G protein-coupled receptor
[ "Chemistry" ]
115
[ "G protein-coupled receptors", "Signal transduction" ]
13,236,337
https://en.wikipedia.org/wiki/Di-positronium
Di-positronium, or dipositronium, is an exotic molecule consisting of two atoms of positronium. It was predicted to exist in 1946 by John Archibald Wheeler, and subsequently studied theoretically, but was not observed until 2007 in an experiment performed by David Cassidy and Allen Mills at the University of California, Riverside. The researchers made the positronium molecules by firing intense bursts of positrons into a thin film of porous silicon dioxide. Upon slowing down in the silica, the positrons captured ordinary electrons to form positronium atoms. Within the silica, these were long-lived enough to interact, forming molecular di-positronium. Advances in trapping and manipulating positrons, and in spectroscopy techniques, have enabled studies of Ps–Ps interactions. In 2012, Cassidy et al. were able to produce molecular positronium in an excited angular momentum state. See also Hydrogen molecule Hydrogen molecular ion Positronium Protonium Exotic atom Biexciton – solid-state analog References External links Molecules of Positronium Observed in the Laboratory for the First Time, press release, University of California, Riverside, September 12, 2007. Mirror particles form new matter, Jonathan Fildes, BBC News, September 12, 2007. Antimatter Exotic atoms Molecular physics Quantum electrodynamics Substances discovered in the 2000s
Di-positronium
[ "Physics", "Chemistry" ]
277
[ "Matter", "Antimatter", "Molecular physics", "Exotic atoms", "Quantum mechanics", "Subatomic particles", " molecular", "nan", "Nuclear physics", "Atomic", "Molecular physics stubs", "Atoms", "Quantum physics stubs", " and optical physics" ]
13,236,983
https://en.wikipedia.org/wiki/Advanced%20Extremely%20High%20Frequency
Advanced Extremely High Frequency (AEHF) is a constellation of communications satellites operated by the United States Space Force. They are used to relay secure communications for the United States Armed Forces, the British Armed Forces, the Canadian Armed Forces, the Netherlands Armed Forces and the Australian Defence Force. The system consists of six satellites in geostationary orbits. The final satellite was launched on 26 March 2020. AEHF is backward compatible with, and replaces, the older Milstar system and will operate at 44 GHz uplink (extremely high frequency (EHF) band) and 20 GHz downlink (super high frequency (SHF) band). The AEHF system is a joint service communications system that provides survivable, global, secure, protected, and jam-resistant communications for high-priority military ground, sea and air assets. Overview AEHF satellites use many narrow spot beams directed towards the Earth to relay communications to and from users. Crosslinks between the satellites allow them to relay communications directly rather than via a ground station. The satellites are designed to provide jam-resistant communications with a low probability of interception. They incorporate frequency-hopping radio technology, as well as phased array antennas that can adapt their radiation patterns in order to block out potential sources of jamming. AEHF incorporates the existing Milstar low data-rate and medium data-rate signals, providing 75–2400 bit/s and 4.8 kbit/s–1.544 Mbit/s respectively. It also incorporates a new signal, allowing data rates of up to 8.192 Mbit/s. When complete, the space segment of the AEHF system will consist of six satellites, which provides coverage of the surface of the Earth between latitudes of 65° north and 65° south. For northern polar regions, the Enhanced Polar System acts as an adjunct to AEHF to provide EHF coverage. The initial contract for the design and development of the AEHF satellites was awarded to Lockheed Martin Space Systems and Northrop Grumman Space Technology in November 2001, and covered the System Development and Demonstration phase of the program. The contract covered the construction and launch of three satellites, and the construction of a mission control segment. The contract was managed by the MILSATCOM Program Office of the Space and Missile Systems Center. Like the Milstar system, AEHF are operated by the 4th Space Operations Squadron, located at Schriever Space Force Base and the 148th Space Operations Squadron at Vandenberg SFB. It extends the "cross-links" among AEHF of earlier Milstar satellites, which makes it much less vulnerable to attacks on ground stations. As a geosynchronous satellite over the equator, it still needs to be supplemented with additional systems optimized for polar coverage in high latitudes. In the April 2009 Defense Department budget request, Secretary of Defense Robert Gates said he planned to cancel the Transformational Satellite Communications System, still in the design phase, in favor of additional AEHF capacity. Individual AEHF satellites, exclusive of launch expenses, cost US$850 million. 
Bands Prior to the AEHF, United States and allied military satellite communications systems fell into one of three categories: Wideband: maximum bandwidth among fixed and semifixed earth stations Protected: survivable against electronic warfare and other attacks, even if bandwidth is sacrificed Narrowband: principally for tactical use, sacrificing bandwidth for simplicity, reliability, and light weight of terrestrial equipment AEHF, however, converges the role of its wideband Defense Satellite Communications System (DSCS) and protected MILSTAR predecessors, while increasing bandwidth over both. There will still need to be specialized satellite communications for extremely high data rate space sensors, such as geospatial and signals intelligence satellites, but their downlinked data will typically go to a specialized receiver and be processed into smaller amounts; the processed data will flow through AEHF. Launch and positioning AEHF satellites are sent into space using an Evolved Expendable Launch Vehicle (EELV). The payload weight at launch is approximately ; by the time it expends propellants to achieve proper orbit, its weight is approximately . The satellites will operate in geosynchronous orbit (GEO) orbit; it takes over 100 days for the orbital adjustments to reach its stable geo-position after launch. Electronics Uplinks and crosslinks are in the extremely high frequency (EHF) while the downlinks use the super high frequency (SHF). The variety of frequencies used, as well as the desire to have tightly focused downlinks for security, require a range of antennas, seen in the picture: 2 SHF downlink phased arrays 2 satellite-to-satellite crosslinks 2 uplink/downlink nulling antennas 1 uplink EHF phased array 6 uplink/downlink gimbaled dish antenna 1 uplink/downlink Earth coverage horns Phased array technology is new in communications satellites, but increases reliability by removing the mechanical movement required for gimbaled, motor-driven antennas. The low gain Earth coverage antennas send information anywhere in a third of the Earth covered by each satellite's footprint. Phased array antennas provide super high-gain earth coverages, enabling worldwide unscheduled access for all users, including small portable terminals and submarines. The six medium resolution coverage antennas (MRCA), are highly directional "spot" coverage; they can be time-shared to cover up to 24 targets. The two high-resolution coverage area antennas enable operations in the presence of in-beam jamming; the nulling antennas are part of the electronic defense that helps discriminate true signals from electronic attack. Another change from existing satellites is using solid-state transmitters rather than the traveling wave tubes used in most high-power military SHF/EHF applications. TWTs have a fixed power output; the newer devices allow varying the transmitted power, both for lowering the probability of intercept and for overall power efficiency. The payload flight software contains approximately 500,000 lines of real-time, distributed, embedded code executing simultaneously on 25 on-board processors. Services AEHF provides individual digital data streams from rates of 75 bits/second to 8 Megabits/second. These include and go beyond MILSTAR's low data rate (LDR) and medium data rate (MDR) as well as the actually fairly slow high data rate (HDR) for submarines. The faster links are designated extended data rates (XDR). 
While there are a number of ground terminals, the airborne terminal has been part of the Family of Advanced Beyond Line-of-Sight-Terminal (FAB-T) project. Other ground stations include the Single-Channel Antijam Man-Portable Terminal (SCAMP), Secure Mobile Anti-jam Reliable Tactical Terminal (SMART-T), and Submarine High Data Rate (Sub HDR) system. With Boeing as the prime contractor and L-3 Communications and Rockwell Collins as major subcontractors, the first FAB-T (Increment 1) was delivered, for use on the B-2 Spirit aircraft, in February 2009. It is planned for other aircraft including the B-52, RC-135, E-4, and E-6 aircraft. Other installations will go into fixed and transportable command posts. It successfully interoperated with legacy communications using a command post terminal and the Army Single Channel Anti-jam Man Portable Terminal, Satellites AEHF-1 (USA-214) The first satellite, USA-214, was successfully launched by an Atlas V 531 launch vehicle on 14 August 2010, from Space Launch Complex 41 at the Cape Canaveral Air Force Station (CCAFS). This occurred four years behind schedule; when the contract was awarded in 2000 the first launch was expected to have taken place in 2006. The program was restructured in October 2004, when the National Security Agency (NSA) did not deliver key cryptographic equipment to the payload contractor in time to meet the launch schedule. Successful launch The Atlas V launch vehicle successfully placed the satellite into a supersynchronous-apogee transfer orbit with a perigee of 275 km, an apogee of 50,000 km, an inclination of 22.1°. Failure of the kick motor, and recovery using the Hall-effect thrusters The satellite vehicle's liquid apogee engine (LAE) provided by IHI failed to raise the orbit after two attempts. To solve the problem, the perigee altitude was raised to 4700 km with twelve firings of the smaller Aerojet Rocketdyne-provided Reaction Engine Assembly thrusters, originally intended for attitude control during the LAE engine burns. From this altitude, the solar panels were deployed and the orbit was raised toward the operational orbit over the course of nine months using the 0.27 Newton Hall thrusters, also provided by Aerojet Rocketdyne, a form of electric propulsion which is highly efficient, but low thrust. This took much longer than initially intended due to the lower starting altitude for the HCT maneuvers. This led to program delays, as the second and third satellite vehicle LAEs were analyzed. A Government Accountability Office (GAO) report released in July 2011 stated that the blocked fuel line in the liquid apogee engine was most likely caused by a piece of cloth inadvertently left in the line during the manufacturing process. While this is believed to have been the primary cause of the failure, a U.S. Department of Defense Selected Acquisition Report adds that fuel loading procedures and unmet thermal control requirements could also have contributed. The remaining satellites were declared flight-ready a month prior to the release of the GAO report. AEHF-2 (USA-235) Like the first AEHF satellite, the second (AEHF-2) was launched on an Atlas V flying in the 531 configuration. The launch from Space Launch Complex 41 at Cape Canaveral took place on 4 May 2012. After three months of maneuvering, it reached its proper position and the testing procedures were started. 
Completion of checkout of AEHF-2 was announced on 14 November 2012 and control turned over to the 14th Air Force for operations for an expected 14-year service life through 2026. AEHF-3 (USA-246) The third AEHF satellite was launched from Cape Canaveral on 18 September 2013 at 08:10 UTC. The two-hour window to launch the satellite opened at 07:04 UTC and the launch occurred as soon as weather-related clouds and high-altitude winds cleared sufficiently to meet the launch criteria. AEHF-4 (USA-288) The fourth AEHF satellite was launched on 17 October 2018 from Cape Canaveral at 04:15 UTC using an Atlas V 551 rocket operated by the United Launch Alliance (ULA). AEHF-5 (USA-292) The fifth AEHF satellite was launched on 8 August 2019 from Cape Canaveral at 10:13 UTC using an Atlas V 551 rocket. A secondary payload named TDO-1 accompanied the AEHF-5 satellite into orbit. AEHF-6 (USA-298) The sixth AEHF satellite was launched on 26 March 2020 at 20:18 UTC by an Atlas V 551 from Cape Canaveral Space Force Station (CCSFS), SLC-41. It was the first launch of a U.S. Space Force mission since the establishment of the new military service. See also Wideband Global SATCOM system (WGS) References External links AEHF-1 Launch, SLC-41, CCAFS, 14 August 2010 @ 7:07 am EDT AEHF-2 Launch, SLC-41, CCAFS, 04 May 2012 @ 2:42 pm EDT AEHF-3 Launch, SLC-41, CCAFS, 18 September 2013 @ 4:10 am EDT Military communications Communications satellite constellations Military satellites Telecommunications equipment Military space program of the United States AEHF Equipment of the United States Space Force Military equipment introduced in the 2010s
Advanced Extremely High Frequency
[ "Engineering" ]
2,477
[ "Military communications", "Telecommunications engineering" ]
13,237,178
https://en.wikipedia.org/wiki/G.8261
ITU-T Recommendation G.8261/Y.1361 (formerly G.pactiming) "Timing and Synchronization Aspects in Packet Networks" specifies the upper limits of allowable network jitter and wander, the minimum jitter and wander tolerance that network equipment must provide at the TDM interfaces at the boundary of these packet networks, and the minimum requirements for the synchronization function of network equipment. Usage Packet networks are inherently asynchronous. However, as the communications industry moves toward an all-IP core and edge network, there is a need to provide synchronization functionality to traditional TDM-based applications. This is essential for interworking with the PSTN. The goal is to provide a Primary Reference Clock (PRC)-traceable clock for TDM applications. External links ITU-T G.8261 recommendation publication Electronics standards Synchronization Packets (information technology)
G.8261
[ "Technology", "Engineering" ]
189
[ "Computing stubs", "Telecommunications engineering", "Synchronization", "Computer network stubs" ]
13,237,359
https://en.wikipedia.org/wiki/SWIFT%20J1756.9%E2%88%922508
SWIFT J1756.9−2508 is a millisecond pulsar with a rotation frequency of 182 Hz (period of 5.5 ms). It was discovered in 2007 by the Swift Gamma-Ray Burst Explorer and found to have a companion with a mass between 0.0067 and 0.030 solar masses. It is thought that the companion is the remnant of a former companion star, now stripped down to a planetary-mass core. The pulsar is accreting mass from this companion, resulting in occasional violent outbursts from the accumulated material on the neutron star. Planetary system SWIFT J1756.9-2508's only known planet is notable for its orbital period of less than an hour, about 54 minutes and 43 seconds. References External links Universe Today, Pulsar Has Almost Completely Devoured a Star SIMBAD, "SWIFT J1756.9-2508" (accessed 2010-11-06) Accreting millisecond pulsars X-ray binaries Sagittarius (constellation) Hypothetical planetary systems
SWIFT J1756.9−2508
[ "Astronomy" ]
225
[ "Sagittarius (constellation)", "Constellations" ]
13,238,006
https://en.wikipedia.org/wiki/WAGO%20GmbH
WAGO GmbH & Co. KG (, ) is a German company based in Minden, Germany that manufactures components for electrical connection technology and electronic components for automation technology. History 1950s WAGO was founded on April 27, 1951 as WAGO Klemmenwerk GmbH in Minden, after brothers-in-law Heinrich Nagel and Friedrich Hohorst purchased a patent for spring clamp technology. The company is named after the inventors Wagner and Olbricht from whom the patent (Patent No. 838778) was purchased. In the same year, WAGO presented the first spring terminals at the Hannover Messe, which however faced less reception due to ductility problems. 1960s–2000s In 1961, Wolfgang Hohorst, son of the founder Hohorst, joined the company. At that time, the company had 20 employees. In 1966, the company changed the material of the terminal housings (from thermoset to Polyamide 6,6) and developed additional components such as connectors and solderable terminals for printed circuit boards, which enabled the company to enter the lighting industry. A few years later, in 1973, WAGO introduced the box terminal for use in electrical installations, which was the first spring-loaded terminal to be certified by VDE. In 1975, the English company Bowthorpe Electric acquired a majority stake in WAGO. In 1977, the company developed the spring-cage terminal block under the product name Cage Clamp. This invention was also the basis for the company's subsequent product groups. The cage clamp has since become an industrial standard. With the launch of the box terminal and the cage clamp, the company expanded internationally. Until the mid-1990s WAGO established subsidiaries in France, Switzerland, Austria, USA, Japan, the UK and former East Germany (Sondershausen), Czech Republic and in India. In 1994, the company revenue amounted to 204 million DM. In 1998, Wolfgang Hohorst founded the WAGO Foundation with the goal of promoting education and training for young people interested in technology. After the Fall of the Berlin Wall, WAGO announced that they set up a new plant in Sondershausen, Thuringia, which had a special role amongst the newly founded production sites. One year after the announcement, operations in Sondershausen first started in a rented facility; since 1993, Wago has been producing in a newly built factory. In the following years, WAGO continuously invested into expanding operations in Sondershausen, with 200 million EUR invested and 13 expansions until 2022. In 1999, a logistics centre was established, with which customers in Central Europe would be supplied directly from Sondershausen. In 2022, WAGO announced the latest investment into the logistics centre of around 40 million EUR. 2000s–present In 2003, the company shares were bought back from Spirent PLC (formerly Bowthorpe), making WAGO into a family owned business again. In the 2000s, WAGO further developed existing products, notably, they improved the miniaturisation of the picoMAX connector system in 2010, in which glass fiber-reinforced plastics PAA-GF were used. In the same year, Sven Hohorst took over the position of managing director from his father Wolfgang Hohorst. In summer 2012, WAGO expanded its headquarters investing 8 million EUR on a development centre in Minden. In 2015, WAGO acquired a majority stake in the company M&M Software. The purchase of the previous development partner served the strategic orientation of the automation division. In the same year, WAGO generated a revenue of 718,7 million EUR and had 5,996 employees. 
In 2016, WAGO's communication centre, a customer and training centre, and a new stamping plant were completed. After one and a half years of construction and a total of 53 million EUR invested, these were WAGO's latest investments around the headquarters in Minden. At the beginning of 2021, Sven Hohorst retired from the operational business and moved to the advisory board. Heiner Lang took over as Chairman of the Management Board. Corporate structure WAGO Holding GmbH is the parent company of the WAGO group. It primarily performs holding functions for its subsidiaries and second-tier subsidiaries. WAGO GmbH & Co. KG (formerly WAGO Kontakttechnik GmbH (under Swiss law) & Co. KG) is responsible for the operating business of the group, with all its subsidiaries. WAGO is owned by the Hohorst family. The headquarters of the WAGO Group is located in Minden. The firm's German production sites are in Minden and Sondershausen. Other production sites are located in Wrocław (Poland), Domdidier (Switzerland), Germantown (United States), Noida (India), Tianjin (China), Tokyo (Japan) and Tremblay-en-France (France). As of 2022, WAGO has about 9,000 employees. In 2022, the company's revenue amounted to 1.37 billion EUR. Products WAGO manufactures components for electrical connection and decentralised automation technology as well as interface electronics. The company specialises in developing products in the field of spring clamp technology. WAGO products are used in the automotive industry, building and lighting technology, and in mechanical and plant engineering. The first product was patented in 1951 with patent No. 838778. This was a terminal with a non-self-supporting contact insert for solid conductors. The clamping force was transmitted to the insulating housing. This was followed in 1957 by a self-supporting spring terminal with helical springs for all types of conductors (Patent No. 1095914). Further important product developments include the box terminal (1973), the terminal block cage clamp (1977), the push-in cage clamp (2003), and the miniaturised picoMAX system. References External links Official website Minden-Lübbecke Electronics companies established in 1951 Automata (mechanical) Electronics companies of Germany
WAGO GmbH
[ "Engineering" ]
1,241
[ "Automata (mechanical)", "Automation" ]
13,238,290
https://en.wikipedia.org/wiki/MUMPS%20%28software%29
MUMPS (MUltifrontal Massively Parallel sparse direct Solver) is a software application for the solution of large sparse systems of linear algebraic equations on distributed memory parallel computers. It was developed in European project PARASOL (1996–1999) by CERFACS, IRIT-ENSEEIHT and RAL. The software implements the multifrontal method, which is a version of Gaussian elimination for large sparse systems of equations, especially those arising from the finite element method. It is written in Fortran 90 with parallelism by MPI and it uses BLAS and ScaLAPACK kernels for dense matrix computations. Since 1999, MUMPS has been supported by CERFACS, IRIT-ENSEEIHT, and INRIA. The importance of MUMPS lies in the fact that it is a supported free implementation of the multifrontal method. References External links WinMUMPS, files for compiling MUMPS on Windows Free software programmed in Fortran Numerical software Public-domain software with source code
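MUMPS itself is called from Fortran or C (or through wrappers such as those in PETSc), and the snippet below does not use MUMPS' own interface. It is only a hedged illustration of the kind of problem a sparse direct solver addresses, factorizing a large sparse matrix once and reusing the factors for several right-hand sides, using SciPy's SuperLU-based splu as a stand-in.

```python
import numpy as np
from scipy.sparse import diags, csc_matrix
from scipy.sparse.linalg import splu

# A sparse system of the kind produced by a 1-D finite difference/element
# discretisation: tridiagonal, here with 100,000 unknowns.
n = 100_000
A = csc_matrix(diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n)))
b = np.ones(n)

# A direct solver factorizes A once (analysis + numerical factorization) ...
lu = splu(A)

# ... and the factors can then be reused cheaply for many right-hand sides,
# which is one of the practical advantages of direct methods such as MUMPS.
x = lu.solve(b)
x2 = lu.solve(2.0 * b)

print(np.max(np.abs(A @ x - b)))   # residual of the first solve
```

The same overall pattern of separate analysis, factorization and solve phases is what MUMPS distributes across MPI processes for much larger systems.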
MUMPS (software)
[ "Mathematics" ]
208
[ "Numerical software", "Mathematical software" ]
13,238,669
https://en.wikipedia.org/wiki/International%20Behavioural%20and%20Neural%20Genetics%20Society
The International Behavioural and Neural Genetics Society (IBANGS) is a learned society that was founded in 1996. The goal of IBANGS is to "promote and facilitate the growth of research in the field of neural behavioral genetics". Profile Mission The IBANGS mission statement is to promote the field of neurobehavioural genetics by: organizing annual meetings to promote excellence in research on behavioural and neural genetics publishing a scholarly journal, Genes, Brain and Behavior, in collaboration with Wiley-Blackwell Awards Each year IBANGS recognizes top scientists in the field of neurobehavioral genetics with: The IBANGS Distinguished Investigator Award for distinguished lifetime contributions to behavioral neurogenetics The IBANGS Young Scientist Award for promising young scientists Travel Awards to attend an IBANGS Annual Meeting for students, postdocs, and junior faculty, financed by a meeting grant from the National Institute on Alcohol Abuse and Alcoholism A Distinguished Service Award for exceptional contributions to the field is given on a more irregular basis and has been awarded only three times, to Benson Ginsburg (2001), Wim Crusio (2011), and John C. Crabbe (2015). History IBANGS was founded in 1996 as the European Behavioural and Neural Genetics Society, with Hans-Peter Lipp as its founding president. The name and scope of EBANGS were changed to "International" at the first meeting of the society in Orléans, France in 1997. IBANGS is a founding member of the Federation of European Neuroscience Societies. The current president is Karla Kaun (2022–2025). References External links Behavioral neuroscience Neuroscience organizations Scientific organizations established in 1996 Behavioural genetics societies International scientific organizations
International Behavioural and Neural Genetics Society
[ "Biology" ]
352
[ "Behavioural sciences", "Behavior", "Behavioral neuroscience" ]
13,239,461
https://en.wikipedia.org/wiki/Sandsinker
SandSinka is a lead-free fishing sinker made from biodegradable plastic that is filled with burley (ground bait), sand, or both. Depending on how it is filled, it can serve as a float, as a standard sinker, or as a casting aid: the extra weight on the line increases casting distance, and once the sinker is immersed in water the weight disperses, leaving a light line. Burley can also be added and, by design, sits directly above the hook. Sandsinkers more generally are lead-free fishing sinkers made of fabric and filled with sand. Although they do not cast as easily or as far for surf fishing, they are a healthier alternative to lead for fishing from jetties or in any situation where casting distance is not a prime consideration. External links https://sandsinka.com – Sandsinka, a biodegradable sinker that uses sand or burley for ballast, developed in 2021 Sandsinkers: A Simple Way to Make Sinkers with Fabric and Sand and Without Lead. Retrieved November 3, 2008 Fishing equipment Weights
Sandsinker
[ "Physics" ]
229
[ "Weights", "Physical objects", "Matter" ]
13,240,808
https://en.wikipedia.org/wiki/Curriculum-based%20measurement
Curriculum-based measurement, or CBM, is also referred to as a general outcomes measure (GOM) of a student's performance in either basic skills or content knowledge. Early history CBM began in the mid-1970s with research headed by Stan Deno at the University of Minnesota. Over the course of 10 years, this work led to the establishment of measurement systems in reading, writing, and spelling that (a) were easy to construct, (b) were brief in administration and scoring, (c) had technical adequacy (reliability and various types of validity evidence for use in making educational decisions), and (d) provided alternate forms to allow time series data to be collected on student progress. This focus in the three language arts areas eventually was expanded to include mathematics, though the technical research in this area continues to lag that published in the language arts areas. An even later development was the application of CBM to middle-secondary areas: Espin and colleagues at the University of Minnesota developed a line of research addressing vocabulary and comprehension (with the maze) and Tindal and colleagues at the University of Oregon developed a line of research on concept-based teaching and learning. Increasing importance Early research on CBM quickly moved from monitoring student progress to its use in screening, normative decision-making, and finally benchmarking. Indeed, with the implementation of the No Child Left Behind Act in 2001, and its focus on large-scale testing and accountability, CBM has become increasingly important as a form of standardized measurement that is highly related to and relevant for understanding students' progress toward and achievement of state standards. Key feature Probably the key feature of CBM is its accessibility for classroom application and implementation. It was designed to provide an experimental analysis of the effects of interventions, which include both instruction and curriculum. This is one of the most important conundrums to surface on CBM: to evaluate the effects of a curriculum, a measurement system needs to provide an independent "audit" and not be biased to only that which is taught. The early struggles in this arena referred to this difference as mastery monitoring (curriculum-based and embedded in the curriculum, and therefore forcing the metric to be the number (and rate) of units traversed in learning) versus experimental analysis, which relied on metrics like oral reading fluency (words read correctly per minute) and correct word or letter sequences per minute (in writing or spelling), both of which can serve as GOMs. In mathematics, the metric is often digits correct per minute. N.B. The metric of CBM is typically rate-based to focus on "automaticity" in learning basic skills. Recent advancements The most recent advancements of CBM have occurred in three areas. First, they have been applied to students with low-incidence disabilities. This work is best represented by Zigmond in the Pennsylvania Alternate Assessment and Tindal in the Oregon and Alaska Alternate Assessments. The second advancement is the use of generalizability theory with CBM, best represented by the work of John Hintze, in which the focus is parceling the error term into components of time, grade, setting, task, etc. Finally, Yovanoff, Tindal, and colleagues at the University of Oregon have applied Item Response Theory (IRT) to the development of statistically calibrated equivalent forms in their progress monitoring system. 
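To make the rate-based metric and the time-series use of CBM concrete, the sketch below computes a words-read-correctly-per-minute score from a timed probe and summarizes weekly scores with an ordinary least-squares slope, one common way of expressing a student's rate of progress. The probe data, the probe length and the number of weeks are invented solely for illustration.

```python
# Illustrative only: a rate metric (words read correctly per minute) and a
# least-squares slope summarizing weekly progress-monitoring data.

def words_correct_per_minute(words_attempted, errors, seconds):
    """Fluency score for a timed oral reading probe."""
    return (words_attempted - errors) * 60.0 / seconds

def weekly_growth(scores):
    """Ordinary least-squares slope: average gain per week across the series."""
    n = len(scores)
    weeks = range(n)
    mean_w = sum(weeks) / n
    mean_s = sum(scores) / n
    num = sum((w - mean_w) * (s - mean_s) for w, s in zip(weeks, scores))
    den = sum((w - mean_w) ** 2 for w in weeks)
    return num / den

# Hypothetical data: six weekly one-minute probes from equivalent alternate forms.
scores = [words_correct_per_minute(w, e, 60) for w, e in
          [(52, 4), (55, 3), (57, 4), (61, 3), (60, 2), (66, 3)]]
print(scores)                 # [48.0, 52.0, 53.0, 58.0, 58.0, 63.0]
print(weekly_growth(scores))  # 2.8 words correct per minute gained per week
```

A slope like this is the kind of summary a teacher might weigh against a goal line when deciding whether an intervention is working, which is the progress-monitoring use of CBM described above.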
Critique Curriculum-based measurement emerged from behavioral psychology, yet several behaviorists have become disenchanted with its limited attention to the dynamics of the learning process. See also Response to Intervention Special Education Norm-referenced test List of state achievement tests in the U.S. References Further reading Fletcher, J.M.; Francis, D.J.; Morris, R.D. & Lyon, G.R. (2005). Evidence-based assessment of learning disabilities in children and adolescents. Journal of Clinical Child and Adolescent Psychology, 34(3), 506–22. Fuchs, L.S. & Fuchs, D. (1999). Monitoring student progress toward the development of reading competence: A review of three forms of classroom-based assessment. School Psychology Review, 28(4), 659–71. Hosp, M.; Hosp, J. & Howell, K. (2007). The ABCs of CBM: A practical guide to curriculum-based measurement. New York: Guilford Press. Martínez, R.S.; Nellis, L.M. & Prendergast, K.A. (2006). Closing the achievement gap series: Part II: Response to intervention (RTI) – basic elements, practical applications, and policy recommendations (Education Policy Brief: Vol. 4, No. 11). Bloomington: Indiana University, School of Education, Center for Evaluation and Education Policy. Jones, K.M. & Wickstrom, K.F. (2002). Done in sixty seconds: Further analysis of the brief assessment model for academic problems. School Psychology Review, 31(4), 554–68. Shinn, M.R. (2002). Best practices in using curriculum-based measurement in a problem-solving model. In A. Thomas & J. Grimes (Eds.), Best Practices in School Psychology IV (pp. 671–93). Bethesda, MD: National Association of School Psychologists. Special education Student assessment and evaluation Behaviorism
Curriculum-based measurement
[ "Biology" ]
1,107
[ "Behavior", "Behaviorism" ]
13,241,768
https://en.wikipedia.org/wiki/Outage%20management%20system
An outage management system (OMS) is a computer system used by operators of electric distribution systems to assist in restoration of power. Major functions of an OMS Major functions usually found in an OMS include: Prediction of the location of the transformer, fuse, recloser or breaker that opened upon failure. Prioritizing restoration efforts and managing resources based upon criteria such as locations of emergency facilities, size of outages, and duration of outages. Providing information on the extent of outages and the number of customers impacted to management, media and regulators. Calculation of estimated restoration times. Management of crews assisting in restoration. Calculation of the number of crews required for restoration. OMS principles and integration requirements At the core of a modern outage management system is a detailed network model of the distribution system. The utility's geographic information system (GIS) is usually the source of this network model. A rules engine combines the locations of outage calls from customers with this network model to predict the locations of outages. For instance, since the distribution system is primarily tree-like or radial in design, all calls in a particular area downstream of a fuse could be inferred to be caused by the operation of a single fuse or circuit breaker upstream of the calls. The outage calls are usually taken by call takers in a call center utilizing a customer information system (CIS). Another common way for outage calls to enter the CIS (and thus the OMS) is by integration with an interactive voice response (IVR) system. The CIS is also the source for all the customer records which are linked to the network model. Customers are typically linked to the transformer serving their residence or business. It is important that every customer be linked to a device in the model so that accurate statistics are derived for each outage. Customers not linked to a device in the model are referred to as "fuzzies". More advanced automatic meter reading (AMR) systems can provide outage detection and restoration capability and thus serve as virtual calls indicating customers who are without power. However, unique characteristics of AMR systems, such as the additional system loading and the potential for false positives, require that additional rules and filter logic be added to the OMS to support this integration. Outage management systems are also commonly integrated with SCADA systems, which can automatically report the operation of monitored circuit breakers and other intelligent devices such as SCADA reclosers. Another system that is commonly integrated with an outage management system is a mobile data system. This integration provides the ability for outage predictions to be sent automatically to crews in the field and for the crews to update the OMS with information such as estimated restoration times without requiring radio communication with the control center. Crews also transmit details about what they did during outage restoration. It is important that the outage management system's electrical model be kept current so that it can accurately make outage predictions and also accurately keep track of which customers are out and which are restored. By using this model and by tracking which switches, breakers and fuses are open and which are closed, network tracing functions can be used to identify every customer who is out, when they were first out and when they were restored. Tracking this information is the key to accurately reporting outage statistics (P.-C. Chen et al., 2014).
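The prediction step described above, in which a cluster of customer calls on a radial feeder is traced back to a single upstream protective device, can be sketched in a few lines of code. This is only an illustrative toy model: the device names, the child-to-parent network map, and the function names are all invented here, and a production rules engine would add weighting, call timing, SCADA and AMR inputs, and filtering that this sketch omits.

```python
# Toy sketch of radial outage prediction: infer the most specific device whose
# operation would explain every customer trouble call. All names are invented.

# Each device points to its parent device (toward the substation breaker).
PARENT = {
    "XFMR-101": "FUSE-12", "XFMR-102": "FUSE-12", "XFMR-103": "FUSE-13",
    "FUSE-12": "RECLOSER-2", "FUSE-13": "RECLOSER-2",
    "RECLOSER-2": "BREAKER-A", "BREAKER-A": None,
}

# Customers are linked to the transformer serving their premises.
CUSTOMER_XFMR = {"cust-1": "XFMR-101", "cust-2": "XFMR-102", "cust-3": "XFMR-103"}

def path_to_source(device):
    """List the devices from `device` upstream to the substation breaker."""
    path = []
    while device is not None:
        path.append(device)
        device = PARENT[device]
    return path

def predict_open_device(calling_customers):
    """Return the most specific device shared by every caller's upstream path."""
    paths = [path_to_source(CUSTOMER_XFMR[c]) for c in calling_customers]
    shared = [d for d in paths[0] if all(d in p for p in paths[1:])]
    return shared[0] if shared else None

print(predict_open_device(["cust-1", "cust-2"]))  # FUSE-12: both calls sit under one fuse
print(predict_open_device(["cust-1", "cust-3"]))  # RECLOSER-2: calls span two fuses
```

As calls accumulate across a wider area, the shared upstream device moves toward the substation, which is the behaviour the radial-design inference above relies on.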
OMS benefits OMS benefits include: Reduced outage durations due to faster restoration based upon outage location predictions. Reduced average outage durations due to prioritization of restoration efforts. Improved customer satisfaction due to increased awareness of outage restoration progress and the provision of estimated restoration times. Improved media relations by providing accurate outage and restoration information. Fewer complaints to regulators due to the ability to prioritize restoration of emergency facilities and other critical customers. Reduced outage frequency due to use of outage statistics for making targeted reliability improvements. OMS based distribution reliability improvements An OMS supports distribution system planning activities related to improving reliability by providing important outage statistics. In this role, an OMS provides the data needed for the calculation of measurements of system reliability. Reliability is commonly measured by performance indices defined by the IEEE 1366-2003 standard. The most frequently used performance indices are SAIDI, CAIDI, SAIFI and MAIFI. An OMS also supports the improvement of distribution reliability by providing historical data that can be mined to find common causes, failures and damage. By understanding the most common modes of failure, improvement programs can be prioritized toward those that provide the largest improvement in reliability for the lowest cost. While deploying an OMS improves the accuracy of the measured reliability indices, it often results in an apparent degradation of reliability, because the manual methods it replaces almost always underestimate the frequency, size and duration of outages. Comparing reliability in the years before an OMS deployment with the years after therefore requires adjusting the pre-deployment measurements for this effect if the comparison is to be meaningful. References Sastry, M.K.S. (2007), "Integrated Outage Management System: an effective solution for power utilities to address customer grievances", International Journal of Electronic Customer Relationship Management, vol. 1, no. 1, pages 30–40. Burke, J. (2000), "Using outage data to improve reliability", Computer Applications in Power, IEEE, volume 13, issue 2, April 2000, pages 57–60. Frost, Keith (2007), "Utilizing Real-Time Outage Data for External and Internal Reporting", Power Engineering Society General Meeting, 2007, IEEE, 24–28 June 2007, pages 1–2. Hall, D.F. (2001), "Outage management systems as integrated elements of the distribution enterprise", Transmission and Distribution Conference and Exposition, 2001, IEEE/PES, volume 2, 28 October – 2 November 2001, pages 1175–1177. Kearney, S. (1998), "How outage management systems can improve customer service", Transmission & Distribution Construction, Operation & Live-Line Maintenance Proceedings, 1998, ESMO '98, 1998 IEEE 8th International Conference, 26–30 April 1998, pages 172–178. Nielsen, T.D. (2002), "Improving outage restoration efforts using rule-based prediction and advanced analysis", IEEE Power Engineering Society Winter Meeting, 2002, volume 2, 27–31 January 2002, pages 866–869. Nielsen, T.D. (2007), "Outage Management Systems Real-Time Dashboard Assessment Study", Power Engineering Society General Meeting, 2007, IEEE, 24–28 June 2007, pages 1–3. Robinson, R.L.; Hall, D.F.; Warren, C.A.; Werner, V.G.
(2006), "Collecting and categorizing information related to electric power distribution interruption events: customer interruption data collection within the electric power distribution industry", Power Engineering Society General Meeting, 2006. IEEE 18–22 June 2006, page 5. P.C. Chen, T. Dokic, and M. Kezunovic, "The Use of Big Data for Outage Management in Distribution Systems," International Conference on Electricity Distribution (CIRED) Workshop, 2014. Electric power
Outage management system
[ "Physics", "Engineering" ]
1,466
[ "Power (physics)", "Electrical engineering", "Electric power", "Physical quantities" ]
13,244,131
https://en.wikipedia.org/wiki/IEEE%20Reynold%20B.%20Johnson%20Information%20Storage%20Systems%20Award
The IEEE Reynold B. Johnson Information Storage Systems Award is a Technical Field Award of the IEEE, given each year to an individual or a team of up to three recipients who have made outstanding contributions to information storage systems. The award is named in honor of Reynold B. Johnson. The award was established in 1991. The award includes a bronze medal, certificate, and honorarium. It was last awarded in 2015. Recipients 2015: Dov Moran and Amir Ban and Simon Litsyn 2014: John K. Ousterhout and Mendel Rosenblum 2013: Michael L. Kazar 2012: Naoya Takahashi 2011: (no award) 2010: Moshe Yanai 2009: Marshall Kirk McKusick 2008: Alan Jay Smith 2007: David Hitz and James Lau 2006: Jaishankar Menon 2005: François B. Dolivo 2004: Bruce Gurney and Virgil S. Speriosu 2003: Neal Bertram 2002: Christopher Henry Bajorek 2001: Tu Chen 2000: Mark Kryder 1999: David Patterson and Randy Katz and Garth Gibson 1998: Jean-Pierre Lazzari 1997: Alan Shugart 1996: Nobutake Imamura 1995: James U. Lemke 1994: Charles Denis Mee 1993: John M. Harker See also List of computer-related awards List of computer science awards External links Biographies of recipients from 2002-2015 References Reynold B. Johnson Information Storage Systems Award Computer-related awards Information science awards
IEEE Reynold B. Johnson Information Storage Systems Award
[ "Technology" ]
301
[ "Science and technology awards", "Information science awards" ]
13,244,136
https://en.wikipedia.org/wiki/World%20Rainforest%20Movement
The World Rainforest Movement (WRM) is an international initiative created to strengthen the global movement in defense of forests, in order to fight deforestation and forest degradation. It was founded in 1986 by activists from around the world. WRM believes that this goal can only be achieved by fighting for social and ecological justice, by respecting the collective rights of traditional communities and the right to self-determination of peoples who depend on the forests for their livelihoods. For this reason, WRM's actions are oriented to support the struggles of indigenous peoples and peasant communities in defense of their territories. WRM's International Secretariat is composed of a small team with members from different countries. The head office is in Uruguay. Main areas of work Expansion of monoculture tree plantations for the production of timber, cellulose, palm oil, rubber or biomass. Industrial tree plantations pose a major threat to communities beyond tropical forest areas. Impacts of corporations that extract timber, minerals, water and fossil fuels from forest territories, and of the infrastructure that supports this exploitation. Initiatives that are presented as "solutions" but in fact only exacerbate forest loss and climate change. These include certification of forest management concessions, monoculture tree plantations, carbon offsets, environmental compensation programmes, among others. New trends related to corporate tactics and national and international policies that facilitate the appropriation of community forests. Local struggles and resistance strategies of movements, organisations and communities in the defence of their territories and forests. The differentiated impacts that women face when their lands are encroached upon and appropriated: sexual violence, harassment, persecution and deprivation of livelihood, among others. Activities Mutual learning and support for community struggles Visiting communities that are struggling against the destruction of their forests for tree plantations and other corporate projects, to exchange experiences and to jointly decide on forms of support. Supporting meetings elaborated collectively with people from communities, organisations and social movements on the causes of forest destruction, global trends, threats and local resistance. Promoting exchanges between activists and organisations that resist against similar threats to their livelihoods. Creating spaces of trust and political connection to strengthen communities' struggles. Showing solidarity with local and community struggles, based on demands presented by the organisations, communities and activists involved. Production and dissemination of information and analyses Participating in debates and international campaigns to give visibility to community struggles and to expose the private and state tactics of land grabbing. Producing analyses and exposing violations – in local and international spaces – on the impacts of false solutions to the destruction of forests and climate change for communities. Producing analyses about new trends and international policies related to climate and biodiversity with forest dwellers threatened by these initiatives. Facilitating the flow of information among groups in different regions of the world, for example with translations of texts, petitions and action alerts into local languages. Publishing the WRM bulletin, an e-newsletter, since 1997.
It exposes struggles, threats and resistance in forests, as well as false policy solutions at the international and local levels. Articles are written by activists and organisations from all over the world. The bulletin is distributed to more than 10,000 individuals and organisations in 131 countries around the world. Producing diverse materials for activists and communities on specific topics. Maintaining an online library with WRM materials since 1996, available in Spanish, French, English and Portuguese. Some are also translated into other languages, such as Bahasa Indonesian, Lingala, Malagasy, Swahili and Thai. References External links WRM site Forests Indigenous peoples and the environment Food sovereignty Forestry Peasants Organizations based in Uruguay 1986 establishments in Uruguay Environmental organizations established in 1986 Forest conservation organizations
World Rainforest Movement
[ "Biology" ]
716
[ "Forests", "Ecosystems" ]
13,244,260
https://en.wikipedia.org/wiki/Computer%20Underground%20Digest
The Computer Underground Digest (CuD) was a weekly online newsletter on early Internet cultural, social, and legal issues published by Gordon Meyer and Jim Thomas from March 1990 to March 2000. History Meyer and Thomas were criminal justice professors at Northern Illinois University, and intended the newsletter to cover topical social and legal issues generated during the rise of telecommunications and the Internet. It existed primarily as an email mailing list and on USENET, though its archives were later provided on a website. The newsletter came to prominence when it published legal commentary and updates concerning the "hacker crackdowns" and the federal indictments of Leonard Rose and Craig Neidorf of Phrack. The CuD published commentary from its membership on subjects including the legal and social implications of the growing Internet (and later the web), book reviews of topical publications, and many off-topic postings by its readership. Overtaken by the growth of online forums on the web, it ceased publication in March 2000. See also Phrack Cult of the Dead Cow References External links Computer Underground Digest CuD on textfiles.com Defunct computer magazines published in the United States Weekly magazines published in the United States Computer security procedures Magazines established in 1990 Magazines disestablished in 2000 Professional and trade magazines Safety engineering Online magazines published in the United States
Computer Underground Digest
[ "Engineering" ]
265
[ "Safety engineering", "Systems engineering", "Computer security procedures", "Cybersecurity engineering" ]
13,244,658
https://en.wikipedia.org/wiki/Flying%20primate%20hypothesis
In evolutionary biology, the flying primate hypothesis is that megabats (also known as flying foxes), a subgroup of Chiroptera, form an evolutionary sister group of primates. The hypothesis began with Carl Linnaeus in 1758, and was again advanced by J.D. Smith in 1980. It was proposed in its modern form by Australian neuroscientist Jack Pettigrew in 1986 after he discovered that the connections between the retina and the superior colliculus (a region of the midbrain) in the megabat Pteropus were organized in the same way as in primates, and purportedly different from all other mammals. This was followed by a longer study published in 1989, in which the claim was supported by the analysis of many other brain and body characteristics. Pettigrew suggested that flying foxes, colugos, and primates were all descendants of the same group of early arboreal mammals. Megabat flight and colugo gliding could both be seen as locomotory adaptations to a life high above the ground. The flying primate hypothesis met resistance from many zoologists. Its biggest challenges were not centered on the argument that megabats and primates are evolutionarily related, which reflects earlier ideas (such as the grouping of primates, tree shrews, colugos, and bats under the same taxonomic group, the Superorder Archonta). Rather, many biologists resisted the implication that megabats and microbats (or echolocating bats) formed distinct branches of mammalian evolution, with flight having evolved twice. This implication arose from the fact that microbats do not resemble primates in any of the neural characteristics studied by Pettigrew, instead resembling primitive mammals such as Insectivora in these respects. The advanced brain characters demonstrated in Pteropus could not, therefore, be generalized to imply that all bats are similar to primates. More recently, the flying primate hypothesis was soundly rejected when scientists compared the DNA of bats to that of primates. These genetic studies support the monophyly of bats. Neurological studies Soon after Pettigrew's study, work on another genus of megabat (Rousettus) disputed the existence of an advanced pattern of connections between the retina and the superior colliculus. However, this conclusion was later criticised on methodological grounds. Later studies have sought further evidence of unique characteristics linking the megabat and primate brains. These studies have had limited success in identifying unique links between megabats and present-day primates, instead concluding that the megabat brain has characteristics that may resemble those likely to have existed in primitive primate brains. Nonetheless, modern neuroanatomical studies have repeatedly supported the existence of very significant differences between the brains of megabats and microbats, which is one of the anchors of the "flying primate" hypothesis. Biochemical studies The implication that bats are diphyletic has been fiercely disputed by many zoologists, not only based on the unlikelihood that wings would have evolved twice in mammals, but also on biochemical studies of molecular evolution, which indicate that bats are monophyletic. However, other studies have disputed the validity of these conclusions. In particular, Pettigrew argued that phylogenies based solely on DNA data can be subject to an artifact known as "base-compositional bias"; further studies, however, did not find base-compositional bias sufficient to discount support for the monophyly of bats.
See also Winged monkeys References External links Jack Pettigrew's criticism of the molecular evidence Primatology Bats Evolutionary biology Biological hypotheses 1758 in science
Flying primate hypothesis
[ "Biology" ]
748
[ "Biological hypotheses", "Evolutionary biology" ]