Dataset fields: id (int64, 39 to 79M), url (string, 31 to 227 chars), text (string, 6 to 334k chars), source (string, 1 to 150 chars), categories (list, 1 to 6 items), token_count (int64, 3 to 71.8k), subcategories (list, 0 to 30 items).
4,141,838
https://en.wikipedia.org/wiki/Vector%20Map
The Vector Map (VMAP), also called Vector Smart Map, is a vector-based collection of geographic information system (GIS) data about Earth at various levels of detail. Level 0 (low resolution) coverage is global and entirely in the public domain. Level 1 (global coverage at medium resolution) is only partly in the public domain. There are ongoing discussions about making most of the information available in the public domain. Description Coordinate reference system: Geographic coordinates stored in decimal degrees with southern and western hemispheres using negative values for latitude and longitude, respectively. Horizontal Datum: World Geodetic System 1984 (WGS 84). Vertical Datum: Mean Sea Level. Thematic data layers Features and data attributes are tagged utilizing the international Feature and Attribute Coding Catalogue (FACC). major road networks railroad networks hydrologic drainage systems utility networks (cross-country pipelines and communication lines) major airports elevation contours coastlines international boundaries populated places index of geographical names Levels of resolution The vector map products are usually seen as being of three different types: low resolution (level 0), medium resolution (level 1) and high resolution (level 2). Level Zero (VMAP0) Level 0 provides worldwide coverage of geo-spatial data and is equivalent to a small scale (1:1,000,000). The data are offered either on CD-ROM or as direct download, as they have been moved to the public domain. Data are structured following the Vector Product Format (VPF), compliant with standards MIL-V-89039 and MIL-STD 2407. Data sets The entire coverage has been divided into four data sets: North America (NOAMER) v0noa Europe and North Asia (EURNASIA) v0eur South America, Africa, and Antarctica (SOAMAFR) v0soa South Asia and Australia (SASAUS) v0sas Level One (VMAP1) Level 1 data are equivalent to a medium scale resolution (1:250,000). Level 1 tiles follow the MIL-V-89033 standard. Horizontal accuracy: 125–500m Vertical accuracy: 0.5–2 Contour Interval (for example: if the contour interval is 50 m, accuracy will be 25 to 100m) Data sets VMAP Level 1 is divided into 234 geographical tiles. Only 57 of them are currently (2006) available for download from NGA. Among the available datasets, coverage can be found for parts of Costa Rica, Libya, United States, Mexico, Iraq, Russia, Panama, Colombia and Japan. Level Two (VMAP2) Level 2 data are equivalent to a large scale resolution. Level 2 tiles follow the MIL-V-89032 standard. Horizontal accuracy: 50–200m Vertical accuracy: 0.5–2 Contour Interval (for example: if the contour interval is 50 m, accuracy will be 25–100m) Debate about availability of data The USA Freedom of Information Act and the Electronic Freedom of Information Act guarantee access to virtually all GIS data created by the US government. Following the trend of the United States, much of the VMAP data has been released into the public domain. But many countries consider mapping and cartography a state monopoly; for such countries, the VMAP Level 1 data are kept out of the public domain. However, some data may be commercialised by national mapping agencies, sometimes as a consequence of privatisation. Various public groups are making efforts to have all VMAP1 data moved to the public domain in accordance with FOIA. Further steps have been taken by the Free World Maps Foundation and others to have the data licensed under the GNU General Public License, while remaining copyrighted, as an alternative to the public domain.
This is an ongoing debate (as of 2006). Copyrights VMAP0 The U.S. government has released the data into the public domain, with the following conditions imposed (quotation from VMAP0 Copyright Statement): The VMAP0 download page states: However, all is not quite what it seems. There is a 'readme1.txt' file located in the v0eur, v0sas, and v0soa directories. This file states that the Boundaries coverage and the Reference Library are copyrighted by the Environmental Systems Research Institute. If these copyrighted layers are not used, there is no copyright violation. Tools to read and convert VMAP data VPFView (V2.1), developed by NIMA, is available from NGA or USGS (as part of the NIMAMUSE package); this tool can render simple plots and export GIS data to other GIS file formats "OGR with OGDI driver": this free software tool can convert the VMAP format to standard GIS file formats such as ESRI Shapefile, PostGIS, etc. History 1991–1993: The National Imagery and Mapping Agency (NIMA) develops the Digital Chart of the World (DCW) for the US Defense Mapping Agency (DMA) with themes including Political/Ocean, Populated Places, Railroads, Roads, Utilities, Drainage, Hypsography, Land Cover, Ocean Features, Physiography, Aeronautical, Cultural Landmarks, Transportation Structure and Vegetation. One of the sources for the data was the Operational Navigation Chart that compiles military mapping from Australia, Canada, the United Kingdom, and the United States. VMAP (level 0) is a slightly more detailed reiteration of the DCW. VMAP (level 1) has much higher resolution data. 2004: The National Imagery and Mapping Agency (NIMA) is renamed the National Geospatial-Intelligence Agency (NGA), which includes other mapping agencies such as the Defense Mapping Agency (DMA), the Central Imagery Office (CIO) and the Defense Dissemination Program Office (DDPO). All VMAP data are subsequently distributed through the NGA. See also Natural Earth, free, high-quality global map data Digital Chart of the World GSHHS, a high-resolution shoreline data set GADM, a high-resolution database of country administrative areas Digital Elevation Model GIS DIGEST VRF and VPF are related and compatible with a few exceptions Vector tiles References Processing of VMAP0 data with free GIS software: SRTM and VMAP0 data in OGR and GRASS. GRASS Newsletter, 3:2-6, 2005 (M. Neteler) External links Vector Map description at the National Geospatial-Intelligence Agency free java viewer fast java viewer from Idevio SVG Maps converted from VMAP VMAP0 data in ESRI shapefile format, ready to download VMAP1 data in ESRI shapefile format, ready to download VMAP0 layer documentation Geographic information systems
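The OGR/OGDI route mentioned above can be scripted. The following is a minimal sketch assuming a locally unpacked VMAP0 data set and the GDAL/OGR Python bindings; the gltp connection path, the layer name, and the output file name are illustrative placeholders, not values taken from the article.

```python
# Illustrative sketch: reading a VMAP0 VPF dataset via the OGR/OGDI driver
# and exporting one layer to an ESRI Shapefile. Paths and layer names below
# are assumptions; adjust them to match your local VMAP0 extract.
from osgeo import ogr

ogr.UseExceptions()

# OGDI "gltp" connection string pointing at an unpacked VMAP0 data set.
src = ogr.Open("gltp:/vrf/data/v0eur/vmaplv0/eurnasia")

# List the layers the driver exposes (coverages such as boundaries or drainage).
for i in range(src.GetLayerCount()):
    print(src.GetLayer(i).GetName())

# Copy a single layer out to an ESRI Shapefile.
drv = ogr.GetDriverByName("ESRI Shapefile")
dst = drv.CreateDataSource("polbnda.shp")
dst.CopyLayer(src.GetLayerByName("polbnda@bnd(*)_area"), "polbnda")
dst = None  # close the datasource to flush it to disk
```

The same conversion can be done with the ogr2ogr command-line tool that ships with GDAL; the Python form is shown here only to keep the example self-contained.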
Vector Map
[ "Technology" ]
1,398
[ "Information systems", "Geographic information systems" ]
4,142,023
https://en.wikipedia.org/wiki/Bongkrek%20acid
Bongkrek acid (also known as bongkrekic acid) is a respiratory toxin produced in fermented coconut or corn contaminated by the bacterium Burkholderia gladioli pathovar cocovenenans. It is a highly toxic, heat-stable, colorless, odorless, and highly unsaturated tricarboxylic acid that inhibits the ADP/ATP translocase, also called the mitochondrial ADP/ATP carrier, preventing ATP from leaving the mitochondria to provide metabolic energy to the rest of the cell. Bongkrek acid, when consumed through contaminated foods, mainly targets the liver, brain, and kidneys, with symptoms that include vomiting, diarrhea, urinary retention, abdominal pain, and excessive sweating. Most outbreaks occur in Indonesia and China, where fermented coconut- and corn-based foods are consumed. Discovery and history In 1895, there was a food-poisoning outbreak in Java, Indonesia. The outbreak was caused by the consumption of a traditional Indonesian food called tempe bongkrek. During this time, tempe bongkrek served as a main source of protein in Java due to its low cost. Tempe bongkrek is made by pressing the coconut meat left over from coconut milk production into a cake, which is then fermented with Rhizopus oligosporus mold. The first outbreak of bongkrek poisoning from tempe bongkrek was recorded by Dutch researchers; however, no further research into the cause of the poisoning was conducted at the time. During the 1930s, Indonesia went through an economic depression, which led some people to make tempe bongkrek themselves instead of buying it from well-trained producers. As a result, poisonings occurred frequently, reaching 10 to 12 a year. Dutch scientists W. K. Mertens and A. G. van Veen from the Eijkman Institute in Jakarta began investigating the cause of the poisoning in the early 1930s. They successfully identified the source of the poisoning as a bacterium called Burkholderia cocovenenans (formerly known as Pseudomonas cocovenenans). This bacterium produces a poisonous substance called bongkrek acid. B. cocovenenans is commonly found in plants and soil and can contaminate coconut and corn, leading to the synthesis of bongkrek acid during the fermentation of such foods. Since 1975, consumption of contaminated tempe bongkrek has caused more than 3000 cases of bongkrek acid poisoning. In Indonesia, the overall reported mortality rate has been 60%. Due to the severity of the situation, the production of tempe bongkrek has been banned since 1988. Synthesis There have been multiple attempts to synthesize bongkrek acid using different numbers of fragments since the first total synthesis of the acid by E. J. Corey in 1984. One notable synthesis was carried out by Shindo's group at Kyushu University in 2009. Unlike other attempts, such as the one from Lev's group, Shindo's group used three fragments to synthesize bongkrek acid. Fragments 1, 2, and 3 were each synthesized individually. Once the fragments were prepared, fragments 2 and 3 were first coupled through Julia olefination in the presence of KHMDS. The resulting intermediate, abbreviated as A in the scheme below, was then coupled with fragment 1 through Suzuki coupling. After intermediate B was formed, bongkrek acid was finally obtained by oxidation of the primary alcohol with Jones reagent and acid deprotection of the methoxymethyl ester.
The first total synthesis of bongkrek acid by E. J. Corey required 32 steps; Shindo successfully reduced this to a total of 18 steps by efficiently using Julia olefination and Suzuki coupling, and improved the yield by 6.4%. Mechanism of action The adenine nucleotide translocator, abbreviated as ANT, provides ATP from the mitochondria to the cytosol in exchange for cytosolic ADP. Bongkrek acid interrupts this transport of cytosolic ADP across the inner mitochondrial membrane by inhibiting the mitochondrial ANT. Within the inner mitochondrial membrane, the ANT forms the internal membrane channel of the mitochondrial permeability transition pore, known as the MPTP. Bongkrek acid permeates this membrane and binds to the surface of ANT, inhibiting ANT's translocation. Once bongkrek acid binds to the surface of ANT, the acid forms hydrogen bonding interactions with ANT protein residues. The hydrogen bonding interactions are mainly formed with the oxygens of the carboxylic acid groups of bongkrek acid. The most prominent contribution to the hydrogen bonding comes from the interaction with the side-chain amino group of Arg-197. Another prominent contribution to the binding of bongkrek acid to ANT is the electrostatic interaction between the acid and Lys-30 of ANT. As a result, the hydrogen bonding interactions and the salt bridge position bongkrek acid in the center of the ANT active site, inhibiting the action of the translocase. Mitochondrial synthesis of ATP requires ADP transport from the cytosol into the mitochondrial matrix through the ANT, which therefore plays a critical role in supplying the cell with energy. ADP/ATP exchange heavily depends on the transition between two distinct conformational states of ANT: the cytosolic state (c-state) and the matrix state (m-state). In the c-state, the active site of ANT faces the cytosol, where it binds cytosolic ADP; in the m-state, the active site faces the mitochondrial matrix, where it releases the ADP and binds newly synthesized ATP. The interaction between the acid and the ANT causes a conformational change: bongkrek acid locks ANT in the m-state. The structure of the bongkrek acid–ANT complex shows six transmembrane alpha helices covering the active site of the ANT, preventing the binding of adenosine nucleotides. This means ANT cannot receive ADP from the cytosol, ultimately preventing the synthesis of ATP. Symptoms of poisoning and treatments After consumption of bongkrek acid-contaminated corn-based or coconut-based foods, the latency period is expected to be between 1 and 10 hours. The symptoms of bongkrek acid poisoning are similar to those of other mitochondrial toxins. The common symptoms of bongkrek acid poisoning are dizziness, somnolence, excessive sweating, palpitations, abdominal pain, vomiting, diarrhea, hematochezia, hematuria, and urinary retention. Death usually occurs 1 to 20 hours after the onset of the symptoms of bongkrek acid poisoning. Another common symptom of bongkrek acid poisoning is limb soreness. In the first reported bongkrek acid poisoning case in Africa, 12 of 17 people were reported to have limb soreness as one of their main symptoms. A fatal dose for humans can be as low as 1 to 1.5 mg, and another source states that the oral LD50 is 3.16 mg per kg of body weight.
Due to a lack of studies on the toxicokinetics of bongkrek acid, there are no specific treatments or antidotes for it. The commonly used protocol for treating bongkrek acid poisoning is to remove toxin that has not yet bound to the adenine nucleotide translocase (ANT) and to provide treatment specific to the symptoms the patient is experiencing. Because there are no specific treatments or antidotes for the toxin, timing is critical for reversing the severe physiological effects. References Toxicology Carboxylic acids Alkene derivatives Ethers ADP/ATP translocase inhibitors Bacterial toxins
Bongkrek acid
[ "Chemistry", "Environmental_science" ]
1,719
[ "Toxicology", "Carboxylic acids", "Functional groups", "Organic compounds", "Ethers" ]
4,142,132
https://en.wikipedia.org/wiki/Rubin%20vase
The Rubin vase (sometimes known as Rubin's vase, the Rubin face or the figure–ground vase) is a famous example of ambiguous or bi-stable (i.e., reversing) two-dimensional forms developed around 1915 by the Danish psychologist Edgar Rubin. The depicted version of Rubin's vase can be seen as the black profiles of two people looking towards each other or as a white vase, but not both. Another example of a bistable figure Rubin included in his Danish-language, two-volume book was the Maltese cross. Rubin presented in his doctoral thesis (1915) a detailed description of the visual figure-ground relationship, an outgrowth of the visual perception and memory work in the laboratory of his mentor, Georg Elias Müller. One element of Rubin's research may be summarized in the fundamental principle, "When two fields have a common border, and one is seen as figure and the other as ground, the immediate perceptual experience is characterized by a shaping effect which emerges from the common border of the fields and which operates only on one field or operates more strongly on one than on the other". The effect The visual effect generally presents the viewer with two shape interpretations, each of which is consistent with the retinal image, but only one of which can be maintained at a given moment. This is because the bounding contour will be seen as belonging to the figure shape, which appears interposed against a formless background. If the latter region is interpreted instead as the figure, then the same bounding contour will be seen as belonging to it. Explanation These types of stimuli are both interesting and useful because they provide an excellent and intuitive demonstration of the figure–ground distinction the brain makes during visual perception. Rubin's figure–ground distinction, since it involved higher-level cognitive pattern matching, in which the overall picture determines its mental interpretation, rather than the net effect of the individual pieces, influenced the Gestalt psychologists, who discovered many similar percepts themselves. Normally the brain classifies images by which object surrounds which – establishing depth and relationships. If one object surrounds another object, the surrounded object is seen as figure, and the presumably further away (and hence background) object is the ground, and vice versa. This makes sense, since if a piece of fruit is lying on the ground, one would want to pay attention to the "figure" and not the "ground". However, when the contours are not so unequal, ambiguity starts to creep into the previously simple inequality, and the brain must begin "shaping" what it sees; it can be shown that this shaping overrides and is at a higher level than feature recognition processes that pull together the face and the vase images – one can think of the lower levels putting together distinct regions of the picture (each region of which makes sense in isolation), but when the brain tries to make sense of it as a whole, contradictions ensue, and patterns must be discarded. Construction The distinction is exploited by devising an ambiguous picture, whose contours match seamlessly the contours of another picture (sometimes the same picture; a practice M. C. Escher used on occasion). The picture should be "flat" and have little (if any) texture to it. The stereotypical example has a vase in the center, and a face matching its contour (since it is symmetrical, there is a matching face on the other side).
See also Pareidolia References Further reading A Psychology of Picture Perception, John M. Kennedy. 1974, Jossey-Bass Publishers, The art and science of visual illusions, Nicholas Wade. 1982 Routledge & Kegan Paul Ltd. Visual Space Perception, William H. Ittelson. 1969, Springer Publishing Company, LOCCCN 60-15818 "Vase or face? A neural correlate of shape-selective grouping processes in the human brain." Uri Hasson, Talma Hendler, Dafna Ben Bashat, Rafael Malach. Journal of Cognitive Neuroscience, Vol 13(6), Aug 2001. pp. 744–753. ISSN 0898-929X (Print) External links Rubin's People Inside the Wall People trapped inside a Wall Illusionworks.com article Rubin has invented nothing The Rubin's vase before Rubin (fr) Optical illusions
Rubin vase
[ "Physics" ]
892
[ "Optical phenomena", "Physical phenomena", "Optical illusions" ]
4,142,269
https://en.wikipedia.org/wiki/Potassium%20hydrogen%20phthalate
Potassium hydrogen phthalate, often called simply KHP, is an acidic salt compound. It is an ionic solid, the monopotassium salt of phthalic acid, which forms a white powder or colorless crystals and dissolves to give a colorless solution. KHP is slightly acidic, and it is often used as a primary standard for acid–base titrations because it is solid and air-stable, making it easy to weigh accurately. It is not hygroscopic. It is also used as a primary standard for calibrating pH meters because, besides the properties just mentioned, its pH in solution is very stable. It also serves as a thermal standard in thermogravimetric analysis. KHP dissociates completely in water, giving the potassium cation (K+) and the hydrogen phthalate anion (HP− or Hphthalate−): KHP → K+ + HP− (in H2O). Then, acting as a weak acid, hydrogen phthalate reacts reversibly with water to give hydronium (H3O+) and phthalate (P2−) ions: HP− + H2O ⇌ P2− + H3O+. KHP can be used as a buffering agent in combination with hydrochloric acid (HCl) or sodium hydroxide (NaOH). The buffering region depends on the pKa and is typically within ±1.0 pH unit of the pKa. The pKa of KHP is 5.4, so its pH buffering range would be 4.4 to 6.4; however, due to the presence of the second acidic group that bears the potassium ion, the first pKa also contributes to the buffering range well below pH 4.0, which is why KHP is a good choice for use as a reference standard for pH 4.00. KHP is also a useful standard for total organic carbon (TOC) testing. Most TOC analyzers are based on the oxidation of organics to carbon dioxide and water, with subsequent quantitation of the carbon dioxide. Many TOC analysts suggest testing their instruments with two standards: one typically easy for the instrument to oxidize (KHP), and one more difficult to oxidize. For the latter, benzoquinone is suggested. References Carboxylic acids Phthalates Potassium compounds
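As a concrete illustration of the primary-standard use described above, the following is a minimal sketch of standardizing a sodium hydroxide solution against a weighed KHP sample. The molar mass is the standard value for KHC8H4O4; the sample mass and titrant volume are invented example inputs, not figures from the article.

```python
# Illustrative sketch: standardizing NaOH against KHP (1:1 reaction at the
# endpoint). The numbers in the example call are made-up inputs.

KHP_MOLAR_MASS = 204.22  # g/mol for KHC8H4O4

def naoh_concentration(khp_mass_g: float, naoh_volume_ml: float) -> float:
    """Moles of NaOH delivered at the endpoint equal moles of KHP weighed out."""
    moles_khp = khp_mass_g / KHP_MOLAR_MASS
    return moles_khp / (naoh_volume_ml / 1000.0)  # mol/L

if __name__ == "__main__":
    # e.g. 0.5100 g of KHP titrated to the endpoint with 24.85 mL of NaOH
    print(f"c(NaOH) = {naoh_concentration(0.5100, 24.85):.4f} mol/L")
```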
Potassium hydrogen phthalate
[ "Chemistry" ]
497
[ "Carboxylic acids", "Functional groups" ]
4,142,398
https://en.wikipedia.org/wiki/Interstellar%20Boundary%20Explorer
Interstellar Boundary Explorer (IBEX or Explorer 91 or SMEX-10) is a NASA satellite in Earth orbit that uses energetic neutral atoms (ENAs) to image the interaction region between the Solar System and interstellar space. The mission is part of NASA's Small Explorer program and was launched with a Pegasus-XL launch vehicle on 19 October 2008. The mission is led by Dr. David J. McComas (IBEX principal investigator), formerly of the Southwest Research Institute (SwRI) and now with Princeton University. The Los Alamos National Laboratory and the Lockheed Martin Advanced Technology Center built the IBEX-Hi and IBEX-Lo sensors respectively. The Orbital Sciences Corporation manufactured the satellite bus and was the location for spacecraft environmental testing. The nominal mission baseline duration was two years after commissioning, and the prime mission ended in early 2011. The spacecraft and sensors are still healthy and the mission is continuing in its extended phase. IBEX is in a Sun-oriented spin-stabilized orbit around the Earth. In June 2011, IBEX was shifted to a new, more efficient, much more stable orbit. It does not come as close to the Moon in the new orbit, and expends less fuel to maintain its position. The spacecraft is equipped with two large aperture imagers which detect ENAs with energies from 10 eV to 2 keV (IBEX-Lo) and 300 eV to 6 keV (IBEX-Hi). The mission was originally planned as a 24-month operations period. The mission has since been extended, with the spacecraft still in operation. Spacecraft The spacecraft is built on an octagonal base, roughly high and across. The dry mass is and the instrument payload comprises . The fully fueled mass is , and the entire flight system launch mass, including the ATK Star 27 solid rocket motor, is . The spacecraft itself has a hydrazine attitude control system. Power is produced by a solar array with a capability of 116 watts, and nominal power use is 66 W (16 W for the payload). Communications are via two hemispherical antennas with a nominal downlink data rate of 320 kbps and an uplink rate of 2 kbps. Science goal The Interstellar Boundary Explorer (IBEX) mission science goal is to discover the nature of the interactions between the solar wind and the interstellar medium at the edge of the Solar System. IBEX has achieved this goal by generating full sky maps of the intensity (integrated over the line-of-sight) of ENAs in a range of energies every six months. Most of these ENAs are generated in the heliosheath, which is the region of interaction. Mission Launch The IBEX satellite was mated to its Pegasus XL launch vehicle at Vandenberg Air Force Base, California, and the combined vehicle was then suspended below the Lockheed L-1011 Stargazer mother airplane and flown to Kwajalein Atoll in the central Pacific Ocean. Stargazer arrived at Kwajalein Atoll on 12 October 2008. The IBEX satellite was carried into space on 19 October 2008, by the Pegasus XL launch vehicle. The launch vehicle was released from Stargazer, which took off from Kwajalein Atoll, at 17:47:23 UTC. By launching from this site close to the equator, the Pegasus launch vehicle lifted as much as more mass to orbit than it would have with a launch from the Kennedy Space Center in Florida. Mission profile The IBEX satellite initially launched into a highly elliptical transfer orbit with a low perigee and used a solid fuel rocket motor as its final boost stage at apogee in order to raise its perigee greatly and to achieve its desired high-altitude elliptical orbit.
IBEX is in a highly eccentric elliptical terrestrial orbit, which ranges from a perigee of about to an apogee of about . Its original orbit was about — that is, about 80% of the distance to the Moon — which has changed primarily due to an intentional adjustment to prolong the spacecraft's useful life. This very high orbit allows the IBEX satellite to move out of the Earth's magnetosphere when making scientific observations. This extreme altitude is critical due to the amount of charged-particle interference that would occur while taking measurements within the magnetosphere. When within the magnetosphere of the Earth (), the satellite also performs other functions, including telemetry downlinks. Orbit adjusted In June 2011, IBEX shifted to a new orbit that raised its perigee to more than . The new orbit has a period of one third of a lunar month, which, with the correct phasing, avoids taking the spacecraft too close to the Moon, whose gravity can negatively affect IBEX's orbit. The spacecraft now uses less fuel to maintain a stable orbit, increasing its useful lifespan to more than 40 years. Instruments The heliospheric boundary of the Solar System is being imaged by measuring the location and magnitude of charge-exchange collisions occurring in all directions. The satellite's payload consists of two energetic neutral atom (ENA) imagers, IBEX-Hi and IBEX-Lo. Each consists of a collimator that limits its field of view (FoV), a conversion surface to convert neutral hydrogen and oxygen into ions, an electrostatic analyzer (ESA) to suppress ultraviolet light and to select ions of a specific energy range, and a detector to count particles and identify the type of each ion. Each of these sensors is a single-pixel camera with a field of view of roughly 7° × 7°. The IBEX-Hi instrument is recording particle counts in a higher energy band (300 eV to 6 keV) than the IBEX-Lo energy band (10 eV to 2 keV). The scientific payload also includes a Combined Electronics Unit (CEU) that controls the voltages on the collimator and the ESA, and it reads and records data from the particle detectors of each sensor. Communication Compared to other space observatories, IBEX has a low data transfer rate due to the limited requirements of the mission. Data collection IBEX is collecting energetic neutral atom (ENA) emissions that are traveling through the Solar System to Earth and cannot be measured by conventional telescopes. These ENAs are created on the boundary of our Solar System by the interactions between solar wind particles and interstellar medium particles. On average IBEX-Hi detects about 500 particles per day, and IBEX-Lo, less than 100. By 2012, over 100 scientific papers related to IBEX were published, described by the principal investigator as "an incredible scientific harvest". Data availability As the IBEX data are validated, they are made available in a series of data releases on the SwRI IBEX Public Data website. In addition, the data is periodically sent to the NASA Space Physics Data Facility (SPDF), which is the official archive site for IBEX data. SPDF data can be searched at the Heliophysics Data Portal. Science results Initial data revealed a previously unpredicted "very narrow ribbon that is two to three times brighter than anything else in the sky". Initial interpretations suggest that "the interstellar environment has far more influence on structuring the heliosphere than anyone previously believed". It is unknown what is creating the energetic neutral atoms (ENA) ribbon.
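The "period of one third of a lunar month" mentioned above pins down the orbit's size through Kepler's third law. The sketch below is illustrative only: it uses standard textbook constants (Earth's gravitational parameter, the sidereal lunar month) rather than figures from the article, and it estimates the semi-major axis implied by that resonant period.

```python
import math

# Illustrative sketch: semi-major axis implied by an orbital period of
# one third of a sidereal lunar month, via Kepler's third law.
# Constants are standard values, not data taken from the article.

MU_EARTH = 3.986004418e14              # Earth's GM, m^3/s^2
SIDEREAL_MONTH_S = 27.321661 * 86400   # sidereal lunar month, seconds

period = SIDEREAL_MONTH_S / 3          # period of the adjusted IBEX-like orbit
semi_major_axis = (MU_EARTH * period**2 / (4 * math.pi**2)) ** (1 / 3)

print(f"Period: {period / 86400:.2f} days")
print(f"Semi-major axis: {semi_major_axis / 1e3:,.0f} km "
      f"(~{semi_major_axis / 6371e3:.0f} Earth radii)")
```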
The Sun is currently traveling through the Local Interstellar Cloud, and the heliosphere's size and shape are key factors in determining its shielding power from cosmic rays. Should IBEX detect changes in the shape of the ribbon, that could show how the heliosphere is interacting with the Local Fluff. It has also observed ENAs from the Earth's magnetosphere. In October 2010, significant changes were detected in the ribbon after six months, based on the second set of IBEX observations. It went on to detect neutral atoms from outside the Solar System, which were found to differ in composition from the Sun. Surprisingly, IBEX discovered that the heliosphere has no bow shock, and it measured its speed relative to the local interstellar medium (LISM) as , improving on the previous measurement of by Ulysses. Those speeds equate to 25% less pressure on the Sun's heliosphere than previously thought. In July 2013, IBEX results revealed a 4-lobed tail on the Solar System's heliosphere. See also Interstellar Mapping and Acceleration Probe (IMAP), a follow-on mission to IBEX David J. McComas, Principal Investigator of IBEX (Princeton University) References External links IBEX Public Data from IBEX Science Team Heliophysics Data Portal by NASA's Heliophysics Division IBEX Mission Profile by NASA's Solar System Exploration Satellites orbiting Earth Astronomical surveys Explorers Program Spacecraft launched in 2008 Articles containing video clips Spacecraft launched by Pegasus rockets Geospace monitoring satellites
Interstellar Boundary Explorer
[ "Astronomy" ]
1,805
[ "Astronomical surveys", "Astronomical objects", "Works about astronomy" ]
4,142,564
https://en.wikipedia.org/wiki/HTML%20email
HTML email is the use of a subset of HTML to provide formatting and semantic markup capabilities in email that are not available with plain text: Text can be linked without displaying a URL, or breaking long URLs into multiple pieces. Text is wrapped to fit the width of the viewing window, rather than uniformly breaking each line at 78 characters (a limit defined in RFC 5322 that was necessary on older text terminals). It allows in-line inclusion of images and tables, as well as diagrams or mathematical formulae as images, which are otherwise difficult to convey (typically using ASCII art). Adoption Most graphical email clients support HTML email, and many default to it. Many of these clients include both a GUI editor for composing HTML emails and a rendering engine for displaying received HTML emails. Since its conception, a number of people have vocally opposed all HTML email (and even MIME itself), for a variety of reasons. For instance, the ASCII Ribbon Campaign advocated that all email should be sent in ASCII text format. Proponents placed ASCII art in their signature blocks, meant to look like an awareness ribbon, along with a message or link to an advocacy site. The campaign was unsuccessful and was abandoned in 2013. While still considered inappropriate in many newsgroup postings and mailing lists, HTML adoption for personal and business mail has only increased over time. Some of those who strongly opposed it when it first came out now see it as mostly harmless. According to surveys by online marketing companies, adoption of HTML-capable email clients is now nearly universal, with less than 3% reporting that they use text-only clients. The majority of users prefer to receive HTML emails over plain text. Compatibility Email software that complies with RFC 2822 is only required to support plain text, not HTML formatting. Sending HTML formatted emails can therefore lead to problems if the recipient's email client does not support it. In the worst case, the recipient will see the HTML code instead of the intended message. Among those email clients that do support HTML, some do not render it consistently with W3C specifications, and many HTML emails are not compliant either, which may cause rendering or delivery problems. In particular, the <head> tag, which is used to house CSS style rules for an entire HTML document, is not well supported, sometimes stripped entirely, causing in-line style declarations to be the de facto standard, even though in-line style declarations are inefficient and fail to take good advantage of HTML's ability to separate style from content. Although workarounds have been developed, this has caused no shortage of frustration among newsletter developers, spawning the grassroots Email Standards Project, which grades email clients on their rendering of an Acid test, inspired by those of the Web Standards Project, and lobbies developers to improve their products. To persuade Google to improve rendering in Gmail, for instance, they published a video montage of grimacing web developers, resulting in attention from an employee. Style Some senders may excessively rely upon large, colorful, or distracting fonts, making messages more difficult to read. For those especially bothered by this formatting, some user agents make it possible for the reader to partially override the formatting (for instance, Mozilla Thunderbird allows specifying a minimum font size); however, these capabilities are not globally available.
Further, the difference in appearance between the sender's and the reader's text can help to differentiate the author of each section, improving readability. Multi-part formats Many email servers are configured to automatically generate a plain text version of a message and send it along with the HTML version, to ensure that it can be read even by text-only email clients, using the Content-Type: multipart/alternative, as specified in RFC 1521. The message itself is of type multipart/alternative, and contains two parts, the first of type text/plain, which is read by text-only clients, and the second with text/html, which is read by HTML-capable clients. The plain text version may be missing important formatting information, however. (For example, a mathematical equation may lose a superscript and take on an entirely new meaning.) Many mailing lists deliberately block HTML email, either stripping out the HTML part to just leave the plain text part or rejecting the entire message. The order of the parts is significant. RFC 1341 states that: In general, user agents that compose multipart/alternative entities should place the body parts in increasing order of preference, that is, with the preferred format last. For multipart emails with HTML and plain-text versions, that means listing the plain-text version first and the HTML version after it; otherwise the client may default to showing the plain-text version even though an HTML version is available (a minimal construction sketch follows below). Message size HTML email is larger than plain text. Even if no special formatting is used, there will be the overhead from the tags used in a minimal HTML document, and if formatting is heavily used it may be much higher. Multi-part messages, with duplicate copies of the same content in different formats, increase the size even further. The plain text section of a multi-part message can be retrieved by itself, though, using IMAP's FETCH command. Although the difference in download time between plain text and mixed message mail (which can be a factor of ten or more) was of concern in the 1990s (when most users were accessing email servers through slow modems), on a modern connection the difference is negligible for most people, especially when compared to images, music files, or other common attachments. Security vulnerabilities HTML allows a link to be hidden, but shown as any arbitrary text, such as a user-friendly target name. This can be used in phishing attacks, in which users are fooled into accessing a counterfeit web site and revealing personal details (like bank account numbers) to a scammer. If an email contains inline content from an external server, such as a picture, retrieving it requires a request to that external server, which identifies where the picture will be displayed and reveals other information about the recipient. Web bugs are specially created images (usually unique for each individual email) intended to track that email and let the creator know that the email has been opened. Among other things, that reveals that an email address is real, and can be targeted in the future.
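The multipart/alternative ordering described above (plain text first, HTML last) can be produced in a few lines of code. This is a minimal sketch using Python's standard email library; the addresses and body text are placeholder values, not anything drawn from the article.

```python
from email.message import EmailMessage

# Build a multipart/alternative message: the plain-text part is added first
# (least preferred), then the HTML part (most preferred), matching the
# ordering recommended by RFC 1341 for alternative parts.
msg = EmailMessage()
msg["Subject"] = "Example"
msg["From"] = "sender@example.com"       # placeholder addresses
msg["To"] = "recipient@example.com"

msg.set_content("Plain-text version, for text-only clients.")
msg.add_alternative(
    "<html><body><p>HTML version, shown by HTML-capable clients.</p></body></html>",
    subtype="html",
)

print(msg.get_content_type())                                # multipart/alternative
print([part.get_content_type() for part in msg.iter_parts()])
# ['text/plain', 'text/html'] -- plain text first, HTML last
```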
Some phishing attacks rely on particular features of HTML: Brand impersonation with procedurally-generated graphics (such graphics can look like a trademarked image but evade security scanning because there is no file) Text containing invisible Unicode characters or with a zero-height font to confuse security scanning Victim-specific URI, where a malicious link encodes special information which allows a counterfeit site to be personalized (appearing as the victim's account) so as to be more convincing. Displaying HTML content frequently involves the client program calling on special routines to parse and render the HTML-coded text; deliberately mis-coded content can then exploit mistakes in those routines to create security violations. Requests for special fonts, etc., can also impact system resources. During periods of increased network threats, the US Department of Defense has converted users' incoming HTML email to text email. The multipart type is intended to show the same content in different ways, but this is sometimes abused; some email spam takes advantage of the format to trick spam filters into believing that the message is legitimate. They do this by including innocuous content in the text part of the message and putting the spam in the HTML part (that which is displayed to the user). Most email spam is sent in HTML for these reasons, so spam filters sometimes give higher spam scores to HTML messages. In 2018, a vulnerability (EFAIL) in the HTML processing of many common email clients was disclosed, in which decrypted text of PGP- or S/MIME-encrypted email parts can be caused to be sent as part of a request to an external image address if the external image is requested. This vulnerability was present in Thunderbird, macOS Mail, Outlook, and later, Gmail and Apple Mail. See also Enriched text – an HTML-like system for email using MIME Email production References External links https://www.caniemail.com/ Email Internet terminology HTML
HTML email
[ "Technology" ]
1,725
[ "Computing terminology", "Internet terminology" ]
4,142,733
https://en.wikipedia.org/wiki/NinJo
NinJo is a meteorological software system. It is a community project of the German Weather Service, the Meteorological Service of Canada, the Danish Meteorological Institute, MeteoSwiss, and the German Bundeswehr. It consists of modules for monitoring weather events, editing point forecasts and viewing meteorological data. An additional batch component is able to render graphical products off-line; these may, for example, be visualized by a web service. Essentially it is a client–server system implemented fully in the programming language Java. NinJo was initiated by the German Weather Service (Deutscher Wetterdienst, DWD) and the German army (Bundeswehr Geo Information Service, BGIS) in 2000. Since 2006, NinJo has been used operationally. NinJo is licensed for weather services, organisations and universities not taking part in the development consortium. Description NinJo is a client-server system with interactive displays on the client side fed by batch applications implemented on the server. The system is programmed entirely in Java and can easily be extended by further layers and applications according to user-specific requirements. The workstation fed by the servers can be installed on different operating systems (e.g. Unix, Linux and Microsoft Windows) without porting the source code to each operating system. The NinJo server imports a variety of meteorological data, such as METAR reports, weather radar and weather satellite images and numerical weather prediction (NWP) outputs, through dedicated file handling programs, and makes them accessible to the client displays. The client is a NinJo workstation which presents data in separate layers. Users can add as many layers to a NinJo scene as they want, with all layers showing time-synchronised data for the same map area. The layers show geo-referenced data, not fixed images, so the screen display is always rendered directly from the data, and interactive probing with the mouse gives the values of the original data, not values read off a scale. The data are stored in native format, rather than in a common internal format, avoiding degradation when zooming and always keeping the full detail and resolution of the original data. The layers are independent, can be added and removed from scenes separately, and can be set visible or invisible. Layers can be arranged in any order the users want, enabling them to arrange all data types according to their specific needs. Scenes can be set for: Visualisation of weather products Monitoring the state of data input Production of weather warnings Interactive editing of texts Configuration of NinJo batch products Different tools are available for enhancing or interrogating the displays. For example, it is possible to do vertical cross-sections in a layered scene, extracting the vertical structure of NWP or radar data. References External links Science software Graphic software in meteorology Weather prediction Bundeswehr Meteorological Service of Canada
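The layer model described above (independent, reorderable, toggleable layers rendered directly from geo-referenced data for a common map area and time) can be illustrated with a small conceptual sketch. NinJo itself is proprietary Java; the classes and method names below are invented purely for illustration and are not NinJo's actual API.

```python
# Conceptual sketch only: a toy layer stack mirroring the display model
# described above. All names are invented; this is not the NinJo API.
from dataclasses import dataclass, field


@dataclass
class Layer:
    name: str           # e.g. "radar", "satellite", "METAR"
    data: dict          # native geo-referenced data, kept unconverted
    visible: bool = True

    def render(self, map_area: str, valid_time: str) -> str:
        # A real layer would draw from self.data for the given area and time;
        # here we only report what would be drawn.
        return f"{self.name} @ {valid_time} over {map_area}"


@dataclass
class Scene:
    map_area: str
    layers: list[Layer] = field(default_factory=list)

    def add(self, layer: Layer) -> None:
        self.layers.append(layer)   # drawn in list order, last on top

    def render(self, valid_time: str) -> list[str]:
        # All visible layers are rendered time-synchronised for one map area.
        return [layer.render(self.map_area, valid_time)
                for layer in self.layers if layer.visible]


scene = Scene("Central Europe")
scene.add(Layer("satellite", data={}))
scene.add(Layer("radar", data={}))
scene.layers[0].visible = False     # toggle a layer off without removing it
print(scene.render("2024-01-01T12:00Z"))
```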
NinJo
[ "Physics" ]
583
[ "Weather", "Weather prediction", "Physical phenomena" ]
4,142,907
https://en.wikipedia.org/wiki/Cokin
Cokin is a French manufacturer of optical filters for photography. The system allows the use of filters such as rectangular graduated neutral-density filters, which are versatile in use. History Cokin are particularly noted for their "Creative Filter System". It was invented by photographer Jean Coquin and introduced in 1978. Based primarily around square filters, these require a holder which is attached to the lens via a simple adapter ring of the appropriate size. Unlike screw-thread circular filters, which are each tied to lenses of a specific diameter, those in the system can be used with any lens, provided they are large enough to cover it sufficiently. (Only the adapter ring may need changing). Production The system includes a wide range of filters including color correction, plain and coloured graduated filters, diffraction, diffusion and polarizers. The material is a polymer, CR-39, sometimes advertised as "organic glass". Cokin produce various differently-sized versions of the Creative Filter System. The smallest is "A" ("Amateur", 67mm wide). The larger "P" ("Professional", 84mm wide) system covers cases where "A" filters are too small to cover the lens (or would cause problems at wider angles). The still-larger "X-Pro" filters are 130mm wide. The "A" and "P" sizes in particular are de facto standards, with many other manufacturers producing compatible filters and holders. Cokin also produce a system for 100mm-wide filters which they refer to as "Z-Pro". "X-Pro" and "Z-Pro" are designed for larger cameras. References External links Cokin UK website Optical filters French companies established in 1978 Photography equipment manufacturers of France French brands
Cokin
[ "Chemistry" ]
352
[ "Optical filters", "Filters" ]
4,142,944
https://en.wikipedia.org/wiki/Fermat%27s%20theorem%20%28stationary%20points%29
In mathematics, Fermat's theorem (also known as the interior extremum theorem) is a method to find local maxima and minima of differentiable functions on open sets by showing that every local extremum of the function is a stationary point (the function's derivative is zero at that point). It belongs to the mathematical field of real analysis and is named after French mathematician Pierre de Fermat. By using Fermat's theorem, the potential extrema of a function f, with derivative f′, are found by solving an equation involving f′. Fermat's theorem gives only a necessary condition for extreme function values, as some stationary points are inflection points (not a maximum or minimum). The function's second derivative, if it exists, can sometimes be used to determine whether a stationary point is a maximum or minimum. Statement One way to state Fermat's theorem is that, if a function has a local extremum at some point and is differentiable there, then the function's derivative at that point must be zero. In precise mathematical language: Let f : (a, b) → R be a function and suppose that x0 in (a, b) is a point where f has a local extremum. If f is differentiable at x0, then f′(x0) = 0. Another way to understand the theorem is via the contrapositive statement: if the derivative of a function at any point is not zero, then there is not a local extremum at that point. Formally: If f is differentiable at x0, and f′(x0) ≠ 0, then x0 is not a local extremum of f. Corollary The global extrema of a function f on a domain A occur only at boundaries, non-differentiable points, and stationary points. If x0 is a global extremum of f, then one of the following is true: boundary: x0 is in the boundary of A; non-differentiable: f is not differentiable at x0; stationary point: x0 is a stationary point of f. Extension In higher dimensions, exactly the same statement holds; however, the proof is slightly more complicated. The complication is that in 1 dimension, one can either move left or right from a point, while in higher dimensions, one can move in many directions. Thus, if the derivative does not vanish, one must argue that there is some direction in which the function increases – and thus in the opposite direction the function decreases. This is the only change to the proof or the analysis. The statement can also be extended to differentiable manifolds. If f is a differentiable function on a manifold M, then its local extrema must be critical points of f, in particular points where the exterior derivative df is zero. Applications Fermat's theorem is central to the calculus method of determining maxima and minima: in one dimension, one can find extrema by simply computing the stationary points (by computing the zeros of the derivative), the non-differentiable points, and the boundary points, and then investigating this set to determine the extrema. One can do this either by evaluating the function at each point and taking the maximum, or by analyzing the derivatives further, using the first derivative test, the second derivative test, or the higher-order derivative test.
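A short worked example, not part of the original article, illustrating the procedure just described (compare stationary points with boundary points):

```latex
% Worked example (illustrative; not from the original article).
% Find the extrema of f on the closed interval [-2, 2].
\[
  f(x) = x^{3} - 3x, \qquad f'(x) = 3x^{2} - 3 = 3(x - 1)(x + 1).
\]
% By Fermat's theorem, interior extrema can occur only where f'(x) = 0,
% i.e. at the stationary points x = -1 and x = 1; the boundary points
% x = -2 and x = 2 must be checked separately.
\[
  f(-2) = -2, \quad f(-1) = 2, \quad f(1) = -2, \quad f(2) = 2.
\]
% Comparing the candidates: the maximum value 2 is attained at x = -1 and
% x = 2; the minimum value -2 is attained at x = -2 and x = 1.
```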
Intuitive argument Intuitively, a differentiable function is approximated by its derivative – a differentiable function behaves infinitesimally like a linear function, or more precisely, f(x0 + h) ≈ f(x0) + f′(x0)h. Thus, from the perspective that "if f is differentiable and has non-vanishing derivative at x0, then it does not attain an extremum at x0," the intuition is that if the derivative at x0 is positive, the function is increasing near x0, while if the derivative is negative, the function is decreasing near x0. In both cases, it cannot attain a maximum or minimum, because its value is changing. It can only attain a maximum or minimum if it "stops" – if the derivative vanishes (or if it is not differentiable, or if one runs into the boundary and cannot continue). However, making "behaves like a linear function" precise requires careful analytic proof. More precisely, the intuition can be stated as: if the derivative is positive, there is some point to the right of x0 where f is greater, and some point to the left of x0 where f is less, and thus f attains neither a maximum nor a minimum at x0. Conversely, if the derivative is negative, there is a point to the right which is lesser, and a point to the left which is greater. Stated this way, the proof is just translating this into equations and verifying "how much greater or less". The intuition is based on the behavior of polynomial functions. Assume that the function f has a maximum at x0, the reasoning being similar for a function minimum. If x0 is a local maximum then, roughly, there is a (possibly small) neighborhood of x0 such that the function "is increasing before" and "decreasing after" x0. As the derivative is positive for an increasing function and negative for a decreasing function, f′ is positive before and negative after x0. f′ does not skip values (by Darboux's theorem), so it has to be zero at some point between the positive and negative values. The only point in the neighbourhood where it is possible to have f′(x) = 0 is x0. The theorem (and its proof below) is more general than the intuition in that it does not require the function to be differentiable over a neighbourhood around x0. It is sufficient for the function to be differentiable only at the extreme point. Proof Proof 1: Non-vanishing derivative implies not an extremum Suppose that f is differentiable at x0 with derivative K, and assume without loss of generality that K > 0, so the tangent line at x0 has positive slope (is increasing). Then there is a neighborhood of x0 on which the secant lines through x0 all have positive slope, and thus to the right of x0, f is greater, and to the left of x0, f is lesser. The schematic of the proof is: an infinitesimal statement about the derivative (tangent line) at x0 implies a local statement about difference quotients (secant lines) near x0, which implies a local statement about the value of f near x0. Formally, by the definition of the derivative, f′(x0) = K means that the limit of (f(x0 + ε) − f(x0))/ε as ε → 0 is K. In particular, for sufficiently small ε (less than some ε0), the quotient must be at least K/2, by the definition of limit. Thus on the interval (x0 − ε0, x0 + ε0) one has (f(x0 + ε) − f(x0))/ε ≥ K/2; one has replaced the equality in the limit (an infinitesimal statement) with an inequality on a neighborhood (a local statement). Thus, rearranging the equation, if ε > 0 then f(x0 + ε) ≥ f(x0) + (K/2)ε > f(x0), so on the interval to the right, f is greater than f(x0), and if ε < 0 then f(x0 + ε) ≤ f(x0) + (K/2)ε < f(x0), so on the interval to the left, f is less than f(x0). Thus x0 is not a local or global maximum or minimum of f. Proof 2: Extremum implies derivative vanishes Alternatively, one can start by assuming that x0 is a local maximum, and then prove that the derivative is 0.
Suppose that x0 is a local maximum (a similar proof applies if x0 is a local minimum). Then there exists δ > 0 such that (x0 − δ, x0 + δ) ⊂ (a, b) and such that we have f(x0) ≥ f(x) for all x with |x − x0| < δ. Hence for any h in (0, δ) we have (f(x0 + h) − f(x0))/h ≤ 0. Since the limit of this ratio as h gets close to 0 from above exists and is equal to f′(x0), we conclude that f′(x0) ≤ 0. On the other hand, for h in (−δ, 0) we notice that (f(x0 + h) − f(x0))/h ≥ 0, but again the limit as h gets close to 0 from below exists and is equal to f′(x0), so we also have f′(x0) ≥ 0. Hence we conclude that f′(x0) = 0. Cautions A subtle misconception that is often held in the context of Fermat's theorem is to assume that it makes a stronger statement about local behavior than it does. Notably, Fermat's theorem does not say that functions (monotonically) "increase up to" or "decrease down from" a local maximum. This is very similar to the misconception that a limit means "monotonically getting closer to a point". For "well-behaved functions" (which here means continuously differentiable), some intuitions hold, but in general functions may be ill-behaved, as illustrated below. The moral is that derivatives determine infinitesimal behavior, and that continuous derivatives determine local behavior. Continuously differentiable functions If f is continuously differentiable on an open neighborhood of the point x0, then f′(x0) > 0 does mean that f is increasing on a neighborhood of x0, as follows. If f′(x0) = K > 0 and f′ is continuous, then by continuity of the derivative, there is some ε0 > 0 such that f′(x) > K/2 for all x in (x0 − ε0, x0 + ε0). Then f is increasing on this interval, by the mean value theorem: the slope of any secant line is at least K/2, as it equals the slope of some tangent line. However, in the general statement of Fermat's theorem, where one is only given that the derivative at x0 is positive, one can only conclude that secant lines through x0 will have positive slope, for secant lines between x0 and near enough points. Conversely, if the derivative of f at a point is zero (x0 is a stationary point), one cannot in general conclude anything about the local behavior of f – it may increase to one side and decrease to the other (as in x^3), increase to both sides (as in x^2), decrease to both sides (as in −x^2), or behave in more complicated ways, such as oscillating (as in x^2 sin(1/x), as discussed below). One can analyze the infinitesimal behavior via the second derivative test and higher-order derivative test, if the function is differentiable enough, and if the first non-vanishing derivative at x0 is a continuous function, one can then conclude local behavior (i.e., if f^(k)(x0) ≠ 0 is the first non-vanishing derivative, and f^(k) is continuous, so f is k times continuously differentiable), then one can treat f as locally close to a polynomial of degree k, since it behaves approximately as f^(k)(x0)(x − x0)^k / k!, but if the k-th derivative is not continuous, one cannot draw such conclusions, and it may behave rather differently. Pathological functions The function sin(1/x) oscillates increasingly rapidly between −1 and 1 as x approaches 0. Consequently, the function f(x) = (1 + sin(1/x))x^2 oscillates increasingly rapidly between 0 and 2x^2 as x approaches 0. If one extends this function by defining f(0) = 0, then the extended function is continuous and everywhere differentiable (it is differentiable at 0 with derivative 0), but has rather unexpected behavior near 0: in any neighborhood of 0 it attains 0 infinitely many times, but also equals 2x^2 (a positive number) infinitely often. Continuing in this vein, one may define g(x) = (2 + sin(1/x))x^2, which oscillates between x^2 and 3x^2. The function has its local and global minimum at x = 0, but on no neighborhood of 0 is it decreasing down to or increasing up from 0 – it oscillates wildly near 0.
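The oscillating behavior described above can be checked numerically. The sketch below assumes the standard pathological example g(x) = (2 + sin(1/x))x², as reconstructed in the passage above: g has its global minimum at 0, yet its derivative keeps changing sign arbitrarily close to 0, so g is not monotone on any one-sided interval at 0.

```python
import math

# Illustrative numerical check of the pathological example discussed above:
# g(x) = (2 + sin(1/x)) * x**2 with g(0) = 0 has its global minimum at 0,
# but g'(x) keeps changing sign on every interval (0, delta).

def g(x: float) -> float:
    return 0.0 if x == 0 else (2 + math.sin(1 / x)) * x * x

def dg(x: float) -> float:
    # Exact derivative for x != 0; g'(0) = 0 follows from the squeeze theorem.
    return 2 * x * (2 + math.sin(1 / x)) - math.cos(1 / x)

# Sample the derivative at points approaching 0 from the right:
# the sign of g' varies because the -cos(1/x) term dominates.
for k in range(1, 8):
    x = 10.0 ** (-k)
    print(f"x = {x:.0e}   g(x) = {g(x):.3e}   g'(x) = {dg(x):+.3f}")
```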
This pathology can be understood because, while the function g is everywhere differentiable, it is not continuously differentiable: the limit of g′(x) as x → 0 does not exist, so the derivative is not continuous at 0. This reflects the oscillation between increasing and decreasing values as it approaches 0. See also Optimization (mathematics) Maxima and minima Derivative Extreme value arg max Adequality Notes References External links Theorems in real analysis Differential calculus Articles containing proofs Theorems in calculus
Fermat's theorem (stationary points)
[ "Mathematics" ]
2,202
[ "Theorems in mathematical analysis", "Theorems in calculus", "Calculus", "Theorems in real analysis", "Differential calculus", "Articles containing proofs" ]
4,143,738
https://en.wikipedia.org/wiki/International%20Year%20of%20Astronomy
The International Year of Astronomy (IYA2009) was a year-long celebration of astronomy that took place in 2009 to coincide with the 400th anniversary of the first recorded astronomical observations with a telescope by Galileo Galilei and the publication of Johannes Kepler's Astronomia nova in the 17th century. The Year was declared by the 62nd General Assembly of the United Nations. A global scheme, laid out by the International Astronomical Union (IAU), was also endorsed by UNESCO, the UN body responsible for educational, scientific, and cultural matters. The IAU coordinated the International Year of Astronomy in 2009. This initiative was an opportunity for the citizens of Earth to gain a deeper insight into astronomy's role in enriching all human cultures. Moreover, it served as a platform for informing the public about the latest astronomical discoveries while emphasizing the essential role of astronomy in science education. IYA2009 was sponsored by Celestron and Thales Alenia Space. Significance of 1609 On 25 September 1608, Hans Lippershey, a spectacle-maker from Middelburg, traveled to The Hague, the then capital of the Netherlands, to demonstrate to the Dutch government a new device he was trying to patent: a telescope. Although Lippershey was not awarded the patent, Galileo heard of the story and decided to use the "Dutch perspective glass" and point it towards the heavens. In 1609, Galileo Galilei first turned one of his telescopes to the night sky and made astounding discoveries that changed mankind's conception of the world: mountains and craters on the Moon, a plethora of stars invisible to the naked eye, and moons around Jupiter. Astronomical observatories around the world promised to reveal how planets and stars are formed, how galaxies assemble and evolve, and what the structure and shape of our Universe actually are. In the same year, Johannes Kepler published his work Astronomia nova, in which he described the fundamental laws of planetary motions. However, Galileo was not the first to observe the Moon through a telescope and make a drawing of it. Thomas Harriot observed and drew the Moon some months before Galileo. "It's all about publicity. Galileo was extremely good at irritating people and also using creative writing to communicate what he was learning in a way that made people think," says Pamela Gay in an interview with Skepticality in 2009. Intended purpose Vision The vision of IYA2009 was to help people rediscover their place in the Universe through the sky, and thereby engage a personal sense of wonder and discovery. IYA2009 activities took place locally, nationally, regionally and internationally. National Nodes were formed in each country to prepare activities for 2009. These nodes established collaborations between professional and amateur astronomers, science centres and science communicators. More than 100 countries were initially involved, and well over 140 eventually participated. To help coordinate this huge global programme and to provide an important resource for the participating countries, the IAU established a central Secretariat and the IYA2009 website as the principal IYA2009 resource for public, professionals and media alike. Aims Astronomy, perhaps the oldest science in history, has played an important role in most, if not all, cultures over the ages.
The International Year of Astronomy 2009 (IYA2009) was intended to be a global celebration of astronomy and its contributions to society and culture, stimulating worldwide interest not only in astronomy, but in science in general, with a particular slant towards young people. The IYA2009 marked the monumental leap forward that followed Galileo's first use of the telescope for astronomical observations, and portrayed astronomy as a peaceful global scientific endeavour that unites amateur and professional astronomers in an international and multicultural family that works together to find answers to some of the most fundamental questions that humankind has ever asked. The aim of the Year was to stimulate worldwide interest in astronomy and science under the central theme "The Universe, Yours to Discover." Several committees were formed to oversee the vast majority of IYA2009 activities ("sidewalk astronomy" events in planetariums and public observatories), which spanned local, regional and national levels. These committees were collaborations between professional and amateur astronomers, science centres and science communicators. Individual countries undertook their own initiatives and assessed their own national needs, while the IAU acted as the event's coordinator and catalyst on a global scale. The IAU plan was to liaise with, and involve, as many as possible of the ongoing outreach and education efforts throughout the world, including those organized by amateur astronomers. Goals The major goals of IYA2009 were to: Increase scientific awareness; Promote widespread access to new knowledge and observing experiences; Empower astronomical communities in developing countries; Support and improve formal and informal science education; Provide a modern image of science and scientists; Facilitate new networks and strengthen existing ones; Improve the gender-balanced representation of scientists at all levels and promote greater involvement by underrepresented minorities in scientific and engineering careers; Facilitate the preservation and protection of the world's cultural and natural heritage of dark skies in places such as urban oases, national parks and astronomical sites. As part of the scheme, IYA2009 helped less-well-established organizations from the developing world to become involved with larger organizations and deliver their contributions, linked via a huge global network. This initiative also aimed at reaching economically disadvantaged children across the globe and enhancing their understanding of the world. The Secretariat The central hub of the IAU activities for the IYA2009 was the IYA2009 Secretariat. This was established to coordinate activities during the planning, execution and evaluation of the Year. The Secretariat was based in the European Southern Observatory headquarters in the town of Garching near Munich, Germany. The Secretariat was to liaise continuously with the National Nodes, Task Groups, Partners and Organizational Associates, the media and the general public to ensure the progress of the IYA2009 at all levels. The Secretariat and the website were the major coordination and resource centres for all the participating countries, but particularly for those developing countries that lacked the national resources to mount major events alone. Cornerstone projects The International Year of Astronomy 2009 was supported by eleven Cornerstone projects. 
These are global programs of activities centered on specific themes and are some of the projects that helped to achieve IYA2009's main goals; whether it was the support and promotion of women in astronomy, the preservation of dark-sky sites around the world or educating and explaining the workings of the Universe to millions, the eleven Cornerstones were the key elements in the success of IYA2009. 100 Hours of Astronomy 100 Hours of Astronomy (100HA) was a worldwide astronomy event that ran 2–5 April 2009 and was part of the scheduled global activities of the International Year of Astronomy 2009. The main goal of 100HA was to have as many people throughout the world as possible looking through a telescope just as Galileo did for the first time 400 years ago. The event included special webcasts, student and teacher activities, and a schedule of events at science centers, planetariums and science museums, as well as 24 hours of sidewalk astronomy, which offered public observing sessions to as many people as possible. Galileoscope The Galileoscope was a Cornerstone project intended to share a personal experience of practical astronomical observations with as many people as possible across the world. The program collaborated with the US IYA2009 National Node to develop a simple, accessible, easy-to-assemble and easy-to-use telescope that could be distributed by the millions. In theory, every participant in an IYA2009 event should be able to take home one of these little telescopes, enabling them to observe with an instrument similar to Galileo's. Cosmic Diary The Cosmic Diary Cornerstone project was not about the science of astronomy, but about what it is like to be an astronomer. Professionals were to blog in texts and images about their life, families, friends, hobbies and interests, as well as their work, latest research findings and the challenges they face. The bloggers represented a vibrant cross-section of working astronomers from all around the world. They wrote in many different languages, from five continents. They also wrote feature article "explanations" about their specialist fields, which were highlighted on the website. NASA, ESA and ESO all had sub-blogs as part of the Cosmic Diary Cornerstone. The Portal to the Universe The Portal to the Universe (PTTU) was a Cornerstone project to provide a global, one-stop portal for online astronomy content, serving as an index, aggregator and a social-networking site for astronomy content providers, laypeople, press, educators, decision-makers and scientists. PTTU was to feature news, image, event and video aggregation; a comprehensive directory of observatories, facilities, astronomical societies, amateur astronomy societies, space artists, and science communication universities; and Web 2.0 collaborative tools, such as the ranking of different services according to popularity, to promote interaction within the astronomy multimedia community. In addition, a range of "widgets" (small applications) were to be developed to tap into existing "live data". Modern technology and the standardisation of metadata made it possible to tie all the suppliers of such information together with a single, semi-automatically updating portal. She Is an Astronomer Promoting gender equality and empowering women is one of the United Nations Millennium Development Goals. 
She Is an Astronomer was a Cornerstone project to promote gender equality in astronomy (and science in general), tackling bias issues by providing a web platform where information and links about gender balance and related resources were collected. The aim of the project was to provide neutral, informative and accessible information to female professional and amateur astronomers, students, and those who are interested in the gender equality problem in science. Providing this information was intended to help increase the interest of young girls in studying and pursuing a career in astronomy. Another objective of the project was to build and maintain an Internet-based, easy-to-handle forum and database, where people regardless of geographical location could read about the subject, ask questions and find answers. There was also to be the option to discuss astronomy-sector-specific problems, such as observing times and family duties. Dark Skies Awareness Dark Skies Awareness was a Cornerstone project that ran throughout 2009. The IAU collaborated with the U.S. National Optical Astronomy Observatory (NOAO), representatives of the International Dark-Sky Association (IDA), the Starlight Initiative, and other national and international partners in dark-sky and environmental education on several related themes. The focus was on three main citizen-scientist programs to measure local levels of light pollution. These programs were to take the form of "star hunts" or "star counts", providing people with a fun and direct way to acquire heightened awareness about light pollution through firsthand observations of the night sky. Together, the three programs were to cover the entire International Year of Astronomy 2009, namely GLOBE at Night (in March), the Great World Wide Star Count (in October) and How Many Stars (January, February, April through September, November and December). Astronomy and World Heritage UNESCO and the IAU worked together to implement a research and education collaboration as part of UNESCO's thematic initiative, Astronomy and World Heritage. The main objective was to establish a link between science and culture on the basis of research aimed at acknowledging the cultural and scientific values of properties connected with astronomy. This programme provides an opportunity to identify properties related to astronomy located around the world, to preserve their memory and save them from progressive deterioration. Support from the international community is needed to implement this activity and to promote the recognition of astronomical knowledge through the nomination of sites that celebrate important achievements in science. Galileo Teacher Training Program The International Year of Astronomy 2009 provided an opportunity to engage the formal education community in the excitement of astronomical discovery as a vehicle for improving the teaching of science in classrooms around the world. To help train teachers in effective astronomy communication and to sustain the legacy of IYA2009, the IAU – in collaboration with the National Nodes and leaders in the field such as the Global Hands-On Universe project, the US National Optical Astronomy Observatory and the Astronomical Society of the Pacific – embarked on a unique global effort to empower teachers by developing the Galileo Teacher Training Program (GTTP). 
The GTTP goal was to create a worldwide network of certified "Galileo Ambassadors" by 2012. These Ambassadors were to train "Galileo Master Teachers" in the effective use and transfer of astronomy education tools and resources into classroom science curricula. The Galileo Teachers were to be equipped to train other teachers in these methodologies, leveraging the work begun during IYA2009 in classrooms everywhere. Through workshops, online training tools and basic education kits, the products and techniques developed by this program could be adapted to reach locations with few resources of their own, as well as computer-connected areas that could take advantage of access to robotic optical and radio telescopes, webcams, astronomy exercises, cross-disciplinary resources, image processing and digital universes (web and desktop planetariums). Among GTTP partners, the Global Hands-On Universe project was a leader. Universe Awareness Universe Awareness (UNAWE) was an international Cornerstone program to introduce very young children in under-privileged environments to the scale and beauty of the Universe. Universe Awareness noted the multicultural origins of modern astronomy in an effort to broaden children's minds, awaken their curiosity in science and stimulate global citizenship and tolerance. Using the sky and children's natural fascination with it as common ground, UNAWE was to create an international awareness of their place in the Universe and their place on Earth. From Earth to the Universe The Cornerstone project From Earth to the Universe (FETTU) is a worldwide public science event that began in June 2008 and was still ongoing through 2011. This project has endeavored to bring astronomy images and their science to a wider audience in non-traditional informal learning venues. In placing these astronomy exhibitions in public parks, metro stations, art centers, hospitals, shopping malls and other accessible locations, it has been hoped that individuals who might normally ignore or even dislike astronomy, or science in general, will be engaged. Developing Astronomy Globally Developing Astronomy Globally was a Cornerstone project to acknowledge that astronomy needs to be developed in three key areas: professionally (universities and research); publicly (communication, media, and amateur groups) and educationally (schools and informal education structures). The focus was to be on regions that do not already have strong astronomical communities. The implementation was to be centred on training, development and networking in each of these three key areas. This Cornerstone used the momentum of IYA2009 to help establish and enhance regional structures and networks that work on the development of astronomy around the world. These networks were to support the current and future development work of the IAU and other programmes, plus ensure that developing regions could benefit from IYA2009 and the work of the other Cornerstone projects. It was also to address the question of the contribution of astronomy to development. Galilean Nights Galilean Nights was a Cornerstone project to involve both amateur and professional astronomers around the globe, taking their telescopes to the streets and pointing them as Galileo did 400 years ago. 
The targets of interest were Jupiter and its moons, the Sun, the Moon and many other celestial marvels. The event was scheduled to take place on 22–24 October 2009. Astronomers were to share their knowledge and enthusiasm for space by encouraging as many people as possible to look through a telescope at planetary neighbours. See also International Year of Astronomy commemorative coin International Astronomical Union (IAU) History of the telescope 365 Days of Astronomy 400 Years of the Telescope (documentary) Galileoscope Global Hands-On Universe National Astronomy Week (NAW) StarPeace Project The World At Night (TWAN) World Year of Physics 2005 White House Astronomy Night References External links IYA2009 website (includes all events and projects) International Astronomical Union (IAU) website United Nations observances Astronomy events 2009 in international relations 2009 in science 2009 in the United Nations Observances about science Astronomy education events
International Year of Astronomy
[ "Astronomy" ]
3,381
[ "Astronomy education", "Astronomy education events", "Astronomy events" ]
4,143,960
https://en.wikipedia.org/wiki/Fibroblast%20growth%20factor
Fibroblast growth factors (FGF) are a family of cell signalling proteins produced by macrophages. They are involved in a wide variety of processes, most notably as crucial elements for normal development in animal cells. Any irregularities in their function will lead to a range of developmental defects. These growth factors typically act as systemic or locally circulating molecules of extracellular origin that activate cell surface receptors. A defining property of FGFs is that they bind to heparin and to heparan sulfate. Thus, some are sequestered in the extracellular matrix of tissues that contains heparan sulfate proteoglycans, and released locally upon injury or tissue remodeling. Families In humans, 23 members of the FGF family have been identified, all of which are structurally related signaling molecules: Members FGF1 through FGF10 all bind fibroblast growth factor receptors (FGFRs). FGF1 is also known as acidic fibroblast growth factor, and FGF2 is also known as basic fibroblast growth factor. Members FGF11, FGF12, FGF13, and FGF14, also known as FGF homologous factors 1-4 (FHF1-FHF4), have been shown to have distinct functions compared to the other FGFs. Although these factors possess remarkably similar sequence homology, they do not bind FGFRs and are involved in intracellular processes unrelated to the FGFs. This group is also known as the intracellular fibroblast growth factor subfamily (iFGF). Human FGF18 is involved in cell development and morphogenesis in various tissues including cartilage. Human FGF20 was identified based on its homology to Xenopus FGF-20 (XFGF-20). FGF15 through FGF23 were described later and their functions are still being characterized. FGF15 is the mouse ortholog of human FGF19 (there is no human FGF15) and, where their functions are shared, they are often described as FGF15/19. In contrast to the local activity of the other FGFs, FGF15/19, FGF21 and FGF23 have hormonal systemic effects. Receptors The mammalian fibroblast growth factor receptor family has 4 members, FGFR1, FGFR2, FGFR3, and FGFR4. The FGFRs consist of three extracellular immunoglobulin-type domains (D1-D3), a single-span trans-membrane domain and an intracellular split tyrosine kinase domain. FGFs interact with the D2 and D3 domains, with the D3 interactions primarily responsible for ligand-binding specificity (see below). Heparan sulfate binding is mediated through the D3 domain. A short stretch of acidic amino acids located between the D1 and D2 domains has auto-inhibitory functions. This 'acid box' motif interacts with the heparan sulfate binding site to prevent receptor activation in the absence of FGFs. Alternate mRNA splicing gives rise to 'b' and 'c' variants of FGFRs 1, 2 and 3. Through this mechanism, seven different signalling FGFR sub-types can be expressed at the cell surface. Each FGFR binds to a specific subset of the FGFs. Similarly, most FGFs can bind to several different FGFR subtypes. FGF1 is sometimes referred to as the 'universal ligand' as it is capable of activating all seven different FGFRs. In contrast, FGF7 (keratinocyte growth factor, KGF) binds only to FGFR2b (KGFR). The signalling complex at the cell surface is believed to be a ternary complex formed between two identical FGF ligands, two identical FGFR subunits, and either one or two heparan sulfate chains. 
History A mitogenic growth factor activity was found in pituitary extracts by Armelin in 1973, and further work by Gospodarowicz, reported in 1974, described a more defined isolation of proteins from cow brain extract which, when tested in a bioassay that caused fibroblasts to proliferate, led these investigators to apply the name "fibroblast growth factor." In 1975, they further fractionated the extract using acidic and basic pH and isolated two slightly different forms that were named "acidic fibroblast growth factor" (FGF1) and "basic fibroblast growth factor" (FGF2). These proteins had a high degree of sequence homology among their amino acid chains, but were determined to be distinct proteins. Not long after FGF1 and FGF2 were isolated, another group of investigators isolated a pair of heparin-binding growth factors that they named HBGF-1 and HBGF-2, while a third group isolated a pair of growth factors that caused proliferation of cells in a bioassay containing blood vessel endothelium cells, which they called ECGF1 and ECGF2. These independently discovered proteins were eventually demonstrated to be the same sets of molecules: FGF1, HBGF-1 and ECGF-1 were all the same acidic fibroblast growth factor described by Gospodarowicz, et al., while FGF2, HBGF-2, and ECGF-2 were all the same basic fibroblast growth factor. Functions FGFs are multifunctional proteins with a wide variety of effects; they are most commonly mitogens but also have regulatory, morphological, and endocrine effects. They have been alternately referred to as "pluripotent" growth factors and as "promiscuous" growth factors due to their multiple actions on multiple cell types. Promiscuous refers to the biochemistry and pharmacology concept of how a variety of molecules can bind to and elicit a response from a single receptor. In the case of FGF, four receptor subtypes can be activated by more than twenty different FGF ligands. Thus the functions of FGFs in developmental processes include mesoderm induction, anterior-posterior patterning, limb development, neural induction and neural development, and in mature tissues/systems angiogenesis, keratinocyte organization, and wound healing processes. FGFs are critical during normal development of both vertebrates and invertebrates, and any irregularities in their function lead to a range of developmental defects. FGFs secreted by hypoblasts during avian gastrulation play a role in stimulating a Wnt signaling pathway that is involved in the differential movement of Koller's sickle cells during formation of the primitive streak. While many FGFs can be secreted by cells to act on distant targets, some FGFs act locally within a tissue, and even within a cell. Human FGF2 occurs in low molecular weight (LMW) and high molecular weight (HMW) isoforms. LMW FGF2 is primarily cytoplasmic and functions in an autocrine manner, whereas HMW FGF2s are nuclear and exert activities through an intracrine mechanism. One important function of FGF1 and FGF2 is the promotion of endothelial cell proliferation and the physical organization of endothelial cells into tube-like structures. They thus promote angiogenesis, the growth of new blood vessels from the pre-existing vasculature. FGF1 and FGF2 are more potent angiogenic factors than vascular endothelial growth factor (VEGF) or platelet-derived growth factor (PDGF). 
FGF1 has been shown in clinical experimental studies to induce angiogenesis in the heart. As well as stimulating blood vessel growth, FGFs are important players in wound healing. FGF1 and FGF2 stimulate angiogenesis and the proliferation of fibroblasts that give rise to granulation tissue, which fills up a wound space/cavity early in the wound-healing process. FGF7 and FGF10 (also known as keratinocyte growth factors KGF and KGF2, respectively) stimulate the repair of injured skin and mucosal tissues by stimulating the proliferation, migration and differentiation of epithelial cells, and they have direct chemotactic effects on tissue remodelling. During the development of the central nervous system, FGFs play important roles in neural stem cell proliferation, neurogenesis, axon growth, and differentiation. FGF signaling is important in promoting surface area growth of the developing cerebral cortex by reducing neuronal differentiation and hence permitting the self-renewal of cortical progenitor cells, known as radial glial cells, and FGF2 has been used to induce artificial gyrification of the mouse brain. Another FGF family member, FGF8, regulates the size and positioning of the functional areas of the cerebral cortex (Brodmann areas). FGFs are also important for maintenance of the adult brain. Thus, FGFs are major determinants of neuronal survival both during development and during adulthood. Adult neurogenesis within the hippocampus, for example, depends greatly on FGF2. In addition, FGF1 and FGF2 seem to be involved in the regulation of synaptic plasticity and processes attributed to learning and memory, at least in the hippocampus. The 15 paracrine FGFs are secreted proteins that bind heparan sulfate and can, therefore, be bound to the extracellular matrix of tissues that contain heparan sulfate proteoglycans. This local action of FGF proteins is classified as paracrine signalling, most commonly through the JAK-STAT signalling pathway or the receptor tyrosine kinase (RTK) pathway. Members of the FGF19 subfamily (FGF15, FGF19, FGF21, and FGF23) bind less tightly to heparan sulfates, and so can act in an endocrine fashion on far-away tissues, such as intestine, liver, kidney, adipose, and bone. For example: FGF15 and FGF19 (FGF15/19) are produced by intestinal cells but act on FGFR4-expressing liver cells to downregulate the key gene (CYP7A1) in the bile acid synthesis pathway. FGF23 is produced by bone but acts on FGFR1-expressing kidney cells to regulate the synthesis of vitamin D and phosphate homeostasis. Structure The crystal structures of FGF1 have been solved and found to be related to interleukin 1-beta. Both families have the same beta trefoil fold, consisting of a 12-stranded beta-sheet structure in which the beta-sheets are arranged in 3 similar lobes around a central axis, with 6 strands forming an anti-parallel beta-barrel. In general, the beta-sheets are well-preserved and the crystal structures superimpose in these areas. The intervening loops are less well-conserved - the loop between beta-strands 6 and 7 is slightly longer in interleukin-1 beta. Clinical applications Dysregulation of the FGF signalling system underlies a range of diseases associated with increased FGF expression. Inhibitors of FGF signalling have shown clinical efficacy. Some FGF ligands (particularly FGF2) have been demonstrated to enhance tissue repair (e.g. skin burns, grafts, and ulcers) in a range of clinical settings. 
See also Receptor tyrosine kinase Granulocyte-colony stimulating factor (G-CSF) Granulocyte-macrophage colony stimulating factor (GM-CSF) Nerve growth factor (NGF) Neurotrophins Erythropoietin (EPO) Thrombopoietin (TPO) Myostatin (GDF8) Growth differentiation factor 9 (GDF9) Gyrification Neurogenesis References External links FGF5 in Hair Tonic Products FGF1 in Cosmetic Products Protein domains Fibroblast growth factor Morphogens
Fibroblast growth factor
[ "Biology" ]
2,565
[ "Protein domains", "Morphogens", "Induced stem cells", "Protein classification" ]
4,144,007
https://en.wikipedia.org/wiki/MIK%20%28character%20set%29
MIK (МИК) is an 8-bit Cyrillic code page used with DOS. It is based on the character set used in the Bulgarian Pravetz 16 IBM PC compatible system. Kermit calls this character set "BULGARIA-PC" / "bulgaria-pc". In Bulgaria, it was sometimes incorrectly referred to as code page 856 (which clashes with IBM's definition for a Hebrew code page). This code page is known by Star printers and FreeDOS as Code page 3021 (earlier it was known by FreeDOS as code page 30033 (now used for a code page 857 variant which contains the Crimean Tatar hryvnia sign), but it was renumbered to match the Star Printer code page). This is the most widespread DOS/OEM code page used in Bulgaria, rather than CP 808, CP 855, CP 866 or CP 872. Almost every DOS program created in Bulgaria that has Bulgarian strings in it used MIK as its encoding, and many such programs are still in use. Character set Each character is shown with its equivalent Unicode code point and its decimal code point. Only the second half of the table (code points 128–255) is shown, the first half (code points 0–127) being the same as ASCII. Notes for implementors of mapping tables to Unicode Implementors of mapping tables to Unicode should note that the MIK Code page unifies some characters: Binary character manipulations The MIK code page keeps all Cyrillic letters in alphabetical order, which enables very easy character manipulation in binary form: 10xx xxxx - is a Cyrillic Letter 100x xxxx - is an Upper-case Cyrillic Letter 101x xxxx - is a Lower-case Cyrillic Letter In this case, testing and character-manipulation functions such as IsAlpha(), IsUpper(), IsLower(), ToUpper() and ToLower() reduce to bit operations, and sorting is a simple comparison of character values. See also Hardware code page References External links https://www.unicode.org/Public/MAPPINGS/VENDORS/IBM/IBM_conversions.html Unicode Consortium's mappings between IBM's code pages and Unicode http://www.cl.cam.ac.uk/~mgk25/unicode.html#conv UTF-8 and Unicode FAQ for Unix/Linux by Markus Kuhn DOS code pages Character encoding
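The bit layout described above can be illustrated with a short sketch in Python (purely illustrative; the helper names below are hypothetical and not part of any MIK library or standard API). It assumes, as the bit patterns imply, that upper-case Cyrillic letters occupy 0x80-0x9F and lower-case letters 0xA0-0xBF:

# Sketch of MIK bit tests, assuming the layout described above:
# 100x xxxx (0x80-0x9F) = upper-case Cyrillic, 101x xxxx (0xA0-0xBF) = lower-case Cyrillic.

def is_cyrillic(b: int) -> bool:
    # 10xx xxxx: any Cyrillic letter
    return (b & 0xC0) == 0x80

def is_upper(b: int) -> bool:
    # 100x xxxx: upper-case Cyrillic letter
    return (b & 0xE0) == 0x80

def is_lower(b: int) -> bool:
    # 101x xxxx: lower-case Cyrillic letter
    return (b & 0xE0) == 0xA0

def to_upper(b: int) -> int:
    # Clear bit 5 to map lower case (101x xxxx) onto upper case (100x xxxx)
    return b & 0xDF if is_lower(b) else b

def to_lower(b: int) -> int:
    # Set bit 5 to map upper case onto lower case
    return b | 0x20 if is_upper(b) else b

# Because the alphabet is contiguous and in order, sorting raw MIK byte values
# sorts Cyrillic text alphabetically, as noted above.
assert to_lower(0x80) == 0xA0 and is_cyrillic(0xBF)

Since upper and lower case differ only in bit 5, case conversion under this layout needs no lookup table, which is the practical advantage the article describes.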
MIK (character set)
[ "Technology" ]
504
[ "Natural language and computing", "Character encoding" ]
4,144,434
https://en.wikipedia.org/wiki/P-bodies
In cellular biology, P-bodies, or processing bodies, are distinct foci formed by phase separation within the cytoplasm of a eukaryotic cell consisting of many enzymes involved in mRNA turnover. P-bodies are highly conserved structures and have been observed in somatic cells originating from vertebrates and invertebrates, plants and yeast. To date, P-bodies have been demonstrated to play fundamental roles in general mRNA decay, nonsense-mediated mRNA decay, adenylate-uridylate-rich element mediated mRNA decay, and microRNA (miRNA) induced mRNA silencing. Not all mRNAs which enter P-bodies are degraded, as it has been demonstrated that some mRNAs can exit P-bodies and re-initiate translation. Purification and sequencing of the mRNA from purified processing bodies showed that these mRNAs are largely translationally repressed upstream of translation initiation and are protected from 5' mRNA decay. P-bodies were originally proposed to be the sites of mRNA degradation in the cell and involved in decapping and digestion of mRNAs earmarked for destruction. Later work called this into question, suggesting P-bodies store mRNA until it is needed for translation. In neurons, P-bodies are moved by motor proteins in response to stimulation. This is likely tied to local translation in dendrites. History P-bodies were first described in the scientific literature by Bashkirov et al. in 1997, who described "small granules… discrete, prominent foci" as the cytoplasmic location of the mouse exoribonuclease mXrn1p. It wasn't until 2002 that a glimpse into the nature and importance of these cytoplasmic foci was published, when researchers demonstrated that multiple proteins involved with mRNA degradation localize to the foci. Their importance was recognized after experimental evidence was obtained pointing to P-bodies as the sites of mRNA degradation in the cell. The researchers named these structures processing bodies or "P bodies". During this time, many descriptive names were also used to identify the processing bodies, including "GW-bodies" and "decapping-bodies"; however, "P-bodies" was the term chosen and is now widely used and accepted in the scientific literature. Recently, evidence has been presented suggesting that GW-bodies and P-bodies may in fact be different cellular components. The evidence is that GW182 and Ago2, both associated with miRNA gene silencing, are found exclusively in multivesicular bodies or GW-bodies and are not localized to P-bodies. Also of note, P-bodies are not equivalent to stress granules, and they contain largely non-overlapping proteins. The two structures support overlapping cellular functions but generally occur under different stimuli. Hoyle et al. suggest that a novel site termed EGP bodies, or stress granules, may be responsible for mRNA storage, as these sites lack the decapping enzyme. Associations with microRNA microRNA-mediated repression occurs in two ways, either by translational repression or by stimulating mRNA decay. miRNAs recruit the RISC complex to the mRNA to which they are bound. The link to P-bodies comes from the fact that many, if not most, of the proteins necessary for miRNA gene silencing are localized to P-bodies, as reviewed by Kulkarni et al. (2010). These proteins include, but are not limited to, the scaffold protein GW182, Argonaute (Ago), decapping enzymes and RNA helicases. The current evidence points toward P-bodies as being scaffolding centers of miRNA function, especially due to the evidence that a knock down of GW182 disrupts P-body formation. 
However, there remain many unanswered questions about P-bodies and their relationship to miRNA activity. Specifically, it is unknown whether there is a context-dependent (stress state versus normal) specificity to the P-body's mechanism of action. Based on the evidence that P-bodies sometimes are the site of mRNA decay and sometimes the mRNA can exit the P-bodies and re-initiate translation, the question remains of what controls this switch. Another ambiguous point to be addressed is whether the proteins that localize to P-bodies are actively functioning in the miRNA gene silencing process or whether they are merely on standby. Protein composition In 2017, a new method to purify processing bodies was published. Hubstenberger et al. used fluorescence-activated particle sorting (a method based on the ideas of fluorescence-activated cell sorting) to purify processing bodies from human epithelial cells. From these purified processing bodies they were able to use mass spectrometry and RNA sequencing to determine which proteins and RNAs, respectively, are found in processing bodies. This study identified 125 proteins that are significantly associated with processing bodies. Notably, this work provided the most compelling evidence to date that P-bodies might not be the sites of degradation in the cell but are instead used for storage of translationally repressed mRNA. This observation was further supported by single-molecule imaging of mRNA by the Chao group in 2017. In 2018, Youn et al. took a proximity labeling approach called BioID to identify and predict the processing body proteome. They engineered cells to express several processing body-localized proteins as fusion proteins with the BirA* enzyme. When the cells are incubated with biotin, BirA* biotinylates nearby proteins, thus tagging the proteins within processing bodies with a biotin tag. Streptavidin was then used to isolate the tagged proteins and mass spectrometry to identify them. Using this approach, Youn et al. identified 42 proteins that localize to processing bodies. References Further reading Molecular biology Biochemistry
P-bodies
[ "Chemistry", "Biology" ]
1,193
[ "Biochemistry", "nan", "Molecular biology" ]
4,144,576
https://en.wikipedia.org/wiki/Prins%20reaction
The Prins reaction is an organic reaction consisting of an electrophilic addition of an aldehyde or ketone to an alkene or alkyne followed by capture of a nucleophile or elimination of an H+ ion. The outcome of the reaction depends on reaction conditions. With water and a protic acid such as sulfuric acid as the reaction medium, and with formaldehyde as the carbonyl component, the reaction product is a 1,3-diol (3). When water is absent, the cationic intermediate loses a proton to give an allylic alcohol (4). With an excess of formaldehyde and a low reaction temperature the reaction product is a dioxane (5). When water is replaced by acetic acid the corresponding esters are formed. History The original reactants employed by the Dutch chemist Hendrik Jacobus Prins in his 1919 publication were styrene (scheme 2), pinene, camphene, eugenol, isosafrole and anethole. These procedures have since been optimized. Prins discovered two new organic reactions during his doctoral research in 1911–1912. The first was the addition of polyhalogen compounds to olefins, and the second was the acid-catalyzed addition of aldehydes to olefins. The early studies on the Prins reaction were exploratory in nature and did not attract much attention until 1937. The development of petroleum cracking in 1937 increased the production of unsaturated hydrocarbons. As a consequence, the commercial availability of lower olefins, coupled with aldehydes produced from the oxidation of low-boiling paraffins, increased interest in studying the olefin-aldehyde condensation. Later on, the Prins reaction emerged as a powerful C-O and C-C bond-forming technique for the synthesis of a wide variety of molecules in organic synthesis. In 1937 the reaction was investigated as part of a quest for di-olefins to be used in synthetic rubber. Reaction mechanism The reaction mechanism for this reaction is depicted in scheme 5. The carbonyl reactant (2) is protonated by a protic acid, and for the resulting oxonium ion 3 two resonance structures can be drawn. This electrophile engages in an electrophilic addition with the alkene to give the carbocationic intermediate 4. Exactly how much positive charge is present on the secondary carbon atom in this intermediate should be determined for each reaction set. Evidence exists for neighbouring group participation of the hydroxyl oxygen or its neighboring carbon atom. When the overall reaction has a high degree of concertedness, the charge build-up will be modest. The three reaction modes open to this oxocarbenium intermediate are: in blue: capture of the carbocation by water or any suitable nucleophile through 5 to the 1,3-adduct 6. in black: proton abstraction in an elimination reaction to the unsaturated compound 7. When the alkene carries a methylene group, elimination and addition can be concerted with transfer of an allyl proton to the carbonyl group, which in effect is an ene reaction (scheme 6). in green: capture of the carbocation by additional carbonyl reactant. In this mode the positive charge is dispersed over oxygen and carbon in the resonance structures 8a and 8b. Ring closure leads through intermediate 9 to the dioxane 10. An example is the conversion of styrene to 4-phenyl-m-dioxane. in gray: only in specific reactions, and when the carbocation is very stable, does the reaction take a shortcut to the oxetane 12. The photochemical Paternò–Büchi reaction between alkenes and aldehydes to oxetanes is more straightforward. 
Variations Many variations of the Prins reaction exist because it lends itself easily to cyclization reactions and because it is possible to capture the oxo-carbenium ion with a large array of nucleophiles. The halo-Prins reaction is one such modification, with replacement of protic acids and water by Lewis acids such as stannic chloride and boron tribromide. The halogen is now the nucleophile recombining with the carbocation. The cyclization of certain allyl pulegones in scheme 7 with titanium tetrachloride in dichloromethane at −78 °C gives access to the decalin skeleton with the hydroxyl group and chlorine group predominantly in cis configuration (91% cis). This observed cis diastereoselectivity is due to the intermediate formation of a trichlorotitanium alkoxide, making possible an easy delivery of chlorine to the carbocation from the same face. The trans isomer is preferred (98% trans) when the switch is made to a tin tetrachloride reaction at room temperature. The Prins-pinacol reaction is a cascade reaction of a Prins reaction and a pinacol rearrangement. The carbonyl group in the reactant in scheme 8 is masked as a dimethyl acetal and the hydroxyl group is masked as a triisopropylsilyl ether (TIPS). With the Lewis acid stannic chloride the oxonium ion is activated, and the pinacol rearrangement of the resulting Prins intermediate results in ring contraction and transfer of the positive charge to the TIPS ether, which eventually forms an aldehyde group in the final product as a mixture of cis and trans isomers with modest diastereoselectivity. The key oxo-carbenium intermediate can be formed by routes other than simple protonation of a carbonyl. In a key step of the synthesis of exiguolide, it was formed by protonation of a vinylogous ester. See also Heteropoly acid References External links Prins reaction in Alkaloid total synthesis Prins reaction @ organic-chemistry.org Addition reactions Carbon-carbon bond forming reactions Name reactions
Prins reaction
[ "Chemistry" ]
1,238
[ "Coupling reactions", "Name reactions", "Carbon-carbon bond forming reactions", "Organic reactions" ]
4,144,577
https://en.wikipedia.org/wiki/Disability-adjusted%20life%20year
Disability-adjusted life years (DALYs) are a measure of overall disease burden, expressed as the number of years lost due to ill-health, disability, or early death. The measure was developed in the 1990s as a way of comparing the overall health and life expectancy of different countries. DALYs have become more common in the field of public health and health impact assessment (HIA). They include not only the potential years of life lost due to premature death but also equivalent years of 'healthy' life lost by virtue of being in states of poor health or disability. In so doing, mortality and morbidity are combined into a single, common metric. Calculation Disability-adjusted life years are a societal measure of the disease or disability burden in populations. DALYs are calculated by combining measures of life expectancy as well as the adjusted quality of life during a burdensome disease or disability for a population. DALYs are related to the quality-adjusted life year (QALY) measure; however, QALYs only measure the benefit with and without medical intervention and therefore do not measure the total burden. Also, QALYs tend to be an individual measure and not a societal measure. Traditionally, health liabilities were expressed using one measure, the years of life lost (YLL) due to dying early. A medical condition that did not result in dying younger than expected was not counted. The burden of living with a disease or disability is measured by the years lost due to disability (YLD) component, sometimes also known as years lost due to disease or years lived with disability/disease. DALYs are calculated by taking the sum of these two components: DALY = YLL + YLD The DALY relies on an acceptance that the most appropriate measure of the effects of chronic illness is time, both time lost due to premature death and time spent disabled by disease. One DALY, therefore, is equal to one year of healthy life lost. How much a medical condition affects a person is called the disability weight (DW). This is determined by disease or disability and does not vary with age. Tables have been created of thousands of diseases and disabilities, ranging from Alzheimer's disease to loss of a finger, with the disability weight meant to indicate the level of disability that results from the specific condition. Some of these weights are "short term", and the long-term weights may be different. The most noticeable change between the 2004 and 2010 disability weights is for blindness, as it was decided that the weights are a measure of health rather than well-being (or welfare), and a blind person is not considered to be ill. "In the terminology, the term disability is used broadly to refer to departures from optimal health in any of the important domains of health." At the population level, the disease burden as measured by DALYs is calculated by adding YLL to YLD. YLL uses the life expectancy at the time of death. YLD is determined by the number of years disabled, weighted by the level of disability caused by a disability or disease, using the formula: YLD = I × DW × L In this formula, I = number of incident cases in the population, DW = disability weight of the specific condition, and L = average duration of the case until remission or death (years). There is also a prevalence (as opposed to incidence) based calculation for YLD. 
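As a small illustration of the arithmetic above, the following sketch combines YLD with YLL (defined in the next paragraph as YLL = N × L). All input numbers are hypothetical and chosen only to show the calculation; they are not taken from any GBD dataset:

# Hedged sketch of the DALY arithmetic: DALY = YLL + YLD.

def yld(incident_cases: float, disability_weight: float, avg_duration_years: float) -> float:
    # YLD = I x DW x L
    return incident_cases * disability_weight * avg_duration_years

def yll(deaths: float, life_expectancy_at_age_of_death: float) -> float:
    # YLL = N x L (standard life expectancy at the age of death)
    return deaths * life_expectancy_at_age_of_death

# Hypothetical condition: 1,000 new cases with a disability weight of 0.2
# lasting 5 years on average, plus 100 deaths at an age with 30 years of
# remaining standard life expectancy.
total_daly = yld(1000, 0.2, 5) + yll(100, 30)
print(total_daly)  # 1000.0 YLD + 3000.0 YLL = 4000.0 DALYs

The same additive structure applies at the population level: YLL and YLD are each summed over causes and age groups before being combined.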
The number of years lost due to premature death is calculated as YLL = N × L, where N = number of deaths due to the condition and L = standard life expectancy at age of death. Life expectancies are not the same at different ages. For example, in the Paleolithic era, life expectancy at birth was 33 years, but life expectancy at the age of 15 was an additional 39 years (total 54). Historically, Japanese life expectancy statistics have been used as the standard for measuring premature death, as the Japanese have the longest life expectancies. Other approaches have since emerged, including the use of national life tables for YLL calculations, or the reference life table derived by the GBD study. Age weighting The World Health Organization (WHO) used age weighting and time discounting at 3 percent in DALYs prior to 2010 but discontinued using them starting in 2010. There are two components to this differential accounting of time: age-weighting and time-discounting. Age-weighting is based on the theory of human capital. Commonly, years lived as a young adult are valued more highly than years spent as a young child or older adult, as these are years of peak productivity. Age-weighting receives considerable criticism for valuing young adults at the expense of children and the old. Some criticize, while others rationalize, this as reflecting society's interest in productivity and receiving a return on its investment in raising children. This age-weighting system means that somebody disabled at 30 years of age, for ten years, would be measured as having a higher loss of DALYs (a greater burden of disease) than somebody disabled by the same disease or injury at the age of 70 for ten years. This age-weighting function is by no means a universal methodology in studies, but is common when using DALYs. Cost-effectiveness studies using QALYs, for example, do not discount time at different ages differently. This age-weighting function applies only to the calculation of DALYs lost due to disability. Years lost to premature death are determined from the age at death and life expectancy. The Global Burden of Disease Study (GBD) 2001–2002 counted disability-adjusted life years equally for all ages, but the GBD 1990 and GBD 2004 studies used an age-weighting formula of the form W = 0.1658 × x × e^(−0.04x), where x is the age at which the year is lived and W is the value assigned to it relative to an average value of 1. In these studies, future years were also discounted at a 3% rate to account for future health care losses. Time discounting, which is separate from the age-weighting function, describes preferences in time as used in economic models. The effects of the interplay between life expectancy and years lost, discounting, and social weighting are complex, depending on the severity and duration of illness. For example, the parameters used in the GBD 1990 study generally give greater weight to deaths at any year prior to age 39 than afterward, with the death of a newborn weighted at 33 DALYs and the death of someone aged 5–20 weighted at approximately 36 DALYs. As a result of numerous discussions, by 2010 the World Health Organization had abandoned the ideas of age weighting and time discounting. They had also substituted the idea of prevalence for incidence (when a condition started) because this is what surveys measure. Economic applications The methodology is not a direct economic measure, in that it does not assign a monetary value to any person or condition, and does not measure how much productive work or money is lost as a result of death and disease. 
However, HALYs, including DALYs and QALYs, are especially useful in guiding the allocation of health resources as they provide a common numerator, allowing for the expression of utility in terms of dollars per DALY or dollars per QALY. For example, in Gambia, provision of the pneumococcal conjugate vaccine costs $670 per DALY saved. DALYs can also be used to estimate the 'value of lost welfare' (VLW) as a dollar amount when combined with data on the maximum cost individuals are willing to pay to prevent death, for example by multiplying age-specific DALYs by the willingness to pay at that age and summing the values to give the VLW of the total population. For example, the total economic value lost due to stroke was estimated to amount to $2 trillion globally in 2019. These numbers can be compared across treatments and diseases, to determine whether investing resources in preventing or treating a different disease would be more efficient in terms of overall health. Examples Australia Cancer (25.1/1,000), cardiovascular (23.8/1,000), mental problems (17.6/1,000), neurological (15.7/1,000), chronic respiratory (9.4/1,000) and diabetes (7.2/1,000) are the main causes of good years of expected life lost to disease or premature death. Despite this, Australia has one of the longest life expectancies in the world. Africa In Zimbabwe in 2013, the diseases and outbreaks shown to have the greatest impact on health and disability were typhoid, anthrax, malaria, common diarrhea, and dysentery. PTSD rates Posttraumatic stress disorder (PTSD) DALY estimates from 2004 for the world's 25 most populous countries give Asian/Pacific countries and the United States as the places where PTSD impact is most concentrated. Noise-induced hearing loss The disability-adjusted life years attributable to hearing impairment for noise-exposed U.S. workers across all industries were calculated to be 2.53 healthy years lost annually per 1,000 noise-exposed workers. Workers in the mining and construction sectors lost 3.45 and 3.09 healthy years per 1,000 workers, respectively. Overall, 66% of the sample worked in the manufacturing sector and represented 70% of healthy years lost by all workers. History and usage The method was originally developed by Harvard University for the World Bank in 1990; the World Health Organization subsequently adopted it in 1996 as part of the Ad hoc Committee on Health Research "Investing in Health Research & Development" report. The DALY was first conceptualized by Christopher J. L. Murray and Lopez in work carried out with the World Health Organization and the World Bank known as the Global Burden of Disease Study, which was undertaken in 1990. It is now a key measure employed by the United Nations World Health Organization in such publications as its Global Burden of Disease. The DALY was also used in the 1993 World Development Report. Criticism Both DALYs and QALYs are forms of HALYs, health-adjusted life years. Some critics have alleged that DALYs are essentially an economic measure of human productive capacity for the affected individual. In response, defenders of DALYs have argued that while DALYs have an age-weighting function that has been rationalized based on the economic productivity of persons at that age, health-related quality of life measures are used to determine the disability weights, which range from 0 to 1 (no disability to 100% disabled) for all diseases. 
These defenders emphasize that disability weights are based not on a person's ability to work, but rather on the effects of the disability on the person's life in general. Hence, mental illness is one of the leading diseases as measured by global burden of disease studies, with depression accounting for 51.84 million DALYs. Perinatal conditions, which affect infants with a very low age-weight function, are the leading cause of lost DALYs at 90.48 million. Measles is fifteenth at 23.11 million. See also Bhutan GNH Index Broad measures of economic progress Disease burden Economics Full cost accounting Green national product Green gross domestic product (Green GDP) Gender-related Development Index Genuine Progress Indicator (GPI) Global burden of disease Global Peace Index Gross National Happiness Gross National Well-being (GNW) Happiness economics Happy Planet Index (HPI) Human Development Index (HDI) ISEW (Index of sustainable economic welfare) Institute for Health Metrics and Evaluation (IHME) Progress (history) Progressive utilization theory Legatum Prosperity Index Leisure satisfaction Living planet index Millennium Development Goals (MDGs) Post-materialism Psychometrics Subjective life satisfaction Where-to-be-born Index Wikiprogress World Values Survey (WVS) World Happiness Report Quality-adjusted life year (QALY) Pharmacoeconomics Healthy Life Years Seven Ages of Man References External links WHO Definition Global health Health economics World Health Organization Pejorative terms for people with disabilities Life expectancy
Disability-adjusted life year
[ "Biology" ]
2,466
[ "Senescence", "Life expectancy" ]
4,144,614
https://en.wikipedia.org/wiki/Very-high-density%20cable%20interconnect
A very-high-density cable interconnect (VHDCI) is a 68-pin connector that was introduced in the SPI-2 document of SCSI-3. The VHDCI connector is a very small connector that allows placement of four wide SCSI connectors on the back of a single PCI card slot. Physically, it looks like a miniature Centronics type connector. It uses the regular 68-contact pin assignment. The male connector (plug) is used on the cable and the female connector ("receptacle") on the device. Other uses Apart from the standardized use with the SCSI interface, several vendors have also used VHDCI connectors for other types of interfaces: Nvidia: for an external PCI Express 8-lane interconnect, and used in Quadro Plex VCS and in Quadro NVS 420 as a display port connector ATI Technologies: on the FireMV 2400 to convey two DVI and two VGA signals on a single connector, and ganging two of these connectors side by side in order to allow the FireMV 2400 to be a low-profile quad display card. The Radeon X1950 XTX Crossfire Edition also used a pair of the connectors to grant more inter-card bandwidth than the PCI Express bus allowed at the time for Crossfire. AMD: Some Visiontek variants of the Radeon HD 7750 use a VHDCI connector alongside a Mini DisplayPort to allow a 5 (breakout to 4 HDMI+1 mDP) display Eyefinity array on a low profile card. VisionTek also released a similar Radeon HD 5570, though it lacked a Mini DisplayPort. Juniper Networks: for their 12- and 48-port 100BASE-TX PICs (physical interface cards). The cable connects to the VHDCI connector on the PIC on one end, via an RJ-21 connector on the other end, to an RJ-45 patch panel. Cisco: 3750 StackWise stacking cables National Instruments: on their high-speed digital I/O cards. AudioScience uses VHDCI to carry multiple analog balanced audio and digital AES/EBU audio streams, and clock and GPIO signals. See also SCSI connector References Electrical signal connectors Analog video connectors Digital display connectors Networking hardware SCSI
Very-high-density cable interconnect
[ "Technology", "Engineering" ]
495
[ "Computing stubs", "Computer networks engineering", "Computer hardware stubs", "Networking hardware" ]
4,144,848
https://en.wikipedia.org/wiki/Knowledge%20integration
Knowledge integration is the process of synthesizing multiple knowledge models (or representations) into a common model (representation). Compared to information integration, which involves merging information having different schemas and representation models, knowledge integration focuses more on synthesizing the understanding of a given subject from different perspectives. For example, multiple interpretations are possible of a set of student grades, typically each from a certain perspective. An overall, integrated view and understanding of this information can be achieved if these interpretations can be put under a common model, say, a student performance index. The Web-based Inquiry Science Environment (WISE), from the University of California at Berkeley has been developed along the lines of knowledge integration theory. Knowledge integration has also been studied as the process of incorporating new information into a body of existing knowledge with an interdisciplinary approach. This process involves determining how the new information and the existing knowledge interact, how existing knowledge should be modified to accommodate the new information, and how the new information should be modified in light of the existing knowledge. A learning agent that actively investigates the consequences of new information can detect and exploit a variety of learning opportunities; e.g., to resolve knowledge conflicts and to fill knowledge gaps. By exploiting these learning opportunities the learning agent is able to learn beyond the explicit content of the new information. The machine learning program KI, developed by Murray and Porter at the University of Texas at Austin, was created to study the use of automated and semi-automated knowledge integration to assist knowledge engineers constructing a large knowledge base. A possible technique which can be used is semantic matching. More recently, a technique useful to minimize the effort in mapping validation and visualization has been presented which is based on Minimal Mappings. Minimal mappings are high quality mappings such that i) all the other mappings can be computed from them in time linear in the size of the input graphs, and ii) none of them can be dropped without losing property i). The University of Waterloo operates a Bachelor of Knowledge Integration undergraduate degree program as an academic major or minor. The program started in 2008. See also Data integration Knowledge value chain References Further reading Linn, M. C. (2006) The Knowledge Integration Perspective on Learning and Instruction. R. Sawyer (Ed.). In The Cambridge Handbook of the Learning Sciences. Cambridge, MA. Cambridge University Press Murray, K. S. (1996) KI: A tool for Knowledge Integration. Proceedings of the Thirteenth National Conference on Artificial Intelligence Murray, K. S. (1995) Learning as Knowledge Integration, Technical Report TR-95-41, The University of Texas at Austin Murray, K. S. (1990) Improving Explanatory Competence, Proceedings of the Twelfth Annual Conference of the Cognitive Science Society Murray, K. S., Porter, B. W. (1990) Developing a Tool for Knowledge Integration: Initial Results. International Journal for Man-Machine Studies, volume 33 Murray, K. S., Porter, B. W. (1989) Controlling Search for the Consequences of New Information during Knowledge Integration. Proceedings of the Sixth International Machine Learning Conference Shen, J., Sung, S., & Zhang, D.M. 
(2016) Toward an analytic framework of interdisciplinary reasoning and communication (IRC) processes in science. International Journal of Science Education, 37 (17), 2809–2835. Shen, J., Liu, O., & Sung, S. (2014). Designing interdisciplinary assessments in science for college students: An example on osmosis. International Journal of Science Education, 36 (11), 1773–1793. Knowledge representation Learning Machine learning
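The student-grades example above can be made concrete with a small sketch. The following Python snippet is purely illustrative and is not taken from the KI program or any system cited here; the subject names, weights, and the index formula are hypothetical assumptions chosen only to show how several interpretations of the same data might be synthesized into a common model (here, a single performance index).

# Illustrative sketch only: synthesizing several "interpretations" of student
# grades into one common model (a performance index). All names and weights
# are hypothetical; they are not drawn from the article or the KI system.

grades = {"math": 78, "physics": 85, "literature": 62, "history": 70}

def trend_view(g):
    """One perspective: strength in quantitative subjects (0-1 scale)."""
    quantitative = [g["math"], g["physics"]]
    return sum(quantitative) / (100 * len(quantitative))

def breadth_view(g):
    """Another perspective: consistency across all subjects (0-1 scale)."""
    lo, hi = min(g.values()), max(g.values())
    return lo / hi  # closer to 1 means more even performance

def average_view(g):
    """A third perspective: plain overall average (0-1 scale)."""
    return sum(g.values()) / (100 * len(g))

def performance_index(g, weights=(0.4, 0.2, 0.4)):
    """Common model: a weighted blend of the three interpretations."""
    views = (trend_view(g), breadth_view(g), average_view(g))
    return sum(w * v for w, v in zip(weights, views))

print(round(performance_index(grades), 3))  # prints 0.767 for the sample data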
Knowledge integration
[ "Engineering" ]
736
[ "Artificial intelligence engineering", "Machine learning" ]
4,144,898
https://en.wikipedia.org/wiki/Shiplap
Shiplap is a type of wooden board used commonly as exterior siding in the construction of residences, barns, sheds, and outbuildings. Exterior walls Shiplap is either rough-sawn or milled pine or similarly inexpensive wood between wide with a rabbet on opposite sides of each edge. The rabbet allows the boards to overlap in this area. The profile of each board partially overlaps that of the board next to it creating a channel that gives shadow line effects, provides excellent weather protection and allows for dimensional movement. Useful for its strength as a supporting member, and its ability to form a relatively tight seal when lapped, shiplap is usually used as a type of siding for buildings that do not require extensive maintenance and must withstand cold and aggressive climates. Rough-sawn shiplap is attached vertically in post and beam construction, usually with 51–65 mm (6d–8d) common nails, while milled versions, providing a tighter seal, are more commonly placed horizontally, more suited to two-by-four frame construction. Small doors and shutters such as those found in barns and sheds are often constructed of shiplap cut directly from the walls, with only thin members framing or crossing the back for support. Shiplap is also used indoors for the rough or rustic look that it creates when used as paneling or a covering for a wall or ceiling. Shiplap is often used to describe any rabbeted siding material that overlaps in a similar fashion. Interior design In interior design, shiplap is a style of wooden wall siding characterized by long planks, normally painted white, that are mounted horizontally with a slight gap between them in a manner that evokes exterior shiplap walls. A disadvantage of the style is that the gaps are prone to accumulating dust. Installing shiplap horizontally in a room can help carry the eye around the space, making it feel larger. Installing it vertically helps emphasize the height of the room, making it feel taller. Rectangular shiplap pieces can be placed in a staggered zig-zag layout to add texture and enhance the size of the room. Shiplap can also be installed on the ceiling, to draw the eye upwards. References Wood products Building engineering Building materials Timber framing
Shiplap
[ "Physics", "Technology", "Engineering" ]
467
[ "Timber framing", "Building engineering", "Architecture", "Structural system", "Construction", "Materials", "Civil engineering", "Matter", "Building materials" ]
4,144,933
https://en.wikipedia.org/wiki/List%20of%20RFCs
This is a partial list of RFCs (request for comments memoranda). A Request for Comments (RFC) is a publication in a series from the principal technical development and standards-setting bodies for the Internet, most prominently the Internet Engineering Task Force (IETF). While there are over 9,151 RFCs as of February 2022, this list consists of RFCs that have related articles. A complete list is available from the IETF website. Numerical list Topical list Obsolete RFCs are indicated with struck-through text. References External links RFC-Editor - Document Retrieval - search engine RFC Database - contains various lists of RFCs RFC Bibliographic Listing - Listing of bibliographic entries for all RFCs. Also notes when an RFC has been made obsolete. Internet Standards Internet-related lists
List of RFCs
[ "Technology" ]
166
[ "Computing-related lists", "Internet-related lists" ]
4,144,972
https://en.wikipedia.org/wiki/Mae-Wan%20Ho
Mae-Wan Ho (12 November 1941 – 24 March 2016) was a geneticist known for her critical views on genetic engineering and evolution. She authored or co-authored a number of publications, including 10 books, such as The Rainbow and the Worm, the Physics of Organisms (1993, 1998), Genetic Engineering: Dream or Nightmare? (1998, 1999), Living with the Fluid Genome (2003) and Living Rainbow H2O (2012). Ho was criticized for embracing pseudoscience. Biography Ho received a PhD in biochemistry in 1967 from Hong Kong University, was a postdoctoral fellow in biochemical genetics at the University of California, San Diego, from 1968 to 1972, a senior research fellow at Queen Elizabeth College, a lecturer in genetics (from 1976) and reader in biology (from 1985) at the Open University, and, after retiring in June 2000, a visiting professor of biophysics at Catania University, Sicily. Ho died of cancer in March 2016. Institute of Science in Society Ho was a co-founder and director of the Institute of Science in Society (ISIS), an interest group which published fringe articles about climate change, GMOs, homeopathy, traditional Chinese medicine, and water memory. In reviewing the organisation, David Colquhoun accused the ISIS of promoting pseudoscience and specifically criticised Ho's understanding of homeopathy. The institute is on the Quackwatch list of questionable organizations. Genetic engineering Ho, together with Joe Cummins of the University of Western Ontario, has argued that a sterility gene engineered into a crop could be transferred to other crops or wild relatives and that "This could severely compromise the agronomic performance of conventional crops and cause wild relatives to go extinct". They argued that this process could also produce genetic instabilities, which might be "leading to catastrophic breakdown", and stated that there are no data to assure that this has not happened or cannot happen. This concern contrasts with the reason why these sterile plants were developed, which was to prevent the transfer of genes to the environment by preventing any plants that are bred with or that receive these genes from reproducing. Indeed, any gene that caused sterility when transferred to a new species would be eliminated by natural selection and could not spread. Ho expressed concerns about the spread of altered genes through horizontal gene transfer and about the possibility that the experimental alteration of genetic structures may be out of control. One of her concerns is that the antibiotic resistance gene that was isolated from bacteria and used in some GM crops might cross back from plants by horizontal gene transfer to different species of bacteria, because "If this happened it would leave us unable to treat major illnesses like meningitis and E coli." Her views were published in an opinion article based on a review of others' research. The arguments and conclusions of this article were heavily criticized by prominent plant scientists, and the claims of the article were criticized in detail in a response published in the same journal, prompting a reply from Ho. A review on the topic published in 2008 in the Annual Review of Plant Biology stated that "These speculations have been extensively rebutted by the scientific community". Ho has also argued that bacteria could acquire the bacterial gene barnase from transgenic plants. This gene kills any cell that expresses it and lacks barstar, the specific inhibitor of barnase activity.
In an article entitled Chronicle of An Ecological Disaster Foretold, which was published in an ISIS newsletter, Ho speculated that if a bacterium acquired the barnase gene and survived, this could make the bacterium a more dangerous pathogen. Evolution Ho has claimed that evolution is pluralistic because there are many mechanisms that can produce variation in phenotypes independently of haphazard mutations. Ho has advocated a form of Lamarckian evolution. She has been criticized by the scientific community for setting up straw man arguments in her criticism of natural selection and supporting discredited evolutionary theories. However, some of her Lamarckian ideas have since entered the mainstream of the evolutionary literature. The paleontologist Philip Gingerich has noted that Ho's evolutionary ideas are based on vitalistic thinking. Publications Mae-Wan Ho. Living Rainbow H2O, Singapore; River Edge, NJ: World Scientific, 2012. Mae-Wan Ho. Meaning of Life & the Universe, Singapore; River Edge, NJ: World Scientific, 2017. Mae-Wan Ho. The Rainbow and the Worm, the Physics of Organisms, Singapore; River Edge, NJ: World Scientific, 1998. Mae-Wan Ho. Genetic engineering: dream or nightmare? Turning the tide on the brave new world of bad science and big business, New York, NY: Continuum, 2000. Mae-Wan Ho. Living with the fluid genome, London, UK: Institute of Science in Society; Penang, Malaysia: Third World Network, 2003. Mae-Wan Ho, Sam Burcher, Rhea Gala and Vejko Velkovic. Unraveling AIDS: the independent science and promising alternative therapies, Ridgefield, CT: Vital Health Pub., 2005. Mae-Wan Ho, Peter Saunders. Beyond Neo-Darwinism: An Introduction to the New Evolutionary Paradigm, London: Academic Press, 1984. References External links Personal website 1941 births 2016 deaths Academics of the Open University Academics of the University of London Non-Darwinian evolution Women geneticists
Mae-Wan Ho
[ "Biology" ]
1,079
[ "Non-Darwinian evolution", "Biology theories" ]
4,145,225
https://en.wikipedia.org/wiki/Great%20disnub%20dirhombidodecahedron
In geometry, the great disnub dirhombidodecahedron, also called Skilling's figure, is a degenerate uniform star polyhedron. It was proven in 1970 that there are only 75 uniform polyhedra other than the infinite families of prisms and antiprisms. John Skilling discovered another degenerate example, the great disnub dirhombidodecahedron, by relaxing the condition that edges must be single. More precisely, he allowed any even number of faces to meet at each edge, as long as the set of faces couldn't be separated into two connected sets (Skilling, 1975). Due to its geometric realization having some double edges where 4 faces meet, it is considered a degenerate uniform polyhedron but not strictly a uniform polyhedron. The number of edges is ambiguous, because the underlying abstract polyhedron has 360 edges, but 120 pairs of these have the same image in the geometric realization, so that the geometric realization has 120 single edges and 120 double edges where 4 faces meet, for a total of 240 edges. The Euler characteristic of the abstract polyhedron is −96. If the pairs of coinciding edges in the geometric realization are considered to be single edges, then it has only 240 edges and Euler characteristic 24. The vertex figure has 4 square faces passing through the center of the model. It may be constructed as the exclusive or (blend) of the great dirhombicosidodecahedron and compound of twenty octahedra. Related polyhedra It shares the same edge arrangement as the great dirhombicosidodecahedron, but has a different set of triangular faces. The vertices and edges are also shared with the uniform compounds of twenty octahedra or twenty tetrahemihexahedra. 180 of the edges are shared with the great snub dodecicosidodecahedron. Dual polyhedron The dual of the great disnub dirhombidodecahedron is called the great disnub dirhombidodecacron. It is a nonconvex infinite isohedral polyhedron. Like the visually identical great dirhombicosidodecacron in Magnus Wenninger's Dual Models, it is represented with intersecting infinite prisms passing through the model center, cut off at a certain point that is convenient for the maker. Wenninger suggested these figures are members of a new class of stellation polyhedra, called stellation to infinity. However, he also acknowledged that strictly speaking they are not polyhedra because their construction does not conform to the usual definitions. Gallery See also List of uniform polyhedra References . http://www.software3d.com/MillersMonster.php External links http://www.orchidpalms.com/polyhedra/uniform/skilling.htm http://www.georgehart.com/virtual-polyhedra/great_disnub_dirhombidodecahedron.html Uniform polyhedra
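As a check on the two Euler characteristics quoted above, the standard counts usually given for this figure can be substituted into the formula for the Euler characteristic; the vertex and face counts used below (60 vertices and 204 faces: 120 triangles, 60 squares and 24 pentagrams) are supplied here for the worked calculation rather than taken from the text, so treat them as an assumption.

% Worked check of the Euler characteristic, chi = V - E + F
\chi = V - E + F = 60 - 360 + 204 = -96 \quad \text{(abstract polyhedron, 360 edges)}
\chi = V - E + F = 60 - 240 + 204 = 24 \quad \text{(coinciding edges merged, 240 edges)}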
Great disnub dirhombidodecahedron
[ "Physics" ]
631
[ "Uniform polytopes", "Uniform polyhedra", "Symmetry" ]
4,145,238
https://en.wikipedia.org/wiki/Etizolam
Etizolam (marketed under numerous brand names) is a thienodiazepine derivative which is a benzodiazepine analog. The etizolam molecule differs from a benzodiazepine in that the benzene ring has been replaced by a thiophene ring and a triazole ring has been fused, making the drug a thienotriazolodiazepine. Although a thienodiazepine, etizolam is clinically regarded as a benzodiazepine because of its mode of action via the benzodiazepine site of the GABAA receptor, where it acts as an allosteric modulator. It possesses anxiolytic, amnesic, anticonvulsant, hypnotic, sedative and skeletal muscle relaxant properties. It was patented in 1972 and first approved for medical use in Japan in 1984. As of April 2021, the export of etizolam has been banned in India. Medical uses Short-term treatment of insomnia. Anxiety disorders such as OCD and generalized anxiety disorder, mostly as a short-term medication to be used on an as-needed basis. Side effects Long-term use may result in blepharospasms, especially in women. Doses of 4 mg or more may cause anterograde amnesia. In rare cases, erythema annulare centrifugum skin lesions have resulted. Tolerance, dependence and withdrawal Abrupt or rapid discontinuation of etizolam, as with benzodiazepines, may result in the appearance of the benzodiazepine withdrawal syndrome, including rebound insomnia. Neuroleptic malignant syndrome, a rare event in benzodiazepine withdrawal, has been documented in a case of abrupt withdrawal from etizolam. This is particularly relevant given etizolam's short half-life relative to benzodiazepines such as diazepam, which results in a more rapid decrease in blood plasma levels. In a study that compared the effectiveness of etizolam, alprazolam, and bromazepam for the treatment of generalized anxiety disorder, all three drugs retained their effectiveness over 2 weeks, but etizolam became more effective from 2 weeks to 4 weeks. Administering 0.5 mg etizolam twice daily did not induce cognitive deficits over 3 weeks when compared to placebo. When multiple doses of etizolam or lorazepam were administered to rat neurons, lorazepam caused downregulation of alpha-1 benzodiazepine binding sites (tolerance/dependence), while etizolam caused an increase in alpha-2 benzodiazepine binding sites (reverse tolerance to anti-anxiety effects). Tolerance to the anticonvulsant effects of lorazepam was observed, but no significant tolerance to the anticonvulsant effects of etizolam was observed. Etizolam therefore has a reduced liability to induce tolerance and dependence compared with classic benzodiazepines. Etizolam may represent a possible anxiolytic of choice, with reduced liability to produce tolerance and dependence after long-term treatment of anxiety and stress syndromes. Pharmacology Etizolam, a thienodiazepine derivative, is absorbed fairly rapidly, with peak plasma levels achieved between 30 minutes and 2 hours. It has a mean elimination half-life of about 3.4 hours. Etizolam possesses potent hypnotic properties and is comparable with other short-acting benzodiazepines. Etizolam acts as a positive allosteric modulator of the GABAA receptor by agonizing the receptor's benzodiazepine site. According to the Italian prescribing information sheet, etizolam belongs to a new class of diazepines, thienotriazolodiazepines. This new class is easily oxidized, rapidly metabolized, and has a lower risk of accumulation, even after prolonged treatment. Etizolam has an anxiolytic action about 6–8 times greater than that of diazepam.
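To illustrate why the short half-life quoted above translates into a rapid fall in plasma levels after a dose, the following sketch applies a simple first-order (exponential) elimination model to the 3.4-hour figure. The model itself and the comparison half-life for diazepam (about 40 hours is used here purely as an illustrative assumption) are simplifications and are not data from the article.

# Minimal sketch: first-order (exponential) elimination from a single dose.
# The 3.4 h half-life for etizolam comes from the text above; the 40 h value
# for diazepam is only an illustrative assumption for comparison.
import math

def fraction_remaining(hours_elapsed, half_life_hours):
    """Fraction of the peak plasma level left after hours_elapsed."""
    return 0.5 ** (hours_elapsed / half_life_hours)

for t in (6, 12, 24):
    etizolam = fraction_remaining(t, 3.4)
    diazepam = fraction_remaining(t, 40.0)   # assumed comparison value
    print(f"after {t:2d} h: etizolam {etizolam:.1%}, diazepam {diazepam:.1%}")
# After 24 h roughly 0.7% of the etizolam peak remains, versus about 66% for
# the assumed long-half-life comparator, illustrating the faster decline.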
Etizolam produces, especially at higher dosages, a reduction in time taken to fall asleep, an increase in total sleep time, and a reduction in the number of awakenings. During tests, there were no substantial changes in deep sleep; however, it may reduce REM sleep. In EEG tests of healthy volunteers, etizolam showed some characteristics similar to tricyclic antidepressants. Etizolam's main metabolites in humans are alpha-hydroxyetizolam and 8-hydroxyetizolam. alpha-Hydroxyetizolam is pharmacologically active and has a half-life of approximately 8.2 hours. Interactions Itraconazole and fluvoxamine slow down the rate of elimination of etizolam, leading to accumulation of etizolam and therefore increasing its pharmacological effects. Carbamazepine speeds up the metabolism of etizolam, resulting in reduced pharmacological effects. Overdose Cases of intentional suicide by overdose using etizolam in combination with GABA agonists have been reported. Although etizolam has a lower LD50 than certain benzodiazepines, the LD50 is still far beyond the prescribed or recommended dose. Flumazenil, a benzodiazepine-site antagonist used to reverse benzodiazepine overdoses, inhibits the effect of etizolam as well as that of classical benzodiazepines such as diazepam and chlordiazepoxide. Etizolam overdose deaths are rising in Scotland, especially among women: the National Records of Scotland report on drug-related deaths states that such deaths "have increased significantly in Scotland in recent years, with a much greater percentage increase in deaths among women than among men". By 2018, 1,187 overdoses were officially recorded, a 107% increase from 2008 and the highest figure to date, although men still outnumber women in drug-related deaths. Society and culture Brand names Etilaam, Sedekopan, Etizest, Etizex, Pasaden or Depas Legal status International drug control conventions In 1990, it was recommended that etizolam not be placed under international control. However, this attitude has changed due to increased abuse. On December 13, 2019, the World Health Organization recommended that etizolam be placed in Schedule 4 of the 1971 Convention on Psychotropic Substances. This recommendation was followed by the placement of etizolam into Schedule IV in March 2020. Australia Etizolam is not used medically in Australia but has been found in counterfeit Xanax pills. Denmark Etizolam is controlled in Denmark under the Danish Misuse of Drugs Act. Germany Etizolam was controlled in Germany in July 2013 but is not used medically. Italy Etizolam is licensed for the treatment of anxiety, insomnia and neurosis as a prescription-only medication. India In India, it is a Narcotics prescription-only (NRx) medication used for anxiety disorders, sometimes in combination with other drugs such as the beta blocker propranolol. United Kingdom In the UK, etizolam has been classified as a Class C drug by the May 2017 amendment to The Misuse of Drugs Act 1971, along with several other designer benzodiazepine drugs. United States Etizolam is not authorized by the FDA for medical use in the U.S. As of March 2016, etizolam is a controlled substance in the following states: Alabama, Arkansas, Florida, Georgia (as Schedule IV, whereas all other states listed here prohibit it as a Schedule I substance), Louisiana, Mississippi, Texas, South Carolina, and Virginia. It is controlled in Indiana as of July 1, 2017. It is controlled in Ohio as of February 2018.
On December 23, 2022, the DEA announced that it had begun considering placing etizolam under temporary Schedule I status. On July 25, 2023, the DEA published a pre-print notice that etizolam would be temporarily scheduled as a Schedule I controlled substance from July 26, 2023 to July 26, 2025. Misuse Etizolam is a drug of potential misuse. Cases of etizolam dependence have been documented in the medical literature. Since 1991, cases of etizolam misuse and addiction have substantially increased, owing to varying levels of accessibility and cultural popularity. Illicitly manufactured pills sold as Xanax or other benzodiazepines may often contain etizolam rather than their listed ingredient. See also Alprazolam Brotizolam Clotiazepam Deschloroetizolam Fluetizolam Metizolam Benzodiazepine dependence Benzodiazepine withdrawal syndrome Long-term effects of benzodiazepines References External links Inchem.org - Etizolam 2-Chlorophenyl compounds Designer drugs GABAA receptor positive allosteric modulators Hypnotics Thienotriazolodiazepines
Etizolam
[ "Biology" ]
1,920
[ "Hypnotics", "Behavior", "Sleep" ]
4,145,330
https://en.wikipedia.org/wiki/Cloxazolam
Cloxazolam is a benzodiazepine derivative that has anxiolytic, sedative, and anticonvulsant properties. It is not widely used; as of August 2018 it was marketed in Belgium, Luxembourg, Portugal, Brazil, and Japan. In 2019, it was retired from the Belgian market. See also Cinazepam Gidazepam References External links Inchem.org - Cloxazolam Anxiolytics Chloroarenes Lactams Oxazolobenzodiazepines Prodrugs 2-Chlorophenyl compounds
Cloxazolam
[ "Chemistry" ]
130
[ "Chemicals in medicine", "Prodrugs" ]
4,145,437
https://en.wikipedia.org/wiki/Year%20zero
A year zero does not exist in the Anno Domini (AD) calendar year system commonly used to number years in the Gregorian calendar (nor in its predecessor, the Julian calendar); in this system, the year is followed directly by year (which is the year of the epoch of the era). However, there is a year zero in both the astronomical year numbering system (where it coincides with the Julian year ), and the ISO 8601:2004 system, a data interchange standard for certain time and calendar information (where year zero coincides with the Gregorian year ; see conversion table). There is also a year zero in most Buddhist and Hindu calendars. History The Anno Domini era was introduced in 525 by Scythian monk Dionysius Exiguus (c. 470 – c. 544), who used it to identify the years on his Easter table. He introduced the new era to avoid using the Diocletian era, based on the accession of Roman emperor Diocletian, as he did not wish to continue the memory of a persecutor of Christians. In the preface to his Easter table, Dionysius stated that the "present year" was "the consulship of Probus Junior" which was also 525 years "since the incarnation of our Lord Jesus Christ". How he arrived at that number is unknown. Dionysius Exiguus did not use "AD" years to date any historical event. This practice began with the English cleric Bede (c. 672–735), who used AD years in his (731), popularizing the era. Bede also used – only once – a term similar to the modern English term "before Christ", though the practice did not catch on for nearly a thousand years, when books by Denis Pétau treating calendar science gained popularity. Bede did not sequentially number days of the month, weeks of the year, or months of the year. However, he did number many of the days of the week using the counting origin one in Ecclesiastical Latin. Previous Christian histories used several titles for dating events: ("in the year of the world") beginning on the purported first day of creation; or ("in the year of Adam") beginning at the creation of Adam five days later (or the sixth day of creation according to the Genesis creation narrative) as used by Africanus; or ("in the year of Abraham") beginning 3,412 years after Creation according to the Septuagint, used by Eusebius of Caesarea; all of which assigned "one" to the year beginning at Creation, or the creation of Adam, or the birth of Abraham, respectively. Bede continued this earlier tradition relative to the AD era. In chapter II of book I of Ecclesiastical History, Bede stated that Julius Caesar invaded Britain "in the year 693 after the building of Rome, but the sixtieth year before the incarnation of our Lord", while stating in chapter III, "in the year of Rome 798, Claudius" also invaded Britain and "within a very few days ... concluded the war in ... the forty-sixth [year] from the incarnation of our Lord". Although both dates are wrong, they are sufficient to conclude that Bede did not include a year zero between BC and AD: 798 − 693 + 1 (because the years are inclusive) = 106, but 60 + 46 = 106, which leaves no room for a year zero. The modern English term "before Christ" (BC) is only a rough equivalent, not a direct translation, of Bede's Latin phrase ("before the time of the lord's incarnation"), which was itself never abbreviated. Bede's singular use of 'BC' continued to be used sporadically throughout the Middle Ages. Neither the concept of nor a symbol for zero existed in the system of Roman numerals. 
The Babylonian system of the BC era had used the idea of "nothingness" without considering it a number, and the Romans enumerated in much the same way. Wherever a modern zero would have been used, Bede and Dionysius Exiguus did use Latin number words, or the word (meaning "nothing") alongside Roman numerals. Zero was invented in India in the sixth century, and was either transferred or reinvented by the Arabs by about the eighth century. The Arabic numeral for zero (0) did not enter Europe until the thirteenth century. Even then, it was known only to very few, and only entered widespread use in Europe by the seventeenth century. The nomenclature was not widely used in Western Europe until the 9th century, and the to historical year was not uniform throughout Western Europe until 1752. The first extensive use (hundreds of times) of 'BC' occurred in by Werner Rolevinck in 1474, alongside years of the world (). The terms anno Domini, Dionysian era, Christian era, vulgar era, and common era were used interchangeably between the Renaissance and the 19th century, at least in Latin. But vulgar era fell out of use in English at the beginning of the 20th century after vulgar acquired the meaning of "offensively coarse", replacing its original meaning of "common" or "ordinary". Consequently, historians regard all these eras as equal. Historians have never included a year zero. This means that between, for example, and , there are 999 years: 500 years BC, and 499 years AD preceding 500. In common usage anno Domini 1 is preceded by the year 1 BC, without an intervening year zero. Neither the choice of calendar system (whether Julian or Gregorian) nor the name of the era (Anno Domini or Common Era) determines whether a year zero will be used. If writers do not use the convention of their group (historians or astronomers), they must explicitly state whether they include a year 0 in their count of years, otherwise their historical dates will be misunderstood. Astronomy In astronomy, for the year AD 1 and later it is common to assign the same numbers as the Anno Domini notation, which in turn is numerically equivalent to the Common Era notation. But the discontinuity between 1 AD and 1 BC makes it cumbersome to compare ancient and modern dates. So the year before 1 AD is designated 0, the year before 0 is −1, and so on. The letters "AD", "BC", "CE", or "BCE" are omitted. So 1 BC in historical notation is equivalent to 0 in astronomical notation, 2 BC is equivalent to −1, etc. Sometimes positive years are preceded by the + sign. This year numbering notation was introduced by the astronomer Jacques Cassini in 1740. History of astronomical usage In 1627, the German astronomer Johannes Kepler, in his Rudolphine Tables, first used an astronomical year essentially as a year zero. He labeled it Christi and inserted it between years labeled and BC and AD today, on the "mean motion" pages of the Sun, Moon, and planets. In 1702, the French astronomer Philippe de La Hire labeled a year as and placed it at the end of the years labeled (BC), and immediately before the years labeled (AD), on the mean motion pages in his , thus adding the number designation 0 to Kepler's . Finally, in 1740, the transition was completed by French astronomer Jacques Cassini , who is traditionally credited with inventing year zero. In his , Cassini labeled the year simply as 0, and placed it at the end of years labeled (BC), and immediately before years labeled (AD). 
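The correspondence described above (1 BC to 0, 2 BC to −1, and so on) is simple enough to capture in a few lines of code. The sketch below is illustrative only; the function names are invented here, and it also shows the four-digit ISO 8601 form discussed in the following section (0000 for 1 BC, -0001 for 2 BC).

# Illustrative sketch of the year-numbering conventions described in the text.
# Historical (BC/AD) years have no year zero; astronomical numbering does.

def historical_to_astronomical(year, era):
    """era is 'BC' or 'AD'; returns the astronomical year number."""
    if year < 1:
        raise ValueError("historical years start at 1")
    return year if era == "AD" else 1 - year   # 1 BC -> 0, 2 BC -> -1

def astronomical_to_iso8601(astro_year):
    """Four-digit ISO 8601 form: 0 -> '0000', -1 -> '-0001', 2024 -> '2024'."""
    sign = "-" if astro_year < 0 else ""
    return f"{sign}{abs(astro_year):04d}"

assert historical_to_astronomical(1, "BC") == 0
assert historical_to_astronomical(2, "BC") == -1
assert astronomical_to_iso8601(-1) == "-0001"   # i.e. 2 BC
print(astronomical_to_iso8601(historical_to_astronomical(500, "BC")))  # -0499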
ISO 8601 ISO 8601:2004 (and previously ISO 8601:2000, but not ISO 8601:1988) explicitly uses astronomical year numbering in its date reference systems. (Because it also specifies the use of the proleptic Gregorian calendar for all years before 1582, some readers incorrectly assume that a year zero is also included in that proleptic calendar, but it is not used with the BC/AD era.) The "basic" format for year 0 is the four-digit form 0000, which equals the historical year 1 BC. Several "expanded" formats are possible: −0000 and +0000, as well as five- and six-digit versions. Earlier years are also negative four-, five- or six-digit years, which have an absolute value one less than the equivalent BC year, hence -0001 = 2 BC. Because only ISO 646 (7-bit ASCII) characters are allowed by ISO 8601, the minus sign is represented by a hyphen-minus. Computing Programming libraries may implement a year zero, an example being the Perl CPAN module DateTime. Indian calendars Most eras used with Hindu and Buddhist calendars, such as the Saka era or the Kali Yuga, begin with the year 0. These calendars mostly use elapsed, expired, or complete years, in contrast with most calendars from other parts of the world which use current years. A complete year had not yet elapsed for any date in the initial year of the epoch, thus the number 1 cannot be used. Instead, during the first year the indication of 0 years (elapsed) is given in order to show that the epoch is less than 1 year old. This is similar to the Western method of stating a person's age – people do not reach age one until one year has elapsed since birth (but their age during the year beginning at birth is specified in months or fractional years, not as age zero). However, if ages were specified in years and months, such a person would be said to be, for example, 0 years and 6 months or 0.5 years old. This is analogous to the way time is shown on a 24-hour clock: during the first hour of a day, the time elapsed is 0 hours, n minutes. See also List of non-standard dates References Chronology Astronomical coordinate systems zero 0 (number) 0s 0s BC Nonexistent things
Year zero
[ "Physics", "Astronomy", "Mathematics" ]
2,081
[ "Chronology", "Physical quantities", "Time", "Astronomical coordinate systems", "Coordinate systems", "Spacetime" ]
4,145,466
https://en.wikipedia.org/wiki/Cinolazepam
Cinolazepam (marketed under the brand name Gerodorm) is a drug which is a benzodiazepine derivative. It possesses anxiolytic, anticonvulsant, sedative and skeletal muscle relaxant properties. Due to its strong sedative properties, it is primarily used as a hypnotic. It was patented in 1978 and came into medical use in 1992. Cinolazepam is not approved for sale in the United States or Canada. References External links Inchem.org - Cinolazepam Secondary alcohols Benzodiazepines Chloroarenes 2-Fluorophenyl compounds Hypnotics Lactams Nitriles
Cinolazepam
[ "Chemistry", "Biology" ]
144
[ "Hypnotics", "Behavior", "Functional groups", "Sleep", "Nitriles" ]
4,145,476
https://en.wikipedia.org/wiki/Doxefazepam
Doxefazepam (marketed under the brand name Doxans) is a benzodiazepine medication. It possesses anxiolytic, anticonvulsant, sedative and skeletal muscle relaxant properties. It is used therapeutically as a hypnotic. According to Babbini and colleagues in 1975, this derivative of flurazepam was between 2 and 4 times more potent than the latter while at the same time being half as toxic in laboratory animals. It was patented in 1972 and came into medical use in 1984. Side effects Section 5.5 of the article Doxefazepam in volume 66 of the World Health Organization's (WHO) and International Agency for Research on Cancer's (IARC) IARC Monographs On The Evaluation Of Carcinogenic Risks To Humans, an article describing the carcinogenic/toxic effects of doxefazepam on humans and experimental animals, states that there is "inadequate evidence in humans for the carcinogenicity of doxefazepam" and "limited evidence in experimental animals for the carcinogenicity of doxefazepam", and concludes that the overall evaluation of the substance's carcinogenicity to humans is "not classifiable." See also Benzodiazepine References External links Inchem.org - Doxefazepam IARC Monographs - Doxefazepam Primary alcohols Lactims Benzodiazepines Chloroarenes 2-Fluorophenyl compounds Hypnotics Lactams
Doxefazepam
[ "Biology" ]
321
[ "Hypnotics", "Behavior", "Sleep" ]
4,145,551
https://en.wikipedia.org/wiki/Iron%20fertilization
Iron fertilization is the intentional introduction of iron-containing compounds (like iron sulfate) to iron-poor areas of the ocean surface to stimulate phytoplankton production. This is intended to enhance biological productivity and/or accelerate carbon dioxide (CO2) sequestration from the atmosphere. Iron is a trace element necessary for photosynthesis in plants. It is highly insoluble in sea water and in a variety of locations is the limiting nutrient for phytoplankton growth. Large algal blooms can be created by supplying iron to iron-deficient ocean waters. These blooms can nourish other organisms. Ocean iron fertilization is an example of a geoengineering technique. Iron fertilization attempts to encourage phytoplankton growth, which removes carbon from the atmosphere for at least a period of time. This technique is controversial because there is limited understanding of its complete effects on the marine ecosystem, including side effects and possibly large deviations from expected behavior. Such effects potentially include release of nitrogen oxides and disruption of the ocean's nutrient balance. Controversy remains over the effectiveness of atmospheric sequestration and ecological effects. Since 1990, 13 major large-scale experiments have been carried out to evaluate the efficiency and possible consequences of iron fertilization in ocean waters. A 2017 study considered the method unproven: sequestration efficiency was low, sometimes no effect was seen, and the amount of iron needed to make even a small cut in carbon emissions would be on the order of a million tons per year. Since 2021, however, interest in the potential of iron fertilization has been renewed, prompted among other things by a white paper from NOAA, the US National Oceanic and Atmospheric Administration, which rated iron fertilization as having "moderate potential for cost, scalability and how long carbon might be stored compared to other marine sequestration ideas". Approximately 25 per cent of the ocean surface has ample macronutrients, with little plant biomass (as defined by chlorophyll). The production in these high-nutrient low-chlorophyll (HNLC) waters is primarily limited by micronutrients, especially iron. The cost of distributing iron over large ocean areas is large compared with the expected value of carbon credits. Research in the early 2020s suggested that it could only permanently sequester a small amount of carbon. Process Role of iron in carbon sequestration Ocean iron fertilization is an example of a geoengineering technique that involves the intentional introduction of iron-rich deposits into oceans, and is aimed at enhancing the biological productivity of organisms in ocean waters in order to increase carbon dioxide (CO2) uptake from the atmosphere, possibly mitigating its global warming effects. Iron is a trace element in the ocean and its presence is vital for photosynthesis in plants, and in particular phytoplankton, as it has been shown that iron deficiency can limit ocean productivity and phytoplankton growth. For this reason, the "iron hypothesis" was put forward by Martin in the late 1980s, in which he suggested that changes in the iron supply in iron-deficient seawater could trigger plankton blooms and have a significant effect on the concentration of atmospheric carbon dioxide by altering rates of carbon sequestration. In fact, fertilization is an important process that occurs naturally in ocean waters. For instance, upwellings of ocean currents can bring nutrient-rich sediments to the surface.
Another example is the transfer of iron-rich minerals, dust, and volcanic ash over long distances by rivers, glaciers, or wind. Moreover, it has been suggested that whales can transfer iron-rich ocean dust to the surface, where plankton can take it up to grow. It has been shown that a reduction in the number of sperm whales in the Southern Ocean has resulted in a 200,000 tonnes/yr decrease in atmospheric carbon uptake, possibly due to limited phytoplankton growth. Carbon sequestration by phytoplankton Phytoplankton is photosynthetic: it needs sunlight and nutrients to grow, and takes up carbon dioxide in the process. Plankton can take up and sequester atmospheric carbon by generating calcium or silicon carbonate skeletons. When these organisms die they sink to the ocean floor, where their carbonate skeletons can form a major component of the carbon-rich deep-sea precipitation, thousands of meters below plankton blooms, known as marine snow. Nonetheless, by definition, carbon is only considered "sequestered" when it is deposited on the ocean floor, where it can be retained for millions of years. However, most of the carbon-rich biomass generated by plankton is generally consumed by other organisms (small fish, zooplankton, etc.), and a substantial part of the rest of the deposits that sink beneath plankton blooms may be re-dissolved in the water and transferred back to the surface, where it eventually returns to the atmosphere, nullifying any intended carbon-sequestration effect. Nevertheless, supporters of the idea of iron fertilization believe that carbon sequestration should be re-defined over much shorter time frames and claim that since the carbon is suspended in the deep ocean it is effectively isolated from the atmosphere for hundreds of years, and thus carbon can be effectively sequestered. Efficiency and concerns Assuming ideal conditions, the upper estimate for the possible effect of iron fertilization in slowing down global warming is about 0.3 W/m2 of averaged negative forcing, which could offset roughly 15–20% of current anthropogenic emissions. However, although this approach could be looked upon as an easy option to lower the concentration of CO2 in the atmosphere, ocean iron fertilization is still quite controversial and highly debated due to possible negative consequences for marine ecosystems. Research in this area has suggested that fertilization through the deposition of large quantities of iron-rich dust into the ocean can significantly disrupt the ocean's nutrient balance and cause major complications in the food chain for other marine organisms. Methods There are two ways of performing artificial iron fertilization: ship-based deployment directly into the ocean, and atmospheric deployment. Ship based deployment Trials of ocean fertilization using iron sulphate added directly to the surface water from ships are described in detail in the experiment section below. Atmospheric sourcing Iron-rich dust rising into the atmosphere is a primary source of ocean iron fertilization. For example, wind-blown dust from the Sahara desert fertilizes the Atlantic Ocean and the Amazon rainforest. The naturally occurring iron oxide in atmospheric dust reacts with hydrogen chloride from sea spray to produce iron chloride, which degrades methane and other greenhouse gases, brightens clouds and eventually falls with the rain in low concentration across a wide area of the globe.
Unlike ship based deployment, no trials have been performed of increasing the natural level of atmospheric iron. Expanding this atmospheric source of iron could complement ship-based deployment. One proposal is to boost the atmospheric iron level with iron salt aerosol. Iron(III) chloride added to the troposphere could increase natural cooling effects including methane removal, cloud brightening and ocean fertilization, helping to prevent or reverse global warming. Experiments Martin hypothesized that increasing phytoplankton photosynthesis could slow or even reverse global warming by sequestering in the sea. He died shortly thereafter during preparations for Ironex I, a proof of concept research voyage, which was successfully carried out near the Galapagos Islands in 1993 by his colleagues at Moss Landing Marine Laboratories. Thereafter 12 international ocean studies examined the phenomenon: Ironex II, 1995 SOIREE (Southern Ocean Iron Release Experiment), 1999 EisenEx (Iron Experiment), 2000 SEEDS (Subarctic Pacific Iron Experiment for Ecosystem Dynamics Study), 2001 SOFeX (Southern Ocean Iron Experiments - North & South), 2002 SERIES (Subarctic Ecosystem Response to Iron Enrichment Study), 2002 SEEDS-II, 2004 EIFEX (European Iron Fertilization Experiment), A successful experiment conducted in 2004 in a mesoscale ocean eddy in the South Atlantic resulted in a bloom of diatoms, a large portion of which died and sank to the ocean floor when fertilization ended. In contrast to the LOHAFEX experiment, also conducted in a mesoscale eddy, the ocean in the selected area contained enough dissolved silicon for the diatoms to flourish. CROZEX (CROZet natural iron bloom and Export experiment), 2005 A pilot project planned by Planktos, a U.S. company, was cancelled in 2008 for lack of funding. The company blamed environmental organizations for the failure. LOHAFEX (Indian and German Iron Fertilization Experiment), 2009 Despite widespread opposition to LOHAFEX, on 26 January 2009 the German Federal Ministry of Education and Research (BMBF) gave clearance. The experiment was carried out in waters low in silicic acid, an essential nutrient for diatom growth. This affected sequestration efficacy. A portion of the southwest Atlantic was fertilized with iron sulfate. A large phytoplankton bloom was triggered. In the absence of diatoms, a relatively small amount of carbon was sequestered, because other phytoplankton are vulnerable to predation by zooplankton and do not sink rapidly upon death. These poor sequestration results led to suggestions that fertilization is not an effective carbon mitigation strategy in general. However, prior ocean fertilization experiments in high silica locations revealed much higher carbon sequestration rates because of diatom growth. LOHAFEX confirmed sequestration potential depends strongly upon appropriate siting. Haida Salmon Restoration Corporation (HSRC), 2012 - funded by the Old Massett Haida band and managed by Russ George - dumped 100 tonnes of iron sulphate into the Pacific into an eddy west of the islands of Haida Gwaii. This resulted in increased algae growth over . Critics alleged George's actions violated the United Nations Convention on Biological Diversity (CBD) and the London convention on the dumping of wastes at sea which prohibited such geoengineering experiments. On 15 July 2014, the resulting scientific data was made available to the public. 
John Martin, director of the Moss Landing Marine Laboratories, hypothesized that the low levels of phytoplankton in these regions are due to a lack of iron. In 1989 he tested this hypothesis (known as the Iron Hypothesis) by an experiment using samples of clean water from Antarctica. Iron was added to some of these samples. After several days the phytoplankton in the samples with iron fertilization grew much more than in the untreated samples. This led Martin to speculate that increased iron concentrations in the oceans could partly explain past ice ages. IRONEX I This experiment was followed by a larger field experiment (IRONEX I) where 445 kg of iron was added to a patch of ocean near the Galápagos Islands. The levels of phytoplankton increased three times in the experimental area. The success of this experiment and others led to proposals to use this technique to remove carbon dioxide from the atmosphere. EisenEx In 2000 and 2004, iron sulfate was discharged from the EisenEx. 10 to 20 percent of the resulting algal bloom died and sank to the sea floor. Commercial projects Planktos was a US company that abandoned its plans to conduct 6 iron fertilization cruises from 2007 to 2009, each of which would have dissolved up to 100 tons of iron over a 10,000 km2 area of ocean. Their ship Weatherbird II was refused entry to the port of Las Palmas in the Canary Islands where it was to take on provisions and scientific equipment. In 2007 commercial companies such as Climos and GreenSea Ventures and the Australian-based Ocean Nourishment Corporation, planned to engage in fertilization projects. These companies invited green co-sponsors to finance their activities in return for provision of carbon credits to offset investors' CO2 emissions. LOHAFEX LOHAFEX was an experiment initiated by the German Federal Ministry of Research and carried out by the German Alfred Wegener Institute (AWI) in 2009 to study fertilization in the South Atlantic. India was also involved. As part of the experiment, the German research vessel Polarstern deposited 6 tons of ferrous sulfate in an area of 300 square kilometers. It was expected that the material would distribute through the upper of water and trigger an algal bloom. A significant part of the carbon dioxide dissolved in sea water would then be bound by the emerging bloom and sink to the ocean floor. The Federal Environment Ministry called for the experiment to halt, partly because environmentalists predicted damage to marine plants. Others predicted long-term effects that would not be detectable during short-term observation or that this would encourage large-scale ecosystem manipulation. 2012 A 2012 study deposited iron fertilizer in an eddy near Antarctica. The resulting algal bloom sent a significant amount of carbon into the deep ocean, where it was expected to remain for centuries to millennia. The eddy was chosen because it offered a largely self-contained test system. As of day 24, nutrients, including nitrogen, phosphorus and silicic acid that diatoms use to construct their shells, declined. Dissolved inorganic carbon concentrations were reduced below equilibrium with atmospheric . In surface water, particulate organic matter (algal remains) including silica and chlorophyll increased. After day 24, however, the particulate matter fell to between to the ocean floor. Each iron atom converted at least 13,000 carbon atoms into algae. At least half of the organic matter sank below, . 
Haida Gwaii project In July 2012, the Haida Salmon Restoration Corporation dispersed of iron sulphate dust into the Pacific Ocean several hundred miles west of the islands of Haida Gwaii. The Old Massett Village Council financed the action as a salmon enhancement project with $2.5 million in village funds. The concept was that the formerly iron-deficient waters would produce more phytoplankton that would in turn serve as a "pasture" to feed salmon. Then-CEO Russ George hoped to sell carbon offsets to recover the costs. The project was accompanied by charges of unscientific procedures and recklessness. George contended that 100 tons was negligible compared to what naturally enters the ocean. Some environmentalists called the dumping a "blatant violation" of two international moratoria. George said that the Old Massett Village Council and its lawyers approved the effort and at least seven Canadian agencies were aware of it. According to George, the 2013 salmon runs increased from 50 million to 226 million fish. However, many experts contend that changes in fishery stocks since 2012 cannot necessarily be attributed to the 2012 iron fertilization; many factors contribute to predictive models, and most data from the experiment are considered to be of questionable scientific value. On 15 July 2014, the data gathered during the project were made publicly available under the ODbL license. Experiments with iron-coated rice husks in Arabian Sea In 2022, a UK/India research team plans to place iron-coated rice husks in the Arabian Sea, to test whether increasing time at the surface can stimulate a bloom using less iron. The iron will be confined within a plastic bag reaching from the surface several kilometers down to the sea bottom. The Centre for Climate Repair at the University of Cambridge, along with India's Institute of Maritime Studies assessed the impact of iron seeding in another experiment. They spread iron-coated rice husks across an area of the Arabian Sea. Iron is a limiting nutrient in many ocean waters. They hoped that the iron would fertilize algae, which would bolster the bottom of the marine food chain and sequester carbon as uneaten algae died. The experiment was demolished by a storm, leaving inconclusive results. Science The maximum possible result from iron fertilization, assuming the most favourable conditions and disregarding practical considerations, is 0.29 W/m2 of globally averaged negative forcing, offsetting 1/6 of current levels of anthropogenic emissions. These benefits have been called into question by research suggesting that fertilization with iron may deplete other essential nutrients in the seawater causing reduced phytoplankton growth elsewhere — in other words, that iron concentrations limit growth more locally than they do on a global scale. Ocean fertilization occurs naturally when upwellings bring nutrient-rich water to the surface, as occurs when ocean currents meet an ocean bank or a sea mount. This form of fertilization produces the world's largest marine habitats. Fertilization can also occur when weather carries wind blown dust long distances over the ocean, or iron-rich minerals are carried into the ocean by glaciers, rivers and icebergs. Role of iron About 70% of the world's surface is covered in oceans. The part of these where light can penetrate is inhabited by algae (and other marine life). In some oceans, algae growth and reproduction is limited by the amount of iron. 
Iron is a vital micronutrient for phytoplankton growth and photosynthesis that has historically been delivered to the pelagic sea by dust storms from arid lands. This Aeolian dust contains 3–5% iron and its deposition has fallen nearly 25% in recent decades. The Redfield ratio describes the relative atomic concentrations of critical nutrients in plankton biomass and is conventionally written "106 C: 16 N: 1 P." This expresses the fact that one atom of phosphorus and 16 of nitrogen are required to "fix" 106 carbon atoms (or 106 molecules of ). Research expanded this constant to "106 C: 16 N: 1 P: .001 Fe" signifying that in iron deficient conditions each atom of iron can fix 106,000 atoms of carbon, or on a mass basis, each kilogram of iron can fix 83,000 kg of carbon dioxide. The 2004 EIFEX experiment reported a carbon dioxide to iron export ratio of nearly 3000 to 1. The atomic ratio would be approximately: "3000 C: 58,000 N: 3,600 P: 1 Fe". Therefore, small amounts of iron (measured by mass parts per trillion) in HNLC zones can trigger large phytoplankton blooms on the order of 100,000 kilograms of plankton per kilogram of iron. The size of the iron particles is critical. Particles of 0.5–1 micrometer or less seem to be ideal both in terms of sink rate and bioavailability. Particles this small are easier for cyanobacteria and other phytoplankton to incorporate and the churning of surface waters keeps them in the euphotic or sunlit biologically active depths without sinking for long periods. One way to add small amounts of iron to HNLC zones would be Atmospheric Methane Removal. Atmospheric deposition is an important iron source. Satellite images and data (such as PODLER, MODIS, MSIR) combined with back-trajectory analyses identified natural sources of iron–containing dust. Iron-bearing dusts erode from soil and are transported by wind. Although most dust sources are situated in the Northern Hemisphere, the largest dust sources are located in northern and southern Africa, North America, central Asia and Australia. Heterogeneous chemical reactions in the atmosphere modify the speciation of iron in dust and may affect the bioavailability of deposited iron. The soluble form of iron is much higher in aerosols than in soil (~0.5%). Several photo-chemical interactions with dissolved organic acids increase iron solubility in aerosols. Among these, photochemical reduction of oxalate-bound Fe(III) from iron-containing minerals is important. The organic ligand forms a surface complex with the Fe (III) metal center of an iron-containing mineral (such as hematite or goethite). On exposure to solar radiation the complex is converted to an excited energy state in which the ligand, acting as bridge and an electron donor, supplies an electron to Fe(III) producing soluble Fe(II). Consistent with this, studies documented a distinct diel variation in the concentrations of Fe (II) and Fe(III) in which daytime Fe(II) concentrations exceed those of Fe(III). Volcanic ash as an iron source Volcanic ash has a significant role in supplying the world's oceans with iron. Volcanic ash is composed of glass shards, pyrogenic minerals, lithic particles and other forms of ash that release nutrients at different rates depending on structure and the type of reaction caused by contact with water. Increases of biogenic opal in the sediment record are associated with increased iron accumulation over the last million years. 
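Returning to the Redfield-ratio figures earlier in this section, the mass figure quoted there (each kilogram of iron fixing roughly 83,000 kg of carbon dioxide) follows from the 106,000:1 carbon-to-iron atomic ratio and the standard molar masses of CO2 and iron. The short calculation below is a sketch of that arithmetic; the molar masses are supplied here rather than taken from the text.

# Sketch of the arithmetic behind "1 kg of iron fixes ~83,000 kg of CO2",
# using the 106,000 : 1 C : Fe atomic ratio quoted above.
# Molar masses (g/mol) are standard values supplied for the calculation.
MOLAR_MASS_CO2 = 44.01
MOLAR_MASS_FE = 55.85
C_ATOMS_PER_FE_ATOM = 106_000

kg_co2_per_kg_fe = C_ATOMS_PER_FE_ATOM * MOLAR_MASS_CO2 / MOLAR_MASS_FE
print(f"{kg_co2_per_kg_fe:,.0f} kg CO2 per kg Fe")  # roughly 83,500, close to the quoted 83,000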
In August 2008, an eruption in the Aleutian Islands deposited ash in the nutrient-limited Northeast Pacific. This ash and iron deposition resulted in one of the largest phytoplankton blooms observed in the subarctic. Carbon sequestration Previous instances of biological carbon sequestration triggered major climatic changes, lowering the temperature of the planet, such as the Azolla event. Plankton that generate calcium or silicon carbonate skeletons, such as diatoms, coccolithophores and foraminifera, account for most direct sequestration. When these organisms die their carbonate skeletons sink relatively quickly and form a major component of the carbon-rich deep sea precipitation known as marine snow. Marine snow also includes fish fecal pellets and other organic detritus, and steadily falls thousands of meters below active plankton blooms. Of the carbon-rich biomass generated by plankton blooms, half (or more) is generally consumed by grazing organisms (zooplankton, krill, small fish, etc.) but 20 to 30% sinks below into the colder water strata below the thermocline. Much of this fixed carbon continues into the abyss, but a substantial percentage is redissolved and remineralized. At this depth, however, this carbon is now suspended in deep currents and effectively isolated from the atmosphere for centuries. Analysis and quantification Evaluation of the biological effects and verification of the amount of carbon actually sequestered by any particular bloom involves a variety of measurements, combining ship-borne and remote sampling, submarine filtration traps, tracking buoy spectroscopy and satellite telemetry. Unpredictable ocean currents can remove experimental iron patches from the pelagic zone, invalidating the experiment. The potential of fertilization to tackle global warming is illustrated by the following figures. If phytoplankton converted all the nitrate and phosphate present in the surface mixed layer across the entire Antarctic circumpolar current into organic carbon, the resulting carbon dioxide deficit could be compensated by uptake from the atmosphere amounting to about 0.8 to 1.4 gigatonnes of carbon per year. This quantity is comparable in magnitude to annual anthropogenic fossil fuels combustion of approximately 6 gigatonnes. The Antarctic circumpolar current region is one of several in which iron fertilization could be conducted—the Galapagos islands area another potentially suitable location. Dimethyl sulfide and clouds Some species of plankton produce dimethyl sulfide (DMS), a portion of which enters the atmosphere where it is oxidized by hydroxyl radicals (OH), atomic chlorine (Cl) and bromine monoxide (BrO) to form sulfate particles, and potentially increase cloud cover. This may increase the albedo of the planet and so cause cooling—this proposed mechanism is central to the CLAW hypothesis. This is one of the examples used by James Lovelock to illustrate his Gaia hypothesis. During SOFeX, DMS concentrations increased by a factor of four inside the fertilized patch. Widescale iron fertilization of the Southern Ocean could lead to significant sulfur-triggered cooling in addition to that due to the uptake and that due to the ocean's albedo increase, however the amount of cooling by this particular effect is very uncertain. Financial opportunities Beginning with the Kyoto Protocol, several countries and the European Union established carbon offset markets which trade certified emission reduction credits (CERs) and other types of carbon credit instruments. 
In 2007 CERs sold for approximately €15–20 per ton of CO2. Iron fertilization is relatively inexpensive compared to scrubbing, direct injection and other industrial approaches, and can theoretically sequester CO2 for less than €5/ton, creating a substantial return. In August 2010, Russia established a minimum price of €10/ton for offsets to reduce uncertainty for offset providers. Scientists have reported a 6–12% decline in global plankton production since 1980. A full-scale plankton restoration program could regenerate approximately 3–5 billion tons of sequestration capacity worth €50–100 billion in carbon offset value. However, a 2013 study indicated that the costs versus benefits of iron fertilization put it behind carbon capture and storage and carbon taxes. Debate While ocean iron fertilization could represent a potent means to slow global warming, debate continues over the efficacy of the strategy and its potential adverse effects. Precautionary principle The precautionary principle is a proposed guideline regarding environmental conservation. According to an article published in 2021, the precautionary principle (PP) is a concept that states, "The PP means that when it is scientifically plausible that human activities may lead to morally unacceptable harm, actions shall be taken to avoid or diminish that harm: uncertainty should not be an excuse to delay action." Based on this principle, and because there is little data quantifying the effects of iron fertilization, it is the responsibility of leaders in this field to avoid the harmful effects of this procedure. This school of thought is one argument against using iron fertilization on a wide scale, at least until more data are available to analyze its repercussions. Ecological issues Critics are concerned that fertilization will create harmful algal blooms (HABs), as many toxic algae are often favored when iron is deposited into the marine ecosystem. A 2010 study of iron fertilization in an oceanic high-nitrate, low-chlorophyll environment found that fertilized Pseudo-nitzschia diatom spp., which are generally nontoxic in the open ocean, began producing toxic levels of domoic acid. Even short-lived blooms containing such toxins could have detrimental effects on marine food webs. Most species of phytoplankton are harmless or beneficial, given that they constitute the base of the marine food chain. Fertilization increases phytoplankton only in the open oceans (far from shore) where iron deficiency is substantial. Most coastal waters are replete with iron and adding more has no useful effect. Further, it has been shown that there are often higher mineralization rates with iron fertilization, leading to a turnover of the plankton masses that are produced. This results in no beneficial effect and actually causes an increase in CO2. Finally, a 2010 study showed that iron enrichment stimulates toxic diatom production in high-nitrate, low-chlorophyll areas which, the authors argue, raises "serious concerns over the net benefit and sustainability of large-scale iron fertilizations". Nitrogen released by cetaceans and iron chelate are a significant benefit to the marine food chain in addition to sequestering carbon for long periods of time. Ocean acidification A 2009 study tested the potential of iron fertilization to reduce both atmospheric CO2 and ocean acidity using a global ocean carbon model. 
The study found that "Our simulations show that ocean iron fertilization, even in the extreme scenario by depleting global surface macronutrient concentration to zero at all time, has a minor effect on mitigating CO2-induced acidification at the surface ocean." In other words, the impact on ocean acidification would likely be small, because iron fertilization has only a limited effect on atmospheric CO2 levels. History Consideration of iron's importance to phytoplankton growth and photosynthesis dates to the 1930s, when Dr Thomas John Hart, a British marine biologist working in the Southern Ocean, speculated in "On the phytoplankton of the South-West Atlantic and Bellingshausen Sea, 1929-31" that great "desolate zones" (areas apparently rich in nutrients, but lacking in phytoplankton activity or other sea life) might be iron-deficient. Hart returned to this issue in a 1942 paper entitled "Phytoplankton periodicity in Antarctic surface waters", but little other scientific discussion was recorded until the 1980s, when oceanographer John Martin of the Moss Landing Marine Laboratories renewed controversy on the topic with his marine water nutrient analyses. His studies supported Hart's hypothesis, and these "desolate" regions came to be called "high-nutrient, low-chlorophyll" (HNLC) regions. John Gribbin was the first scientist to publicly suggest that climate change could be reduced by adding large amounts of soluble iron to the oceans. Martin's 1988 quip four months later at Woods Hole Oceanographic Institution, "Give me a half a tanker of iron and I will give you an ice age," drove a decade of research. The findings suggested that iron deficiency was limiting ocean productivity and offered an approach to mitigating climate change as well. Perhaps the most dramatic support for Martin's hypothesis came with the 1991 eruption of Mount Pinatubo in the Philippines. Environmental scientist Andrew Watson analyzed global data from that eruption and calculated that it deposited approximately 40,000 tons of iron dust into oceans worldwide. This single fertilization event preceded an easily observed global decline in atmospheric CO2 and a parallel pulsed increase in oxygen levels. The parties to the London Dumping Convention adopted a non-binding resolution on fertilization in 2008 (labeled LC-LP.1(2008)). The resolution states that ocean fertilization activities, other than legitimate scientific research, "should be considered as contrary to the aims of the Convention and Protocol and do not currently qualify for any exemption from the definition of dumping". An Assessment Framework for Scientific Research Involving Ocean Fertilization, regulating the dumping of wastes at sea (labeled LC-LP.2(2010)), was adopted by the Contracting Parties to the Convention in October 2010 (LC 32/LP 5). Multiple ocean labs, scientists and businesses have explored fertilization. Beginning in 1993, thirteen research teams completed ocean trials demonstrating that phytoplankton blooms can be stimulated by iron augmentation. Controversy remains over the effectiveness of atmospheric CO2 sequestration and over ecological effects. Trials of ocean iron fertilization took place in 2009 in the South Atlantic by project LOHAFEX, and in July 2012 in the North Pacific off the coast of British Columbia, Canada, by the Haida Salmon Restoration Corporation (HSRC). 
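As a rough sanity check on the figures quoted in the carbon sequestration and financial sections above, the short sketch below (illustrative only; all inputs are the ranges stated in the text) compares the estimated Southern Ocean drawdown with the stated annual fossil-fuel emissions and multiplies the suggested sequestration capacity by the 2007 CER price range.

```python
# Rough sanity check of figures quoted above; inputs are the stated ranges.
drawdown_gt_c = (0.8, 1.4)      # Gt C per year, Antarctic circumpolar estimate
fossil_gt_c = 6.0               # Gt C per year, anthropogenic combustion (as stated)
capacity_gt = (3.0, 5.0)        # billion tons of sequestration capacity
cer_price_eur = (15.0, 20.0)    # euro per ton, 2007 CER price range

print([round(d / fossil_gt_c * 100) for d in drawdown_gt_c])       # roughly 13-23% of emissions
print([round(c * p) for c, p in zip(capacity_gt, cer_price_eur)])  # roughly 45-100 billion euro
```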
See also Carbon dioxide sink Iron chelate Ocean pipes Liebig's law of the minimum Iron cycle References Aquatic ecology Planetary engineering Climate engineering Carbon dioxide removal Climate change policy Ecological restoration
Iron fertilization
[ "Chemistry", "Engineering", "Biology" ]
6,405
[ "Planetary engineering", "Ecological restoration", "Geoengineering", "Ecosystems", "Environmental engineering", "Aquatic ecology" ]
4,145,648
https://en.wikipedia.org/wiki/Haplogroup%20L3
Haplogroup L3 is a human mitochondrial DNA (mtDNA) haplogroup. The clade has played a pivotal role in the early dispersal of anatomically modern humans. It is strongly associated with the out-of-Africa migration of modern humans of about 70–50,000 years ago. It is inherited by all modern non-African populations, as well as by some populations in Africa. Origin Haplogroup L3 arose close to 70,000 years ago, near the time of the recent out-of-Africa event. This dispersal originated in East Africa and expanded to West Asia, and further to South and Southeast Asia in the course of a few millennia, and some research suggests that L3 participated in this migration out of Africa. A 2007 estimate for the age of L3 suggested a range of 104–84,000 years ago. More recent analyses, including Soares et al. (2012) arrive at a more recent date, of roughly 70–60,000 years ago. Soares et al. also suggest that L3 most likely expanded from East Africa into Eurasia sometime around 65–55,000 years ago as part of the recent out-of-Africa event, as well as from East Africa into Central Africa from 60 to 35,000 years ago. In 2016, Soares et al. again suggested that haplogroup L3 emerged in East Africa, leading to the Out-of-Africa migration, around 70–60,000 years ago. Haplogroups L6 and L4 form sister clades of L3 which arose in East Africa at roughly the same time but which did not participate in the out-of-Africa migration. The ancestral clade L3'4'6 has been estimated at 110 kya, and the L3'4 clade at 95 kya. The possibility of an origin of L3 in Asia was proposed by Cabrera et al. (2018) based on the similar coalescence dates of L3 and its Eurasian-distributed M and N derivative clades (ca. 70 kya), the distant location in Southeast Asia of the oldest known subclades of M and N, and the comparable age of the paternal haplogroup DE. According to this hypothesis, after an initial out-of-Africa migration of bearers of pre-L3 (L3'4*) around 125 kya, there would have been a back-migration of females carrying L3 from Eurasia to East Africa sometime after 70 kya. The hypothesis suggests that this back-migration is aligned with bearers of paternal haplogroup E, which it also proposes to have originated in Eurasia. These new Eurasian lineages are then suggested to have largely replaced the old autochthonous male and female North-East African lineages. According to other research, though earlier migrations out of Africa of anatomically modern humans occurred, current Eurasian populations descend instead from a later migration from Africa dated between about 65,000 and 50,000 years ago (associated with the migration out of L3). Vai et al. (2019) suggest, from a newly discovered old and deeply-rooted branch of maternal haplogroup N found in early Neolithic North African remains, that haplogroup L3 originated in East Africa between 70,000 and 60,000 years ago, and both spread within Africa and left Africa as part of the Out-of-Africa migration, with haplogroup N diverging from it soon after (between 65,000 and 50,000 years ago) either in Arabia or possibly North Africa, and haplogroup M originating in the Middle East around the same time as N. A study by Lipson et al. 
(2019) analyzing remains from the Cameroonian site of Shum Laka found them to be more similar to modern-day Pygmy peoples than to West Africans, and suggests that several other groups (including the ancestors of West Africans, East Africans and the ancestors of non-Africans) commonly derived from a human population originating in East Africa between about 80,000-60,000 years ago, which they suggest was also the source and origin zone of haplogroup L3 around 70,000 years ago.<ref>Ancient Human DNA from Shum Laka (Cameroon) in the Context of African Population History, by Lipson Mark et al., 2019 | page=5</ref> Distribution L3 is common in Northeast Africa and some other parts of East Africa, in contrast to others parts of Africa where the haplogroups L1 and L2 represent around two thirds of mtDNA lineages. L3 sublineages are also frequent in the Arabian Peninsula. L3 is subdivided into several clades, two of which spawned the macrohaplogroups M and N that are today carried by most people outside Africa. There is at least one relatively deep non-M, non-N clade of L3 outside Africa, L3f1b6, which is found at a frequency of 1% in Asturias, Spain. It diverged from African L3 lineages at least 10,000 years ago. According to Maca-Meyer et al. (2001), "L3 is more related to Eurasian haplogroups than to the most divergent African clusters L1 and L2". L3 is the haplogroup from which all modern humans outside Africa derive. However, there is a greater diversity of major L3 branches within Africa than outside of it, the two major non-African branches being the L3 offshoots M and N. Subclade distribution L3 has seven equidistant descendants: L3a, L3b'f, L3c'd, L3e'i'k'x, L3h, M, N. Five are African, while two are associated with the Out of Africa event. N – Eurasia possibly due to migration from Africa, and North Africa possibly due to back-migration from Eurasia. M – Asia, the Mediterranean Basin, and parts of Africa due to back-migration. L3a – East Africa. Moderate to high frequencies found among the Sanye, Samburu, Iraqw, Yaaku, El-Molo and other minor indigenous populations from the East African Rift Valley. It is infrequent to nonexistent in Sudan and the Sahel zone. L3a1 – Found across Eastern Africa. Estimated age of 35.8–39.3 ka. L3a2 – Found across Eastern Africa. Estimated age of 48.3–57.7 ka. L3b'f L3b – Spread from East Africa in the upper paleolithic to West-Central Africa. Some subclades spread from Central Africa to East Africa with the Bantu migration. L3b1a – Common subclade. Estimated age of 11.7-14.8 ka. L3b1a2 – Subclade found in Northeast Africa, the Maghreb, and Middle East. Emerged 12–14 ka. L3f – Northeast Africa, Sahel, Arabian peninsula, Iberia. Gaalien, Beja L3f1 L3f1a – Carried by migrants from Eastern Africa into the Sahel and Central Africa. L3f1b – Carried by migrants from Eastern Africa into the Sahel and Central Africa. L3f1b1 – Carried from Central Africa into Southern and Eastern Africa with the Bantu migration. L3f1b1a – Settled from East-Central Africa to Central-West Africa and into North Africa and Berber regions. L3f1b4 – Carried from Central Africa into Southern and Eastern Africa with the Bantu migration. L3f1b6 – Rare, found in Iberia. L3f2 – Primarily distributed in East Africa. Also found in North Africa and Central Africa. L3f3 – Spread from Eastern Africa to Chad and the Sahel around 8–9 ka. Found in the Chad Basin. L3c'd L3c – Extremely rare lineage with only two samples found so far in Eastern Africa and the Near East. 
L3d – Spread from East Africa in the upper paleolithic to Central Africa. Some subclades spread to East Africa with the Bantu migration. Found among the Fulani, Chadians, Ethiopians, Akan people, Mozambique, Yemenites, Egyptians, and Berbers. L3d3a1 – Primarily found in Southern Africa. L3e'i'k'x L3e – Suggested to have originated in the Central Africa/Sudan region about 45,000 years ago during the upper paleolithic period. It is the most common L3 sub-clade in Bantu-speaking populations. L3e is also the most common L3 subclade amongst African Americans and Afro-Brazilians. L3e1 – Spread from West-Central Africa to Southwest Africa with the Bantu migration. Found in Angola (6.8%), Mozambique, and among the Kikuyu of Kenya, as well as in Yemen, among the Tikar of Cameroon, and among the Akan people of Ghana. L3e5 – Originated in the Chad Basin. Found in Algeria, as well as Burkina Faso, Nigeria, southern Tunisia, southern Morocco and Egypt. L3i – Almost exclusively found in East Africa. L3i1 L3i1b – Found in Yemen, Ethiopia, and among Gujarati Indians. L3i2 (former L3w) – Found in the Horn of Africa and Oman. L3k – Rare haplogroup primarily found in North Africa and the Sahel. L3x – Almost exclusively found in East Africa. Found among Ethiopian Oromos and Egyptians. L3h – Almost exclusively found in East Africa. L3h1 – Primarily found in East Africa, with branches of L3h1b1 sporadically found in the Sahel and North Africa. L3h2 – Found in Northeast Africa and Socotra. Split from other L3h branches as early as 65–69 ka during the middle paleolithic. Ancient and historic samples Haplogroup L3 has been observed in an ancient fossil belonging to the Pre-Pottery Neolithic B culture. L3x2a was observed in a 4,500-year-old hunter-gatherer excavated at Mota, Ethiopia, with the ancient fossil found to be most closely related to modern Southwest Ethiopian populations. Haplogroup L3 has also been found among ancient Egyptian mummies (1/90; 1%) excavated at the Abusir el-Meleq archaeological site in Middle Egypt, with the rest deriving from Eurasian subclades, which date from the Pre-Ptolemaic/late New Kingdom and Ptolemaic periods. The ancient Egyptian mummies bore a Near Eastern genomic component most closely related to that of modern Near Easterners. Additionally, haplogroup L3 has been observed in ancient Guanche fossils excavated in Gran Canaria and Tenerife on the Canary Islands, which have been radiocarbon-dated to between the 7th and 11th centuries CE. All of the clade-bearing individuals were inhumed at the Gran Canaria site, with most of these specimens found to belong to the L3b1a subclade (3/4; 75%) and with the rest from both islands (8/11; 72%) deriving from Eurasian subclades. The Guanche skeletons also bore an autochthonous Maghrebi genomic component that peaks among modern Berbers, which suggests that they originated from ancestral Berber populations inhabiting northwestern Africa. A variety of L3 lineages have been uncovered in ancient remains associated with the Pastoral Neolithic and Pastoral Iron Age of East Africa. Tree This phylogenetic tree of haplogroup L3 subclades is based on the paper by Mannis van Oven and Manfred Kayser, "Updated comprehensive phylogenetic tree of global human mitochondrial DNA variation", and subsequent published research. 
Most Recent Common Ancestor (MRCA) L1-6 L2-6 L2'3'4'6 L3'4'6 L3'4 L3 L3a L3a1 L3a1a L3a1b L3a2 L3a2a L3b'f L3b L3b1 L3b1a L3b1a1 L3b1a2 L3b1a3 L3b1a4 L3b1a5 L3b1a5a L3b1a6 L3b1a7 L3b1a7 L3b1a8 L3b1a9 L3b1a9a L3b1a10 L3b1a11 L3b1b L3b1b1 L3b2 L3b2a L3b2a L3b3 L3f L3f1 L3f1a L3f1a1 L3f1b L3f1b1 L3f1b2 L3f1b2a L3f1b3 L3f1b4 L3f1b4a L3f1b4a1 L3f1b4b L3f1b4c L3f1b5 L3f2 L3f2a L3f2b L3f3 L3f3a L3f3b L3c'd L3c L3d L3d1-5 L3d1 L3d1a L3d1a1 L3d1a1a L3d1b L3d1b1 L3d1c L3d1d 199 L3d2 L3d5 L3d3 L3d3a L3d4 L3d5 L3e'i'k'x L3e L3e1 L3e1a L3e1a1 L3e1a1a 152 L3e1a2 L3e1a3 L3e1b L3e1c L3e1d L3e1e L3e2 L3e2a L3e2a1 L3e2a1a L3e2a1b L3e2a1b1 L3e2b L3e2b1 L3e2b1a L3e2b2 L3e2b3 L3e3'4'5 L3e3'4 L3e3 L3e3a L3e3b L3e3b1 L3e4 L3e5 L3i L3i1 L3i1a L3i1b L3i2 L3k L3k1 L3x L3x1 L3x1a L3x1a1 L3x1a2 L3x1b L3x2 L3x2a L3x2a1 L3x2a1a L3x2b L3h L3h1 L3h1a L3h1a1 L3h1a2 L3h1a2a L3h1a2b L3h1b L3h1b1 L3h1b1a L3h1b1a1 L3h1b2 L3h2 M N Popular culture Writer Bonnie Greer is a member of haplogroup L3. Author Malcolm Gladwell is a member of haplogroup L3f1. Musician Branford Marsalis is a member of haplogroup L3f1b. See also Genealogical DNA test Genetic genealogy Haplogroup Population genetics References Notes External links General Ian Logan's Mitochondrial DNA Site Haplogroup L3 Mannis van Oven's PhyloTree.org – mtDNA subtree L3 MITOMAP's Haplogroup L3 Spread of Haplogroup L3, from National Geographic'' L3 Recent African origin of modern humans
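The flattened subclade listing above encodes a simple tree. As an illustration of how such a hierarchy can be handled programmatically, the sketch below (hypothetical code covering only a small excerpt of the clades named in this article) stores parent-to-children links and recovers the lineage path of a subclade back to L3.

```python
# Hypothetical excerpt of the L3 subclade tree above, as a parent -> children mapping.
L3_TREE = {
    "L3":        ["L3a", "L3b'f", "L3c'd", "L3e'i'k'x", "L3h", "M", "N"],
    "L3b'f":     ["L3b", "L3f"],
    "L3b":       ["L3b1"],            # L3b2, L3b3 omitted in this excerpt
    "L3b1":      ["L3b1a", "L3b1b"],
    "L3b1a":     ["L3b1a1", "L3b1a2"],
    "L3e'i'k'x": ["L3e", "L3i", "L3k", "L3x"],
    "L3e":       ["L3e1", "L3e2"],    # further branches omitted
}

def lineage(clade, tree=L3_TREE, root="L3"):
    """Return the path from the root haplogroup down to the requested clade."""
    def walk(node, path):
        if node == clade:
            return path + [node]
        for child in tree.get(node, []):
            found = walk(child, path + [node])
            if found:
                return found
        return None
    return walk(root, [])

print(lineage("L3b1a2"))  # ['L3', "L3b'f", 'L3b', 'L3b1', 'L3b1a', 'L3b1a2']
```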
Haplogroup L3
[ "Biology" ]
3,304
[ "Biological hypotheses", "Recent African origin of modern humans" ]
4,145,863
https://en.wikipedia.org/wiki/Diversified%20Pharmaceutical%20Services
Diversified Pharmaceutical Services entered the market in 1976 as the pharmacy benefit manager for United HealthCare, a leading managed care organization. It pioneered many cost containment strategies that are now core pharmacy benefit manager services and became a recognized leader in clinical programs. History Diversified Pharmaceutical Services (DPS) grew out of the pharmacy department within United Healthcare. The company was sold to SmithKline Beecham for $2.3 billion in May 1994. In 1999, it was acquired by Express Scripts in 1999 for $700 million in cash to create what was then the third largest pharmacy benefit manager in the United States. References Health maintenance organizations Life sciences industry Medical and health organizations based in Missouri 1994 mergers and acquisitions 1999 mergers and acquisitions
Diversified Pharmaceutical Services
[ "Biology" ]
141
[ "Life sciences industry" ]
4,145,906
https://en.wikipedia.org/wiki/Corpuscularianism
Corpuscularianism, also known as corpuscularism (), is a set of theories that explain natural transformations as a result of the interaction of particles (minima naturalia, partes exiles, partes parvae, particulae, and semina). It differs from atomism in that corpuscles are usually endowed with a property of their own and are further divisible, while atoms are neither. Although often associated with the emergence of early modern mechanical philosophy, and especially with the names of Thomas Hobbes, René Descartes, Pierre Gassendi, Robert Boyle, Isaac Newton, and John Locke, corpuscularian theories can be found throughout the history of Western philosophy. Overview Corpuscles vs. atoms Corpuscularianism is similar to the theory of atomism, except that where atoms were supposed to be indivisible, corpuscles could in principle be divided. In this manner, for example, it was theorized that mercury could penetrate into metals and modify their inner structure, a step on the way towards the production of gold by transmutation. Perceived vs. real properties Corpuscularianism was associated by its leading proponents with the idea that some of the apparent properties of objects are artifacts of the perceiving mind, that is, "secondary" qualities as distinguished from "primary" qualities. Corpuscles were thought to be unobservable and having a very limited number of basic properties, such as size, shape, and motion. Thomas Hobbes The philosopher Thomas Hobbes used corpuscularianism to justify his political theories in Leviathan. It was used by Newton in his development of the corpuscular theory of light, while Boyle used it to develop his mechanical corpuscular philosophy, which laid the foundations for the Chemical Revolution. Robert Boyle Corpuscularianism remained a dominant theory for centuries and was blended with alchemy by early scientists such as Robert Boyle and Isaac Newton in the 17th century. In his work The Sceptical Chymist (1661), Boyle abandoned the Aristotelian ideas of the classical elements—earth, water, air, and fire—in favor of corpuscularianism. In his later work, The Origin of Forms and Qualities (1666), Boyle used corpuscularianism to explain all of the major Aristotelian concepts, marking a departure from traditional Aristotelianism. Light corpuscules Alchemical corpuscularianism William R. Newman traces the origins from the fourth book of Aristotle, Meteorology. The "dry" and "moist" exhalations of Aristotle became the alchemical 'sulfur' and 'mercury' of the eighth-century Islamic alchemist, Jābir ibn Hayyān (died c. 806–816). Pseudo-Geber's Summa perfectionis contains an alchemical theory in which unified sulfur and mercury corpuscles, differing in purity, size, and relative proportions, form the basis of a much more complicated process. Importance to the development of modern scientific theory Several of the principles which corpuscularianism proposed became tenets of modern chemistry. The idea that compounds can have secondary properties that differ from the properties of the elements which are combined to make them became the basis of molecular chemistry. The idea that the same elements can be predictably combined in different ratios using different methods to create compounds with radically different properties became the basis of stoichiometry, crystallography, and established studies of chemical synthesis. 
The ability of chemical processes to alter the composition of an object without significantly altering its form is the basis of fossil theory via mineralization and the understanding of numerous metallurgical, biological, and geological processes. See also Atomic theory Atomism Classical element History of chemistry References Bibliography Further reading Atomism History of chemistry 13th century in science Metaphysical theories Particles
Corpuscularianism
[ "Physics" ]
774
[ "Particles", "Physical objects", "Matter" ]
4,145,940
https://en.wikipedia.org/wiki/Haplogroup%20Z
In human mitochondrial genetics, Haplogroup Z is a human mitochondrial DNA (mtDNA) haplogroup. Origin Haplogroup Z is believed to have arisen in Central Asia, and is a descendant of haplogroup CZ. Distribution The greatest clade diversity of haplogroup Z is found in East Asia and Central Asia. However, its greatest frequency appears in some peoples of Russia, such as Evens from Kamchatka (8/39 Z1a2a, 3/39 Z1a3, 11/39 = 28.2% Z total) and from Berezovka, Srednekolymsky District, Sakha Republic (3/15 Z1a3, 1/15 Z1a2a, 4/15 = 26.7% Z total), and among the Saami people of northern Scandinavia. With the exception of three Khakasses who belong to Z4, two Yakut who belong to Z3a1, two Yakut, a Yakutian Evenk, a Buryat, and an Altai Kizhi who belong to Z3(xZ3a, Z3c), and the presence of the Z3c clade among populations of Altai Republic, nearly all members of haplogroup Z in North Asia and Europe belong to subclades of Z1. The TMRCA of Z1 is 20,400 [95% CI 7,400 <-> 34,000] ybp according to Sukernik et al. 2012, 20,400 [95% CI 7,800 <-> 33,800] ybp according to Fedorova et al. 2013, or 19,600 [95% CI 12,500 <-> 29,300] ybp according to YFull. Among the members (Z1, Z2, Z3, Z4, and Z7) of haplogroup Z, Nepalese populations were characterized by rare clades Z3a1a and Z7, of which Z3a1a was the most frequent sub-clade in Newar, with a frequency of 16.5%. Z3, found in East Asia, North Asia, and MSEA, is the oldest member of haplogroup Z with an estimated age of ~ 25.4 Kya. Haplogroup Z3a1a is also detected in other Nepalese populations, such as Magar (5.4%), Tharu, Kathmandu (mixed population) and Nepali-other (mixed population from Kathmandu and Eastern Nepal). S6). Z3a1a1 detected in Tibet, Myanmar, Nepal, India, Thai-Laos and Vietnam trace their ancestral roots to China with a coalescent age of ~ 8.4 Kya Fedorova et al. 2013 have reported finding Z* (xZ1a, Z3, Z4) in 1/388 Turks and 1/491 Kazakhs. These individuals should belong to Z1* (elsewhere observed in a Tofalar), Z2 (observed in Japanese), Z7 (observed in the Himalaya), Z5 (observed in Japanese), or basal Z* (observed in a Blang individual in Northern Thailand). Subclades Tree This phylogenetic tree of haplogroup Z subclades is based on the paper by Mannis van Oven and Manfred Kayser Updated comprehensive phylogenetic tree of global human mitochondrial DNA variation and subsequent published research. Z Z* – Thailand (Blang in Chiang Rai Province) Z-T152C! 
(TMRCA 24,300 [95% CI 19,300 <-> 30,300] ybp) Z-T152C!* – Hong Kong Z1 (TMRCA 18,600 [95% CI 10,900 <-> 29,500] ybp) Z1a – Koryak, Buryat, Kalmyk, Mongol (Hinggan, Hulunbuir, Xilingol), Khakas, Shor, Altai Kizhi, Kazakh, Kyrgyz, Uyghur, Turk, Arab (Uzbekistan) (TMRCA 7,600 [95% CI 5,100 <-> 10,900] ybp) Z1a1 – Italy, Hungary (ancient Avar), Germany, Sweden, Kazakh, Uyghur, Buryat (TMRCA 5,600 [95% CI 2,500 <-> 10,900] ybp) Z1a1a – Khakas, Nogai, Udmurt, Russia (Krasnodar Krai, etc.), Abazin, Cherkessian, Finland, Norway, Sweden, Estonia, Ukrainian Z1a1a* – Norway (Vest-Agder, Aust-Agder), Finland, Sami (Västerbotten, Norrbotten), Komi, Russia (Chelyabinsk Oblast), Ket (lower Yenisey River basin) Z1a1a1 – Russia (Chelyabinsk Oblast) Z1a1a2 – Udmurt Z1a1a3 – Russia (Chelyabinsk Oblast, Novgorod Oblast), Poland Z1a1a4 – Finland (Eastern Finland Province), Estonia (Rapla County) Z1a1b – Evenk (Sakha Republic), Dolgan Z1a1b* – Nganasan (Taimyr Peninsula), Yukaghir (lower Indigirka River basin), Even (Sakkyryyr, Eveno-Bytantaysky National district or Momsky district of Sakha Republic), Evenk (Iengra River basin, Nyukzha river basin) Z1a1b1 Z1a1b1* – Buryat (Irkutsk Oblast) Z1a1b1a – Uyghur Z1a2 (TMRCA 5,400 [95% CI 2,400 <-> 10,400] ybp) Z1a2* – Ulchi (lower Amur River basin) Z1a2a – Itelmen, Koryak Z1a2a* – Even (Kamchatka), Yukaghir (upper Anadyr River basin) Z1a2a1 Z1a2a1* – Even (Kamchatka, Berezovka) Z1a2a1a – Even (Kamchatka), Evenk (village of Nelkan by the Maya River in the Okhotsk Region) Z1a3 (TMRCA 3,600 [95% CI 1,850 <-> 6,500] ybp) Z1a3* – Yukaghir (upper Anadyr River basin), Even (Tompo District, Eveno-Bytantaysky National district or Momsky district of Sakha Republic), Evenk (Nyukzha River basin), Yakut (central Yakutia) Z1a3a Z1a3a* – Even (Kamchatka) Z1a3a1 – Yukaghir (lower Kolyma River basin), Even (Berezovka) Z1a3b – Even (Berezovka), Yakut Z1a4 (TMRCA 5,500 [95% CI 3,200 <-> 9,000] ybp) Z1a4* – Uyghur, Tubalar, Buryat (Irkutsk Oblast) Z1a4a – Uyghur Z1b – Tofalar Z1b1 (G251A) - Tofalar (Karagas) from Alygdzher, Barghut Z2 – Japan (Tokyo, Aichi, etc.) (TMRCA 3,900 [95% CI 1,450 <-> 8,400] ybp) Z3 – China (Shanghai, Dengba, Xinjiang Uyghur, etc.), Singapore, Malaysia, Thailand (Lao Isan in Chaiyaphum Province), Vietnam, Uyghur, Evenk (Sakha Republic), Mongol (Hohhot, Tongliao, Chaoyang, Chifeng, Jiangsu), Buryat, Kalmyk, Altai Kizhi, Kyrgyz, Kazakh, Tajik, Azerbaijan, North Ossetian, Romania, USA (TMRCA 15,836 [SD 4,397] ybp) Z3a – China (Mongol, Xibo, Deng, etc.), Kazakh (TMRCA 12,900 [95% CI 9,000 <-> 18,000] ybp) Z3a1 Z3a1a Z3a1a - Nepal (Newar, Magar, Tharu, Eastern Nepal, Kathmandu) Z3a1a* – Lachungpa, Lepcha Z3a1a1 – China Z3a1a2 – Gallong, Dirang Monpa, Thailand (Khon Mueang in Mae Hong Son Province), Vietnam (Hà Nhì) Z3a1b – Yakut Z3a2 – Lachungpa Z3a2a – Lachungpa Z3a3 – Thailand (Palaung in Chiang Mai Province, Lawa in Mae Hong Son Province) Z3b – Deng, Gallong (TMRCA 8,400 [95% CI 2,300 <-> 21,500] ybp) Z3-G709A – Yakut, China (Han from Henan) Z3c – Altaian, Altai Kizhi, Iran, China (Kyrgyz from Tashkurgan, Mongol from Tongliao, etc.), Cambodia (Siem Reap), Vietnam Z3d – China (Han from Beijing, etc.), Taiwan (Minnan, etc.), Mongol (Inner Mongolia), Korea Z3+G11696A – China, Korea Z3+G11696A+C16380T - China Z3+G11696A+T454C - China Z3+T8227C – China Z3+T8227C+A13629G - China Z3+T8227C+T4363C - Korea Z3+T8227C+T4363C+A12996G - China (HGDP She people) Z3+T8227C+T4363C+A12996G+T773C - Pakistan (HGDP Hazara) Z3+G7337A - Japan (Tokyo), Kazakhstan (Jetisuu) Z3+A13105G! 
- China (Barghut from Inner Mongolia) Z3+A13105G!+A13434G - China (Han from Henan, etc.) Z4 – China (Suzhou, Mongol in Shandong, etc.), Thailand (Phuan in Suphan Buri Province), Philippines, Uzbekistan, Kazakhstan, Kalmyk, Khakas, Karanogai (TMRCA 14,900 [95% CI 9,200 <-> 22,800] ybp) Z4a – China (Han from Hunan and Denver, Mongol from Inner Mongolia, Liaoning, Heilongjiang, Hebei, Henan, Shandong, etc.), Uyghur, Daur, Japan (Tokyo) Z4a1 – China (Han from Wuhan, Mongol from Baotou and Xilingol) Z4a1a – China (Han from Hunan and Yunnan), Vietnam Z4a1a1 – Japan (Tokyo, etc.), South Korea Z7 – Dirang Monpa, Tibet (Tingri, Shannan) (TMRCA 1,750 [95% CI 275 <-> 6,200] ybp), Nepal (Newar) Z8* – Nepal (Newar) Z5 – Japan (Aichi) See also Genealogical DNA test Genetic genealogy Human mitochondrial genetics Population genetics Human mitochondrial DNA haplogroups References External links General Mannis van Oven's Phylotree Haplogroup Z Ian Logan's Mitochondrial DNA Site: Haplogroup Z Ian Logan's Mitochondrial DNA Site: Haplogroup Z2 Ian Logan's Mitochondrial DNA Site: Haplogroup Z3 Ian Logan's Mitochondrial DNA Site: Haplogroup Z4 Ian Logan's Mitochondrial DNA Site: Haplogroup Z7 YFull MTree's Haplogroup Z MITOMAP's Haplogroup Z FamilyTreeDNA's mtDNA Haplotree: Haplogroup Z Spread of Haplogroup Z, from National Geographic Z mtDNA
Haplogroup Z
[ "Biology" ]
2,692
[ "Genetics" ]
329,099
https://en.wikipedia.org/wiki/Secret%20decoder%20ring
A secret decoder ring (or secret decoder) is a device that allows one to decode a simple substitution cipher—or to encrypt a message by working in the opposite direction. As inexpensive toys, secret decoders have often been used as promotional items by retailers, as well as radio and television programs, from the 1930s through to the current day. Decoders, whether badges or rings, are an entertaining way for children to tap into a common fascination with encryption, ciphers, and secret codes, and are used to send hidden messages back and forth to one another. History Secret decoders are generally circular scales, descendants of the cipher disk developed in the 15th century by Leon Battista Alberti. Rather than the complex polyalphabetic Alberti cipher method, the decoders for children invariably use simple Caesar cipher substitutions. The most well-known example started in 1934 with the Ovaltine company's sponsored radio program Little Orphan Annie. The show's fan club, "Radio Orphan Annie's Secret Society", distributed a member's handbook that included a simple substitution cipher with a resulting numeric cipher text. This was followed the next year with a membership pin that included a cipher disk—enciphering the letters A–Z to numbers 1–26. From 1935 to 1940, metal decoders were produced for the promotion. From 1941 on, paper decoders were produced. Similar metal badges and pocket decoders continued with the Captain Midnight radio and television programs. None of these early decoders were in the form of finger rings; however, "secret compartment" rings were common radio program premiums. In the early 1960s, secret decoder rings appeared—notably in conjunction with the Jonny Quest television program sponsored by PF Shoes. A later, less ornate, decoder ring was offered by Kix Cereals. Today, high quality, stainless steel decoder rings for children and adults are being produced by companies such as Retroworks and ThinkGeek. Messages Ovaltine and other companies that marketed early decoders to children often included "secret messages" on their radio shows aimed at children. These could be decoded for a preview of the next episode of the show. Film references The film A Christmas Story (1983) depicts the Little Orphan Annie radio show transmitting a secret message that deciphered to: "Be sure to drink your Ovaltine", unlike the actual broadcasts' secret code segments, which usually previewed the upcoming episode. Decoder rings are mentioned by Arnold Schwarzenegger's character in Last Action Hero. A "Drogan's Decoder Wheel" is mentioned in the 1985 comedy movie Spies Like Us by characters played by Stephen Hoye and Dan Aykroyd. Laura Petrie mentions her husband Rob's "Captain Midnight Decoder Ring," in Season 5, episode 27 of The Dick Van Dyke Show. Kevin Pollak character Moishe Maisel finds a toy decoder ring in the cereal box of his grandson Ethan on Yom Kippur in season 2, episode 7 of "The Marvelous Mrs. Maisel" ("Look, She Made a Hat"). See also References Encryption devices History of cryptography Mechanical puzzles 1930s toys
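The cipher-disk scheme described above, with A–Z enciphered to the numbers 1–26 and optionally rotated as on the Radio Orphan Annie pin, amounts to a Caesar-style substitution and is simple to reproduce in code. The sketch below is a generic illustration of such a decoder; the offset value and sample message are hypothetical and do not reproduce any specific premium's key.

```python
import string

ALPHABET = string.ascii_uppercase  # A-Z

def encode(message, offset=0):
    """Encipher letters to numbers 1-26, rotated by a chosen offset.
    Offset 0 reproduces the plain A=1 ... Z=26 mapping described above;
    non-letter characters (spaces, punctuation) are dropped."""
    numbers = []
    for ch in message.upper():
        if ch in ALPHABET:
            shifted = (ALPHABET.index(ch) + offset) % 26
            numbers.append(shifted + 1)
    return numbers

def decode(numbers, offset=0):
    """Reverse the mapping, recovering the original letters."""
    return "".join(ALPHABET[(n - 1 - offset) % 26] for n in numbers)

secret = encode("DRINK YOUR OVALTINE", offset=3)
print(secret)
print(decode(secret, offset=3))  # -> DRINKYOUROVALTINE
```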
Secret decoder ring
[ "Mathematics" ]
650
[ "Recreational mathematics", "Mechanical puzzles" ]
329,115
https://en.wikipedia.org/wiki/Fulgurite
Fulgurites (), commonly called "fossilized lightning", are natural tubes, clumps, or masses of sintered, vitrified, or fused soil, sand, rock, organic debris and other sediments that sometimes form when lightning discharges into ground. When composed of silica, fulgurites are classified as a variety of the mineraloid lechatelierite. When ordinary negative polarity cloud-ground lightning discharges into a grounding substrate, greater than 100 million volts (100 MV) of potential difference may be bridged. Such current may propagate into silica-rich quartzose sand, mixed soil, clay, or other sediments, rapidly vaporizing and melting resistant materials within such a common dissipation regime. This results in the formation of generally hollow and/or vesicular, branching assemblages of glassy tubes, crusts, and clumped masses. Fulgurites have no fixed composition because their chemical composition is determined by the physical and chemical properties of whatever material is being struck by lightning. Fulgurites are structurally similar to Lichtenberg figures, which are the branching patterns produced on surfaces of insulators during dielectric breakdown by high-voltage discharges, such as lightning. Description Fulgurites are formed when lightning strikes the ground, fusing and vitrifying mineral grains. The primary SiO2 phase in common tube fulgurites is lechatelierite, an amorphous silica glass. Many fulgurites show some evidence of crystallization: in addition to glasses, many are partially protocrystalline or microcrystalline. Because fulgurites are generally amorphous in structure, fulgurites are classified as mineraloids. Peak temperatures within a lightning channel exceed 30,000 K, with sufficient pressure to produce planar deformation features in SiO2, a kind of polymorphism. This is also known colloquially as shocked quartz. Material properties (size, color, texture) of fulgurites vary widely, depending on the size of the lightning bolt and the composition and moisture content of the surface struck by lightning. Most natural fulgurites fall on a spectrum from white to black. Iron is a common impurity that can result in a deep brownish-green coloration. Lechatelierite similar to fulgurites can also be produced via controlled (or uncontrolled) arcing of artificial electricity into a medium. Downed high voltage power lines have produced brightly colored lechatelierites, due to the incorporation of copper or other materials from the power lines. Brightly colored lechatelierites resembling fulgurites are usually synthetic and reflect the incorporation of synthetic materials. However, lightning can strike man-made objects, resulting in colored fulgurites. The interior of Type I (sand) fulgurites normally is smooth or lined with fine bubbles, while their exteriors are coated with rough sedimentary particles or small rocks. Other types of fulgurites are usually vesicular, and may lack an open central tube; their exteriors can be porous or smooth. Branching fulgurites display fractal-like self-similarity and structural scale invariance as a macroscopic or microscopic network of root-like branches, and can display this texture without central channels or obvious divergence from morphology of context or target (e.g. sheet-like melt, rock fulgurites). Fulgurites are usually fragile, making the field collection of large specimens difficult. 
Fulgurites can exceed 20 centimeters in diameter and can penetrate deep into the subsoil, sometimes occurring as far as below the surface that was struck, although they may also form directly on a sedimentary surface. One of the longest fulgurites to have been found in modern times was a little over in length, found in northern Florida. The Yale University Peabody Museum of Natural History displays one of the longest known preserved fulgurites, approximately in length. Charles Darwin in The Voyage of the Beagle recorded that tubes such as these found in Drigg, Cumberland, UK reached a length of . Fulgerites at Winans Lake, Livingston County, Michigan, extended discontinuously throughout a 30 m range and arguably include the largest reported fulgurite mass ever recovered and described: its largest section extending approximately 16 ft (4.88 m) in length by 1 ft in diameter (30 cm). Classification Fulgurites have been classified into five types related to the type of sediment in which the fulgurite formed, as follows: Type I – sand fulgurites with tubaceous structure; their central axial void may be collapsed Type II – soil fulgurites; these are glass-rich, and form in a wide range of sediment compositions, including clay-rich soils, silt-rich soils, gravel-rich soils, and loessoid; these may be tubaceous, branching, vesicular, irregular/slaggy, or may display a combination of these structures, and can produce exogenic fulgurites (droplet fulgurites) Type III – caliche or calcic sediment fulgurites, having thick, often surficially glazed granular walls with calcium-rich vitreous groundmass with little or no lechatelierite glass; their shapes are variable, with multiple narrow central channels common, and can span the entire range of morphological and structural variation for fulguritic objects Type IV – rock fulgurites, which are either crusts on minimally altered rocks, networks of tunneling within rocks, vesicular outgassed rocks (often glazed by a silicide-rich and/or metal oxide crust), or completely vitrified and dense rock material and masses of these forms with little sedimentary groundmass Type V – [droplet] fulgurites (exogenic fulgurites), which show evidence of ejection (e.g. spheroidal, filamentous, or aerodynamic), related by composition to Type II and Type IV fulgurites phytofulgurite – a proposed class of objects resulting from partial to total alteration of biomass (e.g. grasses, lichens, moss, wood) by lightning, described as "natural glasses formed by cloud-to-ground lightning." These were excluded from the classification scheme because they are not glasses, so classifying them as a subset of fulgurites is debatable. Significance The presence of fulgurites in an area can be used to estimate the frequency of lightning over a period of time, which can help to understand past regional climates. Paleolightning is the study of various indicators of past lightning strikes, primarily in the form of fulgurites and lightning-induced remanent magnetization signatures. Many high-pressure, high-temperature materials have been observed in fulgurites. Many of these minerals and compounds are also known to be formed in extreme environments such as nuclear weapon tests, hypervelocity impacts, and interstellar space. Shocked quartz was first described in fulgurites in 1980. 
Other materials, including highly reduced silicon-metal alloys (silicides), the fullerene allotropes C60 (buckminsterfullerenes) and C70, as well as high-pressure polymorphs of SiO2, have since been identified in fulgurites. Reduced phosphides have been identified in fulgurites, in the form of schreibersite ( and ), and titanium(III) phosphide. These reduced compounds are otherwise rare on Earth due to the presence of oxygen in Earth's atmosphere, which creates oxidizing surface conditions. History Fulgurite tubes have been mentioned already by Persian polymaths Avicenna and Al-Biruni in the 11th century, without knowing their true origination. Over the following centuries fulgurites have been described but missinterpreted as a result of subterrestrial fires, falsely attributing curative powers to them, e.g. by Leonhard David Hermann 1711 in his Maslographia. Other famous natural scientists, among them Charles Darwin, Horace Bénédict de Saussure and Alexander von Humboldt gave attention to fulgurites, without discovering the relationship to lightning. In 1805 the true process of forming fulgurites by lightning strikes to the ground was identified by agriculturist Hentzen and mineralogist and mining engineer Johann Karl Wilhelm Voigt. In 1817 mineralogist and mining engineer Karl Gustav Fiedler published and comprehensively documented the phenomenon in the Annalen der Physik. See also Electromechanical disintegration Impactite Tektite Trinitite References External links H. J. Melosh, "Impact geologists, beware!" (). Geophysical Research Letters, Volume 44, Issue 17, pp. 8873–8874, 2017 Petrified Lightning by Peter E. Viemeister (PDF) Interview with artist Allan McCollum along with an historical archive of 66 versions of booklets included in Allan McCollum's exhibition, The Event: Petrified Lightning from Central Florida Mindat with location data W. M. Myers and Albert B. Peck, "A Fulgurite from South Amboy, New Jersey", American Mineralogist, Volume 10, pages 152–155, 1925 Vladimir A. Rakov, "Lightning Makes Glass", 29th Annual Conference of the Glass Art Society, Tampa, Florida, 1999 Geochemistry Glass in nature Lightning Metamorphic rocks Mineralogy Paleoclimatology
Fulgurite
[ "Physics", "Chemistry" ]
2,003
[ "Physical phenomena", "Electrical phenomena", "nan", "Lightning" ]
329,400
https://en.wikipedia.org/wiki/Solid%20of%20revolution
In geometry, a solid of revolution is a solid figure obtained by rotating a plane figure around some straight line (the axis of revolution), which may not intersect the generatrix (except at its boundary). The surface created by this revolution and which bounds the solid is the surface of revolution. Assuming that the curve does not cross the axis, the solid's volume is equal to the length of the circle described by the figure's centroid multiplied by the figure's area (Pappus's second centroid theorem). A representative disc is a three-dimensional volume element of a solid of revolution. The element is created by rotating a line segment (of length ) around some axis (located units away), so that a cylindrical volume of units is enclosed. Finding the volume Two common methods for finding the volume of a solid of revolution are the disc method and the shell method of integration. To apply these methods, it is easiest to draw the graph in question; identify the area that is to be revolved about the axis of revolution; determine the volume of either a disc-shaped slice of the solid, with thickness , or a cylindrical shell of width ; and then find the limiting sum of these volumes as approaches 0, a value which may be found by evaluating a suitable integral. A more rigorous justification can be given by attempting to evaluate a triple integral in cylindrical coordinates with two different orders of integration. Disc method The disc method is used when the slice that was drawn is perpendicular to the axis of revolution; i.e. when integrating parallel to the axis of revolution. The volume of the solid formed by rotating the area between the curves of and and the lines and about the -axis is given by If (e.g. revolving an area between the curve and the -axis), this reduces to: The method can be visualized by considering a thin horizontal rectangle at between on top and on the bottom, and revolving it about the -axis; it forms a ring (or disc in the case that ), with outer radius and inner radius . The area of a ring is , where is the outer radius (in this case ), and is the inner radius (in this case ). The volume of each infinitesimal disc is therefore . The limit of the Riemann sum of the volumes of the discs between and becomes integral (1). Assuming the applicability of Fubini's theorem and the multivariate change of variables formula, the disk method may be derived in a straightforward manner by (denoting the solid as D): Shell Method of Integration The shell method (sometimes referred to as the "cylinder method") is used when the slice that was drawn is parallel to the axis of revolution; i.e. when integrating perpendicular to the axis of revolution. The volume of the solid formed by rotating the area between the curves of and and the lines and about the -axis is given by If (e.g. revolving an area between curve and -axis), this reduces to: The method can be visualized by considering a thin vertical rectangle at with height , and revolving it about the -axis; it forms a cylindrical shell. The lateral surface area of a cylinder is , where is the radius (in this case ), and is the height (in this case ). Summing up all of the surface areas along the interval gives the total volume. 
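For reference, the standard textbook forms of the two volume formulas described above are given below; the function names f and g and the interval [a, b] follow the usual convention rather than any notation specific to this article, and revolution about other axes follows by the analogous substitutions.

```latex
% Disc (washer) method: region between y = f(x) and y = g(x), a <= x <= b,
% revolved about the x-axis:
V = \pi \int_a^b \left| f(x)^2 - g(x)^2 \right| \, dx
% which, when g(x) = 0, reduces to
V = \pi \int_a^b f(x)^2 \, dx
% Shell method: the same region revolved about the y-axis (with 0 <= a < b):
V = 2\pi \int_a^b x \left| f(x) - g(x) \right| \, dx
```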
This method may be derived with the same triple integral, this time with a different order of integration: Parametric form When a curve is defined by its parametric form in some interval , the volumes of the solids generated by revolving the curve around the -axis or the -axis are given by Under the same circumstances the areas of the surfaces of the solids generated by revolving the curve around the -axis or the -axis are given by This can also be derived from multivariable integration. If a plane curve is given by then its corresponding surface of revolution when revolved around the x-axis has Cartesian coordinates given by with . Then the surface area is given by the surface integral Computing the partial derivatives yields and computing the cross product yields where the trigonometric identity was used. With this cross product, we get where the same trigonometric identity was used again. The derivation for a surface obtained by revolving around the y-axis is similar. Polar form For a polar curve where and , the volumes of the solids generated by revolving the curve around the x-axis or y-axis are The areas of the surfaces of the solids generated by revolving the curve around the -axis or the -axis are given See also Gabriel's Horn Guldinus theorem Pseudosphere Surface of revolution Ungula Notes References () Integral calculus Solids
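As a quick numerical illustration that the disc and shell methods described above agree, the following sketch evaluates both integrals with a simple midpoint rule for an arbitrarily chosen example (the region bounded by y = x^2 and y = 0 for 0 <= x <= 2, revolved about the y-axis) and compares them with the exact value 8*pi.

```python
import math

def midpoint_integral(f, a, b, n=100_000):
    """Simple midpoint-rule quadrature, accurate enough for this check."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Shell method: V = 2*pi * integral_0^2 x * (x^2) dx
v_shell = 2 * math.pi * midpoint_integral(lambda x: x * x**2, 0, 2)

# Washer (disc) method, integrating in y: outer radius 2, inner radius sqrt(y),
# so V = pi * integral_0^4 (2^2 - y) dy
v_disc = math.pi * midpoint_integral(lambda y: 4 - y, 0, 4)

print(v_shell, v_disc, 8 * math.pi)  # all three agree (about 25.13)
```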
Solid of revolution
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
959
[ "Calculus", "Phases of matter", "Condensed matter physics", "Integral calculus", "Solids", "Matter" ]
329,473
https://en.wikipedia.org/wiki/Job%20satisfaction
Job satisfaction, employee satisfaction or work satisfaction is a measure of workers' contentment with their job, whether they like the job or individual aspects or facets of jobs, such as nature of work or supervision. Job satisfaction can be measured in cognitive (evaluative), affective (or emotional), and behavioral components. Researchers have also noted that job satisfaction measures vary in the extent to which they measure feelings about the job (affective job satisfaction). or cognitions about the job (cognitive job satisfaction). One of the most widely used definitions in organizational research is that of Edwin A. Locke (1976), who defines job satisfaction as "a pleasurable or positive emotional state resulting from the appraisal of one's job or job experiences" (p. 1304). Others have defined it as simply how content an individual is with their job; whether they like the job. It is assessed at both the global level (whether the individual is satisfied with the job overall), or at the facet level (whether the individual is satisfied with different aspects of the job). Spector (1997) lists 14 common facets: appreciation, communication, coworkers, fringe benefits, Job conditions, nature of the work, organization, personal growth, policies and procedures, promotion opportunities, recognition, security, and supervision. Evaluation Hulin and Judge (2003) have noted that job satisfaction includes multidimensional psychological responses to an individual's job, and that these personal responses have cognitive (evaluative), affective (or emotional), and behavioral components. Job satisfaction scales vary in the extent to which they assess the affective feelings about the job or the cognitive assessment of the job. Affective job satisfaction is a subjective construct representing an emotional feeling individuals have about their job. Hence, affective job satisfaction for individuals reflects the degree of pleasure or happiness their job in general induces. Cognitive job satisfaction is a more objective and logical evaluation of various facets of a job. Cognitive job satisfaction can be unidimensional if it comprises evaluation of just one facet of a job, such as pay or maternity leave, or multidimensional if two or more than two facets of a job are simultaneously evaluated. Cognitive job satisfaction does not assess the degree of pleasure or happiness that arises from specific job facets, but rather gauges the extent to which those job facets are judged by the job holder to be satisfactory in comparison with objectives they themselves set or with other jobs. While cognitive job satisfaction might help to bring about affective job satisfaction, the two constructs are distinct, not necessarily directly related, and have different antecedents and consequences. Job satisfaction can also be seen within the broader context of the range of issues which affect an individual's experience of work, or their quality of working life. Job satisfaction can be understood in terms of its relationships with other key factors, such as general well-being, stress at work, control at work, home-work interface, and working conditions. History The assessment of job satisfaction through employee anonymous surveys became commonplace in the 1930s. Although prior to that time there was the beginning of interest in employee attitudes, there were only a handful of studies published. 
Latham and Budworth note that Uhrbrock in 1934 was one of the first psychologists to use the newly developed attitude measurement techniques to assess factory worker attitudes. They also note that in 1935 Hoppock conducted a study that focused explicitly on job satisfaction that is affected by both the nature of the job and relationships with coworkers and supervisors. Models Affect theory Edwin A. Locke's Range of Affect Theory (1976) is arguably the most famous job satisfaction model. The main premise of this theory is that satisfaction is determined by a discrepancy between what one wants in a job and what one has in a job. Further, the theory states that how much one values a given facet of work (e.g. the degree of autonomy in a position) moderates how satisfied/dissatisfied one becomes when expectations are/are not met. When a person values a particular facet of a job, their satisfaction is more greatly impacted both positively (when expectations are met) and negatively (when expectations are not met), compared to one who does not value that facet. To illustrate, if Employee A values autonomy in the workplace and Employee B is indifferent about autonomy, then Employee A would be more satisfied in a position that offers a high degree of autonomy and less satisfied in a position with little or no autonomy compared to Employee B. This theory also states that too much of a particular facet will produce stronger feelings of dissatisfaction the more a worker values that facet. Dispositional approach The dispositional approach suggests that individuals vary in their tendency to be satisfied with their jobs, in other words, job satisfaction is to some extent an individual trait. This approach became a notable explanation of job satisfaction in light of evidence that job satisfaction tends to be stable over time and across careers and jobs. Research also indicates that identical twins raised apart have similar levels of job satisfaction. A significant model that narrowed the scope of the dispositional approach was the Core Self-evaluations Model, proposed by Timothy A. Judge, Edwin A. Locke, and Cathy C. Durham in 1997. Judge et al. argued that there are four core self-evaluations that determine one's disposition towards job satisfaction: self-esteem, general self-efficacy, locus of control, and neuroticism. This model states that higher levels of self-esteem (the value one places on oneself) and general self-efficacy (the belief in one's own competence) lead to higher work satisfaction. Having an internal locus of control (believing one has control over one's own life, as opposed to outside forces having control) leads to higher job satisfaction. Finally, lower levels of neuroticism lead to higher job satisfaction. Equity theory Equity Theory shows how a person views fairness in regard to social relationships such as with an employer. A person identifies the amount of input (things gained) from a relationship compared to the output (things given) to produce an input/output ratio. They then compare this ratio to the ratio of other people in deciding whether they have an equitable relationship. Equity Theory suggests that if an individual thinks there is an inequality between two social groups or individuals, the person is likely to be distressed because the ratio between the input and the output are not equal. For example, consider two employees who work the same job and receive the same pay and benefits. 
If one individual gets a pay raise for doing the same work as the other, then the less benefited individual will become distressed in the workplace. If, on the other hand, both individuals get pay raises and new responsibilities, then the feeling of equity will be maintained. Other psychologists have extended the equity theory, suggesting three behavioral response patterns to situations of perceived equity or inequity. These three types are benevolent, equity sensitive, and entitled. The level by each type affects motivation, job satisfaction, and job performance. Benevolent-Satisfied when they are under-rewarded compared with co-workers Equity sensitive-Believe everyone should be fairly rewarded Entitled-People believe that everything they receive is their just due Discrepancy theory The concept of discrepancy theory is to explain the ultimate source of anxiety and dejection. An individual who has not fulfilled their responsibilities may feel a sense of anxiety and regret for not performing well. They may also feel dejection due to not being able to achieve their hopes and aspirations. According to this theory, all individuals will learn what their obligations and responsibilities are for a particular function, and if they fail to fulfill those obligations then they are punished. Over time, these duties and obligations consolidate to form an abstracted set of principles, designated as a self-guide. Agitation and anxiety are the main responses when an individual fails to achieve the obligation or responsibility. This theory also explains that if achievement of the obligations is obtained then the reward can be praise, approval, or love. These achievements and aspirations also form an abstracted set of principles, referred to as the ideal self guide. When the individual fails to obtain these rewards, they begin to have feelings of dejection, disappointment, or even depression. Two-factor theory (motivator-hygiene theory) Frederick Herzberg's two-factor theory (also known as motivator-hygiene theory) attempts to explain satisfaction and motivation in the workplace. This theory states that satisfaction and dissatisfaction are driven by different factors – motivation and hygiene factors, respectively. An employee's motivation to work is continually related to job satisfaction of a subordinate. Motivation can be seen as an inner force that drives individuals to attain personal and organizational goals. Motivating factors are those aspects of the job that make people want to perform, and provide people with satisfaction, for example achievement in work, recognition, promotion opportunities. These motivating factors are considered to be intrinsic to the job, or the work carried out. Hygiene factors include aspects of the working environment such as pay, company policies, supervisory practices, and other working conditions. Herzberg's model has stimulated much research. In the 1970s, researchers were unable to reliably empirically prove the model however, with Hackman & Oldham suggesting that Herzberg's original formulation of the model may have been a methodological artifact. The theory has been criticized because it does not consider individual differences, conversely predicting all employees will react in an identical manner to changes in motivating/hygiene factors. The model has also been criticised in that it does not specify how motivating/hygiene factors are to be measured. 
Most studies use a quantitative approach, for example by applying validated instruments such as the Minnesota Satisfaction Questionnaire. Other studies have used a qualitative methodology, for example individual interviews. Job characteristics model Hackman & Oldham proposed the job characteristics model, which is widely used as a framework to study how particular job characteristics impact job outcomes, including job satisfaction. The five core job characteristics (skill variety, task identity, task significance, autonomy, and feedback) can be combined to form a motivating potential score (MPS) for a job, which can be used as an index of how likely a job is to affect an employee's attitudes and behaviors. Not everyone is equally affected by the MPS of a job. People who are high in growth need strength (the desire for autonomy, challenge and development of new skills on the job) are particularly affected by job characteristics. A meta-analysis of studies that assess the framework of the model provides some support for the validity of the JCM. Influencing factors Environmental factors Communication overload and underload One of the most important aspects of an individual's work in a modern organization concerns the management of communication demands that they encounter on the job. Demands can be characterized as a communication load, which refers to "the rate and complexity of communication inputs an individual must process in a particular time frame." Individuals in an organization can experience communication overload and communication underload, which can affect their level of job satisfaction. Communication overload can occur when "an individual receives too many messages in a short period of time," resulting in unprocessed information, or when an individual faces messages that are more complex and more difficult to process. Due to this process, "given an individual's style of work and motivation to complete a task, when more inputs exist than outputs, the individual perceives a condition of overload," which can be positively or negatively related to job satisfaction. In comparison, communication underload can occur when messages or inputs are sent below the individual's ability to process them. According to the ideas of communication overload and underload, if an individual does not receive enough input on the job or is unsuccessful in processing these inputs, the individual is more likely to become dissatisfied, aggravated, and unhappy with their work, leading to a low level of job satisfaction. Superior-subordinate communication Superior-subordinate communication is an important influence on job satisfaction in the workplace. The way in which subordinates perceive a supervisor's behavior can positively or negatively influence job satisfaction. Communication behavior such as facial expression, eye contact, vocal expression, and body movement is crucial to the superior-subordinate relationship. Nonverbal messages play a central role in interpersonal interactions with respect to impression formation, deception, attraction, social influence, and emotion. Nonverbal immediacy from the supervisor helps to increase interpersonal involvement with their subordinates, impacting job satisfaction. The manner in which supervisors communicate with their subordinates nonverbally may be more important than the verbal content. 
Individuals who dislike and think negatively about their supervisor are less willing to communicate or have motivation to work, whereas individuals who like and think positively of their supervisor are more likely to communicate and are satisfied with their job and work environment. A supervisor who uses nonverbal immediacy, friendliness, and open communication lines is more likely to receive positive feedback and high job satisfaction from a subordinate. Conversely, a supervisor who is antisocial, unfriendly, and unwilling to communicate will naturally receive negative feedback and create low job satisfaction in their subordinates. Strategic employee recognition A Watson Wyatt Worldwide study identified a positive relationship between a collegial and flexible work environment and an increase in shareholder value, suggesting that employee satisfaction is directly related to financial gain. Over 40 percent of the companies listed in the top 100 of Fortune magazine's "America's Best Companies to Work For" also appear on the Fortune 500. It is possible that successful workers enjoy working at successful companies; however, the Watson Wyatt Worldwide Human Capital Index study claims that effective human resources practices, such as employee recognition programs, lead to positive financial outcomes more often than positive financial outcomes lead to good practices. Employee recognition is not only about gifts and points. It is about changing the corporate culture in order to meet goals and initiatives and, most importantly, to connect employees to the company's core values and beliefs. Strategic employee recognition is seen as the most important program not only to improve employee retention and motivation but also to positively influence the financial situation. The difference between the traditional approach (gifts and points) and strategic recognition is the ability to serve as a serious business influencer that can advance a company's strategic objectives in a measurable way. "The vast majority of companies want to be innovative, coming up with new products, business models and better ways of doing things. However, innovation is not so easy to achieve. A CEO cannot just order it, and so it will be. You have to carefully manage an organization so that, over time, innovations will emerge." Individual factors Emotion Mood and emotions at work are related to job satisfaction. Moods tend to be longer lasting but often weaker states of uncertain origin, while emotions are often more intense, short-lived and have a clear object or cause. Some research suggests moods are related to overall job satisfaction. Positive and negative emotions have also been found to be significantly related to overall job satisfaction. Frequency of experiencing net positive emotion will be a better predictor of overall job satisfaction than will intensity of positive emotion when it is experienced. Emotion work (or emotion management) refers to various types of efforts to manage emotional states and displays. Emotion management includes all of the conscious and unconscious efforts to increase, maintain, or decrease one or more components of an emotion. Although early studies of the consequences of emotional work emphasized its harmful effects on workers, studies of workers in a variety of occupations suggest that the consequences of emotional work are not uniformly negative. It has been found that suppression of unpleasant emotions decreases job satisfaction and that amplification of pleasant emotions increases job satisfaction. 
The understanding of how emotion regulation relates to job satisfaction concerns two models: Emotional dissonance: a state of discrepancy between public displays of emotions and internal experiences of emotions, which often follows the process of emotion regulation. Emotional dissonance is associated with high emotional exhaustion, low organizational commitment, and low job satisfaction. Social interaction model: taking the social interaction perspective, workers' emotion regulation might beget responses from others during interpersonal encounters that subsequently impact their own job satisfaction. For example, the accumulation of favorable responses to displays of pleasant emotions might positively affect job satisfaction. Genetics The influence that genetics has had on a variety of individual differences is well documented. Some research suggests genetics also plays a role in the intrinsic, direct experiences of job satisfaction like challenge or achievement (as opposed to extrinsic, environmental factors like working conditions). Notably, Arvey et al. (1989) examined job satisfaction in 34 pairs of monozygotic twins who were reared apart to test for the existence of genetic influence on job satisfaction. After correcting for age and gender, they obtained an intra-class correlation of .31. This suggests that 31% of the variance in job satisfaction has a genetic basis; the estimate would be slightly larger if corrected for measurement error. They also found evidence of genetic heritability for job characteristics, such as complexity level, motor skill requirements, and physical demands. Personality Some research suggests an association between personality and job satisfaction. Specifically, this research describes the role of negative affectivity and positive affectivity. Negative affectivity is related strongly to the personality trait of neuroticism. Individuals high in negative affectivity are more prone to experience less job satisfaction. Positive affectivity is related strongly to the personality trait of extraversion. Those high in positive affectivity are more prone to be satisfied in most dimensions of their life, including their job. Differences in affectivity likely impact how individuals will perceive objective job circumstances like pay and working conditions, thus affecting their satisfaction in that job. Two further personality factors related to job satisfaction are alienation and locus of control. Employees who have an internal locus of control and feel less alienated are more likely to experience job satisfaction, job involvement and organizational commitment. A meta-analysis of 187 studies of job satisfaction concluded that high satisfaction was positively associated with internal locus of control. The study also showed that characteristics such as high Machiavellianism, narcissism, trait anger, and the type A personality dimensions of achievement striving and impatience/irritability are related to job satisfaction. Psychological well-being Psychological well-being (PWB) is defined as "the overall effectiveness of an individual's psychological functioning" as related to primary facets of one's life: work, family, community, etc. There are three defining characteristics of PWB. First, it is a phenomenological event, meaning that people are happy when they subjectively believe themselves to be so. Second, well-being involves some emotional conditions. 
In particular, psychologically well people are more prone to experience positive emotions and less prone to experience negative emotions. Third, well-being refers to one's life as a whole. It is a global evaluation. PWB is primarily measured using the eight-item Index of Psychological Well-Being (IPWB) developed by Berkman. The IPWB asks respondents to reply to a series of questions on how often they felt "pleased about accomplishing something", "bored", "depressed or unhappy", etc. PWB in the workplace plays an important role in determining job satisfaction and has attracted much research attention in recent years. These studies have focused on the effects of PWB on job satisfaction as well as job performance. One study noted that because job satisfaction is specific to one's job, the research that examined job satisfaction had not taken into account aspects of one's life external to the job. Prior studies had focused only on the work environment as the main determinant of job satisfaction. Ultimately, to better understand job satisfaction (and its close relative, job performance), it is important to take into account an individual's PWB. Research published in 2000 showed a significant correlation between PWB and job satisfaction (r = .35, p < .01). A follow-up study by the same authors in 2007 revealed similar results (r = .30, p < .01). In addition, these studies show that PWB is a better predictor of job performance than job satisfaction alone. Job satisfaction is more closely associated with mental health than with physical health. Measuring The majority of job satisfaction measures are self-reports and based on multi-item scales. Several measures have been developed over the years, although they vary in terms of how carefully and distinctively they are conceptualized with respect to affective or cognitive job satisfaction. They also vary in terms of the extent and rigour of their psychometric validation. The Brief Index of Affective Job Satisfaction (BIAJS) is a four-item, overtly affective as opposed to cognitive, measure of overall affective job satisfaction. The BIAJS differs from other job satisfaction measures in being comprehensively validated not just for internal consistency reliability, temporal stability, convergent and criterion-related validities, but also for cross-population invariance by nationality, job level, and job type. Reported internal consistency reliabilities range between 0.81 and 0.87. The Job Descriptive Index (JDI) is a specifically cognitive job satisfaction measure. It measures one's satisfaction in five facets: pay, promotions and promotion opportunities, coworkers, supervision, and the work itself. The scale is simple: participants answer either yes, no, or can't decide (indicated by '?') in response to whether given statements accurately describe their job. The Job Satisfaction Survey (JSS) measures satisfaction with nine facets: Pay, Promotion, Supervision, Fringe Benefits, Contingent Rewards, Operating Procedures, Coworkers, Nature of Work, and Communication. The Michigan Organizational Assessment Questionnaire job satisfaction subscale is a 3-item measure of general job satisfaction. It has been very popular with researchers. The Minnesota Satisfaction Questionnaire (MSQ) has 20 facets plus intrinsic and extrinsic satisfaction scores. There are long and short forms. The Short Index of Job Satisfaction (SIJS) is a free five-item measure which provides overall attitudinal job satisfaction scores. 
It was derived from the Index of Job Satisfaction (IJS), which originally had 18 items. The SIJS has shown good validity evidence based on internal structure (i.e., dimensionality, reliability of the scores, and measurement invariance across sex and countries) as well as good validity evidence based on relations to other variables (e.g., in US, Brazilian, and Portuguese samples). Relationships and practical implications Job satisfaction can be indicative of work behaviors such as organizational citizenship, and of withdrawal behaviors such as absenteeism and turnover. Further, job satisfaction can partially mediate the relationship between personality variables and deviant work behaviors. In one study, the most important predictor of job satisfaction was perceived organizational support, followed by organizational health. Research shows that staying healthy boosts a person's mindset, which influences job satisfaction. Moreover, a corporate wellness program can positively shape how employees feel about their work environment and office conditions. Positive psychological capital was also predicted by organizational health, which was highly related to work satisfaction. One common research finding is that job satisfaction is correlated with life satisfaction. This correlation is reciprocal, meaning people who are satisfied with life tend to be satisfied with their job and people who are satisfied with their job tend to be satisfied with life. In fact, a 2016 FlexJobs survey revealed that 97% of respondents believed a job that offered flexibility would positively impact their lives, 87% thought it would help lower stress, and 79% thought the flexibility would help them live healthier. Additionally, a second survey of 650 working parents revealed that flexible work arrangements can positively affect people's personal health as well as improve their romantic relationships, and that 99% of respondents believed a flexible job would make them a happier person in general. However, some research has found that job satisfaction is not significantly related to life satisfaction when other variables such as nonwork satisfaction and core self-evaluations are taken into account. An important finding for organizations to note is that job satisfaction has a rather tenuous correlation to productivity on the job. This is a vital piece of information to researchers and businesses, as the idea that satisfaction and job performance are directly related to one another is often cited in the media and in some non-academic management literature. A recent meta-analysis found surprisingly low correlations between job satisfaction and performance. Further, the meta-analysis found that the relationship between satisfaction and performance can be moderated by job complexity, such that for high-complexity jobs the correlation between satisfaction and performance is higher than for jobs of low to moderate complexity. Additionally, one longitudinal study indicated that among work attitudes, job satisfaction is a strong predictor of absenteeism, suggesting that increasing job satisfaction and organizational commitment are potentially good strategies for reducing absenteeism and turnover intentions. Recent research has also shown that intention to quit alone can have negative effects on performance, organizational deviance, and organizational citizenship behaviours. 
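To illustrate what it means for job complexity to moderate the satisfaction-performance link, the following simulation is purely illustrative: the slopes and sample size are invented and are not taken from the meta-analysis. It only shows how the same two variables can correlate weakly in one group and more strongly in another.

```python
# Purely illustrative: simulate a satisfaction-performance relationship whose
# strength differs by job complexity (a moderator). All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

def simulated_correlation(slope: float) -> float:
    satisfaction = rng.normal(size=n)
    performance = slope * satisfaction + rng.normal(size=n)   # weaker or stronger link
    return np.corrcoef(satisfaction, performance)[0, 1]

r_low_complexity = simulated_correlation(slope=0.15)   # low/moderate-complexity jobs
r_high_complexity = simulated_correlation(slope=0.60)  # high-complexity jobs
print(f"r (low to moderate complexity) ~ {r_low_complexity:.2f}")
print(f"r (high complexity)            ~ {r_high_complexity:.2f}")
```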
In short, the relationship of satisfaction to productivity is not as straightforward as often assumed and can be influenced by a number of different work-related constructs, and the notion that "a happy worker is a productive worker" should not be the foundation of organizational decision-making. For example, employee personality may even be more important than job satisfaction in regard to performance. Job satisfaction has also been found to be associated with the shorter job tenure observed among persons with severe mental illness. Absenteeism Numerous studies have examined the correlation between job satisfaction and absenteeism. For example, Goldberg and Waldman looked at absenteeism in two dimensions: total time lost (number of missed days) and the frequency of time lost. Self-reported data and records-based data were collected and compared. The following absenteeism measures were evaluated against the absenteeism predictors: self-reported time lost, self-reported frequency, and records-based time lost. Only three categories of predictors had a significant relationship and were taken into account further: health, wages, and position level. The results of this research revealed that absenteeism could not be predicted by job satisfaction, although other studies have found significant relationships. See also References Industrial and organizational psychology Employee relations Organizational behavior Subjective experience Happiness
Job satisfaction
[ "Biology" ]
5,327
[ "Behavior", "Organizational behavior", "Human behavior" ]
329,542
https://en.wikipedia.org/wiki/Disk%20%28mathematics%29
In geometry, a disk (also spelled disc) is the region in a plane bounded by a circle. A disk is said to be closed if it contains the circle that constitutes its boundary, and open if it does not. For a radius r, an open disk is usually denoted D_r and a closed disk D̄_r. However, in the field of topology the closed disk is usually denoted D^2 while the open disk is its interior, int D^2. Formulas In Cartesian coordinates, the open disk of center (a, b) and radius R is given by the formula D = {(x, y) in R^2 : (x − a)^2 + (y − b)^2 < R^2}, while the closed disk of the same center and radius is given by D̄ = {(x, y) in R^2 : (x − a)^2 + (y − b)^2 ≤ R^2}. The area of a closed or open disk of radius R is πR^2 (see area of a disk). Properties The disk has circular symmetry. The open disk and the closed disk are not topologically equivalent (that is, they are not homeomorphic), as they have different topological properties from each other. For instance, every closed disk is compact whereas every open disk is not compact. However, from the viewpoint of algebraic topology they share many properties: both of them are contractible and so are homotopy equivalent to a single point. This implies that their fundamental groups are trivial, and all homology groups are trivial except the 0th one, which is isomorphic to Z. The Euler characteristic of a point (and therefore also that of a closed or open disk) is 1. Every continuous map from the closed disk to itself has at least one fixed point (we don't require the map to be bijective or even surjective); this is the case n = 2 of the Brouwer fixed-point theorem. The statement is false for the open disk: consider for example the function f(x, y) = ((x + √(1 − y^2))/2, y), which maps every point of the open unit disk to another point of the open unit disk lying to the right of the given one, and so has no fixed point there. But for the closed unit disk it fixes every point on the half circle x^2 + y^2 = 1, x ≥ 0. As a statistical distribution A uniform distribution on a unit circular disk is occasionally encountered in statistics. It most commonly occurs in operations research in the mathematics of urban planning, where it may be used to model a population within a city. Other uses may take advantage of the fact that it is a distribution for which it is easy to compute the probability that a given set of linear inequalities will be satisfied. (Gaussian distributions in the plane require numerical quadrature.) "An ingenious argument via elementary functions" shows the mean Euclidean distance between two points in the unit disk to be 128/(45π) ≈ 0.905, while direct integration in polar coordinates shows the mean squared distance to be 1. If we are given an arbitrary location at a distance q from the center of the disk, it is also of interest to determine the average distance b(q) from points in the distribution to this location and the average square of such distances. The latter value can be computed directly as q^2 + 1/2. Average distance to an arbitrary internal point To find b(q) we need to look separately at the cases in which the location is internal or external, i.e. in which q < 1 or q > 1, and we find that in both cases the result can only be expressed in terms of complete elliptic integrals. If we consider an internal location, our aim (looking at the diagram) is to compute the expected value of the distance r under a distribution whose density is 1/π over the unit disk, integrating in polar coordinates centered on the fixed location, for which the area of a cell is r dr dθ; hence b(q) = (1/π) ∫ dθ ∫ r · r dr, where the inner integral runs from 0 to the distance s(θ) at which the ray in direction θ meets the boundary circle. Here s(θ) can be found in terms of q and θ using the law of cosines. The steps needed to evaluate the integral, together with several references, will be found in the paper by Lew et al.; the result is an expression involving K and E, the complete elliptic integrals of the first and second kinds. 
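Since these averages are awkward to evaluate by hand, a quick Monte Carlo check can be useful. The sketch below is an illustration added here (not part of the original article): it samples points uniformly from the unit disk using the standard r = √u trick and estimates the two quantities quoted above, the mean distance 128/(45π) and the mean squared distance 1.

```python
# Monte Carlo sketch (illustrative, not from the article): distances between
# uniformly random points in the unit disk, compared with the exact values.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

def sample_unit_disk(n: int) -> np.ndarray:
    r = np.sqrt(rng.uniform(size=n))              # sqrt gives a uniform density over the area
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    return np.column_stack((r * np.cos(theta), r * np.sin(theta)))

p, q = sample_unit_disk(n), sample_unit_disk(n)
d = np.linalg.norm(p - q, axis=1)

print(f"mean distance         ~ {d.mean():.4f}  (exact 128/(45*pi) = {128 / (45 * np.pi):.4f})")
print(f"mean squared distance ~ {(d ** 2).mean():.4f}  (exact 1)")
```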
Average distance to an arbitrary external point Turning to an external location, we can set up the integral in a similar way, this time integrating only over those directions θ for which the ray from the fixed location meets the disk; for each such direction the law of cosines gives the two values of r at which the ray crosses the boundary circle, and these serve as the limits of the inner integral. After a suitable substitution the resulting integral can be evaluated using standard integrals, and the answer is again expressible in terms of the complete elliptic integrals K and E. See also Unit disk, a disk with radius one Annulus (mathematics), the region between two concentric circles Ball (mathematics), the usual term for the 3-dimensional analogue of a disk Disk algebra, a space of functions on a disk Circular segment Orthocentroidal disk, containing certain centers of a triangle References Euclidean geometry Circles Planar surfaces
Disk (mathematics)
[ "Mathematics" ]
868
[ "Planes (geometry)", "Euclidean plane geometry", "Planar surfaces", "Circles", "Pi" ]
329,549
https://en.wikipedia.org/wiki/Surface%20of%20revolution
A surface of revolution is a surface in Euclidean space created by rotating a curve (the generatrix) one full revolution around an axis of rotation (normally not intersecting the generatrix, except at its endpoints). The volume bounded by the surface created by this revolution is the solid of revolution. Examples of surfaces of revolution generated by a straight line are cylindrical and conical surfaces, depending on whether or not the line is parallel to the axis. A circle that is rotated around any diameter generates a sphere of which it is then a great circle, and if the circle is rotated around an axis that does not intersect the interior of the circle, then it generates a torus which does not intersect itself (a ring torus). Properties The sections of the surface of revolution made by planes through the axis are called meridional sections. Any meridional section can be considered to be the generatrix in the plane determined by it and the axis. The sections of the surface of revolution made by planes that are perpendicular to the axis are circles. Some special cases of hyperboloids (of either one or two sheets) and elliptic paraboloids are surfaces of revolution. These may be identified as those quadratic surfaces all of whose cross sections perpendicular to the axis are circular. Area formula If the curve is described by the parametric functions x(t), y(t), with t ranging over some interval [a, b], and the axis of revolution is the x-axis, then the surface area A is given by the integral A = 2π ∫ from a to b of y(t) √((dx/dt)^2 + (dy/dt)^2) dt, provided that y(t) is never negative between the endpoints a and b. This formula is the calculus equivalent of Pappus's centroid theorem. The quantity √((dx/dt)^2 + (dy/dt)^2) dt comes from the Pythagorean theorem and represents a small segment of the arc of the curve, as in the arc length formula. The quantity 2π y(t) is the path of (the centroid of) this small segment, as required by Pappus' theorem. Likewise, when the axis of rotation is the y-axis and provided that x(t) is never negative, the area is given by A = 2π ∫ from a to b of x(t) √((dx/dt)^2 + (dy/dt)^2) dt. If the continuous curve is described by the function y = f(x), a ≤ x ≤ b, then the integral becomes A = 2π ∫ from a to b of f(x) √(1 + (f′(x))^2) dx for revolution around the x-axis, and A = 2π ∫ from a to b of x √(1 + (f′(x))^2) dx for revolution around the y-axis (provided x is never negative). These come from the above formula. This can also be derived from multivariable integration. If a plane curve is given by (x(t), y(t)), then its corresponding surface of revolution when revolved around the x-axis has Cartesian coordinates given by r(t, θ) = (x(t), y(t) cos θ, y(t) sin θ) with 0 ≤ θ ≤ 2π. Then the surface area is given by the surface integral A = ∬ |∂r/∂t × ∂r/∂θ| dθ dt. Computing the partial derivatives, taking their cross product, and simplifying its magnitude with the trigonometric identity sin^2 θ + cos^2 θ = 1 recovers the formula above; the derivation for a surface obtained by revolving around the y-axis is similar. For example, the spherical surface with unit radius is generated by the curve x(t) = cos t, y(t) = sin t, when t ranges over [0, π]. Its area is therefore A = 2π ∫ from 0 to π of sin t √(sin^2 t + cos^2 t) dt = 2π ∫ from 0 to π of sin t dt = 4π. For the case of the spherical curve with radius r, that is x(t) = r cos t, y(t) = r sin t rotated about the x-axis, the area is 4πr^2. A minimal surface of revolution is the surface of revolution of the curve between two given points which minimizes surface area. A basic problem in the calculus of variations is finding the curve between two points that produces this minimal surface of revolution. There are only two minimal surfaces of revolution (surfaces of revolution which are also minimal surfaces): the plane and the catenoid. Coordinate expressions A surface of revolution given by rotating a curve described by y = f(x) around the x-axis may be most simply described implicitly by y^2 + z^2 = f(x)^2. This yields the parametrization in terms of x and θ as (x, f(x) cos θ, f(x) sin θ). 
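As an illustrative numerical check (added here, not part of the original article), the parametrization and area formula above can be evaluated with simple quadrature: revolving the half-circle x = cos t, y = sin t for t in [0, π] about the x-axis should reproduce the area 4π of the unit sphere.

```python
# Illustrative numerical check of A = 2*pi * integral of y(t) * sqrt(x'(t)^2 + y'(t)^2) dt
# for the generatrix x = cos(t), y = sin(t), t in [0, pi] (which sweeps out the unit sphere).
import numpy as np

def area_of_revolution(y, dx, dy, a, b, n=200_000):
    t = np.linspace(a, b, n)
    integrand = y(t) * np.sqrt(dx(t) ** 2 + dy(t) ** 2)   # centroid path times arc-length element
    # trapezoidal rule, written out so it works on any NumPy version
    return 2.0 * np.pi * np.sum((integrand[:-1] + integrand[1:]) / 2.0 * np.diff(t))

area = area_of_revolution(y=np.sin, dx=lambda t: -np.sin(t), dy=np.cos, a=0.0, b=np.pi)
print(f"computed area ~ {area:.5f}, exact 4*pi = {4 * np.pi:.5f}")
```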
If instead we revolve the curve y = f(x) around the y-axis, the surface is parametrized in terms of x and θ as (x cos θ, f(x), x sin θ). If x and y are defined in terms of a parameter t, then we obtain a parametrization in terms of t and θ: the surface of revolution obtained by revolving the curve around the x-axis is described by (x(t), y(t) cos θ, y(t) sin θ), and the surface of revolution obtained by revolving the curve around the y-axis is described by (x(t) cos θ, y(t), x(t) sin θ). Geodesics Meridians are always geodesics on a surface of revolution. Other geodesics are governed by Clairaut's relation. Toroids A surface of revolution with a hole in it, where the axis of revolution does not intersect the surface, is called a toroid. For example, when a rectangle is rotated around an axis parallel to one of its edges, then a hollow square-section ring is produced. If the revolved figure is a circle, then the object is called a torus. See also Channel surface, a generalisation of a surface of revolution Gabriel's Horn Generalized helicoid Lemon (geometry), surface of revolution of a circular arc Liouville surface, another generalization of a surface of revolution Spheroid Surface integral Translation surface (differential geometry) References External links Integral calculus Surfaces of revolution
Surface of revolution
[ "Mathematics" ]
962
[ "Integral calculus", "Calculus" ]
329,553
https://en.wikipedia.org/wiki/Restricted%20use%20pesticide
Restricted use pesticides (RUP) are pesticides not available to the general public in the United States. Fulfilling its pesticide regulation responsibilities, the United States Environmental Protection Agency (EPA) registers all pesticides as either "unclassified" or "restricted use". Unclassified pesticides are available over-the-counter, while the latter require a license to purchase and apply the product. Pesticides are classified as "restricted use" for a variety of reasons, such as potential for or history of groundwater contamination. The RUP classification restricts a product, or its uses, to use by a certified pesticide applicator or under the direct supervision of a certified applicator. Certification programs are administered by the federal government, individual states, and by company policies that vary from state to state. This is managed by the EPA under the Worker Protection Standard, in cooperation with the United States Department of Agriculture. The RUP list is part of Title 40 of the Code of Federal Regulations (40 CFR 152.175). Atrazine is the most widely used restricted-use herbicide; however, there were over 700 RUPs in total as of 2017. Many insecticides and fungicides used in fruit production are restricted use. License The Worker Protection Standard (WPS) identifies the type of requirements that must be satisfied to obtain the proper license needed to purchase and apply restricted use pesticides. The process required to obtain a pest control license is regulated by a combination of state laws, federal laws, common law, and private company policies. All RUP applications must be recorded to identify the date, location, and type of pesticide applied. Federal law requires a minimum record retention period, which may be three years or longer depending upon state laws. There are two licensee categories: supervisor and applicator. A pest control supervisor license is required to purchase RUPs. Duties of a licensed pest control supervisor include: ensuring that pest control applicators are competent to use any restricted use products; maintaining application records for 3 years or more, as determined by state and federal laws (these records must identify the date, location, and type of pesticide that has been applied); and notifying the local government agency responsible for air quality, to satisfy right-to-know laws regarding public health and safety risks when restricted use pesticides are applied outside buildings. See also Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) Pesticide misuse Toxicity class References External links Restricted Use Products (RUP) Report Pesticides Pesticide regulation
Restricted use pesticide
[ "Chemistry", "Biology", "Environmental_science" ]
519
[ "Toxicology", "Pesticides", "Regulation of chemicals", "Pesticide regulation", "Biocides" ]
329,625
https://en.wikipedia.org/wiki/Mimesis
Mimesis (; , mīmēsis) is a term used in literary criticism and philosophy that carries a wide range of meanings, including imitatio, imitation, nonsensuous similarity, receptivity, representation, mimicry, the act of expression, the act of resembling, and the presentation of the self. The original Ancient Greek term mīmēsis () derives from mīmeisthai (, 'to imitate'), itself coming from mimos (μῖμος, 'imitator, actor'). In ancient Greece, mīmēsis was an idea that governed the creation of works of art, in particular, with correspondence to the physical world understood as a model for beauty, truth, and the good. Plato contrasted mimesis, or imitation, with diegesis, or narrative. After Plato, the meaning of mimesis eventually shifted toward a specifically literary function in ancient Greek society. One of the best-known modern studies of mimesis—understood in literature as a form of realism—is Erich Auerbach's Mimesis: The Representation of Reality in Western Literature, which opens with a comparison between the way the world is represented in Homer's Odyssey and the way it appears in the Bible. In addition to Plato and Auerbach, mimesis has been theorised by thinkers as diverse as Aristotle, Philip Sidney, Jean Baudrillard (via his concept of Simulacra and Simulation) Samuel Taylor Coleridge, Adam Smith, Gabriel Tarde, Sigmund Freud, Walter Benjamin, Theodor Adorno, Paul Ricœur, Guy Debord ( via his conceptual polemical tract,The Society of the Spectacle ) Luce Irigaray, Jacques Derrida, René Girard, Nikolas Kompridis, Philippe Lacoue-Labarthe, Michael Taussig, Merlin Donald, Homi Bhabha, Roberto Calasso, and Nidesh Lawtoo. During the nineteenth century, the racial politics of imitation towards African Americans influenced the term mimesis and its evolution. Classical definitions Plato Both Plato and Aristotle saw in mimesis the representation of nature, including human nature, as reflected in the dramas of the period. Plato wrote about mimesis in both Ion and The Republic (Books II, III, and X). In Ion, he states that poetry is the art of divine madness, or inspiration. Because the poet is subject to this divine madness, instead of possessing "art" or "knowledge" (techne) of the subject, the poet does not speak truth (as characterized by Plato's account of the Forms). As Plato has it, truth is the concern of the philosopher. As culture in those days did not consist in the solitary reading of books, but in the listening to performances, the recitals of orators (and poets), or the acting out by classical actors of tragedy, Plato maintained in his critique that theatre was not sufficient in conveying the truth. He was concerned that actors or orators were thus able to persuade an audience by rhetoric rather than by telling the truth. In Book II of The Republic, Plato describes Socrates' dialogue with his pupils. Socrates warns we should not seriously regard poetry as being capable of attaining the truth and that we who listen to poetry should be on our guard against its seductions, since the poet has no place in our idea of God. Developing upon this in Book X, Plato told of Socrates's metaphor of the three beds: One bed exists as an idea made by God (the Platonic ideal, or form); one is made by the carpenter, in imitation of God's idea; and one is made by the artist in imitation of the carpenter's. So the artist's bed is twice removed from the truth. 
Those who copy only touch on a small part of things as they really are, where a bed may appear differently from various points of view, looked at obliquely or directly, or differently again in a mirror. So painters or poets, though they may paint or describe a carpenter, or any other maker of things, know nothing of the carpenter's (the craftsman's) art, and though the better painters or poets they are, the more faithfully their works of art will resemble the reality of the carpenter making a bed, the imitators will nonetheless still not attain the truth (of God's creation). The poets, beginning with Homer, far from improving and educating humanity, do not possess the knowledge of craftsmen and are mere imitators who copy again and again images of virtue and rhapsodise about them, but never reach the truth in the way the superior philosophers do. Aristotle Similar to Plato's writings about mimesis, Aristotle also defined mimesis as the perfection and imitation of nature. Art is not only imitation but also the use of mathematical ideas and symmetry in the search for the perfect, the timeless, and contrasting being with becoming. Nature is full of change, decay, and cycles, but art can also search for what is everlasting and the first causes of natural phenomena. Aristotle wrote about the idea of four causes in nature. The first, the formal cause, is like a blueprint, or an immortal idea. The second cause is the material cause, or what a thing is made out of. The third cause is the efficient cause, that is, the process and the agent by which the thing is made. The fourth, the final cause, is the good, or the purpose and end of a thing, known as telos. Aristotle's Poetics is often referred to as the counterpart to this Platonic conception of poetry. Poetics is his treatise on the subject of mimesis. Aristotle was not against literature as such; he stated that human beings are mimetic beings, feeling the urge to create texts (art) that reflect and represent reality. Aristotle considered it important that there be a certain distance between the work of art on the one hand and life on the other; we draw knowledge and consolation from tragedies only because they do not happen to us. Without this distance, tragedy could not give rise to catharsis. However, it is equally important that the text causes the audience to identify with the characters and the events in the text, and unless this identification occurs, it does not touch us as an audience. Aristotle holds that it is through "simulated representation," mimesis, that we respond to the acting on the stage, which is conveying to us what the characters feel, so that we may empathise with them in this way through the mimetic form of dramatic roleplay. It is the task of the dramatist to produce the tragic enactment to accomplish this empathy by means of what is taking place on stage. In short, catharsis can be achieved only if we see something that is both recognisable and distant. Aristotle argued that literature is more interesting as a means of learning than history, because history deals with specific facts that have happened, and which are contingent, whereas literature, although sometimes based on history, deals with events that could have taken place or ought to have taken place. Aristotle thought of drama as being "an imitation of an action" and of tragedy as "falling from a higher to a lower estate" and so being removed to a less ideal situation in more tragic circumstances than before. 
He posited the characters in tragedy as being better than the average human being, and those of comedy as being worse. Michael Davis, a translator and commentator of Aristotle writes: Contrast to diegesis It was also Plato and Aristotle who contrasted mimesis with diegesis (Greek: διήγησις). Mimesis shows, rather than tells, by means of directly represented action that is enacted. Diegesis, however, is the telling of the story by a narrator; the author narrates action indirectly and describes what is in the characters' minds and emotions. The narrator may speak as a particular character or may be the "invisible narrator" or even the "all-knowing narrator" who speaks from above in the form of commenting on the action or the characters. In Book III of his Republic (c. 373 BC), Plato examines the style of poetry (the term includes comedy, tragedy, and epic and lyric poetry): all types narrate events, he argues, but by differing means. He distinguishes between narration or report (diegesis) and imitation or representation (mimesis). Tragedy and comedy, he goes on to explain, are wholly imitative types; the dithyramb is wholly narrative; and their combination is found in epic poetry. When reporting or narrating, "the poet is speaking in his own person; he never leads us to suppose that he is anyone else;" when imitating, the poet produces an "assimilation of himself to another, either by the use of voice or gesture." In dramatic texts, the poet never speaks directly; in narrative texts, the poet speaks as himself or herself. In his Poetics, Aristotle argues that kinds of poetry (the term includes drama, flute music, and lyre music for Aristotle) may be differentiated in three ways: according to their medium, according to their objects, and according to their mode or manner (section I); "For the medium being the same, and the objects the same, the poet may imitate by narration—in which case he can either take another personality, as Homer does, or speak in his own person, unchanged—or he may present all his characters as living and moving before us." Though they conceive of mimesis in quite different ways, its relation with diegesis is identical in Plato's and Aristotle's formulations. In ludology, mimesis is sometimes used to refer to the self-consistency of a represented world, and the availability of in-game rationalisations for elements of the gameplay. In this context, mimesis has an associated grade: highly self-consistent worlds that provide explanations for their puzzles and game mechanics are said to display a higher degree of mimesis. This usage can be traced back to the essay "Crimes Against Mimesis". Dionysian imitatio Dionysian imitatio is the influential literary method of imitation as formulated by Greek author Dionysius of Halicarnassus in the 1st century BC, who conceived it as technique of rhetoric: emulating, adapting, reworking, and enriching a source text by an earlier author. Dionysius' concept marked a significant departure from the concept of mimesis formulated by Aristotle in the 4th century BC, which was only concerned with "imitation of nature" rather than the "imitation of other authors." Latin orators and rhetoricians adopted the literary method of Dionysius' imitatio and discarded Aristotle's mimesis. Modern usage Samuel Taylor Coleridge Referring to it as imitation, the concept of mimesis was crucial for Samuel Taylor Coleridge's theory of the imagination. 
Coleridge begins his thoughts on imitation and poetry from Plato, Aristotle, and Philip Sidney, adopting their concept of imitation of nature instead of other writers. His departure from the earlier thinkers lies in his arguing that art does not reveal a unity of essence through its ability to achieve sameness with nature. Coleridge claims: Here, Coleridge opposes imitation to copying, the latter referring to William Wordsworth's notion that poetry should duplicate nature by capturing actual speech. Coleridge instead argues that the unity of essence is revealed precisely through different materialities and media. Imitation, therefore, reveals the sameness of processes in nature. Erich Auerbach One of the best-known modern studies of mimesis—understood in literature as a form of realism—is Erich Auerbach's Mimesis: The Representation of Reality in Western Literature (1953), which opens with a famous comparison between the way the world is represented in Homer's Odyssey and the way it appears in the Bible. From these two seminal texts Auerbach builds the foundation for a unified theory of representation that spans the entire history of Western literature, including the Modernist novels being written at the time Auerbach began his study. Walter Benjamin In his essay, "On The Mimetic Faculty"(1933) Walter Benjamin outlines connections between mimesis and sympathetic magic, imagining a possible origin of astrology arising from an interpretation of human birth that assumes its correspondence with the apparition of a seasonally rising constellation augurs that new life will take on aspects of the myth connected to the star. Luce Irigaray Belgian feminist Luce Irigaray used the term to describe a form of resistance where women imperfectly imitate stereotypes about themselves to expose and undermine such stereotypes. Michael Taussig In Mimesis and Alterity (1993), anthropologist Michael Taussig examines the way that people from one culture adopt another's nature and culture (the process of mimesis) at the same time as distancing themselves from it (the process of alterity). He describes how a legendary tribe, the "White Indians" (the Guna people of Panama and Colombia), have adopted in various representations figures and images reminiscent of the white people they encountered in the past (without acknowledging doing so). Taussig, however, criticises anthropology for reducing yet another culture, that of the Guna, for having been so impressed by the exotic technologies of the whites that they raised them to the status of gods. To Taussig this reductionism is suspect, and he argues this from both sides in his Mimesis and Alterity to see values in the anthropologists' perspective while simultaneously defending the independence of a lived culture from the perspective of anthropological reductionism. René Girard In Things Hidden Since the Foundation of the World (1978), René Girard posits that human behavior is based upon mimesis, and that imitation can engender pointless conflict. Girard notes the productive potential of competition: "It is because of this unprecedented capacity to promote competition within limits that always remain socially, if not individually, acceptable that we have all the amazing achievements of the modern world," but states that competition stifles progress once it becomes an end in itself: "rivals are more apt to forget about whatever objects are the cause of the rivalry and instead become more fascinated with one another." 
Roberto Calasso In The Unnameable Present, Calasso outlines the way that mimesis, called "Mimickry" by Joseph Goebbels—though it is a universal human ability—was interpreted by the Third Reich as being a sort of original sin attributable to "the Jew." Thus, an objection to the tendency of human beings to mimic one another instead of "just being themselves" and a complementary, fantasized desire to achieve a return to an eternally static pattern of predation by means of "will" expressed as systematic mass-murder became the metaphysical argument (underlying circumstantial, temporally contingent arguments deployed opportunistically for propaganda purposes) for perpetrating the Holocaust amongst the Nazi elite. Insofar as this issue or this purpose was ever even explicitly discussed in print by Hitler's inner-circle, in other words, this was the justification (appearing in the essay "Mimickry" in a war-time book published by Joseph Goebbels). The text suggests that a radical failure to understand the nature of mimesis as an innate human trait or a violent aversion to the same, tends to be a diagnostic symptom of the totalitarian or fascist character if it is not, in fact, the original unspoken occult impulse that animated the production of totalitarian or fascist movements to begin with. Calasso's argument here echoes, condenses and introduces new evidence to reinforce one of the major themes of Adorno and Horkheimer's Dialectic of the Enlightenment (1944), which was itself in dialog with earlier work hinting in this direction by Walter Benjamin who died during an attempt to escape the gestapo. Calasso insinuates and references this lineage throughout the text. The work can be read as a clarification of their earlier gestures in this direction, written while the Holocaust was still unfolding. Calasso's earlier book The Celestial Hunter, written immediately prior to The Unnamable Present, is an informed and scholarly speculative cosmology depicting the possible origins and early prehistoric cultural evolution of the human mimetic faculty. In particular, the books first and fifth chapters ("In The Time of the Great Raven" and "Sages & Predators") focuses on the terrain of mimesis and its early origins, though insights in this territory appear as a motif in every chapter of the book. Nidesh Lawtoo In Homo Mimeticus (2022) Swiss philosopher and critic Nidesh Lawtoo develops a relational theory of mimetic subjectivity arguing that not only desires but all affects are mimetic, for good and ill. Lawtoo opens up the transdisciplinary field of "mimetic studies" to account for the proliferation of hypermimetic affects in the digital age. See also Similarity (philosophy) Man, Play and Games (Roger Caillois) Anti-mimesis Mimesis criticism Dionysian imitatio References Classical sources Citations Bibliography Auerbach, Erich . 1953. Mimesis: The Representation of Reality in Western Literature . Princeton: Princeton UP. . Coleridge, Samuel T. 1983. Biographia Literaria, vol. 1, edited by J. Engell and W. J. Bate. Princeton, NJ: Princeton UP. . Davis, Michael. 1999. The Poetry of Philosophy: On Aristotle's Poetics . South Bend, IN: St Augustine's P. . Elam, Keir. 1980. The Semiotics of Theatre and Drama , New Accents series. London: Methuen. . Gebauer, Gunter, and Christoph Wulf. [1992] 1995. Mimesis: Culture—Art—Society, translated by D. Reneau. Berkeley, CA: U of California Press. . Girard, René. 2008. Mimesis and Theory: Essays on Literature and Criticism, 1953–2005, edited by R. Doran. 
Stanford: Stanford University Press. . Halliwell, Stephen. 2002. The Aesthetics of Mimesis. Ancient Texts and Modern Problems . Princeton. . Kaufmann, Walter . 1992. Tragedy and Philosophy . Princeton: Princeton UP. . Lacoue-Labarthe, Philippe. 1989. Typography: Mimesis, Philosophy, Politics, edited by C. Fynsk. Cambridge: Harvard UP. . Lawtoo, Nidesh. 2013. The Phantom of the Ego: Modernism and the Mimetic Unconscious. East Lansing: Michigan State UP. . Lawtoo, Nidesh. 2022. Homo Mimeticus: A New Theory of Imitation Leuven: Leuven UP. . Miller, Gregg Daniel. 2011. Mimesis and Reason: Habermas's Political Philosophy. Albany, NY: SUNY Press. Pfister, Manfred. [1977] 1988. The Theory and Analysis of Drama , translated by J. Halliday, European Studies in English Literature series. Cambridige: Cambridge UP. . Potolsky, Matthew. 2006. Mimesis. London: Routledge. . Prang, Christoph. 2010. "Semiomimesis: The influence of semiotics on the creation of literary texts. Peter Bichsel's Ein Tisch ist ein Tisch and Joseph Roth's Hotel Savoy." Semiotica (182):375–396. Sen, R. K. 1966. Aesthetic Enjoyment: Its Background in Philosophy and Medicine. Calcutta: University of Calcutta. —— 2001. Mimesis. Calcutta: Syamaprasad College. Sörbom, Göran. 1966. Mimesis and Art . Uppsala. Snow, Kim, Hugh Crethar, Patricia Robey, and John Carlson. 2005. "Theories of Family Therapy (Part 1)." As cited in "Family Therapy Review: Preparing for Comprehensive Licensing Examination." 2005. Lawrence Erlbaum Associates. . Tatarkiewicz, Władysław . 1980. A History of Six Ideas: An Essay in Aesthetics , translated by C. Kasparek . The Hague: Martinus Nijhoff. . Taussig, Michael . 1993. Mimesis and Alterity: A Particular History of the Senses . London: Routledge. . Tsitsiridis, Stavros. 2005. "Mimesis and Understanding. An Interpretation of Aristotle's 'Poetics' 4.1448b4–19." Classical Quarterly (55):435–446. External links Plato's Republic II, transl. Benjamin Jowett Plato's Republic III, transl. Benjamin Jowett Plato's Republic X, transl. Benjamin Jowett The Infinite Regress of Forms Plato's recounting of the "bedness" theory involved in the bed metaphor The University of Chicago, Theories of Media Keywords University of Barcelona Mimesi (Research on Poetics & Rhetorics in Catalan Literature) Mimesislab , Laboratory of Pedagogy of Expression of the Department of Educational Design of the university "Roma Tre" "Mimesis", an article by Władysław Tatarkiewicz for the Dictionary of History of Ideas "Mimesis", 2021, an article by María Antonia González Valerio for the Online Encyclopedia Philosophy of Nature, doi: mimesis. Ancient Greek theatre Aristotelianism Concepts in ancient Greek aesthetics Film theory Muses (mythology) Narratology Platonism Play (activity) Plot (narrative) Poetics Theatre Visual arts Visual arts theory
Mimesis
[ "Biology" ]
4,519
[ "Play (activity)", "Behavior", "Human behavior" ]
329,806
https://en.wikipedia.org/wiki/Sleight%20of%20hand
Sleight of hand (also known as prestidigitation or legerdemain) refers to fine motor skills when used by performing artists in different art forms to entertain or manipulate. It is closely associated with close-up magic, card magic, card flourishing and stealing. Because of its heavy use and practice by magicians, sleight of hand is often mistaken for a branch of magic; however, it is a separate genre of entertainment, and many artists practice sleight of hand as an independent skill. Sleight of hand pioneers with worldwide acclaim include Dan and Dave, Ricky Jay, Derek DelGaudio, David Copperfield, Yann Frisch, Norbert Ferré, Dai Vernon, Jerry Sadowitz, Cardini, Tony Slydini, Helder Guimarães and Tom Mullica. Etymology and history The word sleight, meaning "the use of dexterity or cunning, especially so as to deceive", comes from the Old Norse. The phrase sleight of hand means "quick fingers" or "trickster fingers". Common synonyms from Latin and French include prestidigitation and legerdemain, respectively. Seneca the Younger, philosopher of the Silver Age of Latin literature, famously compared rhetorical techniques and illusionist techniques. Association with close-up magic Sleight of hand is often used in close-up magic, where the sleights are performed with the audience close to the magician, usually in physical contact with or within close range of the spectators. This close contact eliminates theories of fake audience members and the use of gimmicks. It makes use of everyday items as props, such as cards, coins, rubber bands, paper, phones and even saltshakers. A well-performed sleight looks like an ordinary, natural and completely innocent gesture, change in hand position or body posture. In addition to manual dexterity, sleight of hand in close-up magic depends on the use of psychology, timing, misdirection, and natural choreography in accomplishing a magical effect. Association with stage magic Sleight of hand during stage magic performances is not common, as most magic events and stunts are performed with objects visible to a much larger audience, but it is nevertheless done occasionally by many stage performers. The most common magic tricks performed with sleight of hand on stage are rope manipulations and card tricks, with the first typically being done with a member of the audience to rule out the possibility of stooges, and the latter primarily being done on a table while a camera is live-recording, allowing the rest of the audience to see the performance on a big screen. The world-renowned stage magician David Copperfield often includes illusions featuring sleight of hand in his stage shows. Association with card cheating Although mostly used for entertainment and comedy purposes, sleight of hand is also notoriously used to cheat at casinos and gambling facilities throughout the world. Common ways to professionally cheat at card games using sleight of hand include palming, switching, ditching, and stealing cards from the table. Such techniques involve extreme misdirection and years of practice. For these reasons, the term sleight of hand frequently carries negative associations of dishonesty and deceit at many gambling halls, and many magicians known around the world are publicly banned from casinos, such as British mentalist and close-up magician Derren Brown, who is banned from every casino in Britain. 
Association with cardistry Unlike card tricks done on the streets or on stage, and unlike card cheating, cardistry is solely about impressing the viewer, without the illusions, deceit, misdirection and other elements commonly used in card tricks and card cheating. Cardistry is the art of card flourishing, and is intended to be visually impressive and to give the appearance of being difficult to perform. Card flourishing is often associated with card tricks, but many sleight of hand artists perform flourishing without considering themselves magicians or having any real interest in card tricks. Association with card throwing The art of card throwing generally consists of throwing standard playing cards with exceptionally high speed and accuracy, powerful enough to slice carrots and even melons. Like flourishing, throwing cards is meant to be visually impressive and does not include magic elements. Magician Ricky Jay popularized throwing cards within the sleight of hand industry with the release of his 1977 book Cards as Weapons, which was met with large sales and critical acclaim. Some magic tricks, both close-up and on stage, are heavily connected to throwing cards. See also Cups and balls Tenkai palm References Sources Printed Online External links Sleight of hand on YouTube Sleight of hand on https://Cardtricks.info Card magic Coin magic Motor skills
Sleight of hand
[ "Biology" ]
976
[ "Behavior", "Motor skills", "Motor control" ]
329,829
https://en.wikipedia.org/wiki/Vertical%20blanking%20interval
In a raster scan display, the vertical blanking interval (VBI), also known as the vertical interval or VBLANK, is the time between the end of the final visible line of a frame or field and the beginning of the first visible line of the next frame or field. It is present in analog television, VGA, DVI and other signals. Here the term field is used in interlaced video, and the term frame is used in progressive video and there can be a VBI after each frame or field. In interlaced video a frame is made up of 2 fields. Sometimes in interlaced video a field is called a frame which can lead to confusion. In raster cathode-ray tube (CRT) displays, the blank level is usually supplied during this period to avoid painting the retrace line—see raster scan for details; signal sources such as television broadcasts do not supply image information during the blanking period. Digital displays usually will not display incoming data stream during the blanking interval even if present. The VBI was originally needed because of the inductive inertia of the magnetic coils which deflect the electron beam vertically in a CRT; the magnetic field, and hence the position being drawn, cannot change instantly. Additionally, the speed of older circuits was limited. For horizontal deflection, there is also a pause between successive lines, to allow the beam to return from right to left, called the horizontal blanking interval. Modern CRT circuitry does not require such a long blanking interval, and thin panel displays require none, but the standards were established when the delay was needed (and to allow the continued use of older equipment). Blanking of a CRT may not be perfect due to equipment faults or brightness set very high; in this case a white retrace line shows on the screen, often alternating between fairly steep diagonals from right to left and less-steep diagonals back from left to right, starting in the lower right of the display. In analog television systems the vertical blanking interval can be used for datacasting (to carry digital data), since nothing sent during the VBI is displayed on the screen; various test signals, VITC timecode, closed captioning, teletext, CGMS-A copy-protection indicators, and various data encoded by the XDS protocol (e.g., the content ratings for V-chip use) and other digital data can be sent during this time period. In U.S. analog broadcast television, line 19 was reserved for a Ghost-canceling reference and line 21 was reserved for NABTS captioning data. The obsolete Teletext service contemplated the use of line 22 for data transmission. The pause between sending video data is sometimes used in real time computer graphics to modify the frame buffer, or to provide a time reference for when switching the source buffer for video output can happen without causing a visible tear. This is especially true in video game systems, where the fixed frequency of the blanking period might also be used to derive in-game timing. On many consoles there is an extended blanking period, as the console opts to paint graphics on fewer lines than the television would natively allow, permitting its output to be surrounded by a border. On some very early machines such as the Atari 2600, the programmer is in full control of video output and therefore may select their own blanking period, allowing arbitrarily few painted lines. On others such as the Nintendo Entertainment System, a predefined blanking period could be extended. 
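The timing role described above can be sketched in code. The example below is purely illustrative and assumes a hypothetical wait_for_vblank() call standing in for whatever mechanism a real platform exposes (a vblank interrupt, a status-register poll, or a vsynced buffer flip); it shows the common pattern of drawing into an off-screen buffer and swapping buffers only during the blanking interval so that no tear is ever visible.

```python
# Illustrative sketch only. wait_for_vblank() is a hypothetical stand-in for a
# platform-specific vblank interrupt or register poll; here it is simulated
# with a sleep derived from the display's refresh rate.
import time

REFRESH_HZ = 60.0
FRAME_TIME = 1.0 / REFRESH_HZ          # fixed timestep derived from the refresh rate

def wait_for_vblank():
    time.sleep(FRAME_TIME)             # crude simulation of "block until vertical blanking"

front_buffer, back_buffer = ["frame -1"], ["(empty)"]

def render(frame_number, buffer):
    buffer[0] = f"frame {frame_number}"     # draw the next frame off-screen

for frame in range(3):
    render(frame, back_buffer)              # safe: this buffer is not being scanned out
    wait_for_vblank()                       # nothing is being drawn to the screen now...
    front_buffer, back_buffer = back_buffer, front_buffer   # ...so the swap can never be seen mid-frame
    print("displaying", front_buffer[0])
```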
Most consumer VCRs use the known black level of the vertical blanking pulse to set their recording levels. The Macrovision copy protection scheme inserts pulses in the VBI, where the recorder expects a constant level, to disrupt recording to videotapes. Vertical blanking interval in digital video While digital video interconnects (such as DVI and HDMI) generally do have a "vertical blanking" part of the datastream, they are unable to carry closed caption text or most of the other items that, in analog TV interconnects, are transmitted during the vertical blanking interval. This can lead to the loss of such ancillary services (closed captions, for example) when equipment is connected through these digital interfaces. See also Vertical blank interrupt Datacasting Horizontal blanking interval Nominal analogue blanking Raster scan Television technology References
Vertical blanking interval
[ "Technology" ]
894
[ "Information and communications technology", "Television technology" ]
329,877
https://en.wikipedia.org/wiki/Taijitu
In Chinese philosophy, a taijitu () is a symbol or diagram () representing taiji () in both its monist (wuji) and its dualist (yin and yang) forms; in application it serves as a deductive and inductive theoretical model. Such a diagram was first introduced by the Neo-Confucian philosopher Zhou Dunyi of the Song Dynasty in his Taijitu shuo (). The Daozang, a Taoist canon compiled during the Ming dynasty, has at least half a dozen variants of the taijitu. The two most similar are the Taiji Xiantiandao and wujitu () diagrams, both of which have been extensively studied since the Qing period for their possible connection with Zhou Dunyi's taijitu. Ming period author Lai Zhide simplified the taijitu to a design of two interlocking spirals; with two black-and-white dots superimposed on them, this design became synonymous with the Yellow River Map. This version was represented in Western literature and popular culture of the late 19th century as the "Great Monad", and the depiction has been known in English as the "yin-yang symbol" since the 1960s. The contemporary Chinese term for the modern symbol is "the two-part Taiji diagram" (). Ornamental patterns with visual similarity to the "yin yang symbol" are found in archaeological artefacts of European prehistory; such designs are sometimes descriptively dubbed "yin yang symbols" in archaeological literature by modern scholars. Structure The taijitu consists of five parts. Strictly speaking, the "yin and yang symbol", itself popularly called taijitu, represents the second of these five parts of the diagram. At the top, an empty circle depicts the absolute (wuji). According to Zhou, wuji is also a synonym for taiji. A second circle represents the taiji as harboring dualism, yin and yang, depicted by filling the circle with a black-and-white pattern. In some diagrams, there is a smaller empty circle at the center of this, representing Emptiness as the foundation of duality. Below this second circle is a five-part diagram representing the Five Agents (Wuxing), which marks a further stage in the differentiation of Unity into Multiplicity. The Five Agents are connected by lines indicating their proper sequence, Wood () → Fire () → Earth () → Metal () → Water (). The circle below the Five Agents represents the conjunction of Heaven and Earth, which in turn gives rise to the "ten thousand things". This stage is also represented by the bagua. The final circle represents the state of multiplicity, glossed "The ten thousand things are born by transformation" (; simplified ) History The term taijitu in modern Chinese is commonly used to mean the simple "divided circle" form (), but it may refer to any of several schematic diagrams that contain at least one circle with an inner pattern of symmetry representing yin and yang. Song and Yuan eras While the concept of yin and yang dates to Chinese antiquity, the interest in "diagrams" ( tú) was an intellectual fashion of Neo-Confucianism during the Song period (11th century), and it declined again in the Ming period, by the 16th century. During the Mongol Empire and Yuan dynasty, Taoist traditions and diagrams were compiled and published in the encyclopedia Shilin Guangji by Chen Yuanjing. The original description of a taijitu is due to the Song era philosopher Zhou Dunyi (1017–1073), author of the Taijitu shuo (; "Explanation of the Diagram of the Supreme Ultimate"), which became the cornerstone of Neo-Confucianist cosmology. His brief text synthesized aspects of Chinese Buddhism and Taoism with metaphysical discussions in the Yijing. 
Zhou's key terms Wuji and Taiji appear in the opening line , which Adler notes could also be translated "The Supreme Polarity that is Non-Polar". Non-polar (wuji) and yet Supreme Polarity (taiji)! The Supreme Polarity in activity generates yang; yet at the limit of activity it is still. In stillness it generates yin; yet at the limit of stillness it is also active. Activity and stillness alternate; each is the basis of the other. In distinguishing yin and yang, the Two Modes are thereby established. The alternation and combination of yang and yin generate water, fire, wood, metal, and earth. With these five [phases of] qi harmoniously arranged, the Four Seasons proceed through them. The Five Phases are simply yin and yang; yin and yang are simply the Supreme Polarity; the Supreme Polarity is fundamentally Non-polar. [Yet] in the generation of the Five Phases, each one has its nature. Instead of usual Taiji translations "Supreme Ultimate" or "Supreme Pole", Adler uses "Supreme Polarity" (see Robinet 1990) because Zhu Xi describes it as the alternating principle of yin and yang, and: insists that taiji is not a thing (hence "Supreme Pole" will not do). Thus, for both Zhou and Zhu, taiji is the yin-yang principle of bipolarity, which is the most fundamental ordering principle, the cosmic "first principle." Wuji as "non-polar" follows from this. Since the 12th century, there has been a vigorous discussion in Chinese philosophy regarding the ultimate origin of Zhou Dunyi's diagram. Zhu Xi (12th century) insists that Zhou Dunyi had composed the diagram himself, against the prevailing view that he had received it from Daoist sources. Zhu Xi could not accept a Daoist origin of the design, because it would have undermined the claim of uniqueness attached to the Neo-Confucian concept of dao. Ming and Qing eras While Zhou Dunyi (1017–1073) popularized the circular diagram, the introduction of "swirling" patterns first appears in the Ming period and representative of transformation. Zhao Huiqian (, 1351–1395) was the first to introduce the "swirling" variant of the taijitu in his Liushu benyi (, 1370s). The diagram is combined with the eight trigrams (bagua) and called the "River Chart spontaneously generated by Heaven and Earth". By the end of the Ming period, this diagram had become a widespread representation of Chinese cosmology. The dots were introduced in the later Ming period (replacing the droplet-shapes used earlier, in the 16th century) and are encountered more frequently in the Qing period. The dots represent the seed of yin within yang and the seed of yang within yin; the idea that neither can exist without the other and are never absolute. Lai Zhide's design is similar to the gakyil (dga' 'khyil or "wheel of joy") symbols of Tibetan Buddhism; but while the Tibetan designs have three or four swirls (representing the Three Jewels or the Four Noble Truths, i.e. as a triskele and a tetraskelion design), Lai Zhide's taijitu has two swirls, terminating in a central circle. Modern yin-yang symbol The Ming-era design of the taijitu of two interlocking spirals was a common yin-yang symbol in the first half of the 20th century. The flag of South Korea, originally introduced as the flag of Joseon era Korea in 1882, shows this symbol in red and blue. This was a modernisation of the older (early 19th century) form of the Bat Quai Do used as the Joseon royal standard. 
The symbol is referred to as taijitu, simply taiji (or the Supreme Ultimate in English), hetu or "river diagram", "the yin-yang circle", or wuji, as wuji was viewed synonymously with the artistic and philosophical concept of taiji by some Taoists, including Zhou. Zhou viewed the dualistic and paradoxical relationship between the concepts of taiji and wuji, which were and are often thought to be opposite concepts, as a cosmic riddle important for the "beginning...and ending" of a life. The names applied to the taijitu are highly subjective, and some interpretations of the texts in which they appear apply those names only to the principle of taiji rather than to the symbol. Since the 1960s, the He tu symbol, which combines the two interlocking spirals with two dots, has more commonly been used as a yin-yang symbol. In the standard form of the contemporary symbol, one draws, on the diameter of a circle, two non-overlapping circles, each of which has a diameter equal to the radius of the outer circle. One keeps the line that forms an "S", and one erases or obscures the other line. In 2008 the design was also described by Isabelle Robinet as a "pair of fishes nestling head to tail against each other". The Soyombo symbol of Mongolia may date to before 1686. It combines several abstract shapes, including a Taiji symbol illustrating the mutual complement of man and woman. In socialist times, it was alternatively interpreted as two fish symbolizing vigilance, because fish never close their eyes. The modern symbol has also been widely used in martial arts, particularly tai chi and Jeet Kune Do, since the 1970s. In this context, it is generally used to represent the interplay between hard and soft techniques. The dots in the modern "yin-yang symbol" have been given the additional interpretation of "intense interaction" between the complementary principles, i.e. a flux or flow to achieve harmony and balance. Similar symbols Similarities can be seen in the Neolithic–Eneolithic era Cucuteni–Trypillia culture, on the territory of present-day Ukraine and Romania. Artifacts of that culture bearing ornament resembling the taijitu were displayed in the Ukraine pavilion at Expo 2010 in Shanghai, China. The interlocking design is found in artifacts of the European Iron Age. Similar interlocking designs are found in the Americas: Xicalcoliuhqui. While this design appears to have become a standard ornamental motif in Iron Age Celtic culture by the 3rd century BC, found on a wide variety of artifacts, it is not clear what symbolic value was attached to it. Unlike the Chinese symbol, the Celtic yin-yang lacks the element of mutual penetration, and the two halves are not always portrayed in different colors. Comparable designs are also found in Etruscan art. In computing Unicode features the he tu symbol in the Miscellaneous Symbols block, at code point U+262F (YIN YANG ☯). The related "double body symbol" is included at U+0FCA (TIBETAN SYMBOL NOR BU NYIS -KHYIL ࿊), in the Tibetan block. The Soyombo symbol, which includes a taijitu, is available in Unicode as the sequence U+11A9E 𑪞 + U+11A9F 𑪟 + U+11AA0 𑪠. See also Gankyil Koru Lauburu Taegeuk Three hares Tomoe Triskelion References Sources External links Where does the Chinese Yin Yang Symbol Come From? – chinesefortunecalendar.com Chart of the Great Ultimate (Taiji tu) – goldenelixir.com Iconography Ornaments Rotational symmetry Symbols Taoist cosmology Visual motifs
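The geometric construction described above (two half-circles whose diameters equal the radius of the outer circle, plus two dots) can be sketched with a few matplotlib patches. This is only an illustrative rendering; the dot radius and colour placement below are arbitrary choices, not prescribed proportions.

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Circle, Wedge

fig, ax = plt.subplots(figsize=(4, 4))
ax.add_patch(Wedge((0, 0), 1, -90, 90, facecolor="black"))                  # right half of the outer circle, filled black
ax.add_patch(Circle((0, 0.5), 0.5, facecolor="white", edgecolor="none"))    # upper small circle carves the white lobe
ax.add_patch(Circle((0, -0.5), 0.5, facecolor="black", edgecolor="none"))   # lower small circle adds the black lobe
ax.add_patch(Circle((0, 0.5), 0.1, facecolor="black"))                      # black dot inside the white lobe
ax.add_patch(Circle((0, -0.5), 0.1, facecolor="white"))                     # white dot inside the black lobe
ax.add_patch(Circle((0, 0), 1, fill=False, edgecolor="black", linewidth=1.5))  # outline of the outer circle
ax.set_xlim(-1.1, 1.1)
ax.set_ylim(-1.1, 1.1)
ax.set_aspect("equal")
ax.axis("off")
plt.show()
```

The two small circles stacked on the vertical diameter produce the "S"-shaped dividing line mentioned above; only the colour fills differ between the two halves.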
Taijitu
[ "Physics", "Mathematics" ]
2,346
[ "Visual motifs", "Symbols", "Symmetry", "Rotational symmetry" ]
329,898
https://en.wikipedia.org/wiki/Stationary%20process
In mathematics and statistics, a stationary process (or a strict/strictly stationary process or strong/strongly stationary process) is a stochastic process whose unconditional joint probability distribution does not change when shifted in time. Consequently, parameters such as mean and variance also do not change over time. Since stationarity is an assumption underlying many statistical procedures used in time series analysis, non-stationary data are often transformed to become stationary. The most common cause of violation of stationarity is a trend in the mean, which can be due either to the presence of a unit root or of a deterministic trend. In the former case of a unit root, stochastic shocks have permanent effects, and the process is not mean-reverting. In the latter case of a deterministic trend, the process is called a trend-stationary process, and stochastic shocks have only transitory effects after which the variable tends toward a deterministically evolving (non-constant) mean. A trend stationary process is not strictly stationary, but can easily be transformed into a stationary process by removing the underlying trend, which is solely a function of time. Similarly, processes with one or more unit roots can be made stationary through differencing. An important type of non-stationary process that does not include a trend-like behavior is a cyclostationary process, which is a stochastic process that varies cyclically with time. For many applications strict-sense stationarity is too restrictive. Other forms of stationarity such as wide-sense stationarity or N-th-order stationarity are then employed. The definitions for different kinds of stationarity are not consistent among different authors (see Other terminology). Strict-sense stationarity Definition Formally, let be a stochastic process and let represent the cumulative distribution function of the unconditional (i.e., with no reference to any particular starting value) joint distribution of at times . Then, is said to be strictly stationary, strongly stationary or strict-sense stationary if Since does not affect , is independent of time. Examples White noise is the simplest example of a stationary process. An example of a discrete-time stationary process where the sample space is also discrete (so that the random variable may take one of N possible values) is a Bernoulli scheme. Other examples of a discrete-time stationary process with continuous sample space include some autoregressive and moving average processes which are both subsets of the autoregressive moving average model. Models with a non-trivial autoregressive component may be either stationary or non-stationary, depending on the parameter values, and important non-stationary special cases are where unit roots exist in the model. Example 1 Let be any scalar random variable, and define a time-series , by Then is a stationary time series, for which realisations consist of a series of constant values, with a different constant value for each realisation. A law of large numbers does not apply on this case, as the limiting value of an average from a single realisation takes the random value determined by , rather than taking the expected value of . The time average of does not converge since the process is not ergodic. 
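As a minimal numerical illustration of Example 1 above, the sketch below (plain NumPy, with an arbitrarily chosen standard normal distribution for the random level) draws several realisations of a series that is constant in time. Each realisation's time average equals its own random level rather than the ensemble mean, which is the non-ergodicity just described.

```python
import numpy as np

rng = np.random.default_rng(0)
n_realisations, n_steps = 5, 1000

for i in range(n_realisations):
    y = rng.normal(loc=0.0, scale=1.0)   # the random level Y (assumed standard normal here)
    x = np.full(n_steps, y)              # X_t = Y for every t: a strictly stationary series
    print(f"realisation {i}: time average = {x.mean():+.3f}")

# Each time average reproduces that realisation's Y, not the ensemble mean 0,
# so averaging over time does not recover the expected value: the process is not ergodic.
```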
Example 2 As a further example of a stationary process for which any single realisation has an apparently noise-free structure, let have a uniform distribution on and define the time series by Then is strictly stationary since ( modulo ) follows the same uniform distribution as for any . Example 3 Keep in mind that a weakly white noise is not necessarily strictly stationary. Let be a random variable uniformly distributed in the interval and define the time series Then So is a white noise in the weak sense (the mean and cross-covariances are zero, and the variances are all the same), however it is not strictly stationary. Nth-order stationarity In , the distribution of samples of the stochastic process must be equal to the distribution of the samples shifted in time for all . N-th-order stationarity is a weaker form of stationarity where this is only requested for all up to a certain order . A random process is said to be N-th-order stationary if: Weak or wide-sense stationarity Definition A weaker form of stationarity commonly employed in signal processing is known as weak-sense stationarity, wide-sense stationarity (WSS), or covariance stationarity. WSS random processes only require that 1st moment (i.e. the mean) and autocovariance do not vary with respect to time and that the 2nd moment is finite for all times. Any strictly stationary process which has a finite mean and covariance is also WSS. So, a continuous time random process which is WSS has the following restrictions on its mean function and autocovariance function : The first property implies that the mean function must be constant. The second property implies that the autocovariance function depends only on the difference between and and only needs to be indexed by one variable rather than two variables. Thus, instead of writing, the notation is often abbreviated by the substitution : This also implies that the autocorrelation depends only on , that is The third property says that the second moments must be finite for any time . Motivation The main advantage of wide-sense stationarity is that it places the time-series in the context of Hilbert spaces. Let H be the Hilbert space generated by {x(t)} (that is, the closure of the set of all linear combinations of these random variables in the Hilbert space of all square-integrable random variables on the given probability space). By the positive definiteness of the autocovariance function, it follows from Bochner's theorem that there exists a positive measure on the real line such that H is isomorphic to the Hilbert subspace of L2(μ) generated by {e−2iξ⋅t}. This then gives the following Fourier-type decomposition for a continuous time stationary stochastic process: there exists a stochastic process with orthogonal increments such that, for all where the integral on the right-hand side is interpreted in a suitable (Riemann) sense. The same result holds for a discrete-time stationary process, with the spectral measure now defined on the unit circle. When processing WSS random signals with linear, time-invariant (LTI) filters, it is helpful to think of the correlation function as a linear operator. Since it is a circulant operator (depends only on the difference between the two arguments), its eigenfunctions are the Fourier complex exponentials. Additionally, since the eigenfunctions of LTI operators are also complex exponentials, LTI processing of WSS random signals is highly tractable—all computations can be performed in the frequency domain. 
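A small empirical check of the two wide-sense conditions (constant mean, autocovariance depending only on the lag) can be run on a simulated series. The AR(1) model and the windowed estimator below are illustrative choices, not part of the article.

```python
import numpy as np

rng = np.random.default_rng(1)
n, phi = 200_000, 0.7
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):                    # stationary AR(1): x_t = phi * x_{t-1} + noise
    x[t] = phi * x[t - 1] + rng.normal()

def autocov(series, start, lag, length=50_000):
    """Sample covariance between x_t and x_{t+lag} over a window beginning at `start`."""
    a = series[start:start + length]
    b = series[start + lag:start + lag + length]
    return np.mean((a - a.mean()) * (b - b.mean()))

for start in (0, 100_000):               # two different time origins
    print(f"window at t={start}: mean={x[start:start + 50_000].mean():+.3f}, "
          f"C(1)={autocov(x, start, 1):.3f}, C(5)={autocov(x, start, 5):.3f}")

# For a WSS process the printed means and lagged covariances agree across the two
# windows up to sampling error; for this AR(1), theory gives C(k) = phi**k / (1 - phi**2).
```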
Thus, the WSS assumption is widely employed in signal processing algorithms. Definition for complex stochastic process In the case where is a complex stochastic process the autocovariance function is defined as and, in addition to the requirements in , it is required that the pseudo-autocovariance function depends only on the time lag. In formulas, is WSS, if Joint stationarity The concept of stationarity may be extended to two stochastic processes. Joint strict-sense stationarity Two stochastic processes and are called jointly strict-sense stationary if their joint cumulative distribution remains unchanged under time shifts, i.e. if Joint (M + N)th-order stationarity Two random processes and is said to be jointly (M + N)-th-order stationary if: Joint weak or wide-sense stationarity Two stochastic processes and are called jointly wide-sense stationary if they are both wide-sense stationary and their cross-covariance function depends only on the time difference . This may be summarized as follows: Relation between types of stationarity If a stochastic process is N-th-order stationary, then it is also M-th-order stationary for all . If a stochastic process is second order stationary () and has finite second moments, then it is also wide-sense stationary. If a stochastic process is wide-sense stationary, it is not necessarily second-order stationary. If a stochastic process is strict-sense stationary and has finite second moments, it is wide-sense stationary. If two stochastic processes are jointly (M + N)-th-order stationary, this does not guarantee that the individual processes are M-th- respectively N-th-order stationary. Other terminology The terminology used for types of stationarity other than strict stationarity can be rather mixed. Some examples follow. Priestley uses stationary up to order m if conditions similar to those given here for wide sense stationarity apply relating to moments up to order m. Thus wide sense stationarity would be equivalent to "stationary to order 2", which is different from the definition of second-order stationarity given here. Honarkhah and Caers also use the assumption of stationarity in the context of multiple-point geostatistics, where higher n-point statistics are assumed to be stationary in the spatial domain. Differencing One way to make some time series stationary is to compute the differences between consecutive observations. This is known as differencing. Differencing can help stabilize the mean of a time series by removing changes in the level of a time series, and so eliminating trends. This can also remove seasonality, if differences are taken appropriately (e.g. differencing observations 1 year apart to remove a yearly trend). Transformations such as logarithms can help to stabilize the variance of a time series. One of the ways for identifying non-stationary times series is the ACF plot. Sometimes, patterns will be more visible in the ACF plot than in the original time series; however, this is not always the case. Another approach to identifying non-stationarity is to look at the Laplace transform of a series, which will identify both exponential trends and sinusoidal seasonality (complex exponential trends). Related techniques from signal analysis such as the wavelet transform and Fourier transform may also be helpful. See also Lévy process Stationary ergodic process Wiener–Khinchin theorem Ergodicity Statistical regularity Autocorrelation Whittle likelihood References Further reading Hyndman, Athanasopoulos (2013). 
Forecasting: Principles and Practice. Otexts. https://www.otexts.org/fpp/8/1 External links Spectral decomposition of a random function (Springer) Stochastic processes Signal processing
Stationary process
[ "Technology", "Engineering" ]
2,213
[ "Telecommunications engineering", "Computer engineering", "Signal processing" ]
329,906
https://en.wikipedia.org/wiki/Civil%20disorder
Civil disorder, also known as civil disturbance, civil unrest, civil strife, or turmoil, is a situation in which law enforcement struggles to maintain public order or tranquility. Causes Any number of things may cause civil disorder, whether a single cause or a combination of causes; most, however, are born of political grievances, economic disparities, or social discord, and historically they have often been the result of long-standing oppression of one group of people by another. Civil disorder arising from political grievances can include a range of events, from a simple protest to a mass civil disobedience. These events can be spontaneous, but can also be planned. These events can turn violent when agitators and law enforcers overreact. Civil disorder has historically arisen from economic disputes, political causes (such as opposition to oppressive or tyrannical government forces), religious opposition, racial oppression, and social discord. Crowd formation Exploiting a crowd's mood, radicals can manipulate and weaponize a crowd, using skillful agitation to coax the crowd's capacity for violence and turn it into a vengeful mob, directing the crowd's aggression and resentment at the agitator's chosen target. Tactical agitators can leverage media, including social media, to connect with potential crowd members and incite them to break the law or provoke others, all without direct personal contact. Conversely, a skilled leader can calm or divert a crowd using strategic suggestions, commands, or appeals to reason, aiming to de-escalate a situation. Emotional contagion plays a significant role in crowd behavior by fostering a sense of unity among its members. This unity can lead the crowd to adopt a mob mentality and engage in mob behavior. Crowd members amplify each other's emotions, creating a heightened state of collective emotion. Ideas rapidly spread among the group and to bystanders and mass media. When emotional contagion prevails, raw emotion is high while self-discipline is low. Personal prejudices and unsatisfied desires – usually restrained – are unabashedly released. This incentivizes crowd membership, as the crowd provides cover for individuals to do things they want to do, but would not dare try to do alone. This incentive can become greater for the crowd than its concern for law and authority, leading to unlawful and disruptive acts. Once the crowd engages in such acts, it effectively becomes a mob – a highly emotional, unreasonable, potentially violent crowd. Behavior Crowd behavior is rooted in the emotional needs, fears, and prejudices of the crowd members. It is driven by social factors such as the strength, or weakness, of leadership, moral perspective, or community uniformity, and also by psychological factors of suggestion, e.g. imitation, anonymity, impersonality, emotional release, emotional contagion, panic, etc. During civil disorder, any crowd can be a threat to law enforcers because it is open to manipulation. This is because the behavior of a crowd is under the direction of the majority of its members. While its members are usually inclined to obey the law, emotional stimuli and the feeling of fearlessness that arises from being in a crowd can cause crowd members to indulge in impulses, act on aggressions, and unleash rage. When law enforcement limits the full realization of these actions, the crowd will channel this hostility elsewhere, making the crowd a hostile and unpredictable threat to law enforcers. 
Crowds want to be directed, and can become frustrated by confusion and uncertainty; therefore, leadership can have a profound influence on the intensity and conduct of a crowd's behavior. The first person to authoritatively direct a crowd will likely be followed. Opportunities for radicals to take charge of a group emerge when no authoritative voice steps forward and the crowd becomes frustrated by the lack of direction. Panic, which is extremely and quickly contagious, also affects crowd behavior by impairing members' ability to reason, leading to frantic, irrational behavior that can endanger not only the crowd but also others. During civil disorder, panic can set in when a crowd member realizes – They are in danger and fleeing is necessary to escape arrest or harm Few escape routes exist The few escape routes are congested with traffic Their actions have caused harm to others Their life or freedom is at risk from encroaching law enforcement agents because they have not dispersed from the scene quickly enough Tactics A goal of violent demonstrators is to spur law enforcers to take action that can be exploited as acts of brutality in order to generate sympathy for their cause, and/or to anger and demoralize the opposition. Crowds can use a range of tactics to evade law enforcement or to promote disorder, from verbal assault to distracting law enforcers to building barricades. The more well-planned the tactics, the more purposeful the disorder. For example, crowds may form human blockades to shut down roads, they may trespass on government property, they may try to force mass arrests, they may handcuff themselves to things or to each other, or they may lock arms, making it more difficult to separate them, or they might create confusion or diversions through the use of rock throwing, arson, or terrorist acts, giving leeway to law enforcers to be forceful or excessive while trying to remove them. Most participants of civil disorder engage on foot. However, organized efforts often involve the use of vehicles and wireless communication. Participants have been known to use scanners to monitor police frequencies or transmitters to sabotage law enforcement communications. If a crowd turns violent, effectively becoming a "mob," it may execute physical attacks on people and property, such as by throwing homemade weapons like Molotov cocktails, firing small arms, and planting improvised explosive devices. A crowd may resort to throwing rocks, bricks, bottles, etc. If violence is pre-arranged, the crowd can hide weapons or vandalism tools well before the crowd forms, catching law enforcement by surprise. Crowds may arm themselves with: Gas masks Rocks Helmets Homemade shields Improvised picket signs Molotov cocktails Paint bombs Pipes Safety goggles Wire cutters A mob may erect barricades to impede, or prevent, the effectiveness of law enforcement. For example, they may use grappling hooks, chains, rope, or vehicles to breach gates or fences. They may use sticks or poles to limit law enforcement's use of billy clubs and bayonets. They may overturn civilian vehicles to impede troops advancing to engage them or vandalize law enforcement vehicles to try to spark over-reaction from law enforcement or to incite further lawlessness from the mob. Mobs often employ fire, smoke, or hidden explosive devices – for example, strapped to animals, concealed in cigarette lighters or toys, or rigged to directed vehicles. 
Not only can these devices be used to create confusion or diversion, but they can also be used to destroy property, mask looting by mob participants, or provide cover for mob participants firing weapons at law enforcement. If law enforcement returns fire when engaging the mob, any innocent casualties resulting from the chaos usually make law enforcement look undisciplined and oppressive. United States Legal definition According to the U.S. Code, a person is engaged in civil disorder if he or she - Like mob participants, law enforcers are also susceptible to crowd behavior. Such tense confrontation can emotionally stimulate them, creating a highly emotional atmosphere all around. This emotional stimulation can spread among law enforcement agents, conflicting with their disciplined training. When emotional tension is high among law enforcement agents, they may breach their feeling of restraint and commit acts against people in the mob that they normally would suppress. The emotional atmosphere can also make them highly susceptible to rumors and fear. Like mob members, law enforcement agents, acting as a group, can also lose their sense of individuality and develop a feeling of anonymity. Under emotional instability, prejudices that any individual law enforcement agent may harbor against the mob, or against individual participants in it, may influence that agent's behavior. As in the mob, these conditions make law enforcement actors more likely to imitate each other's behavior, which can result in a chain of biased, excessive, or otherwise dangerous behavior in which law enforcement agents treat mob participants as impersonal threats rather than as human beings. Such action is heightened where law enforcement agents are monolithic across race and ethnicity, as law enforcement then becomes more susceptible to framing the disorder as a confrontation between "them" and "us." Actions by law enforcement agents that are motivated by emotion and prejudice are often cited as evidence of ill will toward a crowd or mob, with such behavior only further inflaming confrontation rather than reducing it. In such situations, law enforcement agents are rarely held accountable for all their actions against a crowd. See also References External links Revolution '67 Film website - Documentary about the Newark, New Jersey race riots of 1967 Brazil uprising points to rise of leaderless networks Deviance (sociology)
Civil disorder
[ "Biology" ]
1,855
[ "Deviance (sociology)", "Behavior", "Human behavior" ]
329,915
https://en.wikipedia.org/wiki/Supercentenarian
A supercentenarian, sometimes hyphenated as super-centenarian, is a person who is 110 years or older. This age is achieved by about one in 1,000 centenarians. Supercentenarians typically live a life free of significant age-related diseases until shortly before the maximum human lifespan is reached. Etymology The term "supercentenarian" has been used since 1832 or earlier. Norris McWhirter, editor of The Guinness Book Of Records, used the term in association with age claims researcher A. Ross Eckler Jr. in 1976, and the term was further popularised in 1991 by William Strauss and Neil Howe in their book Generations. The term "semisupercentenarian" has been used to describe someone aged 105–109. Originally the term "supercentenarian" was used to mean someone well over the age of 100, but 110 years and over became the cutoff point of accepted criteria for demographers. Incidence The Gerontology Research Group maintains a list of the 30–40 oldest verified living people. The researchers estimate, based on a 0.15% to 0.25% survival rate of centenarians until the age of 110, that there should be between 300 and 450 living supercentenarians in the world. A study conducted in 2010 by the Max Planck Institute for Demographic Research found 663 validated supercentenarians, living and dead, and showed that the countries with the highest total number (not frequency) of supercentenarians (in decreasing order) were the United States, Japan, England plus Wales, France, and Italy. The first verified supercentenarian in human history was Dutchman Geert Adriaans Boomgaard (1788–1899), and it was not until the 1980s that the oldest verified age surpassed 115. History While claims of extreme age have persisted from the earliest times in history, the earliest supercentenarian accepted by Guinness World Records is Dutchman Thomas Peters (reportedly c. 1745–1857). However, Peters's age cannot be reliably verified due to an absence of any documents recording his early life. Other scholars, such as French demographer Jean-Marie Robine, consider Geert Adriaans Boomgaard, also of the Netherlands, who turned 110 in 1898, to be the first verifiable case, as the alleged evidence for Peters has apparently been lost. The evidence for the 112 years of Englishman William Hiseland (reportedly 1620–1732) does not meet the standards required by Guinness World Records. Church of Norway records, the accuracy of which is subject to dispute, also show what appear to be several supercentenarians who lived in the south-central part of present-day Norway during the 16th and 17th centuries, including Johannes Torpe (1549–1664) and Knud Erlandson Etun (1659–1770), both residents of Valdres, Oppland. In 1902, Margaret Ann Neve, born in 1792, became the first verified female supercentenarian. Jeanne Calment of France, who died in 1997 aged 122 years, 164 days, had the longest documented human lifespan. The oldest man ever verified is Jiroemon Kimura of Japan, who died in 2013 aged 116 years and 54 days. Inah Canabarro Lucas (born 8 June 1908) of Brazil is the world's oldest living person. João Marinho Neto (born 5 October 1912) of Brazil is the world's oldest living man. Research into centenarians Research into centenarians helps scientists understand how an ordinary person might live longer. Organisations that research centenarians and supercentenarians include the GRG, LongeviQuest, and the Supercentenarian Research Foundation. 
In May 2021, whole genome sequencing analysis of 81 Italian semi-supercentenarians and supercentenarians were published, along with 36 control group people from the same region who were simply of advanced age. Morbidity Research on the morbidity of supercentenarians has found that they remain free of major age-related diseases (e.g., stroke, cardiovascular disease, dementia, cancer, Parkinson's disease and diabetes) until the very end of life when they die of exhaustion of organ reserve, which is the ability to return organ function to homeostasis. About 10% of supercentenarians survive until the last three months of life without major age-related diseases, as compared to only 4% of semi-supercentenarians and 3% of centenarians. By measuring the biological age of various tissues from supercentenarians, researchers may be able to identify the nature of those that are protected from ageing effects. According to a study of 30 different body parts from a 112-year-old female supercentenarian, along with younger controls, the cerebellum is protected from ageing, according to an epigenetic biomarker of tissue age known as the epigenetic clock—the reading is about 15 years younger than expected in a centenarian. These findings could explain why the cerebellum exhibits fewer neuropathological hallmarks of age-related dementia as compared to other brain regions. A 2021 genomic study identified genetic characteristics that protect against age-related diseases, particularly variants that improve DNA repair. Five variants were found to be significant, affecting STK17A (increased expression) and COA1 (reduced expression) genes. Supercentenarians also had an unexpectedly low level of somatic mutations. See also List of supercentenarians References External links Gerontology Research Group International Database on Longevity Supercentenarian Research Foundation New England Supercentenarian Study European Supercentenarian Organisation Senescence
Supercentenarian
[ "Chemistry", "Biology" ]
1,176
[ "Senescence", "Metabolism", "Cellular processes" ]
330,017
https://en.wikipedia.org/wiki/Discretization
In applied mathematics, discretization is the process of transferring continuous functions, models, variables, and equations into discrete counterparts. This process is usually carried out as a first step toward making them suitable for numerical evaluation and implementation on digital computers. Dichotomization is the special case of discretization in which the number of discrete classes is 2, which can approximate a continuous variable as a binary variable (creating a dichotomy for modeling purposes, as in binary classification). Discretization is also related to discrete mathematics, and is an important component of granular computing. In this context, discretization may also refer to modification of variable or category granularity, as when multiple discrete variables are aggregated or multiple discrete categories fused. Whenever continuous data is discretized, there is always some amount of discretization error. The goal is to reduce the amount to a level considered negligible for the modeling purposes at hand. The terms discretization and quantization often have the same denotation but not always identical connotations. (Specifically, the two terms share a semantic field.) The same is true of discretization error and quantization error. Mathematical methods relating to discretization include the Euler–Maruyama method and the zero-order hold. Discretization of linear state space models Discretization is also concerned with the transformation of continuous differential equations into discrete difference equations, suitable for numerical computing. The following continuous-time state space model where and are continuous zero-mean white noise sources with power spectral densities can be discretized, assuming zero-order hold for the input and continuous integration for the noise , to with covariances where and is the sample time. If is nonsingular, The equation for the discretized measurement noise is a consequence of the continuous measurement noise being defined with a power spectral density. A clever trick to compute and in one step is by utilizing the following property: Where and are the discretized state-space matrices. Discretization of process noise Numerical evaluation of is a bit trickier due to the matrix exponential integral. It can, however, be computed by first constructing a matrix, and computing the exponential of it The discretized process noise is then evaluated by multiplying the transpose of the lower-right partition of with the upper-right partition of : Derivation Starting with the continuous model we know that the matrix exponential is and by premultiplying the model we get which we recognize as and by integrating, which is an analytical solution to the continuous model. Now we want to discretise the above expression. We assume that is constant during each timestep. We recognize the bracketed expression as , and the second term can be simplified by substituting with the function . Note that . We also assume that is constant during the integral, which in turn yields which is an exact solution to the discretization problem. When is singular, the latter expression can still be used by replacing by its Taylor expansion, This yields which is the form used in practice. Approximations Exact discretization may sometimes be intractable due to the heavy matrix exponential and integral operations involved. It is much easier to calculate an approximate discrete model, based on that for small timesteps . 
The approximate solution then becomes: This is also known as the Euler method, which is also known as the forward Euler method. Other possible approximations are , otherwise known as the backward Euler method and , which is known as the bilinear transform, or Tustin transform. Each of these approximations has different stability properties. The bilinear transform preserves the instability of the continuous-time system. Discretization of continuous features In statistics and machine learning, discretization refers to the process of converting continuous features or variables to discretized or nominal features. This can be useful when creating probability mass functions. Discretization of smooth functions In generalized functions theory, discretization arises as a particular case of the Convolution Theorem on tempered distributions where is the Dirac comb, is discretization, is periodization, is a rapidly decreasing tempered distribution (e.g. a Dirac delta function or any other compactly supported function), is a smooth, slowly growing ordinary function (e.g. the function that is constantly or any other band-limited function) and is the (unitary, ordinary frequency) Fourier transform. Functions which are not smooth can be made smooth using a mollifier prior to discretization. As an example, discretization of the function that is constantly yields the sequence which, interpreted as the coefficients of a linear combination of Dirac delta functions, forms a Dirac comb. If additionally truncation is applied, one obtains finite sequences, e.g. . They are discrete in both, time and frequency. See also Discrete event simulation Discrete space Discrete time and continuous time Finite difference method Finite volume method for unsteady flow Interpolation Smoothing Stochastic simulation Time-scale calculus References Further reading External links Discretization in Geometry and Dynamics: research on the discretization of differential geometry and dynamics Numerical analysis Applied mathematics Functional analysis Iterative methods Control theory
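The zero-order-hold discretization and the forward-Euler approximation described in the sections above can be compared numerically. The sketch below uses SciPy's matrix exponential together with the one-step augmented-matrix trick mentioned earlier; the double-integrator model and sample time are arbitrary illustrative choices, not taken from the article.

```python
import numpy as np
from scipy.linalg import expm

# Continuous-time double integrator: x' = A x + B u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
T = 0.1  # sample time

# Exact zero-order-hold discretization via one matrix exponential of an augmented matrix:
# expm([[A, B], [0, 0]] * T) = [[Ad, Bd], [0, I]]
n, m = A.shape[0], B.shape[1]
M = np.zeros((n + m, n + m))
M[:n, :n] = A
M[:n, n:] = B
Phi = expm(M * T)
Ad, Bd = Phi[:n, :n], Phi[:n, n:]

# Forward-Euler approximation, valid only for small T:
Ad_euler = np.eye(n) + A * T
Bd_euler = B * T

print("Ad (ZOH):\n", Ad)          # [[1, 0.1], [0, 1]]
print("Bd (ZOH):\n", Bd)          # [[0.005], [0.1]] -- note the T^2/2 term that Euler misses
print("Bd (Euler):\n", Bd_euler)  # [[0.0], [0.1]]
```

For this model the exact and approximate state matrices happen to coincide, but the input matrices differ, which illustrates why the approximations are reserved for small timesteps.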
Discretization
[ "Mathematics" ]
1,062
[ "Functions and mappings", "Functional analysis", "Applied mathematics", "Control theory", "Mathematical objects", "Computational mathematics", "Mathematical relations", "Numerical analysis", "Approximations", "Dynamical systems" ]
330,095
https://en.wikipedia.org/wiki/Kernel%20%28set%20theory%29
In set theory, the kernel of a function (or equivalence kernel) may be taken to be either the equivalence relation on the function's domain that roughly expresses the idea of "equivalent as far as the function can tell", or the corresponding partition of the domain. An unrelated notion is that of the kernel of a non-empty family of sets which by definition is the intersection of all its elements: This definition is used in the theory of filters to classify them as being free or principal. Definition For the formal definition, let be a function between two sets. Elements are equivalent if and are equal, that is, are the same element of The kernel of is the equivalence relation thus defined. The is The kernel of is also sometimes denoted by The kernel of the empty set, is typically left undefined. A family is called and is said to have if its is not empty. A family is said to be if it is not fixed; that is, if its kernel is the empty set. Quotients Like any equivalence relation, the kernel can be modded out to form a quotient set, and the quotient set is the partition: This quotient set is called the coimage of the function and denoted (or a variation). The coimage is naturally isomorphic (in the set-theoretic sense of a bijection) to the image, specifically, the equivalence class of in (which is an element of ) corresponds to in (which is an element of ). As a subset of the Cartesian product Like any binary relation, the kernel of a function may be thought of as a subset of the Cartesian product In this guise, the kernel may be denoted (or a variation) and may be defined symbolically as The study of the properties of this subset can shed light on Algebraic structures If and are algebraic structures of some fixed type (such as groups, rings, or vector spaces), and if the function is a homomorphism, then is a congruence relation (that is an equivalence relation that is compatible with the algebraic structure), and the coimage of is a quotient of The bijection between the coimage and the image of is an isomorphism in the algebraic sense; this is the most general form of the first isomorphism theorem. In topology If is a continuous function between two topological spaces then the topological properties of can shed light on the spaces and For example, if is a Hausdorff space then must be a closed set. Conversely, if is a Hausdorff space and is a closed set, then the coimage of if given the quotient space topology, must also be a Hausdorff space. A space is compact if and only if the kernel of every family of closed subsets having the finite intersection property (FIP) is non-empty; said differently, a space is compact if and only if every family of closed subsets with F.I.P. is fixed. See also References Bibliography Abstract algebra Basic concepts in set theory Set theory Topology
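To make the definition concrete, the kernel of a function on a finite set can be computed directly as the partition of the domain into classes whose elements have equal images. The function and domain below are arbitrary examples chosen for illustration.

```python
from collections import defaultdict

def kernel_partition(f, domain):
    """Group domain elements by their image under f: the classes of the equivalence kernel."""
    classes = defaultdict(list)
    for x in domain:
        classes[f(x)].append(x)
    return list(classes.values())

# Example: f(x) = x mod 3 on {0, ..., 8}; elements are equivalent iff f agrees on them.
partition = kernel_partition(lambda x: x % 3, range(9))
print(partition)   # [[0, 3, 6], [1, 4, 7], [2, 5, 8]]
# The partition (the coimage) is in bijection with the image {0, 1, 2}: one class per image value.
```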
Kernel (set theory)
[ "Physics", "Mathematics" ]
624
[ "Abstract algebra", "Set theory", "Mathematical logic", "Basic concepts in set theory", "Topology", "Space", "Geometry", "Spacetime", "Algebra" ]
330,102
https://en.wikipedia.org/wiki/Social%20learning%20theory
Social learning theory is a theory of social behavior that proposes that new behaviors can be acquired by observing and imitating others. It states that learning is a cognitive process that takes place in a social context and can occur purely through observation or direct instruction, even in the absence of motor reproduction or direct reinforcement. In addition to the observation of behavior, learning also occurs through the observation of rewards and punishments, a process known as vicarious reinforcement. When a particular behavior is rewarded regularly, it will most likely persist; conversely, if a particular behavior is constantly punished, it will most likely desist. The theory expands on traditional behavioral theories, in which behavior is governed solely by reinforcements, by placing emphasis on the important roles of various internal processes in the learning individual. Albert Bandura is known for studying this theory. History and theoretical background In the 1940s, B. F. Skinner delivered a series of lectures on verbal behavior, putting forward a more empirical approach to the subject than existed in psychology at the time. In them, he proposed the use of stimulus-response theories to describe language use and development, and that all verbal behavior was underpinned by operant conditioning. He did however mention that some forms of speech derived from words and sounds that had previously been heard (echoic response), and that reinforcement from parents allowed these 'echoic responses' to be pared down to that of understandable speech. While he denied that there was any "instinct or faculty of imitation", Skinner's behaviorist theories formed a basis for redevelopment into Social Learning Theory. At around the same time, Clark Leonard Hull, an American psychologist, was a strong proponent of behaviorist stimulus-response theories, and headed a group at Yale University's Institute of Human Relations. Under him, Neal Miller and John Dollard aimed to come up with a reinterpretation of psychoanalytic theory in terms of stimulus-response. This led to their book, Social Learning and Imitation, published in 1941, which posited that personality consisted of learned habits. They used Hull's drive theory, where a drive is a need that stimulates a behavioral response, crucially conceiving a drive for imitation, which was positively reinforced by social interaction and widespread as a result. This was the first use of the term 'social learning', but Miller and Dollard did not consider their ideas to be separate from Hullian learning theory, only a possible refinement. Nor did they follow up on their original ideas with a sustained research program. Julian B. Rotter, a professor at Ohio State University, published his book, Social Learning and Clinical Psychology in 1954. This was the first extended statement of a comprehensive social learning theory. Rotter moved away from the strictly behaviorist learning of the past, and considered instead the holistic interaction between the individual and the environment. Essentially he was attempting an integration of behaviorism (which generated precise predictions but was limited in its ability to explain complex human interactions) and gestalt psychology (which did a better job of capturing complexity but was much less powerful at predicting actual behavioral choices). In his theory, the social environment and individual personality created probabilities of behavior, and the reinforcement of these behaviors led to learning. 
He emphasized the subjective nature of the responses and effectiveness of reinforcement types. While his theory used vocabulary common to that of behaviorism, the focus on internal functioning and traits differentiated his theories, and can be seen as a precursor to more cognitive approaches to learning. Rotter's theory is also known as expectancy-value theory due to its central explanatory constructs. Expectancy is defined as the individual's subjectively held probability that a given action will lead to a given outcome. It can range from zero to one, with one representing 100% confidence in the outcome. For example, a person may entertain a given level of belief that they can make a foul shot in basketball or that an additional hour of study will improve their grade on an examination. Reinforcement value is defined as the individual's subjective preference for a given outcome, assuming that all possible outcomes were equally available. In other words, the two variables are independent of each other. These two variables interact to generate behavior potential, or the likelihood that a given action will be performed. The nature of the interaction is not specified, though Rotter suggests that it is likely to be multiplicative. The basic predictive equation is: Where: BP = Behavior Potential E = Expectancy RV = Reinforcement Value Although the equation is essentially conceptual, it is possible to enter numerical values if one is conducting an experiment. Rotter's 1954 book contains the results of many such experiments demonstrating this and other principles. Importantly, both expectancies and reinforcement values generalize. After many experiences ('learning trials', in behaviorist language) a person will develop a generalized expectancy for success in a domain. For example, a person who has played several sports develops a generalized expectancy concerning how they will do in an athletic setting. This is also termed freedom of movement. Generalized expectancies become increasingly stable as we accumulate experience, eventually taking on a trait-like consistency. Similarly, we generalize across related reinforcers, developing what Rotter termed need values. These needs (which resemble those described by Henry Murray) are another major determinant of behavior. Generalized expectancies and needs are the major personality variables in Rotter's theory. The influence of a generalized expectancy will be greatest when encountering novel, unfamiliar situations. As experience is gained, specific expectancies are developed regarding that situation. For example, a person's generalized expectancy for success in sports will have less influence on their actions in a sport with which they have long experience. Another conceptual equation in Rotter's theory proposes that the value of a given reinforcer is a function of the expectancy that it will lead to another reinforcing outcome and the value set upon that outcome. This is important because many social reinforcers are what behaviorists term secondary reinforcers – they have no intrinsic value, but have become linked with other, primary, reinforcers. For example, the value set on obtaining a high grade on an examination is dependent on how strongly that grade is linked (in the subjective belief system of the student) with other outcomes – which might include parental praise, graduation with honors, offers of more prestigious jobs upon graduation, etc. – and the extent to which those other outcomes are themselves valued. 
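As a rough illustration of Rotter's conceptual equation relating behavior potential to expectancy and reinforcement value, the toy calculation below assumes the multiplicative interaction Rotter suggested as likely; the numbers are invented for illustration and carry no empirical weight.

```python
def behavior_potential(expectancy, reinforcement_value):
    """Toy version of BP = f(E, RV), here taken as multiplicative (one of Rotter's suggestions)."""
    assert 0.0 <= expectancy <= 1.0
    return expectancy * reinforcement_value

# Studying an extra hour: moderately confident it raises the grade, and the grade is highly valued.
print(behavior_potential(expectancy=0.6, reinforcement_value=0.9))   # 0.54
# Watching TV instead: the outcome is certain but valued less, so the potential is lower.
print(behavior_potential(expectancy=1.0, reinforcement_value=0.4))   # 0.40
```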
Rotter's social learning theory also generated many suggestions for clinical practice. Psychotherapy was largely conceptualized as expectancy modification and, to some extent, as values modification. This may be seen as an early form of cognitive-behavioral therapy. In 1959, Noam Chomsky published his criticism of Skinner's book Verbal Behavior, an extension of Skinner's initial lectures. In his review, Chomsky stated that pure stimulus-response theories of behavior could not account for the process of language acquisition, an argument that contributed significantly to psychology's cognitive revolution. He theorized that "human beings are somehow specially designed to" understand and acquire language, ascribing a definite but unknown cognitive mechanism to it. Within this context, Albert Bandura studied learning processes that occurred in interpersonal contexts and were not, in his view, adequately explained either by theories of operant conditioning or by existing models of social learning. Bandura began to conduct studies of the rapid acquisition of novel behaviors via social observation, the most famous of which were the Bobo doll experiments (1961-63). In their 1963 book Social Learning and Personality Development, Bandura and Richard Walters argued that "the weaknesses of learning approaches that discount the influence of social variables are nowhere more clearly revealed than in their treatment of the acquisition of novel responses." Skinner's explanation of the acquisition of new responses relied on the process of successive approximation, which required multiple trials, reinforcement for components of behavior, and gradual change. Rotter's theory proposed that the likelihood of a behavior occurring was a function of the subjective expectancy and value of the reinforcement. According to Bandura, this model did not account for a response that had not yet been learned – though this contention does not address the likelihood that generalization from related situations would produce behaviors in new ones. Bandura went on to write the book Social Learning Theory in 1977. Bandura's Social Learning Theory (1977) Social Learning Theory integrated behavioral and cognitive theories of learning in order to provide a comprehensive model that could account for the wide range of learning experiences that occur in the real world. As initially outlined by Bandura and Walters in 1963, the theory was entirely behavioral in nature; the crucial element that made it innovative and increasingly influential was its emphasis upon the role of imitation. Over the years, however, Bandura shifted to a more cognitive perspective, and this led to a major revision of the theory in 1977. At this time, the key tenets of Social Learning Theory were stated as follows: Learning is not purely behavioral; rather, it is a cognitive process that takes place in a social context. Learning can occur by observing a behavior and by observing the consequences of the behavior (vicarious reinforcement). Learning involves observation, extraction of information from those observations, and making decisions about the performance of the behavior (observational learning or modeling). Thus, learning can occur without an observable change in behavior. Reinforcement plays a role in learning but is not entirely responsible for learning. The learner is not a passive recipient of information. Cognition, environment, and behavior all mutually influence each other (reciprocal determinism). 
Observation and direct experience Typical stimulus-response theories rely entirely upon direct experience (of the stimulus) to inform behavior. Bandura opens up the scope of learning mechanisms by introducing observation as a possibility. He adds to this the mechanism of modeling – a means by which humans "represent actual outcomes symbolically". These models, cognitively mediated, allow future consequences to have as much of an impact as actual consequences would in a typical stimulus-response theory. An important factor in Social Learning Theory is the concept of reciprocal determinism. This notion states that just as an individual's behavior is influenced by the environment, the environment is also influenced by the individual's behavior. In other words, a person's behavior, environment, and personal qualities all reciprocally influence each other. For example, a child who plays violent video games will likely influence their peers to play as well, which then encourages the child to play more often. Modeling and underlying cognitive processes Social Learning Theory draws heavily on the concept of modeling as described above. Bandura outlined three types of modeling stimuli: Live models, where a person is demonstrating the desired behavior Verbal instruction, in which an individual describes the desired behavior in detail and instructs the participant in how to engage in the behavior Symbolic, in which modeling occurs by means of the media, including movies, television, Internet, literature, and radio. Stimuli can be either real or fictional characters. Exactly what information is gleaned from observation is influenced by the type of model, as well as a series of cognitive and behavioral processes, including: Attention – in order to learn, observers must attend to the modeled behavior. Experimental studies have found that awareness of what is being learned and the mechanisms of reinforcement greatly boosts learning outcomes. Attention is impacted by characteristics of the observer (e.g., perceptual abilities, cognitive abilities, arousal, past performance) and characteristics of the behavior or event (e.g., relevance, novelty, affective valence, and functional value). In this way, social factors contribute to attention – the prestige of different models affects the relevance and functional value of observation and therefore modulates attention. Retention – In order to reproduce an observed behavior, observers must be able to remember features of the behavior. Again, this process is influenced by observer characteristics (cognitive capabilities, cognitive rehearsal) and event characteristics (complexity). The cognitive processes underlying retention are described by Bandura as visual and verbal, where verbal descriptions of models are used in more complex scenarios. Reproduction – By reproduction, Bandura refers not to the propagation of the model but the implementation of it. This requires a degree of cognitive skill, and may in some cases require sensorimotor capabilities. Reproduction can be difficult because in the case of behaviors that are reinforced through self-observation (he cites improvement in sports), it can be difficult to observe behavior well. This can require the input of others to provide self-correcting feedback. Newer studies on feedback support this idea, suggesting that effective feedback, which helps with observation and correction, improves participants' performance on tasks.
Motivation – The decision to reproduce (or refrain from reproducing) an observed behavior is dependent on the motivations and expectations of the observer, including anticipated consequences and internal standards. Bandura's description of motivation is also fundamentally based on environmental and thus social factors, since motivational factors are driven by the functional value of different behaviors in a given environment. Evolution and cultural intelligence Social Learning Theory has more recently been applied alongside, and used to justify, the theory of cultural intelligence. The cultural intelligence hypothesis argues that humans possess a set of specific behaviors and skills that allow them to exchange information culturally. This hinges on a model of human learning in which social learning is key, and in which humans have been selected for traits that maximize opportunities for social learning. The theory builds on extant social theory by suggesting that social learning abilities, like Bandura's cognitive processes required for modeling, correlate with other forms of intelligence and learning. Experimental evidence has shown that humans overimitate behavior compared to chimpanzees, lending credence to the idea that we have selected for methods of social learning. Some academics have suggested that our ability to learn socially and culturally has led to our success as a species. In neuroscience Recent research in neuroscience has implicated mirror neurons as a neurophysiological basis for social learning, observational learning, motor cognition and social cognition. Mirror neurons have been heavily linked to social learning in humans. Mirror neurons were first discovered in primates in studies which involved teaching the monkey motor activity tasks. One such study focused on teaching primates to crack nuts with a hammer. When the primate witnessed another individual cracking nuts with a hammer, the mirror neuron systems became activated as the primate learned to use the hammer to crack nuts. However, when the primate was not presented with a social learning opportunity, the mirror neuron systems did not activate and learning did not occur. Studies with humans show similar evidence, with the human mirror neuron system activating when a person observes someone else perform a physical task. The activation of the mirror neuron system is thought to be critical for understanding goal-directed behaviors and the intentions behind them. Although still controversial, this provides a direct neurological link to understanding social cognition. In social work Social work draws on theories from many disciplines, such as criminology and education. Even though social learning theory comes from psychology, it can also be applied to the study of social work. Social learning theory is important in social work because of the role of observing others. For example, if a child watches their sibling do their daily routine, they are more likely to want to copy the routine step by step. Feedback and reinforcement can help individuals learn and adopt new behaviors. Social workers can use feedback and reinforcement to help their clients make positive changes. For example, a social worker might provide feedback and reinforcement for a client who has made progress toward a goal, such as maintaining sobriety. Social learning provides a useful framework for social workers to help their clients make positive changes by leveraging the power of social influence and modeling.
Depression Social learning theory has been illustrated with many different examples, and it can be applied to depression in a variety of ways. For example, a person with depression may withdraw from social situations and avoid interacting with others. They may feel that they have nothing to contribute to conversations or that others will not understand them. Depression can make it difficult for people to find the motivation to engage in social activities. They may also feel that it takes too much energy to interact with others, and they would rather stay home alone. Social learning theory provides a framework for understanding the role of social factors in depression and for developing interventions that promote positive behaviors and attitudes. In health promotion Social learning theory emphasizes the importance of observing and modeling the behaviors, attitudes and beliefs of others in promoting health behaviors. Promoting positive and healthy habits is a big part of an educator's and even a social worker's job. Teachers are expected to teach their students how to behave in class. For example, if a teacher wants students to be quiet while the teacher is talking, that expectation has to be explicitly taught. Teachers are also expected to teach students how to role-play, tell stories, and take part in classroom activities. Another example is peer-led health programs, which can effectively promote health behaviors among adolescents and young adults by modeling social learning behaviors and attitudes and by providing social support for positive changes. Community-based interventions can use social learning theory principles to promote healthy behaviors at the community level. In addiction Addiction is related to social learning theory, which emphasizes the role of social influences and reinforcement in the development and maintenance of addictive behaviors. Social learning theory suggests that people learn and adopt behaviors through observation, experience, and reinforcement from social interactions with others. In the case of addiction, individuals may learn and adopt substance use behaviors from peers, family members, or media exposure, and through positive reinforcement such as pleasure or relief from stress. Additionally, social learning theory highlights the importance of social context in reinforcing addictive behaviors, as social situations and norms may influence the decision to engage in substance use. Social learning theory proposes that addiction is a learned behavior influenced by environmental and social factors. Viral challenges One modern-day example of social learning theory in action is the phenomenon of "viral challenges" on social media. These challenges involve individuals performing a specific action or task, usually for the purpose of entertainment, and then sharing the video with their online community. According to Bandura's social learning theory, people learn new behaviors by observing and imitating the actions of others. In the case of viral challenges, individuals watch others perform a specific task and then "imitate" the behavior by completing the challenge themselves. For example, the ALS Ice Bucket Challenge, which went viral in the summer of 2014, involved people dumping a bucket of ice water over their heads to raise awareness for ALS (amyotrophic lateral sclerosis) and then challenging others to do the same.
The challenge quickly spread across social media platforms, with celebrities and politicians also participating, and raised over $115 million for the ALS Association. Another example is the "In My Feelings" challenge, which took place on social media in the summer of 2018. The challenge involved people dancing to Drake's song "In My Feelings" alongside a moving car and sharing the video online. The challenge was started by Instagram user @theshiggyshow and quickly gained popularity across social media platforms. These examples demonstrate how social learning theory can be applied to real-world phenomena, with individuals learning and imitating new behaviors through the observation of others on social media platforms. Criminology Social learning theory has been used to explain the emergence and maintenance of deviant behavior, especially aggression. Criminologists Ronald Akers and Robert Burgess integrated the principles of social learning theory and operant conditioning with Edwin Sutherland's differential association theory to create a comprehensive theory of criminal behavior. Burgess and Akers emphasized that criminal behavior is learned in both social and nonsocial situations through combinations of direct reinforcement, vicarious reinforcement, explicit instruction, and observation. Both the probability of being exposed to certain behaviors and the nature of the reinforcement are dependent on group norms. Developmental psychology In her book Theories of Developmental Psychology, Patricia H. Miller lists both moral development and gender-role development as important areas of research within social learning theory. Social learning theorists emphasize observable behavior regarding the acquisition of these two skills. For gender-role development, the same-sex parent provides only one of many models from which the individual learns gender-roles. Social learning theory also emphasizes the variable nature of moral development due to the changing social circumstances of each decision: "The particular factors the child thinks are important vary from situation to situation, depending on variables such as which situational factors are operating, which causes are most salient, and what the child processes cognitively. Moral judgments involve a complex process of considering and weighing various criteria in a given social situation." For social learning theory, gender development has to do with the interactions of numerous social factors, involving all the interactions the individual encounters. For social learning theory, biological factors are important but take a back seat to the importance of learned, observable behavior. Because of the highly gendered society in which an individual might develop, individuals begin to distinguish people by gender even as infants. Bandura's account of gender allows for more than cognitive factors in predicting gendered behavior: for Bandura, motivational factors and a broad network of social influences determine if, when, and where gender knowledge is expressed. Management Social learning theory proposes that rewards are not the sole force behind creating motivation. Thoughts, beliefs, morals, and feedback all help to motivate us. Three other ways in which we learn are vicarious experience, verbal persuasion, and physiological states. Modeling, or the scenario in which we see someone's behaviors and adopt them as our own, aids the learning process, as do mental states and cognitive processes.
Media violence Principles of social learning theory have been applied extensively to the study of media violence. Akers and Burgess hypothesized that observed or experienced positive rewards and lack of punishment for aggressive behaviors reinforce aggression. Many research studies and meta-analyses have found significant correlations between viewing violent television and aggression later in life, and between playing violent video games and aggressive behaviors, while many others have not. The role of observational learning has also been cited as an important factor in the rise of rating systems for TV, movies, and video games. Creating social change with media Entertainment-education in the form of a telenovela or soap opera can help viewers learn socially desired behaviors in a positive way from models portrayed in these programs. The telenovela format allows the creators to incorporate elements that can bring a desired response. These elements may include music, actors, melodrama, props or costumes. Entertainment-education is symbolic modeling and follows a formula with three sets of characters; the cultural value to be examined is determined ahead of time: Characters that support a value (positive role models) Characters who reject the value (negative role models) Characters who have doubts about the value (undecided) Within this formula there are at least three doubters that represent the demographic group within the target population. One of these doubters will accept the value less than halfway through, the second will accept the value two-thirds of the way through, and the third doubter does not accept the value and is seriously punished. This doubter is usually killed. Positive social behaviors are reinforced with rewards and negative social behaviors are reinforced with punishment. At the end of the episode a short epilogue delivered by a recognizable figure summarizes the educational content, and within the program viewers are given resources in their community. Applications for social change Through observational learning a model can bring forth new ways of thinking and behaving. With a modeled emotional experience, the observer shows an affinity toward people, places and objects. They dislike what the models do not like and like what the models care about. Television helps contribute to how viewers see their social reality. "Media representations gain influence because people's social constructions of reality depend heavily on what they see, hear and read rather than what they experience directly". Any effort to change beliefs must be directed toward the sociocultural norms and practices at the social system level. Before a drama is developed, extensive research is done through focus groups that represent the different sectors within a culture. Participants are asked what problems in society concern them most and what obstacles they face, giving creators of the drama culturally relevant information to incorporate into the show. The pioneer of entertainment-education is Miguel Sabido, a creative writer/producer/director in the 1970s at the Mexican national television system, Televisa. Sabido spent eight years working on a method that would create social change, now known as the Sabido Method. He credits Albert Bandura's social learning theory, the drama theory of Eric Bentley, Carl Jung's theory of archetypes, MacLean's triune brain theory and his own soap opera theory with influencing his method.
Sabido's method has been used worldwide to address social issues such as national literacy, population growth and health concerns such as HIV. Psychotherapy Social Learning Theory has significantly influenced psychotherapy, providing a multifaceted framework that extends beyond traditional behavioral conditioning. Social learning theory can be integrated with various therapeutic models and lends itself to a wide range of practical techniques and interventions. For example, therapists in many therapeutic approaches utilize modeling, where clients observe and learn from the therapist's behaviors and response patterns. Techniques such as behavioral rehearsal, role-playing, and social skills training empower clients to acquire new behaviors and enhance their coping mechanisms in various situations. Additionally, social learning theory underscores the profound influence of culture on human development, and posits that cultural norms, values, and social contexts significantly shape individual development. Therapists must assess for the influence of culture in understanding their clients and adapt interventions to align with clients' cultural backgrounds. In systemic therapy, such as couples or family therapy, social learning theory assists therapists in identifying intergenerational patterns of behavior. Systemic therapists may assist clients in gaining insights into the origins of presenting problems. Presenting problems are often conceptualized as behaviors and coping mechanisms that can be transmitted across generations through observational learning and operant conditioning. Frequently, issues are rooted in learned behaviors and coping strategies acquired through observation and modeling. Social learning theory enriches psychotherapy by providing a holistic perspective that encompasses historical context, cultural considerations, family dynamics and interpersonal relationships, and intergenerational patterns. These perspectives allow therapists to have a more complex, nuanced understanding of the development and maintenance of presenting problems. School psychology Many classroom and teaching strategies draw on principles of social learning to enhance students' knowledge acquisition and retention. For example, using the technique of guided participation, a teacher says a phrase and asks the class to repeat the phrase. Thus, students both imitate and reproduce the teacher's action, aiding retention. An extension of guided participation is reciprocal learning, in which both student and teacher share responsibility in leading discussions. Additionally, teachers can shape the classroom behavior of students by modelling appropriate behavior and visibly rewarding students for good behavior. By emphasizing the teacher's role as model and encouraging the students to adopt the position of observer, the teacher can make knowledge and practices explicit to students, enhancing their learning outcomes. Algorithm for computer optimization In the modern field of computational intelligence, social learning theory has been adopted to develop a new computer optimization algorithm, the social learning algorithm. Emulating observational learning and reinforcement behaviors, a virtual society deployed in the algorithm seeks the strongest behavioral patterns with the best outcome. This corresponds to searching for the best solution in solving optimization problems.
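The published social learning algorithm and social cognitive optimization have their own specific operators; the sketch below is not either of those algorithms, only a minimal, hypothetical Python illustration of the general idea just described: a virtual society of agents repeatedly imitates (with noise) the behavior of better-performing agents, with occasional random exploration, and the best behavior found is taken as the solution.

import random

def social_learning_optimize(objective, dim, pop_size=20, generations=200,
                             imitation_noise=0.1, exploration_rate=0.05,
                             lower=-5.0, upper=5.0):
    """Toy social-learning-style optimizer (illustrative only; minimizes objective)."""
    # Initialize a virtual society of random behaviors (candidate solutions).
    society = [[random.uniform(lower, upper) for _ in range(dim)]
               for _ in range(pop_size)]

    def clamp(x):
        return max(lower, min(upper, x))

    for _ in range(generations):
        scores = [objective(agent) for agent in society]
        ranked = sorted(range(pop_size), key=lambda i: scores[i])  # best first
        top = ranked[:max(1, pop_size // 4)]                       # well-performing models
        for i in ranked[1:]:                                       # keep the best agent unchanged
            model = society[random.choice(top)]
            new_behavior = []
            for j in range(dim):
                if random.random() < exploration_rate:
                    # Occasional direct experience: try something new at random.
                    new_behavior.append(random.uniform(lower, upper))
                else:
                    # Observational learning: move toward the model, imperfectly.
                    step = (model[j] - society[i][j]) * random.random()
                    new_behavior.append(clamp(society[i][j] + step
                                              + random.gauss(0.0, imitation_noise)))
            society[i] = new_behavior

    scores = [objective(agent) for agent in society]
    best = min(range(pop_size), key=lambda i: scores[i])
    return society[best], scores[best]

# Example: minimize the sphere function, whose optimum is the zero vector.
if __name__ == "__main__":
    best, value = social_learning_optimize(lambda x: sum(v * v for v in x), dim=5)
    print(best, value)

In this sketch, imitation plays the role of vicarious reinforcement and the random moves play the role of direct experience; the published algorithms replace these crude rules with more carefully designed learning operators and, in social cognitive optimization, with a shared knowledge library.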
Compared with other bio-inspired global optimization algorithms that mimic natural evolution or animal behaviors, the social learning algorithm has prominent advantages. First, since self-improvement through learning is more direct and rapid than the evolutionary process, the social learning algorithm can improve on the efficiency of algorithms that mimic natural evolution. Second, compared with the interaction and learning behaviors in animal groups, the social learning process of human beings exhibits a higher level of intelligence. By emulating human learning behaviors, it is possible to arrive at more effective optimizers than existing swarm intelligence algorithms. Experimental results have demonstrated the effectiveness and efficiency of the social learning algorithm, which in turn has also served, through computer simulation, to verify the outcomes of social learning behavior in human society. Another example is social cognitive optimization, which is a population-based metaheuristic optimization algorithm. This algorithm is based on social cognitive theory; it simulates the individual learning of a set of agents, each with its own memory, and their social learning from the knowledge held in the social sharing library. It has been used for solving continuous optimization, integer programming, and combinatorial optimization problems. There are also several mathematical models of social learning which try to model this phenomenon using probabilistic tools. See also Mimetic theory References External links
Social learning theory
[ "Biology" ]
5,836
[ "Behavior", "Behavioral concepts", "Behaviorism", "Social learning theory" ]
330,158
https://en.wikipedia.org/wiki/Artificial%20island
An artificial island or man-made island is an island that has been constructed by humans rather than formed through natural processes. Other definitions may suggest that artificial islands are lands with the characteristics of human intervention in their formation process, while others argue that artificial islands are created by expanding existing islets, constructing on existing reefs, or amalgamating several islets together. Although constructing artificial islands is not a modern phenomenon, there is no definite legal definition of it. Artificial islands may vary in size from small islets reclaimed solely to support a single pillar of a building or structure to those that support entire communities and cities. Archaeologists argue that such islands were created as far back as the Neolithic era. Early artificial islands included floating structures in still waters or wooden or megalithic structures erected in shallow waters (e.g. crannógs and Nan Madol discussed below). In modern times, artificial islands are usually formed by land reclamation, but some are formed by flooding of valleys resulting in the tops of former knolls getting isolated by water (e.g., Barro Colorado Island). There are several reasons for the construction of these islands, which include residential, industrial, commercial, structural (for bridge pylons) or strategic purposes. One of the world's largest artificial islands, René-Levasseur Island, was formed by the flooding of two adjacent reservoirs. Technological advancements have made it feasible to build artificial islands in waters as deep as 75 meters. The size of the waves and the structural integrity of the island play a crucial role in determining the maximum depth. History Despite a popular image of modernity, artificial islands actually have a long history in many parts of the world, dating back to the reclaimed islands of Ancient Egyptian civilization, the Stilt crannogs of prehistoric Wales, Scotland and Ireland, the ceremonial centers of Nan Madol in Micronesia and the still extant floating islands of Lake Titicaca. The city of Tenochtitlan, the Aztec predecessor of Mexico City that was home to 500,000 people when the Spaniards arrived, stood on a small natural island in Lake Texcoco that was surrounded by countless artificial chinamitl islands. The people of Langa Langa Lagoon and Lau Lagoon in Malaita, Solomon Islands, built about 60 artificial islands on the reef including Funaafou, Sulufou, and Adaege. The people of Lau Lagoon build islands on the reef as this provided protection against attack from the people who lived in the centre of Malaita. These islands were formed literally one rock at a time. A family would take their canoe out to the reef which protects the lagoon and then dive for rocks, bring them to the surface and then return to the selected site and drop the rocks into the water. Living on the reef was also healthier as the mosquitoes, which infested the coastal swamps, were not found on the reef islands. The Lau people continue to live on the reef islands. Many artificial islands have been built in urban harbors to provide either a site deliberately isolated from the city or just spare real estate otherwise unobtainable in a crowded metropolis. An example of the first case is Dejima (or Deshima), created in the bay of Nagasaki in Japan's Edo period as a contained center for European merchants. During the isolationist era, Dutch people were generally banned from Nagasaki and Japanese from Dejima. 
Similarly, Ellis Island, in Upper New York Bay beside New York City, a former tiny islet greatly expanded by land reclamation, served as an isolated immigration center for the United States in the late 19th and early 20th century, preventing an escape to the city of those refused entry for disease or other perceived flaws, who might otherwise be tempted toward illegal immigration. One of the most well-known artificial islands is the Île Notre-Dame in Montreal, built for Expo 67. The Venetian Islands in Miami Beach, Florida, in Biscayne Bay added valuable new real estate during the Florida land boom of the 1920s. When the bubble that the developers were riding burst, the bay was left scarred with the remnants of their failed project. A boom town development company was building a sea wall for an island that was to be called Isola di Lolando but could not stay in business after the 1926 Miami Hurricane and the Great Depression, dooming the island-building project. The concrete pilings from the project still stand as another development boom roared around them, 80 years later. Largest artificial islands according to their size (reclaimed lands) Modern projects Bahrain Bahrain has several artificial islands including Northern City, Diyar Al Muharraq, and Durrat Al Bahrain. Named after the 'most perfect pearl' in the Persian Gulf, Durrat Al Bahrain is a US$6 billion joint development owned by the Bahrain Mumtalakat Holding Company and Kuwait Finance House Bahrain (KFH). The project is designed by the firm Atkins. It consists of a series of 15 large artificial islands covering an area of about 5 km2 (54,000,000 sq ft) and has six atolls, five fish-shaped islands, two crescent-shaped islands, and two more small islands related to the Marina area. Netherlands In 1969, the Flevopolder in the Netherlands was finished, as part of the Zuiderzee Works. It has a total land surface of 970 km2, which makes it by far the largest artificial island by land reclamation in the world. The island consists of two polders, Eastern Flevoland and Southern Flevoland. Together with the Noordoostpolder, which includes some small former islands like Urk, the polders form Flevoland, the 12th province of the Netherlands that almost entirely consists of reclaimed land. An entire artificial archipelago, Marker Wadden has been built as a conservation area for birds and other wildlife, the project started in 2016. Maldives Maldives have been creating various artificial islands to promote economic development and to address the threat of rising sea level. Hulhumalé island was reclaimed to establish a new land mass required to meet the existing and future housing, industrial and commercial development demands of the Malé region. The official settlement was inaugurated on May 12, 2004. Qatar The Pearl Island is in the north of the Qatari capital Doha, home to a range of residential, commercial and tourism activities. Qanat Quartier is designed to be a 'Virtual Venice in the Middle East'. Lusail & large areas around Ras Laffan, Hamad International Airport & Hamad Port. The New Doha International Airport is the second largest artificial island built in the world, with a size of 22km2. The Pearl-Qatar is the third largest artificial island in the world, with a size of 13.9km2. The island was built in 2006, by main contractor DEME Group. United Arab Emirates The United Arab Emirates is home to several artificial island projects. 
They include Yas Island, augmentations to Saadiyat Island, Khalifa Port, Al Reem Island, Al Lulu Island, Al Raha Creek, al Hudairiyat Island, The Universe and the Dubai Waterfront. Palm Islands (Palm Jumeirah, Palm Jebel Ali, and Deira Island) and the World Islands off Dubai were created for leisure and tourism purposes. The Burj Al Arab is on its own artificial island. The Universe, Palm Jebel Ali, Dubai Waterfront, and Palm Deira are on hold. China China has conducted a land reclamation project which, by mid-2015, had built at least seven artificial islands in the South China Sea off the coast of Palawan, totaling 2,000 acres in size. One artificial island built on Fiery Cross Reef near the Spratly Islands is now the site of a military barracks, lookout tower and a runway long enough to handle Chinese military aircraft. A largely touristic and commercial project is the Ocean Flower Island project on Hainan island. Indonesia Pantai Indah Kapuk (PIK) in North Jakarta is an area featuring luxury residential and commercial developments. Two artificial islands, Golf Island and Ebony Island, were created to expand the PIK area. They offer facilities, recreational spaces, scenic waterfront views and residential areas. Airports Kansai International Airport was the first airport to be built completely on an artificial island, in 1994, followed by Chūbu Centrair International Airport in 2005, both the New Kitakyushu Airport and Kobe Airport in 2006, Ordu Giresun Airport in 2016, and Rize-Artvin Airport in 2022. When Hong Kong International Airport opened in 1998, 75% of the property was created using land reclamation upon the existing islands of Chek Lap Kok and Lam Chau. China is currently building several airports on artificial islands; they include runways of Shanghai International Airport, Dalian Jinzhouwan International Airport, which is being built on a 21 square kilometer artificial island, Xiamen Xiang'an International Airport, and Sanya Hongtangwan International Airport, designed by Bentley Systems, which is being built on a 28 square kilometer artificial island. Environmental impact Artificial islands negatively impact the marine environment. The large quantities of sand required to build these islands are acquired through dredging, which is harmful to coral reefs and disrupts marine life. The increased amount of sand, sediment, and fine particles creates turbid conditions, blocking necessary UV rays from reaching coral reefs, creating coral turbidity (where more organic material is taken in by coral) and increasing bacterial activity (more harmful bacteria are introduced into coral). The construction of artificial islands also decreases the subaqueous area in surrounding waters, leading to habitat destruction or degradation for many species. Political status Under the United Nations Convention on the Law of the Sea treaty (UNCLOS), artificial islands are not considered harbor works (Article 11) and are under the jurisdiction of the nearest coastal state if within its exclusive economic zone (Article 56). Artificial islands are also not considered islands for purposes of having their own territorial waters or exclusive economic zones, and only the coastal state may authorize their construction (Article 60); however, on the high seas beyond national jurisdiction, any "state" may construct artificial islands (Article 87). The unrecognised micronation known as the Principality of Sealand (often shortened to simply "Sealand") is entirely on a single artificial island.
Greyzone warfare strategies Over time, after World War II, several countries have been reported to have built artificial islands for strategic and military purposes. For instance, the Philippines and China have been reported to have constructed artificial islands in the South China Sea, primarily to assert territorial claims over the disputed waters. Similarly, Russia has allegedly done so in the Arctic, both for strategic and military purposes. These reports are subject to ongoing political and diplomatic debates. China The island-building activities of China have been the subject of close examination by experts, who suggest that they are driven by strategic objectives. The issue at the heart of the matter revolves around China's claim that its historical entitlement justifies its actions in the area. This is opposed by the legal argument supported by the United Nations Convention on the Law of the Sea (UNCLOS). It is noteworthy that UNCLOS serves as the primary legal framework that governs the use and control of maritime zones. This convention establishes regulations on how coastal states can exercise their sovereignty over territorial waters, contiguous zones, exclusive economic zones (EEZs), and the continental shelf. China's claim to the South China Sea dates back to the 1940s. At that time, China recovered islands in the name of the Cairo Declaration and the Potsdam Proclamation, and there was no reaction from Vietnam or any other state against it. In 1947, China drafted the eleven-dash line (also referred to as the nine-dash line) to outline the geographical scope of its authority over the South China Sea. China began building islands in the 1980s, initially creating a series of minor military garrisons. However, the reason why China faces criticism is because some of the reclaimed islands fall within the EEZs of other countries, which raises concerns about China's compliance with UNCLOS. Vietnam has also made a historical claim, pointing to its rule over the islands in the 17th century. The Philippines argues for its rights based on geographical proximity. Meanwhile, Malaysia and Brunei claim parts of the sea using EEZ as the basis of their claims. UNCLOS Article 60 stipulates that naturally formed islands can generate EEZs, while artificial islands cannot. Therefore, China's construction of artificial islands raises questions about whether they can legitimately claim an EEZ around those islands. UNCLOS also enshrines the freedom of navigation and overflight in the EEZ of coastal states, which implies that all countries have the right to sail, fly, and conduct military exercises in those waters. Nevertheless, China has repeatedly challenged this principle by constructing artificial islands, imposing restrictions on navigation, and militarising the area. Legal status of artificial islands by China The legal implications surrounding China's island construction efforts present complex challenges. A key issue revolves around determining the classification of land masses as either rocks or seabed, which holds significant importance in these disputed cases. Maritime law establishes a clear distinction between land masses eligible for expansion into new island groups and those that do not qualify. According to this legal framework, low-tide elevations are considered part of the seabed and do not generate a territorial sea, EEZ, or continental shelf. However, they serve as a reference point for measuring the entitlements of nearby rocks or islands. 
Rocks, unlike islands, lack the capacity to sustain human habitation or support economic activity. While they generate a territorial sea, they do not establish an EEZ or continental shelf. UNCLOS stipulates that both rocks and islands must be naturally formed and remain above water at high tide. The Spratly Islands have been a subject of contention among multiple countries, including Taiwan, Vietnam, the Philippines, Malaysia, Brunei, and China. China's claim to the islands, despite entering the dispute relatively late, has been supported by arguments asserting historical presence and construction activities on the islands as a basis for their claim. In terms of international law, land reclamation itself is not explicitly prohibited. There is no specific rule within international law that prohibits any country from engaging in land reclamation at sea. The legality of such activities primarily depends on their location in relation to adjacent land territories. Within the 12 nautical mile territorial sea, a country holds the right to reclaim land as it falls under its sovereign authority. However, beyond this 12 nautical mile limit, the country must consider whether its actions conform to the rights and jurisdictions recognised by UNCLOS. Reclamation activities conducted between 12 and 200 nautical miles are considered part of the process of establishing and utilising artificial islands, installations, and structures, governed by specific provisions within UNCLOS. It is worth mentioning that artificial islands may include stationary oil rigs. Coastal states are permitted to undertake reclamation within designated areas as long as they fulfil their obligation to inform other countries and respect their rights, as outlined by UNCLOS rules. However, any artificial islands created through this process are restricted to maintaining a 500-meter safety zone around them and must not obstruct international navigation. Hybrid warfare and China's greyzone tactics Hybrid warfare is understood as a form of conflict that combines conventional and irregular tactics. Hybrid warfare may also be defined as a multifaceted strategy aimed at destabilising a functioning state and dividing its society. This comprehensive definition portrays hybrid strategy as a versatile and complex approach utilising a combination of conventional and unconventional means, overt and covert activities, involving military, paramilitary, irregular, and civilian actors across different domains of power. The ultimate objective of hybrid warfare is to exploit vulnerabilities and weaknesses in order to achieve geopolitical and strategic goals. Some argue, that China's greyzone tactics mainly aim to improve its geopolitical position in a peaceful manner. In contrast to the greyzone tactics used by Russia in Crimea in 2014, China's approach differs significantly. One supporting argument is that the majority of the activities occur in uninhabited areas at sea, which contradicts a definition of hybrid warfare that suggests it is targeted at populations. Additionally, China's objective is not to destabilise other states, but rather to enhance its national security by gaining control over regional waters. Furthermore, China is not aiming to seize control from another power, but rather seeks to establish a dominant security and political position in the region. It is worth noting that China employs unarmed or lightly armed vessels deliberately, as they are unlikely to resort to deadly force. 
However, others argue that China's greyzone tactics can be classified as hybrid warfare. Some viewpoints contend that China's establishment of military bases on artificial islands serves as a means to assert their territorial claims through the use of force. This approach is referred to as the Cabbage strategy, wherein a contested area is encircled by multiple layers of security to deny access to rival nations, ultimately solidifying their claim. While there is no consensus on China's motives behind the creation of artificial islands, it is widely acknowledged that China aims to bolster its power and influence in the region. These actions contribute to the escalating tensions in the South China Sea. Gallery See also Artificial hill Chinampa Discovery Bay, California Eko Atlantic Land reclamation in Monaco List of artificial islands Ocean colonization Ocean Flower Island Offshore geotechnical engineering Principality of Sealand Republic of Rose Island Seasteading Very large floating structure References External links Artificial Islands in The Law of the Sea Coastal construction artificial Land reclamation
Artificial island
[ "Engineering" ]
3,577
[ "Construction", "Coastal construction" ]
330,206
https://en.wikipedia.org/wiki/Differentiable%20function
In mathematics, a differentiable function of one real variable is a function whose derivative exists at each point in its domain. In other words, the graph of a differentiable function has a non-vertical tangent line at each interior point in its domain. A differentiable function is smooth (the function is locally well approximated as a linear function at each interior point) and does not contain any break, angle, or cusp. If is an interior point in the domain of a function , then is said to be differentiable at if the derivative exists. In other words, the graph of has a non-vertical tangent line at the point . is said to be differentiable on if it is differentiable at every point of . is said to be continuously differentiable if its derivative is also a continuous function over the domain of the function . Generally speaking, is said to be of class if its first derivatives exist and are continuous over the domain of the function . For a multivariable function, as shown here, the differentiability of it is something more complex than the existence of the partial derivatives of it. Differentiability of real functions of one variable A function , defined on an open set , is said to be differentiable at if the derivative exists. This implies that the function is continuous at . This function is said to be differentiable on if it is differentiable at every point of . In this case, the derivative of is thus a function from into A continuous function is not necessarily differentiable, but a differentiable function is necessarily continuous (at every point where it is differentiable) as is shown below (in the section Differentiability and continuity). A function is said to be continuously differentiable if its derivative is also a continuous function; there exist functions that are differentiable but not continuously differentiable (an example is given in the section Differentiability classes). Differentiability and continuity If is differentiable at a point , then must also be continuous at . In particular, any differentiable function must be continuous at every point in its domain. The converse does not hold: a continuous function need not be differentiable. For example, a function with a bend, cusp, or vertical tangent may be continuous, but fails to be differentiable at the location of the anomaly. Most functions that occur in practice have derivatives at all points or at almost every point. However, a result of Stefan Banach states that the set of functions that have a derivative at some point is a meagre set in the space of all continuous functions. Informally, this means that differentiable functions are very atypical among continuous functions. The first known example of a function that is continuous everywhere but differentiable nowhere is the Weierstrass function. Differentiability classes A function is said to be if the derivative exists and is itself a continuous function. Although the derivative of a differentiable function never has a jump discontinuity, it is possible for the derivative to have an essential discontinuity. For example, the function is differentiable at 0, since exists. However, for differentiation rules imply which has no limit as Thus, this example shows the existence of a function that is differentiable but not continuously differentiable (i.e., the derivative is not a continuous function). Nevertheless, Darboux's theorem implies that the derivative of any function satisfies the conclusion of the intermediate value theorem. 
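The inline mathematical notation appears to have been stripped from this extract. For reference, the definitions being described can be restated; the symbols below are an editor's reconstruction rather than the article's original notation, and the function cited as differentiable but not continuously differentiable is presumably the usual textbook example.

\[
  f'(a) \;=\; \lim_{h \to 0} \frac{f(a+h) - f(a)}{h},
\]

so that f is differentiable at an interior point a of its domain when this limit exists, and is of class C^k on an open set U when f', f'', \dots, f^{(k)} all exist and are continuous on U. The example referred to in the discussion of differentiability classes is presumably

\[
  f(x) \;=\; \begin{cases} x^{2}\sin(1/x), & x \neq 0,\\ 0, & x = 0,\end{cases}
  \qquad
  f'(x) \;=\; 2x\sin(1/x) - \cos(1/x)\ \ (x \neq 0), \qquad f'(0) = 0,
\]

which is differentiable everywhere, but whose derivative has no limit as x \to 0 and so is not continuous at 0.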
Similarly to how continuous functions are said to be of continuously differentiable functions are sometimes said to be of . A function is of if the first and second derivative of the function both exist and are continuous. More generally, a function is said to be of if the first derivatives all exist and are continuous. If derivatives exist for all positive integers the function is smooth or equivalently, of Differentiability in higher dimensions A function of several real variables is said to be differentiable at a point if there exists a linear map such that If a function is differentiable at , then all of the partial derivatives exist at , and the linear map is given by the Jacobian matrix, an n × m matrix in this case. A similar formulation of the higher-dimensional derivative is provided by the fundamental increment lemma found in single-variable calculus. If all the partial derivatives of a function exist in a neighborhood of a point and are continuous at the point , then the function is differentiable at that point . However, the existence of the partial derivatives (or even of all the directional derivatives) does not guarantee that a function is differentiable at a point. For example, the function defined by is not differentiable at , but all of the partial derivatives and directional derivatives exist at this point. For a continuous example, the function is not differentiable at , but again all of the partial derivatives and directional derivatives exist. Differentiability in complex analysis In complex analysis, complex-differentiability is defined using the same definition as single-variable real functions. This is allowed by the possibility of dividing complex numbers. So, a function is said to be differentiable at when Although this definition looks similar to the differentiability of single-variable real functions, it is however a more restrictive condition. A function , that is complex-differentiable at a point is automatically differentiable at that point, when viewed as a function . This is because the complex-differentiability implies that However, a function can be differentiable as a multi-variable function, while not being complex-differentiable. For example, is differentiable at every point, viewed as the 2-variable real function , but it is not complex-differentiable at any point because the limit does not exist (the limit depends on the angle of approach). Any function that is complex-differentiable in a neighborhood of a point is called holomorphic at that point. Such a function is necessarily infinitely differentiable, and in fact analytic. Differentiable functions on manifolds If M is a differentiable manifold, a real or complex-valued function f on M is said to be differentiable at a point p if it is differentiable with respect to some (or any) coordinate chart defined around p. If M and N are differentiable manifolds, a function f: M → N is said to be differentiable at a point p if it is differentiable with respect to some (or any) coordinate charts defined around p and f(p). See also Generalizations of the derivative Semi-differentiability Differentiable programming References Multivariable calculus Smooth functions
Differentiable function
[ "Mathematics" ]
1,326
[ "Multivariable calculus", "Calculus" ]
330,303
https://en.wikipedia.org/wiki/Zygoma
The term zygoma generally refers to the zygomatic bone, a bone of the human skull that is commonly referred to as the cheekbone or malar bone, but it may also refer to: The zygomatic arch, a structure in the human skull formed primarily by parts of the zygomatic bone and the temporal bone The zygomatic process, a bony protrusion of the human skull, mostly composed of the zygomatic bone but also contributed to by the frontal bone, temporal bone, and maxilla See also Zygoma implant Zygoma reduction plasty Anatomy
Zygoma
[ "Biology" ]
130
[ "Anatomy" ]
330,310
https://en.wikipedia.org/wiki/Rank%E2%80%93nullity%20theorem
The rank–nullity theorem is a theorem in linear algebra, which asserts: the number of columns of a matrix is the sum of the rank of and the nullity of ; and the dimension of the domain of a linear transformation is the sum of the rank of (the dimension of the image of ) and the nullity of (the dimension of the kernel of ). It follows that for linear transformations of vector spaces of equal finite dimension, either injectivity or surjectivity implies bijectivity. Stating the theorem Linear transformations Let be a linear transformation between two vector spaces where 's domain is finite dimensional. Then where is the rank of (the dimension of its image) and is the nullity of (the dimension of its kernel). In other words, This theorem can be refined via the splitting lemma to be a statement about an isomorphism of spaces, not just dimensions. Explicitly, since induces an isomorphism from to the existence of a basis for that extends any given basis of implies, via the splitting lemma, that Taking dimensions, the rank–nullity theorem follows. Matrices Linear maps can be represented with matrices. More precisely, an matrix represents a linear map where is the underlying field. So, the dimension of the domain of is , the number of columns of , and the rank–nullity theorem for an matrix is Proofs Here we provide two proofs. The first operates in the general case, using linear maps. The second proof looks at the homogeneous system where is a with rank and shows explicitly that there exists a set of linearly independent solutions that span the null space of . While the theorem requires that the domain of the linear map be finite-dimensional, there is no such assumption on the codomain. This means that there are linear maps not given by matrices for which the theorem applies. Despite this, the first proof is not actually more general than the second: since the image of the linear map is finite-dimensional, we can represent the map from its domain to its image by a matrix, prove the theorem for that matrix, then compose with the inclusion of the image into the full codomain. First proof Let be vector spaces over some field and defined as in the statement of the theorem with . As is a subspace, there exists a basis for it. Suppose and let be such a basis. We may now, by the Steinitz exchange lemma, extend with linearly independent vectors to form a full basis of . Let such that is a basis for . From this, we know that We now claim that is a basis for . The above equality already states that is a generating set for ; it remains to be shown that it is also linearly independent to conclude that it is a basis. Suppose is not linearly independent, and let for some . Thus, owing to the linearity of , it follows that This is a contradiction to being a basis, unless all are equal to zero. This shows that is linearly independent, and more specifically that it is a basis for . To summarize, we have , a basis for , and , a basis for . Finally we may state that This concludes our proof. Second proof Let be an matrix with linearly independent columns (i.e. ). We will show that: To do this, we will produce an matrix whose columns form a basis of the null space of . Without loss of generality, assume that the first columns of are linearly independent. So, we can write where is an matrix with linearly independent column vectors, and is an matrix such that each of its columns is linear combinations of the columns of . 
This means that for some matrix (see rank factorization) and, hence, Let where is the identity matrix. So, is an matrix such that Therefore, each of the columns of are particular solutions of . Furthermore, the columns of are linearly independent because will imply for : Therefore, the column vectors of constitute a set of linearly independent solutions for . We next prove that any solution of must be a linear combination of the columns of . For this, let be any vector such that . Since the columns of are linearly independent, implies . Therefore, This proves that any vector that is a solution of must be a linear combination of the special solutions given by the columns of . And we have already seen that the columns of are linearly independent. Hence, the columns of constitute a basis for the null space of . Therefore, the nullity of is . Since equals rank of , it follows that . This concludes our proof. A third fundamental subspace When is a linear transformation between two finite-dimensional subspaces, with and (so can be represented by an matrix ), the rank–nullity theorem asserts that if has rank , then is the dimension of the null space of , which represents the kernel of . In some texts, a third fundamental subspace associated to is considered alongside its image and kernel: the cokernel of is the quotient space , and its dimension is . This dimension formula (which might also be rendered ) together with the rank–nullity theorem is sometimes called the fundamental theorem of linear algebra. Reformulations and generalizations This theorem is a statement of the first isomorphism theorem of algebra for the case of vector spaces; it generalizes to the splitting lemma. In more modern language, the theorem can also be phrased as saying that each short exact sequence of vector spaces splits. Explicitly, given that is a short exact sequence of vector spaces, then , hence Here plays the role of and is , i.e. In the finite-dimensional case, this formulation is susceptible to a generalization: if is an exact sequence of finite-dimensional vector spaces, then The rank–nullity theorem for finite-dimensional vector spaces may also be formulated in terms of the index of a linear map. The index of a linear map , where and are finite-dimensional, is defined by Intuitively, is the number of independent solutions of the equation , and is the number of independent restrictions that have to be put on to make solvable. The rank–nullity theorem for finite-dimensional vector spaces is equivalent to the statement We see that we can easily read off the index of the linear map from the involved spaces, without any need to analyze in detail. This effect also occurs in a much deeper result: the Atiyah–Singer index theorem states that the index of certain differential operators can be read off the geometry of the involved spaces. Citations References . External links , MIT Linear Algebra Lecture on the Four Fundamental Subspaces, from MIT OpenCourseWare Theorems in linear algebra Isomorphism theorems Articles containing proofs
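Since the inline formulas are missing from this extract, a small numerical check may be useful. The snippet below is an editor-added illustration, not part of the original article; it verifies the identity rank(A) + nullity(A) = n, where n is the number of columns, for a sample matrix using NumPy.

import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])          # a 2 x 3 matrix with linearly dependent rows
n = A.shape[1]                         # number of columns = dimension of the domain
rank = np.linalg.matrix_rank(A)        # dimension of the image (column space)

# A basis of the null space can be read off the singular value decomposition:
_, _, vt = np.linalg.svd(A)
kernel_basis = vt[rank:]               # the last n - rank right singular vectors
assert all(np.allclose(A @ v, 0.0) for v in kernel_basis)

nullity = len(kernel_basis)            # dimension of the kernel
print(rank, nullity, rank + nullity == n)   # prints: 1 2 True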
Rank–nullity theorem
[ "Mathematics" ]
1,356
[ "Theorems in algebra", "Theorems in linear algebra", "Articles containing proofs" ]
330,320
https://en.wikipedia.org/wiki/Exponential%20decay
A quantity is subject to exponential decay if it decreases at a rate proportional to its current value. Symbolically, this process can be expressed by the following differential equation, where N is the quantity and λ (lambda) is a positive rate called the exponential decay constant, disintegration constant, rate constant, or transformation constant: dN(t)/dt = −λN(t). The solution to this equation (see derivation below) is: N(t) = N0 e^(−λt), where N(t) is the quantity at time t, and N0 = N(0) is the initial quantity, that is, the quantity at time t = 0. Measuring rates of decay Mean lifetime If the decaying quantity, N(t), is the number of discrete elements in a certain set, it is possible to compute the average length of time that an element remains in the set. This is called the mean lifetime (or simply the lifetime), where the exponential time constant, τ, relates to the decay rate constant, λ, in the following way: τ = 1/λ. The mean lifetime can be looked at as a "scaling time", because the exponential decay equation can be written in terms of the mean lifetime, τ, instead of the decay constant, λ: N(t) = N0 e^(−t/τ), and τ is the time at which the population of the assembly is reduced to 1/e ≈ 0.367879441 times its initial value. This is equivalent to log2(e) ≈ 1.442695 half-lives. For example, if the initial population of the assembly, N(0), is 1000, then the population at time τ, N(τ), is 368. A very similar equation will be seen below, which arises when the base of the exponential is chosen to be 2, rather than e. In that case the scaling time is the "half-life". Half-life A more intuitive characteristic of exponential decay for many people is the time required for the decaying quantity to fall to one half of its initial value. (If N(t) is discrete, then this is the median life-time rather than the mean life-time.) This time is called the half-life, and often denoted by the symbol t1/2. The half-life can be written in terms of the decay constant, or the mean lifetime, as: t1/2 = ln(2)/λ = τ ln(2). When this expression is inserted for τ in the exponential equation above, and ln 2 is absorbed into the base, this equation becomes: N(t) = N0 2^(−t/t1/2). Thus, the amount of material left is 2^(−1) = 1/2 raised to the (whole or fractional) number of half-lives that have passed. Thus, after 3 half-lives there will be 1/2^3 = 1/8 of the original material left. Therefore, the mean lifetime is equal to the half-life divided by the natural log of 2, or: τ = t1/2 / ln(2). For example, polonium-210 has a half-life of 138 days, and a mean lifetime of 200 days. Solution of the differential equation The equation that describes exponential decay is dN/dt = −λN or, by rearranging (applying the technique called separation of variables), dN/N = −λ dt. Integrating, we have ln N = −λt + C, where C is the constant of integration, and hence N(t) = e^C e^(−λt) = N0 e^(−λt), where the final substitution, N0 = e^C, is obtained by evaluating the equation at t = 0, as N0 is defined as being the quantity at t = 0. This is the form of the equation that is most commonly used to describe exponential decay. Any one of decay constant, mean lifetime, or half-life is sufficient to characterise the decay. The notation λ for the decay constant is a remnant of the usual notation for an eigenvalue. In this case, λ is the eigenvalue of the negative of the differential operator d/dt with N(t) as the corresponding eigenfunction. The units of the decay constant are s^(−1). Derivation of the mean lifetime Given an assembly of elements, the number of which decreases ultimately to zero, the mean lifetime, τ, (also called simply the lifetime) is the expected value of the amount of time before an object is removed from the assembly.
Specifically, if the individual lifetime of an element of the assembly is the time elapsed between some reference time and the removal of that element from the assembly, the mean lifetime is the arithmetic mean of the individual lifetimes. Starting from the population formula first let c be the normalizing factor to convert to a probability density function: or, on rearranging, Exponential decay is a scalar multiple of the exponential distribution (i.e. the individual lifetime of each object is exponentially distributed), which has a well-known expected value. We can compute it here using integration by parts. Decay by two or more processes A quantity may decay via two or more different processes simultaneously. In general, these processes (often called "decay modes", "decay channels", "decay routes" etc.) have different probabilities of occurring, and thus occur at different rates with different half-lives, in parallel. The total decay rate of the quantity N is given by the sum of the decay routes; thus, in the case of two processes: The solution to this equation is given in the previous section, where the sum of is treated as a new total decay constant . Partial mean life associated with individual processes is by definition the multiplicative inverse of corresponding partial decay constant: . A combined can be given in terms of s: Since half-lives differ from mean life by a constant factor, the same equation holds in terms of the two corresponding half-lives: where is the combined or total half-life for the process, and are so-named partial half-lives of corresponding processes. Terms "partial half-life" and "partial mean life" denote quantities derived from a decay constant as if the given decay mode were the only decay mode for the quantity. The term "partial half-life" is misleading, because it cannot be measured as a time interval for which a certain quantity is halved. In terms of separate decay constants, the total half-life can be shown to be For a decay by three simultaneous exponential processes the total half-life can be computed as above: Decay series / coupled decay In nuclear science and pharmacokinetics, the agent of interest might be situated in a decay chain, where the accumulation is governed by exponential decay of a source agent, while the agent of interest itself decays by means of an exponential process. These systems are solved using the Bateman equation. In the pharmacology setting, some ingested substances might be absorbed into the body by a process reasonably modeled as exponential decay, or might be deliberately formulated to have such a release profile. Applications and examples Exponential decay occurs in a wide variety of situations. Most of these fall into the domain of the natural sciences. Many decay processes that are often treated as exponential, are really only exponential so long as the sample is large and the law of large numbers holds. For small samples, a more general analysis is necessary, accounting for a Poisson process. Natural sciences Chemical reactions: The rates of certain types of chemical reactions depend on the concentration of one or another reactant. Reactions whose rate depends only on the concentration of one reactant (known as first-order reactions) consequently follow exponential decay. For instance, many enzyme-catalyzed reactions behave this way. 
Electrostatics: The electric charge (or, equivalently, the potential) contained in a capacitor (capacitance C) discharges with exponential decay (when the capacitor experiences a constant external load of resistance R) and similarly charges with the mirror image of exponential decay (when the capacitor is charged from a constant voltage source though a constant resistance). The exponential time-constant for the process is so the half-life is The same equations can be applied to the dual of current in an inductor. Furthermore, the particular case of a capacitor or inductor changing through several parallel resistors makes an interesting example of multiple decay processes, with each resistor representing a separate process. In fact, the expression for the equivalent resistance of two resistors in parallel mirrors the equation for the half-life with two decay processes. Geophysics: Atmospheric pressure decreases approximately exponentially with increasing height above sea level, at a rate of about 12% per 1000m. Heat transfer: If an object at one temperature is exposed to a medium of another temperature, the temperature difference between the object and the medium follows exponential decay (in the limit of slow processes; equivalent to "good" heat conduction inside the object, so that its temperature remains relatively uniform through its volume). See also Newton's law of cooling. Luminescence: After excitation, the emission intensity – which is proportional to the number of excited atoms or molecules – of a luminescent material decays exponentially. Depending on the number of mechanisms involved, the decay can be mono- or multi-exponential. Pharmacology and toxicology: It is found that many administered substances are distributed and metabolized (see clearance) according to exponential decay patterns. The biological half-lives "alpha half-life" and "beta half-life" of a substance measure how quickly a substance is distributed and eliminated. Physical optics: The intensity of electromagnetic radiation such as light or X-rays or gamma rays in an absorbent medium, follows an exponential decrease with distance into the absorbing medium. This is known as the Beer-Lambert law. Radioactivity: In a sample of a radionuclide that undergoes radioactive decay to a different state, the number of atoms in the original state follows exponential decay as long as the remaining number of atoms is large. The decay product is termed a radiogenic nuclide. Thermoelectricity: The decline in resistance of a Negative Temperature Coefficient Thermistor as temperature is increased. Vibrations: Some vibrations may decay exponentially; this characteristic is often found in damped mechanical oscillators, and used in creating ADSR envelopes in synthesizers. An overdamped system will simply return to equilibrium via an exponential decay. Beer froth: Arnd Leike, of the Ludwig Maximilian University of Munich, won an Ig Nobel Prize for demonstrating that beer froth obeys the law of exponential decay. Social sciences Finance: a retirement fund will decay exponentially being subject to discrete payout amounts, usually monthly, and an input subject to a continuous interest rate. A differential equation dA/dt = input – output can be written and solved to find the time to reach any amount A, remaining in the fund. In simple glottochronology, the (debatable) assumption of a constant decay rate in languages allows one to estimate the age of single languages. 
(Computing the time of the split between two languages requires additional assumptions, independent of exponential decay.) Computer science The core routing protocol on the Internet, BGP, must maintain a routing table in order to remember the paths along which packets can be forwarded. When one of these paths repeatedly changes its state from available to unavailable (and vice versa), the BGP router controlling that path has to repeatedly add and remove the path record from its routing table ("flapping" the path), consuming local resources such as CPU and RAM and, worse, broadcasting useless updates to peer routers. To prevent this undesired behavior, an algorithm named route flap damping assigns each route a weight that grows each time the route changes its state and decays exponentially with time. When the weight exceeds a certain limit, the route is suppressed and ignored for routing until the decayed weight falls back below a reuse threshold. See also Exponential formula Exponential growth Radioactive decay for the mathematics of chains of exponential processes with differing constants Notes References External links Exponential decay calculator A stochastic simulation of exponential decay Tutorial on time constants Exponentials
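Written out explicitly (using N(t) for the decaying quantity and N_0 for its initial value, as in the article above), the main formulas discussed in this article are:

```latex
% Defining differential equation and its solution
\frac{dN(t)}{dt} = -\lambda N(t)
\qquad\Longrightarrow\qquad
N(t) = N_0 e^{-\lambda t}

% Mean lifetime, and the decay law rewritten in terms of it
\tau = \frac{1}{\lambda},
\qquad
N(t) = N_0 e^{-t/\tau}

% Half-life and its relation to the mean lifetime
t_{1/2} = \frac{\ln 2}{\lambda} = \tau \ln 2,
\qquad
N(t) = N_0 \, 2^{-t/t_{1/2}},
\qquad
\tau = \frac{t_{1/2}}{\ln 2} \approx 1.4427\, t_{1/2}

% Decay by two simultaneous processes
\lambda_c = \lambda_1 + \lambda_2,
\qquad
\frac{1}{T_{1/2}} = \frac{1}{t_1} + \frac{1}{t_2}
\;\Longrightarrow\;
T_{1/2} = \frac{t_1 t_2}{t_1 + t_2}
```

As a minimal Python sketch of the route flap damping scheme described in the Computer science section above: the penalty, thresholds, and half-life below are illustrative assumptions, not the defaults of any particular router implementation.

```python
import math

PENALTY_PER_FLAP = 1000.0
SUPPRESS_LIMIT = 2000.0   # suppress the route once the weight exceeds this
REUSE_LIMIT = 750.0       # re-advertise once the decayed weight falls below this
HALF_LIFE = 15 * 60       # seconds for the weight to halve

def decayed(weight, elapsed_seconds):
    """Exponentially decay a flap weight over the elapsed time."""
    return weight * math.exp(-math.log(2) * elapsed_seconds / HALF_LIFE)

class Route:
    def __init__(self):
        self.weight = 0.0
        self.suppressed = False

    def flap(self):
        """Called each time the route changes state."""
        self.weight += PENALTY_PER_FLAP
        if self.weight > SUPPRESS_LIMIT:
            self.suppressed = True

    def tick(self, elapsed_seconds):
        """Periodically decay the weight and un-suppress routes that have stayed stable."""
        self.weight = decayed(self.weight, elapsed_seconds)
        if self.suppressed and self.weight < REUSE_LIMIT:
            self.suppressed = False

r = Route()
r.flap(); r.flap(); r.flap()   # three quick flaps push the weight past the suppress limit
print(r.suppressed)            # True
r.tick(3600)                   # an hour later the weight has decayed by four half-lives
print(r.suppressed)            # False, since the weight is now below the reuse limit
```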
Exponential decay
[ "Mathematics" ]
2,374
[ "E (mathematical constant)", "Exponentials" ]
330,361
https://en.wikipedia.org/wiki/Thyroid-stimulating%20hormone
Thyroid-stimulating hormone (also known as thyrotropin, thyrotropic hormone, or abbreviated TSH) is a pituitary hormone that stimulates the thyroid gland to produce thyroxine (T4), and then triiodothyronine (T3) which stimulates the metabolism of almost every tissue in the body. It is a glycoprotein hormone produced by thyrotrope cells in the anterior pituitary gland, which regulates the endocrine function of the thyroid. Physiology Hormone levels TSH (with a half-life of about an hour) stimulates the thyroid gland to secrete the hormone thyroxine (T4), which has only a slight effect on metabolism. T4 is converted to triiodothyronine (T3), which is the active hormone that stimulates metabolism. About 80% of this conversion is in the liver and other organs, and 20% in the thyroid itself. TSH is secreted throughout life but particularly reaches high levels during the periods of rapid growth and development, as well as in response to stress. The hypothalamus, in the base of the brain, produces thyrotropin-releasing hormone (TRH). TRH stimulates the anterior pituitary gland to produce TSH. Somatostatin is also produced by the hypothalamus, and has an opposite effect on the pituitary production of TSH, decreasing or inhibiting its release. The concentration of thyroid hormones (T3 and T4) in the blood regulates the pituitary release of TSH; when T3 and T4 concentrations are low, the production of TSH is increased, and, conversely, when T3 and T4 concentrations are high, TSH production is decreased. This is an example of a negative feedback loop. Any inappropriateness of measured values, for instance a low-normal TSH together with a low-normal T4 may signal tertiary (central) disease and a TSH to TRH pathology. Elevated reverse T3 (RT3) together with low-normal TSH and low-normal T3, T4 values, which is regarded as indicative for euthyroid sick syndrome, may also have to be investigated for chronic subacute thyroiditis (SAT) with output of subpotent hormones. Absence of antibodies in patients with diagnoses of an autoimmune thyroid in their past would always be suspicious for development to SAT even in the presence of a normal TSH because there is no known recovery from autoimmunity. For clinical interpretation of laboratory results it is important to acknowledge that TSH is released in a pulsatile manner resulting in both circadian and ultradian rhythms of its serum concentrations. Subunits TSH is a glycoprotein and consists of two subunits, the alpha and the beta subunit. The α (alpha) subunit (i.e., chorionic gonadotropin alpha) is nearly identical to that of human chorionic gonadotropin (hCG), luteinizing hormone (LH), and follicle-stimulating hormone (FSH). The α subunit is thought to be the effector region responsible for stimulation of adenylate cyclase (involved the generation of cAMP). The α chain has a 92-amino acid sequence. The β (beta) subunit (TSHB) is unique to TSH, and therefore determines its receptor specificity. The β chain has a 118-amino acid sequence. The TSH receptor The TSH receptor is found mainly on thyroid follicular cells. Stimulation of the receptor increases T3 and T4 production and secretion. This occurs through stimulation of six steps in thyroid hormone synthesis: (1) Up-regulating the activity of the sodium-iodide symporter (NIS) on the basolateral membrane of thyroid follicular cells, thereby increasing intracellular concentrations of iodine (iodine trapping). 
(2) Stimulating iodination of thyroglobulin in the follicular lumen, a precursor protein of thyroid hormone. (3) Stimulating the conjugation of iodinated tyrosine residues. This leads to the formation of thyroxine (T4) and triiodothyronine (T3) that remain attached to the thyroglobulin protein. (4) Increased endocytocis of the iodinated thyroglobulin protein across the apical membrane back into the follicular cell. (5) Stimulation of proteolysis of iodinated thyroglobulin to form free thyroxine (T4) and triiodothyronine (T3). (6) Secretion of thyroxine (T4) and triiodothyronine (T3) across the basolateral membrane of follicular cells to enter the circulation. This occurs by an unknown mechanism. Stimulating antibodies to the TSH receptor mimic TSH and cause Graves' disease. In addition, hCG shows some cross-reactivity to the TSH receptor and therefore can stimulate production of thyroid hormones. In pregnancy, prolonged high concentrations of hCG can produce a transient condition termed gestational hyperthyroidism. This is also the mechanism of trophoblastic tumors increasing the production of thyroid hormones. Applications Diagnostics Reference ranges for TSH may vary slightly, depending on the method of analysis, and do not necessarily equate to cut-offs for diagnosing thyroid dysfunction. In the UK, guidelines issued by the Association for Clinical Biochemistry suggest a reference range of 0.4–4.0 μIU/mL (or mIU/L). The National Academy of Clinical Biochemistry (NACB) stated that it expected the reference range for adults to be reduced to 0.4–2.5 μIU/mL, because research had shown that adults with an initially measured TSH level of over 2.0 μIU/mL had "an increased odds ratio of developing hypothyroidism over the [following] 20 years, especially if thyroid antibodies were elevated". TSH concentrations in children are normally higher than in adults. In 2002, the NACB recommended age-related reference limits starting from about 1.3 to 19 μIU/mL for normal-term infants at birth, dropping to 0.6–10 μIU/mL at 10 weeks old, 0.4–7.0 μIU/mL at 14 months and gradually dropping during childhood and puberty to adult levels, 0.3–3.0 μIU/mL. Diagnosis of disease TSH concentrations are measured as part of a thyroid function test in patients suspected of having an excess (hyperthyroidism) or deficiency (hypothyroidism) of thyroid hormones. Interpretation of the results depends on both the TSH and T4 concentrations. In some situations measurement of T3 may also be useful. A TSH assay is now also the recommended screening tool for thyroid disease. Recent advances in increasing the sensitivity of the TSH assay make it a better screening tool than free T4. Monitoring The therapeutic target range TSH level for patients on treatment ranges between 0.3 and 3.0 μIU/mL. For hypothyroid patients on thyroxine, measurement of TSH alone is generally considered sufficient. An increase in TSH above the normal range indicates under-replacement or poor compliance with therapy. A significant reduction in TSH suggests over-treatment. In both cases, a change in dose may be required. A low or low-normal TSH value may also signal pituitary disease in the absence of replacement. For hyperthyroid patients, both TSH and T4 are usually monitored. In pregnancy, TSH measurements do not seem to be a good marker for the well-known association of maternal thyroid hormone availability with offspring neurocognitive development. TSH distribution progressively shifts toward higher concentrations with age. 
Difficulties with interpretation of TSH measurement Heterophile antibodies (which include human anti-mouse antibodies (HAMA) and Rheumatoid Factor (RF)), which bind weakly to the test assay's animal antibodies, causing a higher (or less commonly lower) TSH result than the actual true TSH level. Although the standard lab assay panels are designed to remove moderate levels of heterophilic antibodies, these fail to remove higher antibody levels. "Dr. Baumann [from Mayo Clinic] and her colleagues found that 4.4 percent of the hundreds of samples she tested were affected by heterophile antibodies.........The hallmark of this condition is a discrepancy between TSH value and free T4 value, and most important between laboratory values and patient's conditions. Endocrinologists, in particular, should be on alert for this." Macro-TSH - endogenous antibodies bind to TSH reducing its activity, so the pituitary gland would need to produce more TSH to obtain the same overall level of TSH activity. TSH Isomers - natural variations of the TSH molecule, which have lower activity, so the pituitary gland would need to produce more TSH to obtain the same overall level of TSH activity. The same TSH concentration may have a different meaning whether it is used for diagnosis of thyroid dysfunction or for monitoring of substitution therapy with levothyroxine. Reasons for this lack of generalisation are Simpson's paradox and the fact that the TSH-T3 shunt is disrupted in treated hypothyroidism, so that the shape of the relation between free T4 and TSH concentration is distorted. Therapeutic Synthetic recombinant human TSH alpha (rhTSHα or simply rhTSH) or thyrotropin alfa (INN) is manufactured by Genzyme Corp under the trade name Thyrogen. It is used to manipulate endocrine function of thyroid-derived cells, as part of the diagnosis and treatment of thyroid cancer. A Cochrane review compared treatments using recombinant human thyrotropin-aided radioactive iodine to radioactive iodine alone. In this review it was found that the recombinant human thyrotropin-aided radioactive iodine appeared to lead to a greater reduction in thyroid volume at the increased risk of hypothyroidism. No conclusive data on changes in quality of life with either treatments were found. History In 1916, Bennett M. Allen and Philip E. Smith found that the pituitary contained a thyrotropic substance. The first standardised purification protocol for this thyrotropic hormone was described by Charles George Lambie and Victor Trikojus, working at the University of Sydney in 1937. References External links TSH at Lab Tests Online Anterior pituitary hormones Glycoproteins Hormones of the hypothalamus-pituitary-thyroid axis Human hormones Peptide hormones Pituitary gland Sanofi Thyroid
Thyroid-stimulating hormone
[ "Chemistry" ]
2,275
[ "Glycoproteins", "Glycobiology" ]
330,432
https://en.wikipedia.org/wiki/Note%20%28typography%29
In publishing, a note is a brief text in which the author comments on the subject and themes of the book and names supporting citations. In the editorial production of books and documents, typographically, a note is usually several lines of text at the bottom of the page, at the end of a chapter, at the end of a volume, or a house-style typographic usage throughout the text. Notes are usually identified with superscript numbers or a symbol. Footnotes are informational notes located at the foot of the thematically relevant page, whilst endnotes are informational notes published at the end of a chapter, the end of a volume, or the conclusion of a multi-volume book. Unlike footnotes, which require manipulating the page design (text-block and page layouts) to accommodate the additional text, endnotes are advantageous to editorial production because the textual inclusion does not alter the design of the publication. However, graphic designers of contemporary editions of the Bible often place the notes in a narrow column in the page centre, between two columns of biblical text. Numbering and symbols In English-language typesetting, footnotes and endnotes are usually indicated with a superscript number appended to the pertinent block of text. Typographic symbols are sometimes used instead of numbers, with their traditional ordering being: Asterisk (*) Dagger (†) Crossed dagger (‡) Section sign (§) Vertical bar (‖) Pilcrow (¶) Additional typographic characters used to identify notes include the number sign (#), the Greek letter delta (Δ), the diamond-shaped lozenge (◊), the downward arrow (↓), and the manicule (☞), a hand with an extended index finger. Location Footnote reference numbers ("cues") in the body text of a page should be placed at the end of a sentence if possible, after the final punctuation. This minimizes the interruption of the flow of reading and allows the reader to absorb a complete sentence-idea before having their attention redirected to the content of the note.The cue is placed after any punctuation (normally after the closing point of a sentence). ... Notes cued in the middle of a sentence are a distraction to the reader, and cues are best located at the end of sentences. Academic usage Notes are most often used as an alternative to long explanations, citations, comments, or annotations that can be distracting to readers. Most literary style guidelines (including the Modern Language Association and the American Psychological Association) recommend limited use of foot- and endnotes. However, publishers often encourage note references instead of parenthetical references. Aside from use as a bibliographic element, notes are used for additional information, qualification, or explanation that might be too digressive for the main text. Footnotes are heavily utilized in academic institutions to support claims made in academic essays covering myriad topics. In particular, footnotes are the normal form of citation in historical journals. This is due, firstly, to the fact that the most important references are often to archive sources or interviews that do not readily fit standard formats, and secondly, to the fact that historians expect to see the exact nature of the evidence that is being used at each stage. The MLA (Modern Language Association) requires the superscript numbers in the main text to be placed following the punctuation in the phrase or clause the note is about. 
The exception to this rule occurs when a sentence contains a dash, in which case the superscript would precede it. However, MLA is not known for endnote or footnote citations, and APA and Chicago styles use them more regularly. Historians are known to use Chicago style citations. Aside from their technical use, authors use notes for a variety of reasons: As signposts to direct the reader to information the author has provided or where further useful information is pertaining to the subject in the main text. To attribute a quote or viewpoint. As an alternative to parenthetical references; it is a simpler way to acknowledge information gained from another source. To escape the limitations imposed on the word count of various academic and legal texts which do not take into account notes. Aggressive use of this strategy can lead to a text affected by "foot and note disease" (a derogation coined by John Betjeman). Government documents The US Government Printing Office Style Manual devotes over 660 words to the topic of footnotes. NASA has guidance for footnote usage in its historical documents. Legal writing Former Associate Justice Stephen Breyer of the Supreme Court of the United States is famous in the American legal community for his writing style, in which he never uses notes. He prefers to keep all citations within the text (which is permitted in American legal citation). Richard A. Posner has also written against the use of notes in judicial opinions. Bryan A. Garner, however, advocates using notes instead of inline citations. HTML HTML, the predominant markup language for web pages, has no mechanism for adding notes. Despite a number of different proposals over the years, the working group has been unable to reach a consensus on it. Because of this, MediaWiki, for example, has had to introduce its own <ref></ref> tag for citing references in notes. It might be argued that the hyperlink partially eliminates the need for notes, being the web's way to refer to another document. However, it does not allow citing to offline sources and if the destination of the link changes, the link can become dead or irrelevant. A proposed solution is the use of a digital object identifier. As of 2024, the HTML Living Standard has provided several workarounds for the inclusion of footnotes depending on length or type of annotation. In instances where a user needs to add an endnote or footnote using HTML, they can add the superscript number using <sup></sup>, then link the superscripted text to the reference section using an anchor tag. Create an anchor tag by using and then link the superscripted text to "ref1". History The London printer Richard Jugge is generally credited as the inventor of the footnote, first used in the Bishops' Bible of 1568. Early printings of the Douay Bible used a four-dot punctuation mark (represented in Unicode as U+2E2C “⸬”) to indicate a marginal note. It can often be mistaken for two closely-spaced colons. Literary device At times, notes have been used for their comical effect, or as a literary device. James Joyce's Finnegans Wake (1939) uses footnotes along with left and right marginal notes in Book II Chapter 2. The three types of notes represent comments from the three siblings doing their homework: Shem, Shaun, and Issy. J. G. 
Ballard's "Notes Towards a Mental Breakdown" (1967) is one sentence ("A discharged Broadmoor patient compiles 'Notes Towards a Mental Breakdown,' recalling his wife's murder, his trial and exoneration.") and a series of elaborate footnotes to each one of the words. Mark Z. Danielewski's House of Leaves (2000) uses what are arguably some of the most extensive and intricate footnotes in literature. Throughout the novel, footnotes are used to tell several different narratives outside of the main story. The physical orientation of the footnotes on the page also works to reflect the twisted feeling of the plot (often taking up several pages, appearing mirrored from page to page, vertical on either side of the page, or in boxes in the center of the page, in the middle of the central narrative). Flann O'Brien's The Third Policeman (1967) utilizes extensive and lengthy footnotes for the discussion of a fictional philosopher, de Selby. These footnotes span several pages and often overtake the main plotline, and add to the absurdist tone of the book. David Foster Wallace's Infinite Jest includes over 400 endnotes, some over a dozen pages long. Several literary critics suggested that the book be read with two bookmarks. Wallace uses footnotes, endnotes, and in-text notes in much of his other writing as well. Manuel Puig's Kiss of the Spider Woman (originally published in Spanish as El beso de la mujer araña) also makes extensive use of footnotes. Garrison Keillor's Lake Wobegon Days includes lengthy footnotes and a parallel narrative. Mark Dunn's Ibid: A Life is written entirely in endnotes. Luis d'Antin van Rooten's Mots d'Heures: Gousses, Rames (the title is in French, but when pronounced, sounds similar to the English "Mother Goose Rhymes"), in which he is allegedly the editor of a manuscript by the fictional François Charles Fernand d’Antin, contains copious footnotes purporting to help explain the nonsensical French text. The point of the book is that each written French poem sounds like an English nursery rhyme. Terry Pratchett has made numerous uses within his novels. The footnotes will often set up running jokes for the rest of the novel. B.L.A. and G.B. Gabbler's meta novel The Automation makes uses of footnotes to break the fourth wall. The narrator of the novel, known as "B.L.A.," tells the fantastical story as if true, while the editor, Gabbler, annotates the story through footnotes and thinks the manuscript is only a prose poem attempting to be a literary masterwork. Susanna Clarke's 2004 novel Jonathan Strange & Mr Norrell has 185 footnotes, adumbrating fictional events before and after those of the main text, in the same archaic narrative voice, and citing fictional scholarly and magical authorities. Jonathan Stroud's The Bartimaeus Trilogy uses footnotes to insert comical remarks and explanations by one of the protagonists, Bartimaeus. Michael Gerber's Barry Trotter parody series used footnotes to expand one-line jokes in the text into paragraph-long comedic monologues that would otherwise break the flow of the narrative. John Green's An Abundance of Katherines uses footnotes, about which he says: "[They] can allow you to create a kind of secret second narrative, which is important if, say, you're writing a book about what a story is and whether stories are significant." 
Dr Carol Bolton uses extensive footnotes to provide the modern reader with a cipher for a novel about the travels of the fictional Spanish traveller Don Manuel Alvarez Espriella, an early 19th-century construct of Robert Southey's, designed to provide him with vehicle to critique the societal habits of the day. Jasper Fforde's Thursday Next series exploits the use of footnotes as a communication device (the footnoterphone) which allows communication between the main character's universe and the fictional bookworld. Ernest Hemingway's Natural History of the Dead uses a footnote to further satirize the style of a history while making a sardonic statement about the extinction of "humanists" in modern society. Pierre Bayle's Historical and Critical Dictionary follows each brief entry with a footnote (often five or six times the length of the main text) in which saints, historical figures, and other topics are used as examples for philosophical digression. The separate footnotes are designed to contradict each other, and only when multiple footnotes are read together is Bayle's core argument for Fideistic skepticism revealed. This technique was used in part to evade the harsh censorship of 17th-century France. Mordecai Richler's novel Barney's Version uses footnotes as a character device that highlights unreliable passages in the narration. As the editor of his father's autobiography, the narrator's son must correct any of his father's misstated facts. The frequency of these corrections increases as the father falls victim to both hubris and Alzheimer's disease. While most of these changes are minor, a few are essential to plot and character development. In Vladimir Nabokov's Pale Fire, the main plot is told through the annotative endnotes of a fictional editor. Bartleby & Co., a novel by Enrique Vila-Matas, is stylized as footnotes to a nonexistent novel. The works of Jack Vance often have footnotes, detailing and informing the reader of the background of the world in the novel. Stephen Colbert's I Am America (And So Can You!) uses both footnotes and margin notes to offer additional commentary and humor. Doug Dorst's novel S. uses footnotes to explore the story and relationship of characters V.M. Straka and F.X. Caldeira. Terry Pratchett and Neil Gaiman's collaboration, Good Omens, frequently uses footnotes to add humorous asides. The short story "The Fifth Fear" in Terena Elizabeth Bell's collection Tell Me What You See uses footnotes to make the science fiction story resemble a historical document. Douglas Adams used footnotes frequently in his Hitchhiker's Guide to the Galaxy series. See also Annotation Citation Hyperkino Ibid. Nota bene Wikipedia style guide for references References Further reading Bibliography Reference Metadata
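Returning to the HTML section above, the following Python snippet spells out one way to assemble the superscript-plus-anchor markup it describes; the id values and the back-link arrow are illustrative choices, not something mandated by the HTML standard.

```python
# A rough sketch of the footnote workaround described in the HTML section:
# a superscript number in the body text linked to a note at the end of the
# page, with a back-link from the note to the reference point.
def footnote_reference(n):
    """Markup placed in the body text, after the relevant punctuation."""
    return f'<sup id="ref{n}"><a href="#note{n}">{n}</a></sup>'

def footnote_body(n, text):
    """Markup placed in the notes section at the end of the document."""
    return f'<p id="note{n}">{n}. {text} <a href="#ref{n}">↩</a></p>'

print(footnote_reference(1))
print(footnote_body(1, "Source of the quoted claim."))
```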
Note (typography)
[ "Technology" ]
2,765
[ "Metadata", "Data" ]
330,603
https://en.wikipedia.org/wiki/Disc%20integration
Disc integration, also known in integral calculus as the disc method, is a method for calculating the volume of a solid of revolution of a solid-state material when integrating along an axis "parallel" to the axis of revolution. This method models the resulting three-dimensional shape as a stack of an infinite number of discs of varying radius and infinitesimal thickness. It is also possible to use the same principles with rings instead of discs (the "washer method") to obtain hollow solids of revolutions. This is in contrast to shell integration, which integrates along an axis perpendicular to the axis of revolution. Definition Function of If the function to be revolved is a function of , the following integral represents the volume of the solid of revolution: where is the distance between the function and the axis of rotation. This works only if the axis of rotation is horizontal (example: or some other constant). Function of If the function to be revolved is a function of , the following integral will obtain the volume of the solid of revolution: where is the distance between the function and the axis of rotation. This works only if the axis of rotation is vertical (example: or some other constant). Washer method To obtain a hollow solid of revolution (the “washer method”), the procedure would be to take the volume of the inner solid of revolution and subtract it from the volume of the outer solid of revolution. This can be calculated in a single integral similar to the following: where is the function that is farthest from the axis of rotation and is the function that is closest to the axis of rotation. For example, the next figure shows the rotation along the -axis of the red "leaf" enclosed between the square-root and quadratic curves: The volume of this solid is: One should take caution not to evaluate the square of the difference of the two functions, but to evaluate the difference of the squares of the two functions. (This formula only works for revolutions about the -axis.) To rotate about any horizontal axis, simply subtract from that axis from each formula. If is the value of a horizontal axis, then the volume equals For example, to rotate the region between and along the axis , one would integrate as follows: The bounds of integration are the zeros of the first equation minus the second. Note that when integrating along an axis other than the , the graph of the function that is farthest from the axis of rotation may not be that obvious. In the previous example, even though the graph of is, with respect to the x-axis, further up than the graph of , with respect to the axis of rotation the function is the inner function: its graph is closer to or the equation of the axis of rotation in the example. The same idea can be applied to both the -axis and any other vertical axis. One simply must solve each equation for before one inserts them into the integration formula. See also Solid of revolution Shell integration References Frank Ayres, Elliott Mendelson. Schaum's Outlines: Calculus. McGraw-Hill Professional 2008, . pp. 244–248 (. Retrieved July 12, 2013.) Integral calculus Volume
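In standard notation (with R(x) denoting the distance from the curve to the axis of rotation), the disc and washer formulas described in this article read as follows; the "leaf" example is worked out assuming the bounding curves are y = √x and y = x², which is how the figure is described.

```latex
% Disc method, rotation about a horizontal axis (function of x)
V = \pi \int_a^b R(x)^2 \, dx

% Washer method: R_O is the function farthest from the axis, R_I the closest
V = \pi \int_a^b \left( R_O(x)^2 - R_I(x)^2 \right) dx

% Rotation about the horizontal line y = h rather than the x-axis
V = \pi \int_a^b \left( (R_O(x) - h)^2 - (R_I(x) - h)^2 \right) dx

% "Leaf" between the square-root and quadratic curves, rotated about the x-axis
V = \pi \int_0^1 \left( (\sqrt{x})^2 - (x^2)^2 \right) dx
  = \pi \int_0^1 \left( x - x^4 \right) dx
  = \pi \left( \tfrac{1}{2} - \tfrac{1}{5} \right)
  = \tfrac{3\pi}{10}
```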
Disc integration
[ "Physics", "Mathematics" ]
649
[ "Scalar physical quantities", "Physical quantities", "Calculus", "Quantity", "Size", "Extensive quantities", "Volume", "Wikipedia categories named after physical quantities", "Integral calculus" ]
330,604
https://en.wikipedia.org/wiki/Monoidal%20category
In mathematics, a monoidal category (or tensor category) is a category equipped with a bifunctor that is associative up to a natural isomorphism, and an object I that is both a left and right identity for ⊗, again up to a natural isomorphism. The associated natural isomorphisms are subject to certain coherence conditions, which ensure that all the relevant diagrams commute. The ordinary tensor product makes vector spaces, abelian groups, R-modules, or R-algebras into monoidal categories. Monoidal categories can be seen as a generalization of these and other examples. Every (small) monoidal category may also be viewed as a "categorification" of an underlying monoid, namely the monoid whose elements are the isomorphism classes of the category's objects and whose binary operation is given by the category's tensor product. A rather different application, for which monoidal categories can be considered an abstraction, is a system of data types closed under a type constructor that takes two types and builds an aggregate type. The types serve as the objects, and ⊗ is the aggregate constructor. The associativity up to isomorphism is then a way of expressing that different ways of aggregating the same data—such as and —store the same information even though the aggregate values need not be the same. The aggregate type may be analogous to the operation of addition (type sum) or of multiplication (type product). For type product, the identity object is the unit , so there is only one inhabitant of the type, and that is why a product with it is always isomorphic to the other operand. For type sum, the identity object is the void type, which stores no information, and it is impossible to address an inhabitant. The concept of monoidal category does not presume that values of such aggregate types can be taken apart; on the contrary, it provides a framework that unifies classical and quantum information theory. In category theory, monoidal categories can be used to define the concept of a monoid object and an associated action on the objects of the category. They are also used in the definition of an enriched category. Monoidal categories have numerous applications outside of category theory proper. They are used to define models for the multiplicative fragment of intuitionistic linear logic. They also form the mathematical foundation for the topological order in condensed matter physics. Braided monoidal categories have applications in quantum information, quantum field theory, and string theory. Formal definition A monoidal category is a category equipped with a monoidal structure. A monoidal structure consists of the following: a bifunctor called the monoidal product, or tensor product, an object called the monoidal unit, unit object, or identity object, three natural isomorphisms subject to certain coherence conditions expressing the fact that the tensor operation: is associative: there is a natural (in each of three arguments , , ) isomorphism , called associator, with components , has as left and right identity: there are two natural isomorphisms and , respectively called left and right unitor, with components and . Note that a good way to remember how and act is by alliteration; Lambda, , cancels the identity on the left, while Rho, , cancels the identity on the right. The coherence conditions for these natural transformations are: for all , , and in , the pentagon diagram commutes; for all and in , the triangle diagram commutes. 
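Stated equationally rather than diagrammatically, the two coherence conditions above say that, for all objects A, B, C, D:

```latex
% Triangle identity
(\mathrm{id}_A \otimes \lambda_B) \circ \alpha_{A,I,B} \;=\; \rho_A \otimes \mathrm{id}_B

% Pentagon identity
\alpha_{A,B,C \otimes D} \circ \alpha_{A \otimes B, C, D}
  \;=\;
  (\mathrm{id}_A \otimes \alpha_{B,C,D}) \circ \alpha_{A, B \otimes C, D} \circ (\alpha_{A,B,C} \otimes \mathrm{id}_D)
```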
A strict monoidal category is one for which the natural isomorphisms α, λ and ρ are identities. Every monoidal category is monoidally equivalent to a strict monoidal category. Examples Any category with finite products can be regarded as monoidal with the product as the monoidal product and the terminal object as the unit. Such a category is sometimes called a cartesian monoidal category. For example: Set, the category of sets with the Cartesian product, any particular one-element set serving as the unit. Cat, the category of small categories with the product category, where the category with one object and only its identity map is the unit. Dually, any category with finite coproducts is monoidal with the coproduct as the monoidal product and the initial object as the unit. Such a monoidal category is called cocartesian monoidal R-Mod, the category of modules over a commutative ring R, is a monoidal category with the tensor product of modules ⊗R serving as the monoidal product and the ring R (thought of as a module over itself) serving as the unit. As special cases one has: K-Vect, the category of vector spaces over a field K, with the one-dimensional vector space K serving as the unit. Ab, the category of abelian groups, with the group of integers Z serving as the unit. For any commutative ring R, the category of R-algebras is monoidal with the tensor product of algebras as the product and R as the unit. The category of pointed spaces (restricted to compactly generated spaces for example) is monoidal with the smash product serving as the product and the pointed 0-sphere (a two-point discrete space) serving as the unit. The category of all endofunctors on a category C is a strict monoidal category with the composition of functors as the product and the identity functor as the unit. Just like for any category E, the full subcategory spanned by any given object is a monoid, it is the case that for any 2-category E, and any object C in Ob(E), the full 2-subcategory of E spanned by {C} is a monoidal category. In the case E = Cat, we get the endofunctors example above. Bounded-above meet semilattices are strict symmetric monoidal categories: the product is meet and the identity is the top element. Any ordinary monoid is a small monoidal category with object set , only identities for morphisms, as tensor product and as its identity object. Conversely, the set of isomorphism classes (if such a thing makes sense) of a monoidal category is a monoid w.r.t. the tensor product. Any commutative monoid can be realized as a monoidal category with a single object. Recall that a category with a single object is the same thing as an ordinary monoid. By an Eckmann-Hilton argument, adding another monoidal product on requires the product to be commutative. Properties and associated notions It follows from the three defining coherence conditions that a large class of diagrams (i.e. diagrams whose morphisms are built using , , , identities and tensor product) commute: this is Mac Lane's "coherence theorem". It is sometimes inaccurately stated that all such diagrams commute. There is a general notion of monoid object in a monoidal category, which generalizes the ordinary notion of monoid from abstract algebra. Ordinary monoids are precisely the monoid objects in the cartesian monoidal category Set. Further, any (small) strict monoidal category can be seen as a monoid object in the category of categories Cat (equipped with the monoidal structure induced by the cartesian product). 
Monoidal functors are the functors between monoidal categories that preserve the tensor product and monoidal natural transformations are the natural transformations, between those functors, which are "compatible" with the tensor product. Every monoidal category can be seen as the category B(∗, ∗) of a bicategory B with only one object, denoted ∗. The concept of a category C enriched in a monoidal category M replaces the notion of a set of morphisms between pairs of objects in C with the notion of an M-object of morphisms between every two objects in C. Free strict monoidal category For every category C, the free strict monoidal category Σ(C) can be constructed as follows: its objects are lists (finite sequences) A1, ..., An of objects of C; there are arrows between two objects A1, ..., Am and B1, ..., Bn only if m = n, and then the arrows are lists (finite sequences) of arrows f1: A1 → B1, ..., fn: An → Bn of C; the tensor product of two objects A1, ..., An and B1, ..., Bm is the concatenation A1, ..., An, B1, ..., Bm of the two lists, and, similarly, the tensor product of two morphisms is given by the concatenation of lists. The identity object is the empty list. This operation Σ mapping category C to Σ(C) can be extended to a strict 2-monad on Cat. Specializations If, in a monoidal category, and are naturally isomorphic in a manner compatible with the coherence conditions, we speak of a braided monoidal category. If, moreover, this natural isomorphism is its own inverse, we have a symmetric monoidal category. A closed monoidal category is a monoidal category where the functor has a right adjoint, which is called the "internal Hom-functor" . Examples include cartesian closed categories such as Set, the category of sets, and compact closed categories such as FdVect, the category of finite-dimensional vector spaces. Autonomous categories (or compact closed categories or rigid categories) are monoidal categories in which duals with nice properties exist; they abstract the idea of FdVect. Dagger symmetric monoidal categories, equipped with an extra dagger functor, abstracting the idea of FdHilb, finite-dimensional Hilbert spaces. These include the dagger compact categories. Tannakian categories are monoidal categories enriched over a field, which are very similar to representation categories of linear algebraic groups. Preordered monoids A preordered monoid is a monoidal category in which for every two objects , there exists at most one morphism in C. In the context of preorders, a morphism is sometimes notated . The reflexivity and transitivity properties of an order, defined in the traditional sense, are incorporated into the categorical structure by the identity morphism and the composition formula in C, respectively. If and , then the objects are isomorphic which is notated . Introducing a monoidal structure to the preorder C involves constructing an object , called the monoidal unit, and a functor , denoted by "", called the monoidal multiplication. and must be unital and associative, up to isomorphism, meaning: and . As · is a functor, if and then . The other coherence conditions of monoidal categories are fulfilled through the preorder structure as every diagram commutes in a preorder. The natural numbers are an example of a monoidal preorder: having both a monoid structure (using + and 0) and a preorder structure (using ≤) forms a monoidal preorder as and implies . The free monoid on some generating set produces a monoidal preorder, producing the semi-Thue system. 
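As an informal illustration of the free strict monoidal category Σ(C) described above, the sketch below models objects of Σ(C) as Python lists and morphisms as equal-length lists of callables; this representation is an assumption made purely for the sketch, not a faithful encoding of all the categorical data.

```python
def tensor_objects(xs, ys):
    """Tensor product of two objects of Σ(C): list concatenation."""
    return xs + ys

UNIT = []   # the monoidal unit: the empty list

def tensor_morphisms(fs, gs):
    """Tensor product of two morphisms of Σ(C): concatenation of the arrow lists."""
    return fs + gs

def compose(fs, gs):
    """Componentwise composition of two morphism lists (fs after gs)."""
    assert len(fs) == len(gs)
    return [lambda x, f=f, g=g: f(g(x)) for f, g in zip(fs, gs)]

# Strict associativity and unitality hold on the nose, not just up to isomorphism:
A, B, C = ["a1", "a2"], ["b1"], ["c1", "c2"]
assert tensor_objects(tensor_objects(A, B), C) == tensor_objects(A, tensor_objects(B, C))
assert tensor_objects(UNIT, A) == A == tensor_objects(A, UNIT)
```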
See also Skeleton (category theory) Spherical category Monoidal category action References External links
Monoidal category
[ "Mathematics" ]
2,355
[ "Monoidal categories", "Mathematical structures", "Category theory" ]
330,618
https://en.wikipedia.org/wiki/Shell%20integration
Shell integration (the shell method in integral calculus) is a method for calculating the volume of a solid of revolution, when integrating along an axis perpendicular to the axis of revolution. This is in contrast to disc integration, which integrates along the axis parallel to the axis of revolution. Definition The shell method goes as follows: Consider a volume in three dimensions obtained by rotating a cross-section in the xy-plane around the y-axis. Suppose the cross-section is defined by the graph of the positive function f(x) on the interval [a, b]. Then the formula for the volume will be: V = 2π ∫_a^b x f(x) dx. If the function is of the y coordinate and the axis of rotation is the x-axis, then the formula becomes: V = 2π ∫_a^b y f(y) dy. If the function is rotating around the line x = h then the formula becomes: V = 2π ∫_a^b (x − h) f(x) dx, and for rotations around y = k it becomes V = 2π ∫_a^b (y − k) f(y) dy. The formula is derived by computing the double integral in polar coordinates. Derivation of the formula Example Consider the volume, depicted below, whose cross section on the interval [1, 2] is defined by: With the shell method we simply use the following formula: By expanding the polynomial, the integration is easily done giving cubic units. Comparison With Disc Integration Much more work is needed to find the volume if we use disc integration. First, we would need to solve for x in terms of y. Next, because the volume is hollow in the middle, we would need two functions: one that defines the outer solid and one that defines the inner hollow. After integrating each of these two functions, we would subtract them to yield the desired volume. See also Solid of revolution Disc integration References Frank Ayres, Elliott Mendelson. Schaum's Outlines: Calculus. McGraw-Hill Professional, 2008, pp. 244–248. Integral calculus
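A quick symbolic check of the example above: the article gives only the interval [1, 2], so the cross-section f(x) = (x − 1)²(x − 2)² used below is an assumption consistent with that description, chosen purely to illustrate the shell formula.

```python
import sympy as sp

x = sp.symbols('x')
# Assumed cross-section on [1, 2]; a polynomial that vanishes at both endpoints
# and is positive in between.
f = (x - 1)**2 * (x - 2)**2

# Shell formula: V = 2*pi times the integral of x*f(x) over [1, 2]
V = sp.integrate(2 * sp.pi * x * f, (x, 1, 2))
print(V)  # pi/10
```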
Shell integration
[ "Mathematics" ]
342
[ "Integral calculus", "Calculus" ]
330,675
https://en.wikipedia.org/wiki/Chicago%20Tylenol%20murders
The Chicago Tylenol murders were a series of poisoning deaths resulting from drug tampering in the Chicago metropolitan area in 1982. The victims consumed Tylenol-branded acetaminophen capsules that had been laced with potassium cyanide. Seven people died in the original poisonings, and there were several more deaths in subsequent copycat crimes. No suspect has been charged or convicted of the poisonings as of , but New York City resident James William Lewis was convicted of extortion for sending a letter to Tylenol's manufacturer, Johnson & Johnson, that took responsibility for the deaths and demanded $1 million to stop them. The incidents led to reforms in the packaging of over-the-counter drugs and to federal anti-tampering laws. Deaths and early public-safety efforts On September 28, 1982, 12-year-old Mary Kellerman was hospitalized after consuming a capsule of Extra Strength Tylenol; she died the next day. On September 29, six other individuals consumed contaminated Tylenol, including Adam Janus (27), Stanley Janus (25), and Theresa Janus (20), who each took Tylenol from a single bottle. All six—the Januses, Mary McFarland (31), Paula Prince (35), and Mary Reiner (27)—would ultimately die from consuming the pills. Asked to investigate the Januses' deaths, nurse Helen Jensen, Arlington Heights's only public health official, visited the Janus household and discovered a Tylenol bottle with an accompanying receipt indicating it had been purchased the same day. Noticing that there were six pills missing, she turned the bottle over to investigator Nick Pishos and reported her suspicion that it was related to the Janus' deaths. Pishos called Dr. Edmund R. Donoghue, deputy chief medical examiner for Cook County, who, suspecting that cyanide may be the culprit, asked Pishos to smell the bottle. When Pishos smelled an almond-like scent, Donoghue asked the county's chief toxicologist, Dr. Michael Schaffer, to test the capsules, and Schaffer's team determined that four of the 44 remaining capsules from the Janus' bottle contained nearly three times the fatal amount of cyanide. Authorities held a press conference advising the public not to take Tylenol for the time being. By chance, the bottle of Tylenol that Kellerman used was inventoried by paramedics. Investigators noticed that the Janus bottle and the Kellerman bottle came from the same lot, MC2880, and Johnson & Johnson issued a recall for all Tylenol from that lot. But when tainted bottles from other lots were discovered (for example, the pills in Mary McFarland's possession were traced to lots 1910 MD and MB 2738), the recall expanded to cover those lots and any bottle of extra-strength capsules (from any lot) purchased in the Chicago area, making it one of the largest pharmaceutical recalls ever. A multi-agency investigation found the tampered pills to have been sold or on the shelves at a variety of stores in the Chicago area, including two different Jewel Foods locations (one in Arlington Heights, one in Elk Grove Village); an Osco Drug store (in Schaumburg); a Walgreens and a Dominick's (both in Chicago); and a Frank's Finer Foods (in Winfield). One bottle had been purchased but, due to an off scent, not yet used by Linda Morgan, wife of Judge Lewis V. Morgan. In an effort to reassure the public, Johnson & Johnson, the manufacturer of Tylenol, distributed warnings to hospitals and distributors and halted Tylenol production and advertising. 
After other incidents, like strychnine added to Tylenol bottles in California, a nationwide recall of Tylenol products was issued on October 5, 1982; an estimated 31 million bottles were in circulation, with a retail value of over US$100 million (equivalent to $ million in ). The company also advertised in the national media for individuals not to consume any of its products that contained acetaminophen after it was determined that only these capsules had been tampered with. Johnson & Johnson also offered to exchange all Tylenol capsules already purchased by the public for solid tablets. Customs at airports outside the U.S. were asking visitors if they brought Tylenol medicine with them. Police investigation The tainted capsules were found to have been manufactured at two different locationsPennsylvania and Texassuggesting that the capsules were tampered with after the product had been placed on store shelves for sale. The police hypothesis was that someone had taken bottles off shelves in local stores of the Chicago area, placed potassium cyanide in some of the capsules, and then placed the packages back on the store shelves to be purchased by unknowing customers. In addition to the five bottles that led to the victims' deaths, a few other contaminated bottles were later discovered in the Chicago area. In early 1983, at the FBI's request, Chicago Tribune columnist Bob Greene published the address and grave location of the first and youngest victim, Mary Kellerman. The story, written with the Kellerman family's consent, was proposed by FBI criminal analyst John Douglas on the theory that the perpetrator might visit the house or gravesite if they were made aware of their locations. Both sites were kept under 24-hour video surveillance for several months, but the killer did not surface. A surveillance photo of Paula Prince purchasing cyanide-tampered Tylenol at a Walgreens at 1601 North Wells Street in Chicago was released by the Chicago Police Department. Police believe that a bearded man seen just feet behind Prince may be the killer. Suspects During the initial investigations, a man named James William Lewis was accused of sending a letter to Johnson & Johnson demanding $1 million to stop the cyanide-induced murders. Upon his arrest, Lewis told authorities how the person behind the attacks may have carried out the killings—by buying Tylenol, adding cyanide to the bottles, and returning them to the store shelves. Lewis was also found to have previously possessed a poisoning book, and, according to a confidential law-enforcement document, his fingerprints were discovered on pages related to cyanide. Lewis denied being responsible for the poisonings, but he admitted to writing the letter, which he said he had worked on for three days. During the trial, his attorneys claimed that Lewis "intended only to focus the attention of the authorities on his wife's former employer." Lewis was convicted of extortion and sentenced to 10 years in prison. In 2007, authorities determined that the letter had an October 1, 1982, postmark, meaning that, if Lewis's three-day timeline was accurate, he would have begun working on the letter prior to the first news reports concerning the poisonings. When confronted with this information, Lewis recanted his timeline. Court documents released in early 2009 "show Department of Justice investigators concluded Lewis was responsible for the poisonings, despite the fact that they did not have enough evidence to charge him". 
In January 2010, both Lewis and his wife submitted DNA samples and fingerprints to authorities. Lewis said "if the FBI plays it fair, I have nothing to worry about". The DNA samples did not match any DNA recovered on the bottles. Lewis continued to deny responsibility for the poisonings. Lewis died on July 9, 2023, at age 76. Police also investigated a second man, Roger Arnold, a dock worker at a Jewel-Osco in Melrose Park, who told officers that he possessed potassium cyanide. Bar owner Marty Sinclair, whose establishment Arnold frequented, reported Arnold to the police, saying that Arnold had discussed killing people with a white powder and had become increasingly erratic after his marriage had dissolved. Arnold had worked with victim Mary Reiner's father at a warehouse, and Arnold's wife had been treated at a hospital across the street from the store in which Reiner bought her cyanide-laced pills. A copy of The Poor Man's James Bond, which contained instructions on making potassium cyanide, was found in Arnold's home. Arnold was held several times by the police, but never charged. In the summer of 1983, Arnold, mistaking John Stanisha (a random passerby) for Sinclair, shot and killed Stanisha, a computer consultant and father of three, who was leaving a bar with multiple friends. Arnold was convicted of the killing in January 1984 and served 15 years of his 30-year sentence for second-degree murder, saying in 1996 from prison: "I killed a man, a perfectly innocent person. I had choices. I could have walked away." He died in June 2008. In 2010, Arnold's body was exhumed (and subsequently reburied) so that his femur bone could be removed for DNA testing. Arnold's DNA did not match the DNA samples discovered on the bottles. 21st-century investigation efforts In early January 2009, Illinois authorities renewed the investigation. Federal agents searched the home of Lewis in Cambridge, Massachusetts, and seized a number of items. In Chicago, an FBI spokesman declined to comment but said "we'll have something to release later possibly". In 2010, DNA samples were collected from Lewis and Arnold, whose body was exhumed for that purpose; neither's DNA matched DNA samples found on the tainted bottles. Law-enforcement officials received a number of tips related to the case coinciding with its 25th anniversary. In a written statement, the FBI explained, This review was prompted, in part, by the recent 25th anniversary of this crime and the resulting publicity. Further, given the many recent advances in forensic technology, it was only natural that a second look be taken at the case and recovered evidence. On May 19, 2011, the FBI requested DNA samples from "Unabomber" Ted Kaczynski in connection to the Tylenol murders. Kaczynski denied having ever possessed potassium cyanide. The first four Unabomber crimes happened in Chicago and its suburbs from 1978 to 1980, and Kaczynski's parents had a suburban Chicago home in Lombard, Illinois, in 1982, where he stayed occasionally. Aftermath Copycats Hundreds of copycat attacks involving Tylenol, other over-the-counter medications, and other products also took place around the United States immediately following the Chicago deaths. Three more deaths occurred in 1986 from gelatin capsules. 23-year-old Diane Elsroth died in Yonkers, New York, after ingesting "Extra-Strength Tylenol" capsules laced with cyanide. 
Excedrin capsules in Washington state were tampered with, resulting in the deaths of Susan Snow and Bruce Nickell from cyanide poisoning and the eventual arrest and conviction of Bruce Nickell's wife, Stella Nickell, for the tampering behind both deaths. That same year, Procter & Gamble's Encaprin was recalled after a spiking hoax in Chicago and Detroit that resulted in a precipitous sales drop and a withdrawal of the pain reliever from the market. In 1991 in Washington state, Kathleen Daneker and Stanley McWhorter were killed by capsules from two cyanide-tainted boxes of Sudafed, and Jennifer Meling went into a coma from a similar poisoning but recovered shortly thereafter. Jennifer's husband, Joseph Meling, was convicted in a federal court in Seattle on numerous charges relating to the deaths of Daneker and McWhorter and the attempted murder of his wife, who had been abused during the Melings' marriage. Meling was sentenced to life imprisonment and lost an appeal for a retrial. In 1986, University of Texas student Kenneth Faries was found dead in his apartment after succumbing to cyanide poisoning. Tampered Anacin capsules were determined to be the source of the cyanide found in his body. His death was ruled a homicide on May 30, 1986. On June 19, 1986, the AP reported that the Travis County Medical Examiner ruled his death a likely suicide. The FDA determined he obtained the poison from a lab in which he worked. Johnson & Johnson response Johnson & Johnson received positive coverage for its handling of the crisis; for example, an article in The Washington Post said, "Johnson & Johnson has effectively demonstrated how a major business ought to handle a disaster". The article further stated that "this is no Three Mile Island accident in which the company's response did more damage than the original incident", and applauded the company for being honest with the public. In addition to issuing the recall, the company established relations with the Chicago Police Department, the FBI, and the Food and Drug Administration, allowing it to take part in the search for the person who laced the capsules and to help prevent further tampering. While the company's market share collapsed from 35 percent to 8 percent at the time of the scare, it rebounded in less than a year, a recovery credited to the company's prompt and aggressive reaction. In November, it reintroduced capsules in a new, triple-sealed package, coupled with heavy price promotions. Within several years, Tylenol had regained the highest market share among over-the-counter analgesics in the US. After the recall, Johnson & Johnson subsidiary McNeil Laboratories submitted a claim to its insurance company, Affiliated FM Insurance, for the cost of carrying out the recall, a claim which was later denied. A lawsuit determined that McNeil Laboratories was ultimately not covered because the parent company Johnson & Johnson had elected not to buy more expensive recall insurance. McNeil sued again, further contending that the language of its excess liability insurance policy covered the recall and recall-related expenses. The court hearing that case rejected the claim of liability, stating that the recall "was not caused by liability for the seven deaths; it was at best merely related to the seven deaths in that they served as notice to the plaintiff that the Tylenol remaining on the shelves was potentially harmful." 
In 1991, Johnson & Johnson agreed to settle, for an undisclosed sum, all lawsuits against it for the original Chicago area deaths. Robert Kniffin, a spokesman for Johnson & Johnson, stated that "though there is no way we could have anticipated a criminal tampering with our product or prevented it, we wanted to do something for the families and finally get this tragic event behind us." The crisis management response, taught today as a model of corporate public relations, is chiefly credited to public relations executive Harold Burson. Pharmaceutical changes The 1982 incident inspired the pharmaceutical, food, and consumer product industries to develop tamper-resistant packaging, such as induction seals and improved quality control methods. Moreover, product tampering was made a federal crime. The new laws resulted in Stella Nickell's conviction in the Excedrin tampering case, for which she was sentenced to 90 years in prison. Additionally, the incident prompted the pharmaceutical industry to move away from capsules, which were easy to contaminate as a foreign substance could be placed inside without obvious signs of tampering. Within the year, the FDA introduced more stringent regulations to avoid product tampering. This led to the eventual replacement of the capsule with the solid "caplet", a tablet made in the shape of a capsule, as a drug delivery form and with the addition of tamper-evident safety seals to bottles of many sorts. 1982 Halloween While poisoned candy being given to trick-or-treaters at Halloween is rare, the Tylenol incident, which unfolded across October 1982, raised renewed fears of it. Some communities discouraged trick-or-treating for Halloween, and American grocery stores reported that candy sales were down more than 20%. See also List of multiple homicides in Illinois List of serial killers by country List of unsolved murders Paraquat murders References Further reading Bergmann, Joy (November 2, 2000). "A Bitter Pill – Someone Killed Seven People by Putting Cyanide in Tylenol Capsules – When James Lewis Was Caught for Writing an Extortion Letter, Prosecutors Appeared To Stop Looking for the Killer – Almost 20 Years Later No One Has Been Convicted of the Murders". Chicago Reader. Retrieved May 19, 2011. Solomon, Michael (July 13, 2022). "Poison Pill". Medium. Retrieved July 14, 2022. External links 1982 in Illinois 1982 murders in the United States 1980s in Chicago Adulteration Mass murder in the United States in the 1980s Deaths by cyanide poisoning Drug safety Health disasters in the United States Johnson & Johnson Mass murder in 1982 Mass poisoning Murder in Chicago 1980s crimes in Illinois Product recalls September 1982 events in the United States Unsolved mass murders in the United States Mass murder in Illinois
Chicago Tylenol murders
[ "Chemistry" ]
3,479
[ "Adulteration", "Drug safety" ]
330,770
https://en.wikipedia.org/wiki/Royal%20Aero%20Club
The Royal Aero Club (RAeC) is the national co-ordinating body for air sport in the United Kingdom. It was founded in 1901 as the Aero Club of Great Britain, being granted the title of the "Royal Aero Club" in 1910. History The Aero Club was founded in 1901 by Frank Hedges Butler, his daughter Vera and the Hon Charles Rolls (one of the founders of Rolls-Royce), partly inspired by the Aero Club of France. It was initially concerned more with ballooning but after the demonstrations of heavier-than-air flight made by the Wright Brothers in France in 1908, it embraced the aeroplane. The original club constitution declared that it was dedicated to 'the encouragement of aero auto-mobilism and ballooning as a sport.' As founded, it was primarily a London gentlemen's club, but gradually moved on to a more regulatory role. It had a clubhouse at 119 Piccadilly, which it retained until 1961. The club was granted its Royal prefix on 15 February 1910. From 1910 the club issued Aviators Certificates, which were internationally recognised under the Fédération Aéronautique Internationale (the FAI) to which the club was the UK representative. The club is the governing body in the UK for air sports, as well as for records and competitions. The club established its first flying ground on a stretch of marshland at Shellbeach near Leysdown on the Isle of Sheppey in early 1909. A nearby farmhouse, Mussell Manor (now called Muswell Manor) became the flying ground clubhouse, and club members could construct their own sheds to accommodate their aircraft. Among the first occupants of the ground were Short Brothers. Two of the brothers, Eustace and Oswald, had previously made balloons for Aero Club members and been appointed the official engineers of the Aero Club. They had also enlisted their eldest brother, Horace, when they decided to begin constructing heavier-than-air aircraft. They acquired a licence to build copies of the Wright aircraft and set up the first aircraft production line in the world at Leysdown. On 1 May 1909 John Moore-Brabazon (later Lord Brabazon of Tara) made a flight of 500 yards in his Voisin at Shellbeach. This is officially recognised as the first flight by a British pilot in Britain. The same week the Wright brothers visited the Aero Club flying ground at Shellbeach. After inspecting the Short Brothers' factory, a photograph was taken outside Mussell Manor of the Wright Brothers with all of the early British aviation pioneers to commemorate their visit to Britain. In October 1909, the club recognised the Blackpool Aviation Week, making it Britain's first official air show. On 30 October Moore-Brabazon was also the first to cover a mile (closed circuit) in a British aeroplane, flying the Short Biplane No. 2, and so winning a prize of £1,000 offered by the Daily Mail newspaper. On 4 November 1909, he decided to take up a piglet, which he named Icarus the Second, as a passenger, thereby disproving the adage that "pigs can't fly". It moved the next year to nearby Eastchurch, where the Royal Navy had established a flying school. Until 1911 the British Military did not have any pilot training facilities. As a result, most early military pilots were trained by members of the club and many became members. By the end of the First World War, more than 6,300 military pilots had taken RAeC Aviator's Certificates. After the loss of its Piccadilly clubhouse in 1961, the club was lodged at the Lansdowne Club at 9 Fitzmaurice Place until 1968. 
It then moved for a short spell to the Junior Carlton Club's modern building at 94 Pall Mall. In June 1973 the club merged with the United Service Club and moved into its premises at 116 Pall Mall. All its aviation-related activities were then transferred to the Aviation Council (United Service and Royal Aero Club) Ltd incorporated on 15 February 1973. In June 1975, the United Service and Royal Aero Club merged with the Naval and Military Club and on 1 August 1975 the Royal Aero Club of the United Kingdom was officially launched and endowed with all its awards, library and memorabilia and took the place of the Aviation Council. By 1977, the club had ceased to be a members club but continued to carry out the function previously carried out by its Aviation Council, with the Secretariat based at the Leicester premises of the British Gliding Association. Today the Royal Aero Club continues to be the national governing and coordinating body of air sport and recreational flying. The governing bodies of the various forms of sporting aviation (for example British Aerobatic Association) are all members of the Royal Aero Club, which is the UK governing body for international sporting purposes. The Royal Aero Club also acts to support and protect the rights of recreational pilots in the context of national and international regulation. First aviator certificates The following were the first ten people to gain their aviator certificates from the Royal Aero Club: J. T. C. Moore-Brabazon – 8 March 1910 Hon. C. S. Rolls – 8 March 1910 Alfred Rawlinson – 5 April 1910 Cecil Stanley Grace – 12 April 1910 George Bertram Cockburn – 26 April 1910 Claude Grahame-White – 26 April 1910 A. Ogilvie – 24 May 1910 A. M. Singer – 31 May 1910 L. D. L. Gibbs – 7 June 1910 S. F. Cody – 14 June 1910: made first aeroplane flight in Britain The first women to be awarded their aviator certificates from the Royal Aero Club were Hilda Hewlett on 29 August 1911(certificate No.122) followed by Cheridah de Beauvoir Stocks (certificate No. 153) on 7 November 1911. Air races and awards Air races A number of air races were organised by the club: The Kings Cup SBAC Cup The Kemsley Trophy The Norton-Griffths Cup The Grosvenor Cup The Siddeley Trophy The Air League Cup Britannia Trophy The Britannia Trophy is presented by the Royal Aero Club for aviators accomplishing the most meritorious performance in aviation during the previous year. See also List of pilots awarded an Aviator's Certificate by the Royal Aero Club in 1910 List of pilots awarded an Aviator's Certificate by the Royal Aero Club in 1911 List of pilots awarded an Aviator's Certificate by the Royal Aero Club in 1912 List of pilots awarded an Aviator's Certificate by the Royal Aero Club in 1913 List of pilots awarded an Aviator's Certificate by the Royal Aero Club in 1914 List of pilots with foreign Aviator's Certificates accredited by the Royal Aero Club 1910-1914 Flight International References External links 1901 establishments in the United Kingdom Air sports in the United Kingdom Flying clubs Air sports governing bodies Aviation organisations based in the United Kingdom Organisations based in Leicestershire Organizations established in 1901 Sports governing bodies in the United Kingdom Fédération Aéronautique Internationale
Royal Aero Club
[ "Engineering" ]
1,399
[ "Fédération Aéronautique Internationale", "Aeronautics organizations" ]
330,879
https://en.wikipedia.org/wiki/Anthropometry
Anthropometry refers to the measurement of the human individual. An early tool of physical anthropology, it has been used for identification, for the purposes of understanding human physical variation, in paleoanthropology and in various attempts to correlate physical with racial and psychological traits. Anthropometry involves the systematic measurement of the physical properties of the human body, primarily dimensional descriptors of body size and shape. Because commonly used methods and approaches for analysing living standards were often inadequate, anthropometric history became very useful for historians in answering questions that interested them. Today, anthropometry plays an important role in industrial design, clothing design, ergonomics and architecture, where statistical data about the distribution of body dimensions in the population are used to optimize products. Changes in lifestyles, nutrition, and ethnic composition of populations lead to changes in the distribution of body dimensions (e.g. the rise in obesity) and require regular updating of anthropometric data collections. History The history of anthropometry includes and spans various concepts, both scientific and pseudoscientific, such as craniometry, paleoanthropology, biological anthropology, phrenology, physiognomy, forensics, criminology, phylogeography, human origins, and cranio-facial description, as well as correlations between various anthropometrics and personal identity, mental typology, personality, cranial vault and brain size, and other factors. At various times in history, applications of anthropometry have ranged from accurate scientific description and epidemiological analysis to rationales for eugenics and overtly racist social movements. One of its misuses was the discredited pseudoscience of phrenology. Individual variation Auxologic Auxologic is a broad term covering the study of all aspects of human physical growth. Height Human height varies greatly between individuals and across populations owing to a variety of complex biological, genetic, and environmental factors, among others. Due to methodological and practical problems, its measurement is also subject to considerable error in statistical sampling. The average height in genetically and environmentally homogeneous populations is often proportional across a large number of individuals. Exceptional height variation (around 20% deviation from a population's average) within such a population is sometimes due to gigantism or dwarfism, which are caused by specific genes or endocrine abnormalities. A great degree of variation occurs between even the most 'common' bodies (66% of the population), and as such no person can be considered 'average'. In the most extreme population comparisons, for example, the average female height in Bolivia is while the average male height in the Dinaric Alps is , an average difference of . Similarly, the shortest and tallest of individuals, Chandra Bahadur Dangi and Robert Wadlow, have ranged from , respectively. The age range where most females stop growing is 15–18 years and the age range where most males stop growing is 18–21 years. Weight Human weight varies extensively both individually and across populations, with the most extreme documented examples of adults being Lucia Zarate who weighed , and Jon Brower Minnoch who weighed , and with population extremes ranging from in Bangladesh to in Micronesia. 
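The Height passage above notes that even the most 'common' bodies, about 66% of the population, still show a great degree of variation. Under the usual working assumption that a body dimension such as height is roughly normally distributed, a band containing about two thirds of the population corresponds to roughly one standard deviation either side of the mean. A minimal Python sketch of that calculation follows; the mean and standard deviation used here are illustrative placeholder values, not figures taken from any particular survey.

```python
from math import erf, sqrt

def normal_cdf(x: float, mean: float, sd: float) -> float:
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))

# Illustrative (hypothetical) population parameters for adult height, in cm.
MEAN_HEIGHT = 170.0
SD_HEIGHT = 7.0

# Share of the population falling within +/- 1 standard deviation of the mean.
low, high = MEAN_HEIGHT - SD_HEIGHT, MEAN_HEIGHT + SD_HEIGHT
share = normal_cdf(high, MEAN_HEIGHT, SD_HEIGHT) - normal_cdf(low, MEAN_HEIGHT, SD_HEIGHT)
print(f"{share:.1%} of the population lies between {low:.0f} cm and {high:.0f} cm")
# Prints roughly 68.3%, close to the ~66% 'most common bodies' band cited above.
```

The point of the sketch is only that a 'common' band this wide still spans some 14 cm of stature under these assumed parameters, which is why no single person can stand in for the 'average' body.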
Organs Adult brain size varies from to in females and to in males, with the average being and , respectively. The right cerebral hemisphere is typically larger than the left, whereas the cerebellar hemispheres are typically of more similar size. The size of the human stomach varies significantly in adults, with one study showing volumes ranging from to and weights ranging from to . Male and female genitalia exhibit considerable individual variation, with penis size differing substantially and vaginal size differing significantly in healthy adults. Aesthetic Human beauty and physical attractiveness have been preoccupations throughout history, and they often intersect with anthropometric standards. Cosmetology, facial symmetry, and waist–hip ratio are three examples where measurements are commonly thought to be fundamental. Evolutionary science Anthropometric studies today are conducted to investigate the evolutionary significance of differences in body proportion between populations whose ancestors lived in different environments. Human populations exhibit climatic variation patterns similar to those of other large-bodied mammals, following Bergmann's rule, which states that individuals in cold climates will tend to be larger than ones in warm climates, and Allen's rule, which states that individuals in cold climates will tend to have shorter, stubbier limbs than those in warm climates. On a microevolutionary level, anthropologists use anthropometric variation to reconstruct small-scale population history. For instance, John Relethford's studies of early 20th-century anthropometric data from Ireland show that the geographical patterning of body proportions still exhibits traces of the invasions by the English and Norse centuries ago. Similarly, anthropometric indices, namely comparisons of human stature, have been used to illustrate anthropometric trends. This study was conducted by Jörg Baten and Sandew Hira and was based on the anthropological finding that human height is strongly determined by the quality of nutrition, which was generally higher in the more developed countries. The research was based on datasets covering 13,000 Southern Chinese contract migrants who were sent to Suriname and Indonesia. Measuring instruments 3D body scanners Today anthropometry can be performed with three-dimensional scanners. A global collaborative study to examine the uses of three-dimensional scanners for health care was launched in March 2007. The Body Benchmark Study will investigate the use of three-dimensional scanners to calculate volumes and segmental volumes of an individual body scan. The aim is to establish whether the Body Volume Index has the potential to be used as a long-term computer-based anthropometric measurement for health care. In 2001 the UK conducted the largest sizing survey to date using scanners. Since then several national surveys have followed in the UK's pioneering steps, notably SizeUSA, SizeMexico, and SizeThailand, the latter still ongoing. SizeUK showed that the nation had become taller and heavier but not as much as expected. Since 1951, when the last women's survey had taken place, the average weight for women had gone up from 62 to 65 kg. However, recent research has shown that the posture of the participant significantly influences the measurements taken, the precision of 3D body scanners may or may not be high enough for industry tolerances, and measurements taken may or may not be relevant to all applications (e.g. garment construction). 
Despite these current limitations, 3D Body Scanning has been suggested as a replacement for body measurement prediction technologies which (despite the great appeal) have yet to be as reliable as real human data. Baropodographic Baropodographic devices fall into two main categories: (i) floor-based, and (ii) in-shoe. The underlying technology is diverse, ranging from piezoelectric sensor arrays to light refraction, but the ultimate form of the data generated by all modern technologies is either a 2D image or a 2D image time series of the pressures acting under the plantar surface of the foot. From these data other variables may be calculated (see data analysis.) The spatial and temporal resolutions of the images generated by commercial pedobarographic systems range from approximately 3 to 10 mm and 25 to 500 Hz, respectively. Sensor technology limits finer resolution. Such resolutions yield a contact area of approximately 500 sensors (for a typical adult human foot with surface area of approximately 100 cm2). For a stance phase duration of approximately 0.6 seconds during normal walking, approximately 150,000 pressure values, depending on the hardware specifications, are recorded for each step. Neuroimaging Direct measurements involve examinations of brains from corpses, or more recently, imaging techniques such as MRI, which can be used on living persons. Such measurements are used in research on neuroscience and intelligence. Brain volume data and other craniometric data are used in mainstream science to compare modern-day animal species and to analyze the evolution of the human species in archeology. Epidemiology and medical anthropology Anthropometric measurements also have uses in epidemiology and medical anthropology, for example in helping to determine the relationship between various body measurements (height, weight, percentage body fat, etc.) and medical outcomes. Anthropometric measurements are frequently used to diagnose malnutrition in resource-poor clinical settings. Forensics and criminology Forensic anthropologists study the human skeleton in a legal setting. A forensic anthropologist can assist in the identification of a decedent through various skeletal analyses that produce a biological profile. Forensic anthropologists utilize the Fordisc program to help in the interpretation of craniofacial measurements in regards to ancestry determination. One part of a biological profile is a person's ancestral affinity. People with significant European or Middle Eastern ancestry generally have little to no prognathism; a relatively long and narrow face; a prominent brow ridge that protrudes forward from the forehead; a narrow, tear-shaped nasal cavity; a "silled" nasal aperture; tower-shaped nasal bones; a triangular-shaped palate; and an angular and sloping eye orbit shape. People with considerable African ancestry typically have a broad and round nasal cavity; no dam or nasal sill; Quonset hut-shaped nasal bones; notable facial projection in the jaw and mouth area (prognathism); a rectangular-shaped palate; and a square or rectangular eye orbit shape. A relatively small prognathism often characterizes people with considerable East Asian ancestry; no nasal sill or dam; an oval-shaped nasal cavity; tent-shaped nasal bones; a horseshoe-shaped palate; and a rounded and non-sloping eye orbit shape. 
Many of these characteristics are only a matter of frequency among those of particular ancestries: their presence or absence of one or more does not automatically classify an individual into an ancestral group. Ergonomics Ergonomics professionals apply an understanding of human factors to the design of equipment, systems and working methods to improve comfort, health, safety, and productivity. This includes physical ergonomics in relation to human anatomy, physiological and bio mechanical characteristics; cognitive ergonomics in relation to perception, memory, reasoning, motor response including human–computer interaction, mental workloads, decision making, skilled performance, human reliability, work stress, training, and user experiences; organizational ergonomics in relation to metrics of communication, crew resource management, work design, schedules, teamwork, participation, community, cooperative work, new work programs, virtual organizations, and telework; environmental ergonomics in relation to human metrics affected by climate, temperature, pressure, vibration, and light; visual ergonomics; and others. Biometrics Biometrics refers to the identification of humans by their characteristics or traits. Biometrics is used in computer science as a form of identification and access control. It is also used to identify individuals in groups that are under surveillance. Biometric identifiers are the distinctive, measurable characteristics used to label and describe individuals. Biometric identifiers are often categorized as physiological versus behavioral characteristics. Subclasses include dermatoglyphics and soft biometrics. United States military research The US Military has conducted over 40 anthropometric surveys of U.S. Military personnel between 1945 and 1988, including the 1988 Army Anthropometric Survey (ANSUR) of men and women with its 240 measures. Statistical data from these surveys encompasses over 75,000 individuals. Civilian American and European Surface Anthropometry Resource Project CAESAR began in 1997 as a partnership between government (represented by the US Air Force and NATO) and industry (represented by SAE International) to collect and organize the most extensive sampling of consumer body measurements for comparison. The project collected and organized data on 2,400 U.S. & Canadian and 2,000 European civilians and a database was developed. This database records the anthropometric variability of men and women, aged 18–65, of various weights, ethnic groups, gender, geographic regions, and socio-economic status. The study was conducted from April 1998 to early 2000 and included three scans per person in a standing pose, full-coverage pose and relaxed seating pose. Data collection methods were standardized and documented so that the database can be consistently expanded and updated. High-resolution measurements of body surfaces were made using 3D Surface Anthropometry. This technology can capture hundreds of thousands of points in three dimensions on the human body surface in a few seconds. It has many advantages over the old measurement system using tape measures, anthropometers, and other similar instruments. It provides detail about the surface shape as well as 3D locations of measurements relative to each other and enables easy transfer to Computer-Aided Design (CAD) or Manufacturing (CAM) tools. The resulting scan is independent of the measurer, making it easier to standardize. 
Automatic landmark recognition (ALR) technology was used to extract anatomical landmarks from the 3D body scans automatically. Eighty landmarks were placed on each subject. More than 100 univariate measures were provided, over 60 from the scan and approximately 40 using traditional measurements. Demographic data such as age, ethnic group, gender, geographic region, education level, and present occupation, family income and more were also captured. Fashion design Scientists working for private companies and government agencies conduct anthropometric studies to determine a range of sizes for clothing and other items. For just one instance, measurements of the foot are used in the manufacture and sale of footwear: measurement devices may be used either to determine a retail shoe size directly (e.g. the Brannock Device) or to determine the detailed dimensions of the foot for custom manufacture (e.g. ALINEr). See also References Further reading Anthropometric Survey of Army Personnel: Methods and Summary Statistics 1988 ISO 7250: Basic human body measurements for technological design, International Organization for Standardization, 1998. ISO 8559: Garment construction and anthropometric surveys — Body dimensions, International Organization for Standardization, 1989. ISO 15535: General requirements for establishing anthropometric databases, International Organization for Standardization, 2000. ISO 15537: Principles for selecting and using test persons for testing anthropometric aspects of industrial products and designs, International Organization for Standardization, 2003. ISO 20685: 3-D scanning methodologies for internationally compatible anthropometric databases, International Organization for Standardization, 2005. (A classic review of human body sizes.) External links Anthropometry at the Centers for Disease Control and Prevention Anthropometry and Biomechanics at NASA Anthropometry data at faculty of Industrial Design Engineering at Delft University of Technology Manual for Obtaining Anthropometric Measurements Free Full Text Civilian American and European Surface Anthropometry Resource Project—CAESAR at SAE International Articles containing video clips Biological anthropology Biometrics Ergonomics Forensic disciplines Human anatomy Human body Measurement Medical imaging Physiognomy Physiology Racism
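The Baropodographic section above quotes typical spatial resolutions of roughly 3 to 10 mm, sampling rates of 25 to 500 Hz, about 500 sensors under a typical adult foot of roughly 100 cm2, and a stance phase of about 0.6 seconds during normal walking, from which it states that on the order of 150,000 pressure values are recorded per step. A small Python sketch, using those same round numbers from the text, shows how the sensor-count and data-volume estimates follow; the specific sensor pitch chosen is an assumption picked from inside the quoted range.

```python
# Rough pedobarography data-volume estimate, using the round numbers quoted above.
foot_area_cm2 = 100.0      # typical adult plantar contact area (from the text)
sensor_pitch_mm = 4.5      # one value inside the quoted 3-10 mm range (assumed)
sampling_rate_hz = 500.0   # upper end of the quoted 25-500 Hz range
stance_duration_s = 0.6    # stance phase during normal walking (from the text)

# Number of sensors in contact: foot area divided by the area of one sensor cell.
sensor_area_cm2 = (sensor_pitch_mm / 10.0) ** 2
n_sensors = foot_area_cm2 / sensor_area_cm2
print(f"approx. {n_sensors:.0f} sensors in contact")           # ~490, i.e. about 500

# Pressure values recorded over one step: sensors x frames.
n_frames = sampling_rate_hz * stance_duration_s
n_values = n_sensors * n_frames
print(f"approx. {n_values:,.0f} pressure values per step")     # ~148,000, i.e. about 150,000
```

A coarser or finer sensor pitch, or a lower sampling rate, scales both numbers accordingly; the article's approximate figures correspond to the upper end of the quoted hardware specifications.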
Anthropometry
[ "Physics", "Mathematics", "Biology" ]
3,102
[ "Physical quantities", "Human body", "Physiology", "Quantity", "Measurement", "Size", "Physical objects", "Matter" ]
330,970
https://en.wikipedia.org/wiki/Ken%20Saro-Wiwa
Kenule Beeson Saro-Wiwa (10 October 1941 – 10 November 1995) was a Nigerian writer, teacher, television producer, and environmental activist. Saro-Wiwa was a member of the Ogoni people, an ethnic minority in Nigeria whose homeland, Ogoniland, in the Niger Delta, has been targeted for crude oil extraction since the 1950s and has suffered extreme environmental damage from decades of indiscriminate petroleum waste dumping. Initially as a spokesperson, and then as the president, of the Movement for the Survival of the Ogoni People (MOSOP), Saro-Wiwa led a nonviolent campaign against environmental degradation of the land and waters of Ogoniland by the operations of the multiple international oil companies, especially the Royal Dutch Shell company. He criticized the Nigerian government for its reluctance to enforce environmental regulations on the foreign petroleum companies operating in the area. At the peak of his non-violent campaign, he was tried by a special military tribunal for allegedly masterminding the murder of Ogoni chiefs at a pro-government meeting, and hanged in 1995 by the military dictatorship of General Sani Abacha. His execution provoked international outrage and resulted in Nigeria's suspension from the Commonwealth of Nations for more than three years. Biography Early life Kenule Saro-Wiwa was born in Bori, near Port-Harcourt, Nigeria, on 10 October 1941. He was the son of Chief Jim Wiwa, a forest ranger who held a title in the Nigerian chieftaincy system, and his third wife Widu. He officially changed his name to Saro-Wiwa after the Nigerian Civil War. He was married to Maria Saro Wiwa. His father's hometown was the village of Bane, Ogoniland, whose residents speak the Khana dialect of the Ogoni language. He spent his childhood in an Anglican home and eventually proved himself to be an excellent student. He received primary education at a Native Authority school in Bori, then attended secondary school at Government College Umuahia. A distinguished student, he was captain of the table tennis team and amassed school prizes in History and English. On the completion of his secondary education, he obtained a scholarship to study English at the University of Ibadan. At Ibadan, he plunged into academic and cultural interests, he won departmental prizes in 1963 and 1965 and worked for a drama troupe. The travelling drama troupe performed in Kano, Benin, Ilorin and Lagos and collaborated with the Nottingham Playhouse theater group. He briefly became a teaching assistant at the University of Lagos and later at University of Nigeria, Nsukka. He was an African literature lecturer in Nsukka when the civil war broke out, he supported the Federal Government and had to leave the region for his hometown at Bori. On his journey to Port-Harcourt, he witnessed the multitudes of refugees returning to the East, a scene he described as a "sorry sight to see". Three days after his arrival to Bonny, it fell to federal troops. He and his family then stayed in Bonny, he travelled back to Lagos and took a position at the University of Lagos which did not last long as he was called back to Bonny. He was called back to become the Civilian Administrator for the port city of Bonny in the Niger Delta. During the Nigerian Civil War he positioned himself as an Ogoni leader dedicated to the Federal cause. He followed his job as an administrator with an appointment as a commissioner in the old Rivers State. 
His best known novel, Sozaboy: A Novel in Rotten English (1985), tells the story of a naive village boy recruited into the army during the Nigerian Civil War of 1967 to 1970, and intimates the political corruption and patronage in Nigeria's military regime of the time. His war diaries, On a Darkling Plain (1989), document his experience during the war. He was also a successful businessman and television producer. His satirical television series, Basi & Company, was wildly popular, with an estimated audience of 30 million. In the early 1970s, he served as the Regional Commissioner for Education in the Rivers State Cabinet, but was dismissed in 1973 because of his support for Ogoni autonomy. In the late 1970s, he established a number of successful business ventures in retail and real estate, and during the 1980s concentrated primarily on his writing, journalism and television production. In 1977, he entered the political arena, running as a candidate to represent Ogoni in the Constituent Assembly. He lost the election by a narrow margin. It was during this time that he had a falling-out with his friend Edwards Kobani. His intellectual work was interrupted in 1987 when he re-entered the political scene, having been appointed by the newly installed dictator Ibrahim Babangida to aid the country's transition to democracy. He resigned, however, because he felt Babangida's supposed plans for a return to democracy were disingenuous. His sentiments were proven correct in the coming years, as Babangida failed to relinquish power. In 1993, Babangida annulled Nigeria's general elections that would have transferred power to a civilian government, sparking mass civil unrest and eventually forcing him to step down, at least officially, that same year. Works Saro-Wiwa's works include TV, drama and prose writing. His earlier works, from the 1970s to the 1980s, were mostly satirical pieces that portray a counter-image of Nigerian society, while his later writings were inspired less by satire than by political concerns such as environmental and social justice. Transistor Radio, one of his best known plays, was written for a revue during his university days at Ibadan but still resonated well with Nigerian society and was adapted into a television series. Some of his works drew inspiration from the play. In 1972, a radio version of the play was produced, and in 1985 he produced Basi and Company, a successful screen adaptation of the play. He included the play in Four Farcical Plays and Basi and Company: Four Television Plays. Basi and Company, an adaptation of Transistor Radio, ran on television from 1985 to 1990. A farcical comedy, the show chronicles city life and is anchored by the protagonist, Basi, a resourceful and street-wise character looking for ways to achieve his goal of making millions, a goal that always proves elusive. In 1985, the Biafran Civil War novel Sozaboy was published. The protagonist's language was written in nonstandard English, or what he called "Rotten English", a hybrid language of pidgin English, standard English and broken English. Activism In 1990, he began devoting most of his time to human rights and environmental causes, particularly in the land settled by the Ogoni people. He was one of the earliest members of the Movement for the Survival of the Ogoni People (MOSOP), which advocated for the rights of the Ogoni people. 
The Ogoni Bill of Rights, written by MOSOP, set out the movement's demands, including increased autonomy for the Ogoni people, a fair share of the proceeds of oil extraction, and remediation of environmental damage to Ogoni lands. In particular, MOSOP struggled against the degradation of Ogoni lands by Royal Dutch Shell. In 1992, He was imprisoned for several months, without trial, by the Nigerian military government. He was Vice Chairman of the Unrepresented Nations and Peoples Organization (UNPO) General Assembly from 1993 to 1995. UNPO is an international, nonviolent, and democratic organisation (of which MOSOP is a member). Its members are indigenous peoples, minorities, and under-recognised or occupied territories who have joined together to protect and promote their human and cultural rights, to preserve their environments and to find nonviolent solutions to conflicts which affect them. In January 1993, MOSOP organised peaceful marches of around 300,000 Ogoni people– more than half of the Ogoni population – through four Ogoni urban centres, drawing international attention to their people's plight. The same year the Nigerian government occupied the region militarily. Arrest and execution He was arrested again and detained by Nigerian authorities in June 1993 but was released after a month. On 21 May 1994, four Ogoni chiefs (all on the conservative side of a schism within MOSOP over strategy) were brutally murdered. Saro-Wiwa had been denied entry to Ogoniland on the day of the murders, but he was arrested and accused of inciting them. He denied the charges but was imprisoned for more than a year before being found guilty and sentenced to death by a specially convened tribunal. The same happened to eight other MOSOP leaders who, along with Saro-Wiwa, became known as the Ogoni Nine. Some of the defendants' lawyers resigned in protest against the alleged rigging of the trial by the Abacha regime. The resignations left the defendants to their own means against the tribunal, which continued to bring witnesses to testify against Saro-Wiwa and his peers. Many of these supposed witnesses later admitted that they had been bribed by the Nigerian government to support the criminal allegations. At least two witnesses who testified that Saro-Wiwa was involved in the murders of the Ogoni elders later recanted, stating that they had been bribed with money and offers of jobs with Shell to give false testimony, in the presence of Shell's lawyer. The trial was widely criticised by human rights organisations, and six months later, Saro-Wiwa received the Right Livelihood Award for his courage, as well as the Goldman Environmental Prize. On 8 November 1995, a military ruling council upheld the death sentences. The military government then immediately moved to carry them out. The prison in Port Harcourt was selected as the place of execution. Although the government wanted to carry out the sentences immediately, it had to wait two days for a gallows to be built. Within hours of the sentences being upheld, nine coffins were taken to the prison, and the following day a team of executioners was flown in from Sokoto to Port Harcourt. On 10 November 1995, Saro-Wiwa and the remainder of the Ogoni Nine were taken from the army base where they were being held to Port Harcourt prison. They were told that they were being moved to Port Harcourt because it was feared that the army base they were being held in might be attacked by Ogoni youths. 
The prison was heavily guarded by riot police and tanks, and hundreds of people lined the streets in anticipation of the executions. After arriving at Port Harcourt prison, Saro-Wiwa and the others were herded into a single room and their wrists and ankles were shackled. They were then led one by one to the gallows and executed by hanging, with Saro-Wiwa being the first. It took five tries to execute him due to faulty equipment. His last words were: "Lord take my soul, but the struggle continues." After the executions, the bodies were taken to the Port Harcourt Cemetery under armed guard and buried. Anticipating disturbances as a result of the executions, the Nigerian government deployed tens of thousands of troops and riot police to two southern provinces and major oil refineries around the country. The Port Harcourt Cemetery was surrounded by soldiers and tanks. The executions provoked a storm of international outrage. The United Nations General Assembly condemned the executions in a resolution which passed by a vote of 101 in favor to 14 against and 47 abstentions. The European Union condemned the executions, which it called a "cruel and callous act", and imposed an arms embargo on Nigeria. The United States recalled its ambassador from Nigeria, imposed an arms embargo on Nigeria, and imposed travel restrictions on members of the Nigerian military regime and their families. The United Kingdom recalled its high commissioner in Nigeria, and British Prime Minister John Major called the executions "judicial murder". South Africa took a primary role in leading international criticism, with President Nelson Mandela urging Nigeria's suspension from the Commonwealth of Nations. Zimbabwe and Kenya also backed Mandela, with Kenyan President Daniel arap Moi and Zimbabwean President Robert Mugabe backing Mandela's demand to suspend Nigeria's Commonwealth membership, but a number of other African leaders criticized the suggestion. Nigeria's membership in the Commonwealth of Nations was ultimately suspended, and Nigeria was threatened with expulsion if it did not transition to democracy in two years. The US and British governments also discussed the possibility of an oil embargo backed by a naval blockade of Nigeria. Ken Saro-Wiwa Foundation The Ken Saro-Wiwa foundation was established in 2017 to work towards improved access to basic resources such as electricity and Internet for entrepreneurs in Port Harcourt. The association founded the Ken Junior Award, named for Saro-Wiwa's son Ken Wiwa, who died in October 2016. The award is presented to innovative start-up technology companies in Port Harcourt. Family lawsuits against Royal Dutch Shell Beginning in 1996, the Center for Constitutional Rights (CCR), Earth Rights International (ERI), Paul Hoffman of Schonbrun, DeSimone, Seplow, Harris & Hoffman and other human rights attorneys have brought a series of cases to hold Shell accountable for alleged human rights violations in Nigeria, including summary execution, crimes against humanity, torture, inhumane treatment and arbitrary arrest and detention. The lawsuits are brought against Royal Dutch Shell and Brian Anderson, the head of its Nigerian operation. The cases were brought under the Alien Tort Statute, a 1789 statute giving non-US citizens the right to file suits in US courts for international human rights violations, and the Torture Victim Protection Act, which allows individuals to seek damages in the US for torture or extrajudicial killing, regardless of where the violations take place. 
The United States District Court for the Southern District of New York set a trial date of June 2009. On 9 June 2009, Shell agreed to an out-of-court settlement of US$15.5 million to victims' families. However, the company denied any liability for the deaths, stating that the payment was part of a reconciliation process. In a statement given after the settlement, Shell suggested that the money was being provided to the relatives of Saro-Wiwa and the eight other victims, to cover the legal costs of the case and also in recognition of the events that took place in the region. Some of the funding is also expected to be used to set up a development trust for the Ogoni people, who inhabit the Niger Delta region of Nigeria. The settlement was made just days before the trial, which had been brought by Saro-Wiwa's son, was due to begin in New York. Legacy His death provoked international outrage and the immediate suspension of Nigeria from the Commonwealth of Nations, as well as the calling back of many foreign diplomats for consultation. The United States and other countries considered imposing economic sanctions. The execution of Saro-Wiwa marked the beginning of the international business and human rights (BHR) movement. Tributes Tributes to Saro-Wiwa include: Artwork and memorials A memorial to Saro-Wiwa was unveiled in London on 10 November 2006 by London organisation Platform. It consists of a sculpture in the form of a bus and was created by Nigerian-born artist Sokari Douglas Camp. It toured the UK the following year. Awards The Association of Nigerian Authors is a sponsor of the Ken Saro-Wiwa Prize for Prose. Saro-Wiwa is named a Writer hero by The My Hero Project. The American news publication Foreign Policy has listed Ken Saro-Wiwa alongside Mahatma Gandhi, Eleanor Roosevelt, Corazon Aquino and Václav Havel as people "who never won the Nobel Peace Prize, but should have". Literature Richard North Patterson's novel Eclipse (2009) was loosely based on Saro-Wiwa's life. Music The title track of Italian noise rock band Il Teatro degli Orrori's 2009 album A Sangue Freddo ("In Cold Blood") is about Saro-Wiwa's struggle, and includes quotes from his works. Naming The Governor of Rivers State, Ezenwo Nyesom Wike, renamed the Rivers State Polytechnic after Saro-Wiwa. Amsterdam named a street after Saro-Wiwa, the Ken Saro-Wiwastraat. An ant Zasphinctus sarowiwai was named after Saro-Wiwa in 2017. Documentaries A BBC World Service radio documentary, Silence Would Be Treason, was broadcast in January 2022, presented by his daughter Noo Saro-Wiwa and voiced by Ben Arogundade. Personal life Saro-Wiwa and his wife Maria had five children, who grew up with their mother in the United Kingdom while their father remained in Nigeria. They include Ken Wiwa and Noo Saro-Wiwa, both journalists and writers, and Noo's twin Zina Saro-Wiwa, a journalist and filmmaker. In addition, Saro-Wiwa had two daughters (Singto and Adele) with another woman. He also had another son, Kwame Saro-Wiwa, who was only one year old when his father was executed. Biographies Canadian author J. Timothy Hunt's The Politics of Bones (September 2005), published shortly before the 10th anniversary of Saro-Wiwa's execution, documented the flight of Saro-Wiwa's brother Owens Wiwa, after his brother's execution and his own imminent arrest, to London and then on to Canada, where he is now a citizen and continues his brother's fight on behalf of the Ogoni people. 
It is also the story of Owens' personal battle against the Nigerian government to locate his brother's remains after they were buried in an unmarked mass-grave. Ogoni's Agonies: Ken Saro Wiwa and the Crisis in Nigeria (1998), edited by Abdul Rasheed Naʾallah, provides more information on the struggles of the Ogoni people Onookome Okome's book, Before I Am Hanged: Ken Saro-Wiwa—Literature, Politics, and Dissent (1999) is a collection of essays about Wiwa In the Shadow of a Saint: A Son's Journey to Understanding His Father's Legacy (2000), was written by his son Ken Wiwa. Saro-Wiwa's own diary, A Month and a Day: A Detention Diary, was published in January 1995, two months after his execution. In Looking for Transwonderland - Travels in Nigeria, his daughter Noo Saro-Wiwa tells the story of her return to Nigeria years after her father's murder. Bibliography A collection of handwritten letters and poems by Saro-Wiwa and audio recordings of visits and meetings with family and friends after his death were donated to Maynooth University by Sister Majella McCarron. The letters are now in the Digital Repository of Ireland (DRI). See also History of Nigeria Isaac Adaka Boro List of people from Rivers State Petroleum industry in Nigeria References Sources External links "Standing Before History: Remembering Ken Saro-Wiwa" at PEN World Voices, sponsored by Guernica Magazine in New York City on 2 May 2009. "The perils of activism: Ken Saro-Wiwa" by Anthony Daniels Letter of protest published in the New York Review of Books shortly before Saro-Wiwa's execution. Ken Saro-Wiwa's son, Ken Wiwa, writes a letter on openDemocracy.net about the campaign to seek justice for his father in a lawsuit against Shell – "America in Africa: plunderer or part" The Ken Saro-Wiwa Foundation Remember Saro-Wiwa campaign PEN Centres honour Saro-Wiwa's memory – IFEX The Unrepresented Nations and Peoples Organisation (UNPO) 1995 Ogoni report Right Livelihood Award recipient The Politics of Bones, by J. Timothy Hunt Wiwa v. Shell trial information Ken Saro-Wiwa at Maynooth University Ken Saro-Wiwa at the Digital Repository of Ireland 1941 births 1995 deaths 20th-century executions by Nigeria 20th-century Nigerian male writers 20th-century Nigerian novelists 20th-century Nigerian writers Academic staff of the University of Lagos Activists from Rivers State Burials at the Port Harcourt Cemetery Environmental killings Executed Nigerian people Goldman Environmental Prize awardees Government College Umuahia alumni Land defender Media people from Rivers State Nigerian activists Nigerian democracy activists Nigerian environmentalists Nigerian pacifists Nigerian satirists Nonviolence advocates Ogoni people People associated with Maynooth University People executed by Nigeria by hanging People from Bori People of Rivers State in the Nigerian Civil War Petroleum politics Rivers State Commissioners of Education Shell plc University of Ibadan alumni Victims of human rights abuses Wiwa family Writers from Rivers State
Ken Saro-Wiwa
[ "Chemistry" ]
4,241
[ "Petroleum", "Petroleum politics" ]
330,981
https://en.wikipedia.org/wiki/Tau%20%28particle%29
The tau (), also called the tau lepton, tau particle or tauon, is an elementary particle similar to the electron, with negative electric charge and a spin of . Like the electron, the muon, and the three neutrinos, the tau is a lepton, and like all elementary particles with half-integer spin, the tau has a corresponding antiparticle of opposite charge but equal mass and spin. In the tau's case, this is the "antitau" (also called the positive tau). Tau particles are denoted by the symbol and the antitaus by . Tau leptons have a lifetime of and a mass of /c2 (compared to /c2 for muons and /c2 for electrons). Since their interactions are very similar to those of the electron, a tau can be thought of as a much heavier version of the electron. Because of their greater mass, tau particles do not emit as much bremsstrahlung (braking radiation) as electrons; consequently they are potentially much more highly penetrating than electrons. Because of its short lifetime, the range of the tau is mainly set by its decay length, which is too small for bremsstrahlung to be noticeable. Its penetrating power appears only at ultra-high velocity and energy (above petaelectronvolt energies), when time dilation extends its otherwise very short path-length. As with the case of the other charged leptons, the tau has an associated tau neutrino, denoted by . History The search for tau started in 1960 at CERN by the Bologna-CERN-Frascati (BCF) group led by Antonino Zichichi. Zichichi came up with the idea of a new sequential heavy lepton, now called tau, and invented a method of search. He performed the experiment at the ADONE facility in 1969 once its accelerator became operational; however, the accelerator he used did not have enough energy to search for the tau particle. The tau was independently anticipated in a 1971 article by Yung-su Tsai. Providing the theory for this discovery, the tau was detected in a series of experiments between 1974 and 1977 by Martin Lewis Perl with his and Tsai's colleagues at the Stanford Linear Accelerator Center (SLAC) and Lawrence Berkeley National Laboratory (LBL) group. Their equipment consisted of SLAC's then-new electron–positron colliding ring, called SPEAR, and the LBL magnetic detector. They could detect and distinguish between leptons, hadrons, and photons. They did not detect the tau directly, but rather discovered anomalous events: The need for at least two undetected particles was shown by the inability to conserve energy and momentum with only one. However, no other muons, electrons, photons, or hadrons were detected. It was proposed that this event was the production and subsequent decay of a new particle pair: This was difficult to verify, because the energy to produce the pair is similar to the threshold for D meson production. The mass and spin of the tau were subsequently established by work done at DESY-Hamburg with the Double Arm Spectrometer (DASP), and at SLAC-Stanford with the SPEAR Direct Electron Counter (DELCO), The symbol was derived from the Greek (triton, meaning "third" in English), since it was the third charged lepton discovered. Martin Lewis Perl shared the 1995 Nobel Prize in Physics with Frederick Reines. The latter was awarded his share of the prize for the experimental discovery of the neutrino. Tau decay The tau is the only lepton with enough mass to decay into hadrons. Like the leptonic decay modes of the tau, the hadronic decay is through the weak interaction. 
The branching fractions of the dominant hadronic tau decays are: 25.49% for decay into a charged pion, a neutral pion, and a tau neutrino; 10.82% for decay into a charged pion and a tau neutrino; 9.26% for decay into a charged pion, two neutral pions, and a tau neutrino; 8.99% for decay into three charged pions (of which two have the same electrical charge) and a tau neutrino; 2.74% for decay into three charged pions (of which two have the same electrical charge), a neutral pion, and a tau neutrino; 1.04% for decay into three neutral pions, a charged pion, and a tau neutrino. In total, the tau lepton will decay hadronically approximately 64.79% of the time. The branching fractions of the common purely leptonic tau decays are: 17.82% for decay into a tau neutrino, electron and electron antineutrino; 17.39% for decay into a tau neutrino, muon, and muon antineutrino. The similarity of values of the two branching fractions is a consequence of lepton universality. Exotic atoms The tau lepton is predicted to form exotic atoms like other charged subatomic particles. One of such consists of an antitau and an electron: , called tauonium. Another one is an onium atom called ditauonium or true tauonium, which is a challenge to detect due to the difficulty to form it from two (opposite-sign) short-lived tau leptons. Its experimental detection would be an interesting test of quantum electrodynamics. See also Flavour (particle physics) Generation (particle physics) Koide formula Lepton Footnotes References External links — gives the covers of the three original papers announcing the discovery. Elementary particles Leptons
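The branching fractions listed above lend themselves to a quick consistency check: the six dominant hadronic modes quoted sum to well below the quoted total hadronic fraction of about 64.79%, the remainder being spread over many rarer hadronic channels, while the two leptonic modes account for essentially all of the rest. A minimal Python sketch, using only the percentages quoted in the text, makes that bookkeeping explicit; the mode labels are shorthand for the decay products listed above.

```python
# Branching fractions (in percent) as quoted in the text above.
dominant_hadronic = {
    "pi- pi0 nu_tau":          25.49,
    "pi- nu_tau":              10.82,
    "pi- 2pi0 nu_tau":          9.26,
    "3 charged pi nu_tau":      8.99,
    "3 charged pi pi0 nu_tau":  2.74,
    "pi- 3pi0 nu_tau":          1.04,
}
total_hadronic = 64.79
leptonic = {"e nu_e nu_tau": 17.82, "mu nu_mu nu_tau": 17.39}

listed_hadronic = sum(dominant_hadronic.values())
other_hadronic = total_hadronic - listed_hadronic
total_leptonic = sum(leptonic.values())

print(f"listed dominant hadronic modes: {listed_hadronic:.2f}%")                   # 58.34%
print(f"implied rarer hadronic modes:   {other_hadronic:.2f}%")                    # ~6.45%
print(f"leptonic modes:                 {total_leptonic:.2f}%")                    # 35.21%
print(f"grand total:                    {total_hadronic + total_leptonic:.2f}%")   # ~100%
```

The near-equality of the two leptonic fractions is the numerical face of the lepton universality mentioned above; the residual ~6.45% simply reflects hadronic channels not itemised in the list.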
Tau (particle)
[ "Physics" ]
1,190
[ "Elementary particles", "Subatomic particles", "Matter" ]
330,994
https://en.wikipedia.org/wiki/Squeeze%20theorem
In calculus, the squeeze theorem (also known as the sandwich theorem, among other names) is a theorem regarding the limit of a function that is bounded between two other functions. The squeeze theorem is used in calculus and mathematical analysis, typically to confirm the limit of a function via comparison with two other functions whose limits are known. It was first used geometrically by the mathematicians Archimedes and Eudoxus in an effort to compute π, and was formulated in modern terms by Carl Friedrich Gauss. Statement The squeeze theorem is formally stated as follows. Let I be an interval containing the point a, and let g, f, and h be functions defined on I, except possibly at a itself. Suppose that for every x in I not equal to a we have g(x) ≤ f(x) ≤ h(x), and also that lim (x → a) g(x) = lim (x → a) h(x) = L. Then lim (x → a) f(x) = L. The functions g and h are said to be lower and upper bounds (respectively) of f. Here, a is not required to lie in the interior of I. Indeed, if a is an endpoint of I, then the above limits are left- or right-hand limits. A similar statement holds for infinite intervals: for example, if I = (0, ∞), then the conclusion holds, taking the limits as x → ∞. This theorem is also valid for sequences. Let (a_n) and (c_n) be two sequences converging to ℓ, and (b_n) a sequence. If for every n beyond some index N we have a_n ≤ b_n ≤ c_n, then (b_n) also converges to ℓ. Proof According to the above hypotheses we have, taking the limit inferior and superior: L = lim (x → a) g(x) ≤ liminf (x → a) f(x) ≤ limsup (x → a) f(x) ≤ lim (x → a) h(x) = L, so all the inequalities are indeed equalities, and the thesis immediately follows. A direct proof, using the (ε, δ)-definition of limit, would be to prove that for all real ε > 0 there exists a real δ > 0 such that for all x with 0 < |x - a| < δ we have |f(x) - L| < ε. As lim (x → a) g(x) = L means that for every ε > 0 there exists a δ1 > 0 such that 0 < |x - a| < δ1 implies L - ε < g(x) < L + ε (1), and lim (x → a) h(x) = L means that there exists a δ2 > 0 such that 0 < |x - a| < δ2 implies L - ε < h(x) < L + ε (2), and since g(x) ≤ f(x) ≤ h(x), we can choose δ = min(δ1, δ2). Then, if 0 < |x - a| < δ, combining (1) and (2), we have L - ε < g(x) ≤ f(x) ≤ h(x) < L + ε, so |f(x) - L| < ε, which completes the proof. Q.E.D. The proof for sequences is very similar, using the ε-definition of the limit of a sequence. Examples First example The limit lim (x → 0) x² sin(1/x) cannot be determined through the limit law for products, because lim (x → 0) sin(1/x) does not exist. However, by the definition of the sine function, -1 ≤ sin(1/x) ≤ 1. It follows that -x² ≤ x² sin(1/x) ≤ x². Since lim (x → 0) (-x²) = lim (x → 0) x² = 0, by the squeeze theorem, lim (x → 0) x² sin(1/x) must also be 0. Second example Probably the best-known examples of finding a limit by squeezing are the proofs of the equalities lim (x → 0) sin(x)/x = 1 and lim (x → 0) (1 - cos x)/x = 0. The first limit follows by means of the squeeze theorem from the fact that cos x ≤ sin(x)/x ≤ 1 for x close enough to 0. The correctness of this for positive x can be seen by simple geometric reasoning (see drawing) that can be extended to negative x as well. The second limit follows from the squeeze theorem and the fact that 0 ≤ (1 - cos x)/x ≤ x for x close enough to 0 (with the inequalities reversed for negative x). This can be derived by replacing sin x in the earlier fact by √(1 - cos²x) and squaring the resulting inequality. These two limits are used in proofs of the fact that the derivative of the sine function is the cosine function. That fact is relied on in other proofs of derivatives of trigonometric functions. Third example It is possible to show that d(tan θ)/dθ = sec²θ by squeezing, as follows. In the illustration at right, the area of the smaller of the two shaded sectors of the circle is (sec²θ · Δθ)/2, since the radius is sec θ and the arc on the unit circle has length Δθ. Similarly, the area of the larger of the two shaded sectors is (sec²(θ + Δθ) · Δθ)/2. What is squeezed between them is the triangle whose base is the vertical segment whose endpoints are the two dots. The length of the base of the triangle is tan(θ + Δθ) - tan θ, and the height is 1. The area of the triangle is therefore (tan(θ + Δθ) - tan θ)/2. From the inequalities (sec²θ · Δθ)/2 ≤ (tan(θ + Δθ) - tan θ)/2 ≤ (sec²(θ + Δθ) · Δθ)/2 we deduce that sec²θ ≤ (tan(θ + Δθ) - tan θ)/Δθ ≤ sec²(θ + Δθ), provided Δθ > 0, and the inequalities are reversed if Δθ < 0. Since the first and third expressions approach sec²θ as Δθ → 0, and the middle expression approaches d(tan θ)/dθ, the desired result follows. Fourth example The squeeze theorem can still be used in multivariable calculus but the lower (and upper) functions must be below (and above) the target function not just along a path but around the entire neighborhood of the point of interest, and it only works if the function really does have a limit there. 
It can, therefore, be used to prove that a function has a limit at a point, but it can never be used to prove that a function does not have a limit at a point. For instance, the limit lim_{(x,y)→(0,0)} x²y/(x² + y²) cannot be found by taking any number of limits along paths that pass through the point, but since 0 ≤ x²/(x² + y²) ≤ 1 and |y| → 0 as (x, y) → (0, 0), we have −|y| ≤ x²y/(x² + y²) ≤ |y|; therefore, by the squeeze theorem, lim_{(x,y)→(0,0)} x²y/(x² + y²) = 0. References Notes External links Squeeze Theorem by Bruce Atwood (Beloit College) after work by Selwyn Hollis (Armstrong Atlantic State University), the Wolfram Demonstrations Project. Squeeze Theorem on ProofWiki. Limits (mathematics) Functions and mappings Articles containing proofs Theorems about real number sequences
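The first example above can also be checked numerically. The following short Python sketch is an editorial illustration, not part of the original article: it tabulates the bounding functions −x² and x² alongside x² sin(1/x) for shrinking x, showing all three columns shrinking to 0 together.

    import math

    for x in (0.5, 0.1, 0.01, 0.001):
        f = x**2 * math.sin(1 / x)
        # the squeeze: -x**2 <= x**2*sin(1/x) <= x**2, and both bounds tend to 0
        print(f"x={x:<6} lower={-x**2: .2e}  f(x)={f: .2e}  upper={x**2: .2e}")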
Squeeze theorem
[ "Mathematics" ]
868
[ "Sequences and series", "Functions and mappings", "Mathematical structures", "Mathematical analysis", "Mathematical objects", "Theorems about real number sequences", "Mathematical relations", "Articles containing proofs" ]
331,080
https://en.wikipedia.org/wiki/Matryoshka%20doll
Matryoshka dolls, also known as stacking dolls, nesting dolls, Russian tea dolls, or Russian dolls, are a set of wooden dolls of decreasing size placed one inside another. The name Matryoshka is a diminutive form of Matryosha, in turn a hypocoristic of the Russian female first name Matryona. A set of matryoshkas consists of a wooden figure, which separates at the middle, top from bottom, to reveal a smaller figure of the same sort inside, which has, in turn, another figure inside of it, and so on. The first Russian nested doll set was made in 1890 by wood-turning craftsman and wood carver Vasily Zvyozdochkin from a design by Sergey Malyutin, who was a folk crafts painter at Abramtsevo. Traditionally the outer layer is a woman, dressed in a sarafan, a long and shapeless traditional Russian peasant jumper dress. The figures inside may be of any gender; the smallest, innermost doll is typically a baby turned from a single piece of wood. Much of the artistry is in the painting of each doll, which can be very elaborate. The dolls often follow a theme; the themes may vary, from fairy tale characters to Soviet leaders. In some countries, matryoshka dolls are often referred to as babushka dolls, though they are not known by this name in Russian; babushka (бабушка) means "grandmother". History The first Russian nested doll set was carved in 1890 at the Children's Education Workshop by Vasily Zvyozdochkin and designed by Sergey Malyutin, who was a folk crafts painter in the Abramtsevo estate of Savva Mamontov, a Russian industrialist and patron of arts. Mamontov's brother, Anatoly Ivanovich Mamontov (1839–1905), created the Children's Education Workshop to make and sell children's toys. The doll set was painted by Malyutin. Malyutin's doll set consisted of eight dolls: the outermost was a mother in a traditional dress holding a red-combed rooster. The inner dolls were her children, girls and a boy, and the innermost a baby. The Children's Education Workshop was closed in the late 1890s, but the tradition of the matryoshka simply relocated to Sergiyev Posad, the Russian city known as a toy-making center since the fourteenth century. The inspiration for matryoshka dolls is not clear. Matryoshka dolls may have been inspired by a nesting doll imported from Japan. The Children's Education workshop where Zvyozdochkin was a lathe operator received a five-piece, cylinder-shaped nesting doll featuring Fukuruma (Fukurokuju) in the late 1890s, which is now part of the collection at the Sergiev Posad Museum of Toys. Other east Asian dolls share similarities with matryoshka dolls, such as the Kokeshi dolls originating in northern Honshū, the main island of Japan (although they cannot be placed one inside another), and the round, hollow daruma doll depicting a Buddhist monk. Another possible source of inspiration is the nesting Easter eggs produced on a lathe by Russian woodworkers during the late 19th century. Savva Mamontov's wife presented a set of matryoshka dolls at the Exposition Universelle in Paris in 1900, and the toy earned a bronze medal. Soon after, matryoshka dolls were being made in several places in Russia and shipped around the world. Manufacture Centers of Production The first matryoshka dolls were produced in the Children's Education (Detskoye vospitanie) workshop in Moscow. After it closed in 1904, production was transferred to the city of Sergiev Posad (Сергиев Посад), known as Sergiev (Сергиев) from 1919 to 1930 and Zagorsk from 1930 to 1991.
Matryoshka factories were later established in other cities and villages: the village of Polkhovsky Maidan (Полховский-Майдан), which is the primary producer of matryoshka blanks, and its neighboring villages Krutets (Крутец) and Gorodets (Городец) the city of Semenov, (Семёнов) the city of Kirov (Киров), known as Vyatka (Вя́тка) (from 1780 to 1934 and renamed Kirov in 1934 although many of its institutions reverted to the name Vyatka (Viatka) in 1991 the city of Nolinsk (Нолинск) the city of Yoshkar-Ola (Йошкар-Ола) in the Republic of Mari-El Following the collapse of the Soviet Union, the closure of many matryoshka factories, and the loosening of restrictions, independent artists began to produce matryoshka dolls in homes and art studios. Method Ordinarily, matryoshka dolls are crafted from linden wood. There is a popular misconception that they are carved from one piece of wood. Rather, they are produced using: a lathe equipped with a balance bar; four heavy long distinct types of chisels (hook, knife, pipe, and spoon); and a "set of handmade wooden calipers particular to a size of the doll". The tools are hand forged by a village blacksmith from car axles or other salvage. A wood carver uniquely crafts each set of wooden calipers. Multiple pieces of wood are meticulously carved into the nesting set. Shape, Size, and Pieces per Set The standard shape approximates a human silhouette with a flared base on the largest doll for stability. Other shapes include potbelly, cone, bell, egg, bottle, sphere, and cylinder. The size and number of pieces varies widely. The industry standard from the Soviet period, which accounts for approximately 50% of all matryoshka produced, is six inches tall and consists of 5 dolls except for matryoshka dolls manufactured in Semenov whose standard is five inches tall and consists of 6 pieces. Other common sets are the 3 piece, the 7 piece, and the 10 piece. Common Characteristics Matryoshka dolls painted in the traditional style share common elements. They depict female figures wearing a peasant dress (sarafan) and scarf or shawl usually with an apron and flowers.  Each successively smaller doll is identical or nearly so. Distinctive regional styles developed in different areas of matryoshka manufacture. Themes in dolls Matryoshka dolls are often designed to follow a particular theme; for instance, peasant girls in traditional dress. Originally, themes were often drawn from tradition or fairy tale characters, in keeping with the craft tradition—but since the late 20th century, they have embraced a larger range, including Russian leaders and popular culture. Common themes of matryoshkas are floral and relate to nature. Often Christmas, Easter, and religion are used as themes for the doll. Modern artists create many new styles of nesting dolls, mostly as an alternative purchase option for tourism. These include animal collections, portraits, and caricatures of famous politicians, musicians, athletes, astronauts, "robots", and popular movie stars. Today, some Russian artists specialize in painting themed matryoshka dolls that feature specific categories of subjects, people, or nature. Areas with notable matryoshka styles include Sergiyev Posad, Semionovo (now the town of Semyonov), , and the city of Kirov. Political matryoshkas In the late 1980s and early 1990s during Perestroika, freedom of expression allowed the leaders of the Soviet Union to become a common theme of the matryoshka, with the largest doll featuring then-current leader Mikhail Gorbachev. 
These became very popular at the time, affectionately earning the nickname of a Gorba or Gorby, the namesake of Gorbachev. With the periodic succession of Russian leadership after the collapse of the Soviet Union, newer versions would start to feature Russian presidents Boris Yeltsin, Vladimir Putin, and Dmitry Medvedev. Most sets feature the current leader as the largest doll, with the predecessors decreasing in size. The remaining smaller dolls may feature other former leaders such as Leonid Brezhnev, Nikita Khrushchev, Joseph Stalin, Vladimir Lenin, and sometimes several historically significant Tsars such as Nicholas II and Peter the Great. Yuri Andropov and Konstantin Chernenko rarely appear due to the short length of their unusually brief tenures. Some less-common sets may feature the current leader as the smallest doll, with the predecessors increasing in size, usually with Stalin or Lenin as the largest doll. Some sets that include Yeltsin preceding Gorbachev were made during the brief period between the establishment of President of the RSFSR and the collapse of the Soviet Union, as both Yeltsin and Gorbachev were concurrently in prominent government positions. During Medvedev's presidency, Medvedev and Putin may both share the largest doll due to Putin still having a prominent role in the government as Prime Minister of Russia. As of Putin's re-election as the fourth President of Russia, Medvedev will usually succeed Yeltsin and precede Putin in stacking order, due to Putin's role solely as the largest doll. Political matryoshkas usually range between five and ten dolls per set. World record The largest set of matryoshka dolls in the world is a 51-piece set hand-painted by Youlia Bereznitskaia of Russia, completed in 2003. The tallest doll in the set measures ; the smallest, . Arranged side-by-side, the dolls span . As metaphor Nesting and onion metaphors Matryoshkas are also used metaphorically, as a design paradigm, known as the "matryoshka principle" or "nested doll principle". It denotes a recognizable relationship of "object-within-similar-object" that appears in the design of many other natural and crafted objects. Examples of this use include the matrioshka brain, the Matroska media-container format, and the Russian Doll model of multi-walled carbon nanotubes. The onion metaphor is similar. If the outer layer is peeled off an onion, a similar onion exists within. This structure is employed by designers in applications such as the layering of clothes or the design of tables, where a smaller table nests within a larger table, and a smaller one within that. The metaphor of the matryoshka doll (or its onion equivalent) is also used in the description of shell companies and similar corporate structures that are used in the context of tax-evasion schemes in low-tax jurisdictions (for example, offshore tax havens). It has also been used to describe satellites and suspected weapons in space. Other metaphors Matryoshka is often seen as a symbol of the feminine side of Russian culture. Matryoshka is associated in Russia with family and fertility. Matryoshka is used as the symbol for the epithet Mother Russia. Matryoshka dolls are a traditional representation of the mother carrying a child within her and can be seen as a representation of a chain of mothers carrying on the family legacy through the child in their wombs. Furthermore, matryoshka dolls are used to illustrate the unity of body, soul, mind, heart, and spirit. 
As an emoji In 2020, the Unicode Consortium approved the matryoshka doll (🪆) as one of the new emoji characters in release v.13. The matryoshka or nesting doll emoji was submitted to the consortium by Jef Gray and Samantha Sunne, as a non-religious, apolitical symbol of Russian-East European-Far East Asian culture. See also Amish doll Chinese boxes Droste effect Fractal Mise en abyme Infinity Recursion Culture of Russia Self-similarity Shaker-style pantry box Stacking (video game) Turducken Turtles all the way down References External links 1890s toys Culture of Armenia Containers Culture of Georgia (country) Handicrafts Nested containers Products introduced in 1890 Culture of Russia Russian inventions Infinity Recursion Culture of the Soviet Union Traditional dolls Culture of Ukraine Wooden dolls 1890 establishments in the Russian Empire
Matryoshka doll
[ "Mathematics" ]
2,585
[ "Mathematical logic", "Recursion" ]
331,121
https://en.wikipedia.org/wiki/Contig
A contig (from contiguous) is a set of overlapping DNA segments that together represent a consensus region of DNA. In bottom-up sequencing projects, a contig refers to overlapping sequence data (reads); in top-down sequencing projects, contig refers to the overlapping clones that form a physical map of the genome that is used to guide sequencing and assembly. Contigs can thus refer both to overlapping DNA sequences and to overlapping physical segments (fragments) contained in clones depending on the context. Original definition of contig In 1980, Staden wrote: In order to make it easier to talk about our data gained by the shotgun method of sequencing we have invented the word "contig". A contig is a set of gel readings that are related to one another by overlap of their sequences. All gel readings belong to one and only one contig, and each contig contains at least one gel reading. The gel readings in a contig can be summed to form a contiguous consensus sequence and the length of this sequence is the length of the contig. Sequence contigs A sequence contig is a continuous (not contiguous) sequence resulting from the reassembly of the small DNA fragments generated by bottom-up sequencing strategies. This meaning of contig is consistent with the original definition by Rodger Staden (1979). The bottom-up DNA sequencing strategy involves shearing genomic DNA into many small fragments ("bottom"), sequencing these fragments, reassembling them back into contigs and eventually the entire genome ("up"). Because current technology allows for the direct sequencing of only relatively short DNA fragments (300–1000 nucleotides), genomic DNA must be fragmented into small pieces prior to sequencing. In bottom-up sequencing projects, amplified DNA is sheared randomly into fragments appropriately sized for sequencing. The subsequent sequence reads, which are the data that contain the sequences of the small fragments, are put into a database. The assembly software then searches this database for pairs of overlapping reads. Assembling the reads from such a pair (including, of course, only one copy of the identical sequence) produces a longer contiguous read (contig) of sequenced DNA. By repeating this process many times, at first with the initial short pairs of reads but then using increasingly longer pairs that are the result of previous assembly, the DNA sequence of an entire chromosome can be determined. Today, it is common to use paired-end sequencing technology where both ends of consistently sized longer DNA fragments are sequenced. Here, a contig still refers to any contiguous stretch of sequence data created by read overlap. Because the fragments are of known length, the distance between the two end reads from each fragment is known. This gives additional information about the orientation of contigs constructed from these reads and allows for their assembly into scaffolds in a process called scaffolding. Scaffolds consist of overlapping contigs separated by gaps of known length. The new constraints placed on the orientation of the contigs allows for the placement of highly repeated sequences in the genome. If one end read has a repetitive sequence, as long as its mate pair is located within a contig, its placement is known. The remaining gaps between the contigs in the scaffolds can then be sequenced by a variety of methods, including PCR amplification followed by sequencing (for smaller gaps) and BAC cloning methods followed by sequencing for larger gaps. 
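As a rough illustration of the overlap-and-merge idea described above, the following Python sketch is an editorial example, not from the article: the toy reads and the quadratic greedy strategy are simplified stand-ins for what real assembly software does, but they show how pairwise suffix-to-prefix overlaps let short reads grow into a single contiguous consensus sequence.

    def overlap(a, b, min_len=3):
        # length of the longest suffix of a that matches a prefix of b
        for k in range(min(len(a), len(b)), min_len - 1, -1):
            if a.endswith(b[:k]):
                return k
        return 0

    def assemble(reads, min_len=3):
        # greedy assembly: repeatedly merge the pair of reads with the largest overlap
        reads = list(reads)
        while len(reads) > 1:
            best_k, best_i, best_j = 0, None, None
            for i, a in enumerate(reads):
                for j, b in enumerate(reads):
                    if i != j:
                        k = overlap(a, b, min_len)
                        if k > best_k:
                            best_k, best_i, best_j = k, i, j
            if best_k == 0:
                break  # no overlaps left: the remaining reads form separate contigs
            merged = reads[best_i] + reads[best_j][best_k:]
            reads = [r for n, r in enumerate(reads) if n not in (best_i, best_j)]
            reads.append(merged)
        return reads

    # three overlapping toy "reads" assemble into a single contig
    print(assemble(["ATTAGACCTG", "CCTGCCGGAA", "GCCGGAATAC"]))
    # ['ATTAGACCTGCCGGAATAC']

Real assemblers rely on index structures and graph methods rather than this all-pairs greedy search, but the notion of a contig as the consensus of overlapping reads is the same.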
BAC contigs Contig can also refer to the overlapping clones that form a physical map of a chromosome when the top-down or hierarchical sequencing strategy is used. In this sequencing method, a low-resolution map is made prior to sequencing in order to provide a framework to guide the later assembly of the sequence reads of the genome. This map identifies the relative positions and overlap of the clones used for sequencing. Sets of overlapping clones that form a contiguous stretch of DNA are called contigs; the minimum number of clones that form a contig that covers the entire chromosome comprise the tiling path that is used for sequencing. Once a tiling path has been selected, its component BACs are sheared into smaller fragments and sequenced. Contigs therefore provide the framework for hierarchical sequencing. The assembly of a contig map involves several steps. First, DNA is sheared into larger (50–200kb) pieces, which are cloned into BACs or PACs to form a BAC library. Since these clones should cover the entire genome/chromosome, it is theoretically possible to assemble a contig of BACs that covers the entire chromosome. Reality, however, is not always ideal. Gaps often remain, and a scaffold—consisting of contigs and gaps—that covers the map region is often the first result. The gaps between contigs can be closed by various methods outlined below. Construction of BAC contigs BAC contigs are constructed by aligning BAC regions of known overlap via a variety of methods. One common strategy is to use sequence-tagged site (STS) content mapping to detect unique DNA sites in common between BACs. The degree of overlap is roughly estimated by the number of STS markers in common between two clones, with more markers in common signifying a greater overlap. Because this strategy provides only a very rough estimate of overlap, restriction digest fragment analysis, which provides a more precise measurement of clone overlap, is often used. In this strategy, clones are treated with one or two restriction enzymes and the resulting fragments separated by gel electrophoresis. If two clones, they will likely have restriction sites in common, and will thus share several fragments. Because the number of fragments in common and the length of these fragments is known (the length is judged by comparison to a size standard), the degree of overlap can be deduced to a high degree of precision. Gaps between contigs Gaps often remain after initial BAC contig construction. These gaps occur if the Bacterial Artificial Chromosome (BAC) library screened has low complexity, meaning it does not contain a high number of STS or restriction sites, or if certain regions were less stable in cloning hosts and thus underrepresented in the library. If gaps between contigs remain after STS landmark mapping and restriction fingerprinting have been performed, the sequencing of contig ends can be used to close these gaps. This end-sequencing strategy essentially creates a novel STS with which to screen the other contigs. Alternatively, the end sequence of a contig can be used as a primer to primer walk across the gap. See also Staden Package References External links Definition of the term and historical perspective Staden package of sequence assembly: Definitions and background information Molecular biology Genomics
Contig
[ "Chemistry", "Biology" ]
1,400
[ "Biochemistry", "Molecular biology" ]
331,154
https://en.wikipedia.org/wiki/Joseph%20Plateau
Joseph Antoine Ferdinand Plateau (14 October 1801 – 15 September 1883) was a Belgian physicist and mathematician. He was one of the first people to demonstrate the illusion of a moving image. To do this, he used counterrotating disks with repeating drawn images in small increments of motion on one and regularly spaced slits in the other. He called this device of 1832 the phenakistiscope. Biography Plateau was born on 14 October 1801, in Brussels. His father, Antoine Plateau, born in Tournai, was a talented flower painter. At the age of six, the younger Plateau could already read, making him a child prodigy in those times. While attending primary school, he was particularly impressed by a physics lesson; enchanted by the experiments he observed, he vowed to discover their secrets someday. Plateau spent his school holidays in Marche-les-Dames, with his uncle and his family; his cousin and playfellow was Auguste Payen, who later became an architect and the principal designer of the Belgian railways. At the age of fourteen, he lost his father and mother; the trauma caused by this loss made him fall ill. On 27 August 1840, Plateau married Augustine-Thérèse-Aimée-Fanny Clavareau, and they had a son a year later. His daughter Alice Plateau married Gustaaf Van der Mensbrugghe in 1871; her husband became Plateau's collaborator and later his first biographer. Fascinated by the persistence of luminous impressions on the retina, Plateau performed an experiment in which he gazed directly into the Sun for 25 seconds. He lost his eyesight later in his life and attributed the loss to this experiment. However, this may not have been the case, and he may have instead had chronic uveitis. Plateau became a foreign member of the Royal Netherlands Academy of Arts and Sciences in 1872. He died in Ghent in 1883. Academic career Plateau studied at the State University of Liège, where he graduated as a doctor of physical and mathematical sciences in 1829. In 1827, Plateau became a teacher of mathematics at the "Atheneum" school in Brussels. In 1835, he was appointed Professor of Physics and Applied Physics at the State University in Ghent. Research Optics In 1829, Plateau submitted his doctoral thesis to his mentor Adolphe Quetelet for advice. It contained only 27 pages but formulated a great number of fundamental conclusions. It contained the first results of his research into the effect of colours on the retina (duration, intensity, and colour), his mathematical research into the intersections of revolving curves (locus), the observation of the distortion of moving images, and the reconstruction of distorted images through counter-revolving discs (he dubbed these anorthoscopic discs). In 1832, Plateau invented an early stroboscopic device, the "phenakistiscope", the first device to give the illusion of a moving image. It consisted of two disks, one with small equidistant radial windows, through which the viewer could look, and another containing a sequence of images. When the two disks rotated at the correct speed, the synchronization of the windows and the images created an animated effect. The projection of stroboscopic photographs, creating the illusion of motion, eventually led to the development of cinema. Plateau's problem Plateau also studied the phenomena of capillary action and surface tension. The mathematical problem of the existence of a minimal surface with a given boundary is named after him. He conducted extensive studies of soap films and formulated Plateau's laws, which describe the structures formed by such films in foams.
In popular culture On 14 October 2019, the search engine Google commemorated Plateau with a Doodle on his 218th birth anniversary. This doodle was created by animator, filmmaker, and Doodler Olivia Huynh with inspiration and help from Diana Tran and Tom Tabanao. It is the first Google Doodle with different artwork showing up across different device displays: desktop, mobile, and the Google App. See also Patterns in nature Plateau's laws Plateau's problem Plateau–Rayleigh instability Soap bubble Stretched grid method References Sources A commemorative paper of nearly 100 pages describing many aspects of his life and research, including a portrait of him, authored by his son-in-law, Gustaaf Van der Mensbrugghe. A biographical paper on Joseph Plateau's son-in-law, collaborator and first biographer. External links Plateau-Rayleigh instability – a 3D-lattice kinetic Monte Carlo simulation 1801 births 1883 deaths Belgian physicists Blind scholars and academics Belgian blind people Flemish scientists Fluid dynamicists Academic staff of Ghent University Members of the Royal Netherlands Academy of Arts and Sciences Foreign members of the Royal Society Scientists from Brussels University of Liège alumni Physicians with disabilities
Joseph Plateau
[ "Chemistry" ]
948
[ "Fluid dynamicists", "Fluid dynamics" ]
331,155
https://en.wikipedia.org/wiki/Nocturnal%20enuresis
Nocturnal enuresis (NE), also informally called bedwetting, is involuntary urination while asleep after the age at which bladder control usually begins. Bedwetting in children and adults can result in emotional stress. Complications can include urinary tract infections. Most bedwetting is a developmental delay—not an emotional problem or physical illness. Only a small percentage (5 to 10%) of bedwetting cases have a specific medical cause. Bedwetting is commonly associated with a family history of the condition. Nocturnal enuresis is considered primary when a child has not yet had a prolonged period of being dry. Secondary nocturnal enuresis is when a child or adult begins wetting again after having stayed dry. Treatments range from behavioral therapy, such as bedwetting alarms, to medication, such as hormone replacement, and even surgery such as urethral dilatation. Since most bedwetting is simply a developmental delay, most treatment plans aim to protect or improve self-esteem. Treatment guidelines recommend that the physician counsel the parents, warning about psychological consequences caused by pressure, shaming, or punishment for a condition children cannot control. Bedwetting is the most common childhood complaint. Impact A review of medical literature shows doctors consistently stressing that a bedwetting child is not at fault for the situation. Many medical studies state that the psychological impacts of bedwetting are more important than the physical considerations. "It is often the child's and family members' reaction to bedwetting that determines whether it is a problem or not." Self-esteem Whether bedwetting causes low self-esteem remains a subject of debate, but several studies have found that self-esteem improved with management of the condition. Children questioned in one study ranked bedwetting as the third most stressful life event, after "parental war of words", divorce and parental fighting. Adolescents in the same study ranked bedwetting as tied for second with parental fighting. Bedwetters face problems ranging from being teased by siblings, being punished by parents, the embarrassment of still having to wear diapers, and being afraid that friends will find out. Psychologists report that the amount of psychological harm depends on whether the bedwetting harms self-esteem or development of social skills. Key factors are: How much the bedwetting limits social activities like sleep-overs and campouts The degree of the social ostracism by peers (Perceived) Anger, punishment, refusal and rejection by caregivers along with subsequent guilt The number of failed treatment attempts How long the child has been wetting Behavioral impact Studies indicate that children with behavioral problems are more likely to wet their beds. For children who have developmental problems, the behavioral problems and the bedwetting are frequently part of/caused by the developmental issues. For bedwetting children without other developmental issues, these behavioral issues can result from self-esteem issues and stress caused by the wetting. As mentioned below, current studies show that it is very rare for a child to intentionally wet the bed as a method of acting out. Punishment for bedwetting Medical literature states, and studies show, that punishing or shaming a child for bedwetting will frequently make the situation worse. It is best described as a downward cycle, where a child punished for bedwetting feels shame and a loss of self-confidence. 
This can cause increased bedwetting incidents, leading to more punishment and shaming. In the United States, about 25% of enuretic children are punished for wetting the bed. In Hong Kong, 57% of enuretic children are punished for wetting. Parents with only a grade-school level education punish bedwetting children at twice the rate of high-school- and college-educated parents. In Korea and in small parts of Japan, there is a folk tradition whereby bedwetters are made to wear a winnowing basket on their head and sent to ask their neighbors for salt. This is motivated in part by a desire to publicly embarrass the child into compliance, as neighbors would recognize why the child was knocking on their door. Families Parents and family members are frequently stressed by a child's bedwetting. Soiled linens and clothing cause additional laundry. Wetting episodes can cause lost sleep if the child wakes and/or cries, waking the parents. A European study estimated that a family with a child who wets nightly will pay about $1,000 a year for additional laundry, extra sheets, diapers, and mattress replacement. Despite these stressful effects, doctors emphasize that parents should react patiently and supportively. Sociopathy Bedwetting does not indicate a greater possibility of being a sociopath, as long as caregivers do not cause trauma by shaming or punishing a bedwetting child. Bedwetting was part of the Macdonald triad, a set of three behavioral characteristics described by John Macdonald in 1963. The other two characteristics were firestarting and animal abuse. Macdonald suggested that there was an association between a person displaying all three characteristics, then later displaying sociopathic criminal behavior. Up to 60% of multiple murderers, according to some estimates, wet their beds post-adolescence. Enuresis is an "unconscious, involuntary [...] act". Bedwetting can be connected to past emotions and identity. Children under substantial stress, particularly in their home environment, frequently engage in bedwetting, in order to alleviate the stress produced by their surroundings. Trauma can also trigger a return to bedwetting (secondary enuresis) in both children and adults. It is not bedwetting that increases the chance of criminal behavior, but the associated trauma. Parental cruelty can result in "homicidal proneness". Causes The etiology of NE is not fully understood, although there are three common causes: excessive urine volume, poor sleep arousal, and bladder contractions. Differentiation of cause is mainly based on patient history and fluid charts completed by the parent or carer to inform management options. Bedwetting has a strong genetic component. Children whose parents were not enuretic have only a 15% incidence of bedwetting. When one or both parents were bedwetters, the rates jump to 44% and 77% respectively. These first two factors (aetiology and genetic component) are the most common in bedwetting, but current medical technology offers no easy testing for either cause. There is no test to prove that bedwetting is only a developmental delay, and genetic testing offers little or no benefit. As a result, other conditions should be ruled out. The following causes are less common, but are easier to prove and more clearly treated: In some bedwetting children there is no increase in ADH (antidiuretic hormone) production, while other children may produce an increased amount of ADH but their response is insufficient. 
People with reported bedwetting issues are 2.7 times more likely to be diagnosed with attention deficit hyperactivity disorder. Caffeine increases urine production. Chronic constipation can cause bed wetting. When the bowels are full, it can put pressure on the bladder. Often such children defecate normally, yet they retain a significant mass of material in the bowel which causes bedwetting. Infections and disease are more strongly connected with secondary nocturnal enuresis and with daytime wetting. Less than 5% of all bedwetting cases are caused by infection or disease, the most common of which is a urinary tract infection. Patients with more severe neurological-developmental issues have a higher rate of bedwetting problems. One study of seven-year-olds showed that "handicapped and intellectually disabled children" had a bedwetting rate almost three times higher than "non-handicapped children" (26.6% vs. 9.5%, respectively). Psychological issues (e.g., death in the family, sexual abuse, extreme bullying) are established as a cause of secondary nocturnal enuresis (a return to bedwetting), but are very rarely a cause of PNE-type bedwetting. Bedwetting can also be a symptom of a pediatric neuropsychological disorder called PANDAS. Sleep apnea stemming from an upper airway obstruction has been associated with bedwetting. Snoring and enlarged tonsils or adenoids are a sign of potential sleep apnea problems. Sleepwalking can lead to bedwetting. During sleepwalking, the sleepwalker may think they are in another room. When the sleepwalker urinates during a sleepwalking episode, they usually think they are in the bathroom, and therefore urinate where they think the toilet should be. Cases of this have included opening a closet and urinating in it; urinating on the sofa, and simply urinating in the middle of the room. Stress is a cause of people who return to wetting the bed. Researchers find that moving to a new town, parent conflict or divorce, arrival of a new baby, or loss of a loved one or pet can cause insecurity, contributing to returning bedwetting. Type 1 diabetes mellitus can first present as nocturnal enuresis. It is classically associated with polyuria, polydipsia, and polyphagia; weight loss, lethargy, and diaper candidiasis may also be present in those with new-onset disease. Alcohol intoxication is a leading cause for nocturnal enuresis among adults. Alcohol suppresses the production of anti diuretic hormones and irritates the detrusor muscle in the bladder. These factors, paired with the large amount of fluid ingested, particularly during binge drinking sessions or when paired with caffeinated drinks, can lead to episodes of nocturnal enuresis. Unconfirmed Food allergies may be part of the cause for some patients. This link is not well established, requiring further research. Improper toilet training is another disputed cause of bedwetting. This theory was more widely supported in the last century and is still cited by some authors today. Some say bedwetting can be caused by improper toilet training, either by starting the training when the child is too young or by being too forceful. Recent research has shown more mixed results and a connection to toilet training has not been proven or disproven. According to the American Academy of Pediatrics, more child abuse occurs during potty training than in any other developmental stage. Dandelions are reputed to be a potent diuretic, and anecdotal reports and folk wisdom say children who handle them can end up wetting the bed. 
English folk names for the plant are "peebeds" and "pissabeds". In French the dandelion is called pissenlit, which means "piss in bed"; likewise "piscialletto", an Italian folkname, and "meacamas" in Spanish. Mechanism Two physical functions prevent bedwetting. The first is a hormone that reduces urine production at night. The second is the ability to wake up when the bladder is full. Children usually achieve nighttime dryness by developing one or both of these abilities. There appear to be some hereditary factors in how and when these develop. The first ability is a hormone cycle that reduces the body's urine production. At about sunset each day, the body releases a minute burst of antidiuretic hormone (also known as arginine vasopressin or AVP). This hormone burst reduces the kidney's urine output well into the night so that the bladder does not get full until morning. This hormone cycle is not present at birth. Many children develop it between the ages of two and six years old, others between six and the end of puberty, and some not at all. The second ability that helps people stay dry is waking when the bladder is full. This ability develops in the same age range as the vasopressin hormone, but is separate from that hormone cycle. The typical development process begins with one- and two-year-old children developing larger bladders and beginning to sense bladder fullness. Two- and three-year-old children begin to stay dry during the day. Four- and five-year-olds develop an adult pattern of urinary control and begin to stay dry at night. Diagnosis Thorough history regarding frequency of bedwetting, any period of dryness in between, associated daytime symptoms, constipation, and encopresis should be sought. Voiding diary People are asked to observe, record and measure when and how much their child voids and drinks, as well as associated symptoms. A voiding diary in the form of a frequency volume chart records voided volume along with the time of each micturition for at least 24 hours. The frequency volume chart is enough for patients with complaints of nocturia and frequency only. If other symptoms are also present then a detailed bladder diary must be maintained. In a bladder diary, times of micturition and voided volume, incontinence episodes, pad usage, and other information such as fluid intake, the degree of urgency, and the degree of incontinence are recorded. Physical examination Each child should be examined physically at least once at the beginning of treatment. A full pediatric and neurological exam is recommended. Measurement of blood pressure is important to rule out any renal pathology. External genitalia and lumbosacral spine should be examined thoroughly. A spinal defect, such as a dimple, hair tuft, or skin discoloration, might be visible in approximately 50% of patients with an intraspinal lesion. Thorough neurologic examination of the lower extremities, including gait, muscle power, tone, sensation, reflexes, and plantar responses should be done during first visit. Classification Nocturnal urinary continence is dependent on three factors: 1) nocturnal urine production, 2) nocturnal bladder function and 3) sleep and arousal mechanisms. Any child will experience nocturnal enuresis if more urine is produced than can be contained in the bladder or if the detrusor is hyperactive, provided that he or she is not awakened by the imminent bladder contraction. Primary nocturnal enuresis Primary nocturnal enuresis is the most common form of bedwetting. 
Bedwetting becomes a disorder when it persists after the age at which bladder control usually occurs (4–7 years), and is either resulting in an average of at least two wet nights a week with no long periods of dryness or not able to sleep dry without being taken to the toilet by another person. New studies show that anti-psychotic drugs can have a side effect of causing enuresis. It has been shown that diet impacts enuresis in children. Constipation from a poor diet can result in impacted stool in the colon putting undue pressure on the bladder creating loss of bladder control (overflow incontinence). Some researchers, however, recommend a different starting age range. This guidance says that bedwetting can be considered a clinical problem if the child regularly wets the bed after turning 7 years old. Secondary nocturnal enuresis Secondary enuresis occurs after a patient goes through an extended period of dryness at night (six months or more) and then reverts to night-time wetting. Secondary enuresis can be caused by emotional stress or a medical condition, such as a bladder infection. Psychological definition Psychologists are usually allowed to diagnose and write a prescription for diapers if nocturnal enuresis causes the patient significant distress. Psychiatists may instead use a definition from the DSM-IV, defining nocturnal enuresis as repeated urination into bed or clothes, occurring twice per week or more for at least three consecutive months in a child of at least 5 years of age and not due to either a drug side effect or a medical condition. Management There are a number of management options for bedwetting. The following options apply when the bedwetting is not caused by a specifically identifiable medical condition such as a bladder abnormality or diabetes. Treatment is recommended when there is a specific medical condition such as bladder abnormalities, infection, or diabetes. It is also considered when bedwetting may harm the child's self-esteem or relationships with family/friends. Only a small percentage of bedwetting is caused by a specific medical condition, so most treatment is prompted by concern for the child's emotional welfare. Behavioral treatment of bedwetting overall tends to show increased self-esteem for children. Parents become concerned much earlier than doctors. A study in 1980 asked parents and physicians the age that children should stay dry at night. The average parent response was 2.75 years old, while the average physician response was 5.13 years old. Punishment is not effective and can interfere with treatment. Treatment approaches Simple behavioral methods are recommended as initial treatment. Other treatment methods include the following: Motivational therapy in nocturnal enuresis mainly involves parent and child education. Guilt should be allayed by providing facts. Fluids should be restricted 2 hours prior to bed. The child should be encouraged to empty the bladder completely prior to going to bed. Positive reinforcement can be initiated by setting up a diary or chart to monitor progress and establishing a system to reward the child for each night that they are dry. The child should participate in morning cleanup as a natural, nonpunitive consequence of wetting. This method is particularly helpful in younger children (<8 years) and will achieve dryness in 15-20% of the patients. Waiting: Almost all children will outgrow bedwetting. 
For this reason, urologists and pediatricians frequently recommend delaying treatment until the child is at least six or seven years old. Physicians may begin treatment earlier if they perceive the condition is damaging the child's self-esteem and/or relationships with family/friends. Bedwetting alarms: Physicians also frequently suggest bedwetting alarms which sound a loud tone when they sense moisture. This can help condition the child to wake at the sensation of a full bladder. These alarms are considered more effective than no treatment and may have a lower risk of adverse events than some medical therapies but it is still uncertain if alarms are more effective than other treatments. There may be a 29% to 69% relapse rate, so the treatment may need to be repeated. DDAVP (desmopressin) tablets are a synthetic replacement for antidiuretic hormone, the hormone that reduces urine production during sleep. Desmopressin is usually used in the form of desmopressin acetate, DDAVP. Patients taking DDAVP are 4.5 times more likely to stay dry than those taking a placebo. The drug replaces the hormone for that night with no cumulative effect. US drug regulators have banned using desmopressin nasal sprays for treating bedwetting since the oral form is considered safer. DDAVP is most efficient in children with nocturnal polyuria (nocturnal urine production greater than 130% of expected bladder capacity for age) and normal bladder reservoir function (maximum voided volume greater than 70% of expected bladder capacity for age). Other children who are likely candidates for desmopressin treatment are those in whom alarm therapy has failed or those considered unlikely to comply with alarm therapy. It can be very useful for summer camp and sleepovers to prevent enuresis. Tricyclic antidepressants: Tricyclic antidepressant prescription drugs with anti-muscarinic properties have been proven successful in treating bedwetting, but also have an increased risk of side effects, including death from overdose. These drugs include amitriptyline, imipramine and nortriptyline. Studies find that patients using these drugs are 4.2 times as likely to stay dry as those taking a placebo. The relapse rates after stopping the medicines are close to 50%. Condition management Diapers: Wearing a diaper can reduce embarrassment for bedwetters and make cleanup easier for caregivers. These products are known as training pants or diapers when used for younger children, and as absorbent underwear or incontinence briefs when marketed for older children and adults. Some diapers are marketed especially for people with bedwetting. A major benefit is the reduced stress on both the bedwetter and caregivers. Wearing diapers can be especially beneficial for bedwetting children wishing to attend sleepovers or campouts, reducing emotional problems caused by social isolation and/or embarrassment in front of peers. According to a study of one adult with severe disabilities, extended diaper usage may interfere with learning to stay dry. Waterproof mattress pads are used in some cases to ease clean-up of bedwetting incidents, however they only protect the mattress, and the sheets, bedding or sleeping partner may be soiled. Unproven Acupuncture: While acupuncture is safe in most adolescents, studies done to assess its effectiveness for nocturnal enuresis are of low quality. Dry bed training: Dry bed training is frequently waking the child at night. 
Studies show this training is ineffective by itself and does not increase the success rate when used in conjunction with a bedwetting alarm. Star chart: A star chart allows a child and parents to track dry nights, as a record and/or as part of a reward program. This can be done either alone or with other treatments. There is no research to show effectiveness, either in reducing bedwetting or in helping self-esteem. Some psychologists, however, recommend star charts as a way to celebrate successes and help a child's self-esteem. Epidemiology Doctors frequently consider bedwetting as a self-limiting problem, since most children will outgrow it. Children 5 to 9 years old have a spontaneous cure rate of 14% per year. Adolescents 10 to 17 years old have a spontaneous cure rate of 16% per year. As can be seen from the numbers above, a portion of bedwetting children will not outgrow the problem. Adult rates of bedwetting show little change due to spontaneous cure. Persons who are still enuretic at age 17 are likely to deal with bedwetting throughout their lives. Studies of bedwetting in adults have found varying rates. The most quoted study in this area was done in the Netherlands. It found a 0.5% rate for 20- to 79-year-olds. A Hong Kong study, however, found a much higher rate. The Hong Kong researchers found a bedwetting rate of 2.3% in 16- to 40-year-olds. History In the first century B.C., at lines 1026-29 of the fourth book of his On the Nature of Things, Lucretius gave a high-style description of bed-wetting: "Innocent children often, when they are bound up by sleep, believe they are raising up their clothing by a latrine or shallow pot; they pour out the urine from their whole body, and the Babylonian bedding with its magnificent splendor is soaked." An early psychological perspective on bedwetting was given in 1025 by Avicenna in The Canon of Medicine: "Urinating in bed is frequently predisposed by deep sleep: when urine begins to flow, its inner nature and hidden will (resembling the will to breathe) drives urine out before the child awakes. When children become stronger and more robust, their sleep is lighter and they stop urinating." Psychological theory through the 1960s placed much greater focus on the possibility that a bedwetting child might be acting out, purposefully striking back against parents by soiling linens and bedding. However, more recent research and medical literature states that this is very rare. See also Enuresis Nocturnal emission References External links Childhood Mental disorders diagnosed in childhood Pediatrics Sleep disorders Symptoms and signs: Urinary system Toilet training Urine Urology
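To make the epidemiology figures above concrete, the following back-of-the-envelope Python sketch is an editorial illustration, not from the article; it assumes the quoted 14% annual spontaneous cure rate stays constant from year to year, which is a simplification, and estimates what fraction of a cohort of young bedwetters would still be affected after several years.

    cure_rate = 0.14          # reported spontaneous cure rate per year for ages 5-9
    still_wetting = 1.0       # fraction of the original cohort still affected
    for year in range(1, 6):
        still_wetting *= (1 - cure_rate)
        print(f"after {year} year(s): {still_wetting:.0%} still wetting")
    # roughly 47% of the original group would still be affected after 5 years

Under this simplified model a sizeable minority remains affected even after several years, which is consistent with the observation above that some children do not outgrow the condition.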
Nocturnal enuresis
[ "Biology" ]
4,888
[ "Behavior", "Toilet training", "Urine", "Excretion", "Animal waste products", "Sleep disorders", "Sleep" ]
331,203
https://en.wikipedia.org/wiki/Mysophilia
Mysophilia is a paraphilia where erotic pleasure is derived from filth. Mysophiles may find dirt, soiled underwear, feces, unwashed people, or vomit to be sexually arousing. People with mysophilia have been known to be aroused by unclean locations, such as an alleyway or a dirty bathroom, and behaviors, such as not bathing for many days at a time. In culture The protagonist of the novel Wetlands, and the film based on the book, would be considered a mysophiliac, deriving pleasure from not washing and from dirty locations, such as toilets. Napoleon Bonaparte, while campaigning in 1796, wrote to his wife Joséphine: "Please don't wash, will arrive in three days". This can be interpreted as mysophiliac behaviour if it is assumed this was to ensure her clothes, as well as her person, were soiled. See also Mud wrestling Salirophilia References Paraphilias
Mysophilia
[ "Biology" ]
202
[ "Behavior", "Sexuality stubs", "Sexuality" ]
331,211
https://en.wikipedia.org/wiki/Saharon%20Shelah
Saharon Shelah ( , ; born July 3, 1945) is an Israeli mathematician. He is a professor of mathematics at the Hebrew University of Jerusalem and Rutgers University in New Jersey. Biography Shelah was born in Jerusalem on July 3, 1945. He is the son of the Israeli poet and political activist Yonatan Ratosh. He received his PhD for his work on stable theories in 1969 from the Hebrew University. Shelah is married to Yael, and has three children. His brother, magistrate judge Hamman Shelah was murdered along with his wife and daughter by an Egyptian soldier in the Ras Burqa massacre in 1985. Shelah planned to be a scientist while at primary school, but initially was attracted to physics and biology, not mathematics. Later he found mathematical beauty in studying geometry: He said, "But when I reached the ninth grade I began studying geometry and my eyes opened to that beauty—a system of demonstration and theorems based on a very small number of axioms which impressed me and captivated me." At the age of 15, he decided to become a mathematician, a choice cemented after reading Abraham Halevy Fraenkel's book An Introduction to Mathematics. He received a B.Sc. from Tel Aviv University in 1964, served in the Israel Defense Forces Army between 1964 and 1967, and obtained a M.Sc. from the Hebrew University (under the direction of Haim Gaifman) in 1967. He then worked as a teaching assistant at the Institute of Mathematics of the Hebrew University of Jerusalem while completing a Ph.D. there under the supervision of Michael Oser Rabin, on a study of stable theories. Shelah was a lecturer at Princeton University during 1969–70, and then worked as an assistant professor at the University of California, Los Angeles during 1970–71. He became a professor at Hebrew University in 1974, a position he continues to hold. He has been a visiting professor at the following universities: the University of Wisconsin (1977–78), the University of California, Berkeley (1978 and 1982), the University of Michigan (1984–85), at Simon Fraser University, Burnaby, British Columbia (1985), and Rutgers University, New Jersey (1985). He has been a distinguished visiting professor at Rutgers University since 1986. Academic career Shelah's main interests lie in mathematical logic, model theory in particular, and in axiomatic set theory. In model theory, he developed classification theory, which led him to a solution of Morley's problem. In set theory, he discovered the notion of proper forcing, an important tool in iterated forcing arguments. With PCF theory, he showed that in spite of the undecidability of the most basic questions of cardinal arithmetic (such as the continuum hypothesis), there are still highly nontrivial ZFC theorems about cardinal exponentiation. Shelah constructed a Jónsson group, an uncountable group for which every proper subgroup is countable. He showed that Whitehead's problem is independent of ZFC. He gave the first primitive recursive upper bound to van der Waerden's numbers V(C,N). He extended Arrow's impossibility theorem on voting systems. Shelah's work has had a deep impact on model theory and set theory. The tools he developed for his classification theory have been applied to a wide number of topics and problems in model theory and have led to great advances in stability theory and its uses in algebra and algebraic geometry as shown for example by Ehud Hrushovski and many others. 
Classification theory involves deep work developed in many dozens of papers to completely solve the spectrum problem on classification of first order theories in terms of structure and number of nonisomorphic models, a huge tour de force. Following that he has extended the work far beyond first order theories, for example for abstract elementary classes. This work also has had important applications to algebra by works of Boris Zilber. Awards Three times speaker at the International Congress of Mathematicians (1974 invited, 1983 plenary, 1986 plenary) The first recipient of the Erdős Prize, in 1977 The Karp Prize of the Association for Symbolic Logic in 1983 The Israel Prize, for mathematics, in 1998 The Bolyai Prize in 2000 The Wolf Prize in Mathematics in 2001 The EMET Prize for Art, Science and Culture in 2011 The Leroy P. Steele Prize, for Seminal Contribution to Research, in 2013 Honorary member of the Hungarian Academy of Sciences, in 2013 Advanced grant of the European Research Council (2013) Hausdorff Medal of the European Set Theory Society, joint with Maryanthe Malliaris, 2017 Schock Prize in Logic and Philosophy of the Royal Swedish Academy of Sciences, 2018 Honorary doctorate from the Technische Universität Wien, 2019 Selected works Proper forcing, Springer 1982 Proper and improper forcing (2nd edition of Proper forcing), Springer 1998 Around classification theory of models, Springer 1986 Classification theory and the number of non-isomorphic models, Studies in Logic and the Foundations of Mathematics, 1978, 2nd edition 1990, Elsevier Classification Theory for Abstract Elementary Classes, College Publications 2009 Classification Theory for Abstract Elementary Classes, Volume 2, College Publications 2009 Cardinal Arithmetic, Oxford University Press 1994 See also List of Israel Prize recipients References External links Archive of Shelah's mathematical papers, shelah.logic.at 1945 births 20th-century Israeli mathematicians 21st-century Israeli mathematicians Einstein Institute of Mathematics alumni Academic staff of the Hebrew University of Jerusalem Israel Prize in mathematics recipients Israeli Jews Jewish scientists Living people Members of the Hungarian Academy of Sciences Members of the Israel Academy of Sciences and Humanities Model theorists Rutgers University faculty Set theorists Wolf Prize in Mathematics laureates European Research Council grantees Hausdorff Medal winners Erdős Prize recipients
Saharon Shelah
[ "Mathematics" ]
1,170
[ "Model theorists", "Model theory" ]
331,221
https://en.wikipedia.org/wiki/Creatine
Creatine is an organic compound with the nominal formula (H2N)(HN)CN(CH3)CH2CO2H (molecular formula C4H9N3O2). It exists as various tautomers in solution (among them the neutral form and various zwitterionic forms). Creatine is found in vertebrates, where it facilitates recycling of adenosine triphosphate (ATP), primarily in muscle and brain tissue. Recycling is achieved by converting adenosine diphosphate (ADP) back to ATP via donation of phosphate groups. Creatine also acts as a buffer. History Creatine was first identified in 1832 when Michel Eugène Chevreul isolated it from the basified water-extract of skeletal muscle. He later named the crystallized precipitate after the Greek word for meat, κρέας (kreas). In 1928, creatine was shown to exist in equilibrium with creatinine. Studies in the 1920s showed that consumption of large amounts of creatine did not result in its excretion. This result pointed to the ability of the body to store creatine, which in turn suggested its use as a dietary supplement. In 1912, Harvard University researchers Otto Folin and Willey Glover Denis found evidence that ingesting creatine can dramatically boost the creatine content of the muscle. In the late 1920s, after finding that the intramuscular stores of creatine can be increased by ingesting creatine in larger than normal amounts, scientists discovered phosphocreatine (creatine phosphate), and determined that creatine is a key player in the metabolism of skeletal muscle. It is naturally formed in vertebrates. The discovery of phosphocreatine was reported in 1927. In the 1960s, creatine kinase (CK) was shown to phosphorylate ADP using phosphocreatine (PCr) to generate ATP. It follows that ATP, not PCr, is directly consumed in muscle contraction. CK uses creatine to "buffer" the ATP/ADP ratio. While creatine's influence on physical performance has been well documented since the early twentieth century, it came into public view following the 1992 Olympics in Barcelona. An August 7, 1992 article in The Times reported that Linford Christie, the gold medal winner at 100 meters, had used creatine before the Olympics (Christie was found guilty of doping later in his career). An article in Bodybuilding Monthly named Sally Gunnell, who was the gold medalist in the 400-meter hurdles, as another creatine user. In addition, The Times also noted that 110-meter hurdler Colin Jackson began taking creatine before the Olympics. At the time, low-potency creatine supplements were available in Britain, but creatine supplements designed for strength enhancement were not commercially available until 1993, when a company called Experimental and Applied Sciences (EAS) introduced the compound to the sports nutrition market under the name Phosphagen. Research performed thereafter demonstrated that the consumption of high glycemic carbohydrates in conjunction with creatine increases creatine muscle stores. Metabolic role Creatine is a naturally occurring non-protein compound and the primary constituent of phosphocreatine, which is used to regenerate ATP within the cell. 95% of the human body's total creatine and phosphocreatine stores are found in skeletal muscle, while the remainder is distributed in the blood, brain, testes, and other tissues. The typical creatine content of skeletal muscle (as both creatine and phosphocreatine) is 120 mmol per kilogram of dry muscle mass, but can reach up to 160 mmol/kg through supplementation.
Approximately 1–2% of intramuscular creatine is degraded per day and an individual would need about 1–3 grams of creatine per day to maintain average (unsupplemented) creatine storage. An omnivorous diet provides roughly half of this value, with the remainder synthesized in the liver and kidneys. Creatine is not an essential nutrient. It is an amino acid derivative, naturally produced in the human body from the amino acids glycine and arginine, with an additional requirement for S-adenosyl methionine (a derivative of methionine) to catalyze the transformation of guanidinoacetate to creatine. In the first step of the biosynthesis, the enzyme arginine:glycine amidinotransferase (AGAT, EC:2.1.4.1) mediates the reaction of glycine and arginine to form guanidinoacetate. This product is then methylated by guanidinoacetate N-methyltransferase (GAMT, EC:2.1.1.2), using S-adenosyl methionine as the methyl donor. Creatine itself can be phosphorylated by creatine kinase to form phosphocreatine, which is used as an energy buffer in skeletal muscles and the brain. A cyclic form of creatine, called creatinine, exists in equilibrium with its tautomer and with creatine. Phosphocreatine system Creatine is transported through the blood and taken up by tissues with high energy demands, such as the brain and skeletal muscle, through an active transport system. The concentration of ATP in skeletal muscle is usually 2–5 mM, which would result in a muscle contraction of only a few seconds. During times of increased energy demands, the phosphagen (or ATP/PCr) system rapidly resynthesizes ATP from ADP with the use of phosphocreatine (PCr) through a reversible reaction catalysed by the enzyme creatine kinase (CK). The phosphate group is attached to an NH center of the creatine. In skeletal muscle, PCr concentrations may reach 20–35 mM or more. Additionally, in most muscles, the ATP regeneration capacity of CK is very high and is therefore not a limiting factor. Although the cellular concentrations of ATP are small, changes are difficult to detect because ATP is continuously and efficiently replenished from the large pools of PCr and CK. A proposed representation has been illustrated by Krieder et al. Creatine has the ability to increase muscle stores of PCr, potentially increasing the muscle's ability to resynthesize ATP from ADP to meet increased energy demands. Creatine supplementation appears to increase the number of myonuclei that satellite cells will 'donate' to damaged muscle fibers, which increases the potential for growth of those fibers. This increase in myonuclei probably stems from creatine's ability to increase levels of the myogenic transcription factor MRF4. Genetic deficiencies Genetic deficiencies in the creatine biosynthetic pathway lead to various severe neurological defects. Clinically, there are three distinct disorders of creatine metabolism, termed cerebral creatine deficiencies. Deficiencies in the two synthesis enzymes can cause L-arginine:glycine amidinotransferase deficiency caused by variants in GATM and guanidinoacetate methyltransferase deficiency, caused by variants in GAMT. Both biosynthetic defects are inherited in an autosomal recessive manner. A third defect, creatine transporter defect, is caused by mutations in SLC6A8 and is inherited in a X-linked manner. This condition is related to the transport of creatine into the brain. 
Vegans and vegetarians Vegan and vegetarian diets are associated with lower levels of muscle creatine, and athletes on these diets may benefit from creatine supplementation. Pharmacokinetics Research on creatine to date has focused predominantly on its pharmacological properties; its pharmacokinetics remain comparatively under-studied. Studies have not established pharmacokinetic parameters for clinical usage of creatine such as volume of distribution, clearance, bioavailability, mean residence time, absorption rate, and half-life. A clear pharmacokinetic profile would need to be established prior to optimal clinical dosing. Dosing Loading phase An approximation of 0.3 g/kg/day divided into 4 equally spaced doses has been suggested, since creatine needs may vary based on body weight. It has also been shown that a lower dose of 3 grams a day for 28 days can increase total muscle creatine storage to the same amount as the rapid loading dose of 20 g/day for 6 days. However, a 28-day loading phase does not allow the ergogenic benefits of creatine supplementation to be realized until muscle stores are fully saturated. This elevation in muscle creatine storage has been correlated with the ergogenic benefits discussed in the research section. However, higher doses for longer periods of time are being studied to offset creatine synthesis deficiencies and to mitigate disease. Maintenance phase After the 5–7 day loading phase, muscle creatine stores are fully saturated and supplementation only needs to cover the amount of creatine broken down per day. This maintenance dose was originally reported to be around 2–3 g/day (or 0.03 g/kg/day); however, some studies have suggested a maintenance dose of 3–5 g/day to maintain saturated muscle creatine. Absorption Endogenous serum or plasma creatine concentrations in healthy adults are normally in a range of 2–12 mg/L. A single 5 gram (5000 mg) oral dose in healthy adults results in a peak plasma creatine level of approximately 120 mg/L at 1–2 hours post-ingestion. Creatine has a fairly short elimination half-life, averaging just less than 3 hours, so to maintain an elevated plasma level it would be necessary to take small oral doses every 3–6 hours throughout the day. Exercise and sport Creatine supplements are marketed in ethyl ester, gluconate, monohydrate, and nitrate forms. Creatine supplementation for sporting performance enhancement is considered safe for short-term use, but there is a lack of safety data for long-term use, or for use in children and adolescents. Some athletes choose to cycle on and off creatine. A 2018 review article in the Journal of the International Society of Sports Nutrition said that creatine monohydrate might help with energy availability for high-intensity exercise. Creatine use can increase maximum power and performance in high-intensity anaerobic repetitive work (periods of work and rest) by 5% to 15%. Creatine has no significant effect on aerobic endurance, though it will increase power during short sessions of high-intensity aerobic exercise. Creatine has been shown to improve recovery and work capacity in athletes, and this broad applicability has attracted considerable interest over the past decade. A survey of 21,000 college athletes showed that 14% of them take creatine supplements to try to improve performance. Athletes who supplement with creatine have been shown to perform better than comparable athletes who do not.
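The dosing and absorption figures above lend themselves to a small worked example. The following Python sketch is illustrative only: it applies the 0.3 g/kg/day loading rule and the roughly 3-hour elimination half-life quoted above; the function names and the 80 kg example body weight are assumptions for illustration, not part of any cited protocol.

```python
# Illustrative arithmetic based on the figures quoted above (not dosing guidance).
# Assumed values: 80 kg body weight, 3 h elimination half-life.

def loading_doses(body_weight_kg: float) -> list[float]:
    """Split the suggested 0.3 g/kg/day loading amount into 4 equal doses."""
    total_per_day = 0.3 * body_weight_kg
    return [round(total_per_day / 4, 1)] * 4

def plasma_fraction_remaining(hours: float, half_life_h: float = 3.0) -> float:
    """Fraction of a dose still in plasma after `hours`, assuming first-order elimination."""
    return 0.5 ** (hours / half_life_h)

if __name__ == "__main__":
    print(loading_doses(80))                             # [6.0, 6.0, 6.0, 6.0] grams
    # After 6 hours (the upper end of the suggested re-dosing interval),
    # roughly a quarter of a dose remains in plasma:
    print(round(plasma_fraction_remaining(6.0), 2))      # 0.25
```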
Non-athletes report taking creatine supplements to improve appearance. Research Cognitive performance Creatine is sometimes reported to have a beneficial effect on brain function and cognitive processing, although the evidence is difficult to interpret systematically and the appropriate dosing is unknown. The greatest effect appears to be in individuals who are stressed (due, for instance, to sleep deprivation) or cognitively impaired. A 2018 systematic review found that "generally, there was evidence that short term memory and intelligence/reasoning may be improved by creatine administration", whereas for other cognitive domains "the results were conflicting". Another 2023 review initially found evidence of improved memory function; however, it was later determined that faulty statistics ("double counting") had produced the apparent statistical significance, and after this was corrected the effect was only significant in older adults. A 2023 review study "...supported claims that creatine supplementation can increases [sic] brain creatine content but also demonstrated somewhat equivocal results for effects on cognition. It does, however, provide evidence to suggest that more research is required with stressed populations, as supplementation does appear to significantly affect brain content." Muscular disease A meta-analysis found that creatine treatment increased muscle strength in muscular dystrophies, and potentially improved functional performance. Creatine treatment does not appear to improve muscle strength in people who have metabolic myopathies. High doses of creatine lead to increased muscle pain and an impairment in activities of daily living when taken by people who have McArdle disease. According to a clinical study focusing on people with various muscular dystrophies, using a pure form of creatine monohydrate can be beneficial in rehabilitation after injuries and immobilization. Mitochondrial diseases Parkinson's disease Creatine's impact on mitochondrial function has led to research on its efficacy and safety for slowing Parkinson's disease. As of 2014, the evidence did not provide a reliable foundation for treatment decisions, due to risk of bias, small sample sizes, and the short duration of trials. Huntington's disease Several primary studies on Huntington's disease have been completed, but no systematic review has yet been published. ALS It is ineffective as a treatment for amyotrophic lateral sclerosis. Testosterone A 2021 systematic review of studies found that "the current body of evidence does not indicate that creatine supplementation increases total testosterone, free testosterone, DHT or causes hair loss/baldness". Adverse effects Side effects include: weight gain due to extra water retention in the muscle, muscle cramps, strains or pulls, upset stomach, diarrhea, and dizziness. One well-documented effect of creatine supplementation is weight gain within the first week of the supplement schedule, likely attributable to greater water retention due to the increased muscle creatine concentrations by means of osmosis. A 2009 systematic review discredited concerns that creatine supplementation could affect hydration status and heat tolerance and lead to muscle cramping and diarrhea. Although weight gain due to water retention and muscle cramps are two seemingly "common" side effects, newer research indicates that they are likely not the result of creatine usage. In addition, the initial water retention is attributed to short-term creatine use (the "loading" phase).
Studies have shown that creatine usage does not necessarily affect total body water relative to muscle mass in the long term. Renal function A 2019 systematic review published by the National Kidney Foundation investigated whether creatine supplementation had adverse effects on renal function. They identified 15 studies from 1997 to 2013 that looked at standard creatine loading and maintenance protocols of 4–20 g/day of creatine versus placebo. They utilized serum creatinine, creatinine clearance, and serum urea levels as measures of renal damage. While in general creatine supplementation resulted in slightly elevated creatinine levels that remained within normal limits, supplementation did not induce renal damage (P < 0.001). Special populations covered in the 2019 systematic review included type 2 diabetic patients, post-menopausal women, bodybuilders, athletes, and resistance-trained populations. The review also discussed three case studies in which there were reports that creatine affected renal function. In a joint statement on performance-enhancing nutrition strategies by the American College of Sports Medicine, the Academy of Nutrition and Dietetics, and Dietitians of Canada, creatine was included in the list of ergogenic aids, and renal function was not listed as a concern for its use. The most recent position stand on creatine from the Journal of the International Society of Sports Nutrition states that creatine is safe to take in healthy populations from infants to the elderly to performance athletes. They also state that long-term (5 years) use of creatine has been considered safe. The kidneys themselves require phosphocreatine and creatine for normal physiological function, and indeed the kidneys express significant amounts of creatine kinases (the BB-CK and u-mtCK isoenzymes). At the same time, the first of the two steps of endogenous creatine synthesis takes place in the kidneys themselves. Patients with kidney disease and those undergoing dialysis treatment generally show significantly lower levels of creatine in their organs, since the pathological kidneys are hampered both in their creatine synthesis capability and in their back-resorption of creatine from the urine in the distal tubules. In addition, dialysis patients lose creatine due to washout by the dialysis treatment itself and thus become chronically creatine depleted. This situation is exacerbated by the fact that dialysis patients generally consume less meat and fish, the alimentary sources of creatine. Therefore, to alleviate chronic creatine depletion in these patients and allow organs to replenish their stores of creatine, it was proposed in a 2017 article in Medical Hypotheses to supplement dialysis patients with extra creatine, preferably by intra-dialytic administration. Such creatine supplementation in dialysis patients is expected to significantly improve their health and quality of life by improving muscle strength, coordination of movement and brain function, and to alleviate the depression and chronic fatigue that are common in these patients. Safety Contamination A 2011 survey of 33 supplements commercially available in Italy found that over 50% of them exceeded the European Food Safety Authority recommendations in at least one contaminant. The most prevalent of these contaminants was creatinine, a breakdown product of creatine also produced by the body. Creatinine was present in higher concentrations than the European Food Safety Authority recommendations in 44% of the samples.
About 15% of the samples had detectable levels of dihydro-1,3,5-triazine or a high dicyandiamide concentration. Heavy metals contamination was not found to be a concern, with only minor levels of mercury being detectable. Two studies reviewed in 2007 found no impurities. Food and cooking When creatine is mixed with protein and sugar at high temperatures (above 148 °C), the resulting reaction produces carcinogenic heterocyclic amines (HCAs). Such a reaction happens when grilling or pan-frying meat. Creatine content (as a percentage of crude protein) can be used as an indicator of meat quality. Dietary considerations Creatine-monohydrate is suitable for vegetarians and vegans, as the raw materials used for the production of the supplement have no animal origin. See also Beta-Alanine Creatine methyl ester References External links Creatine bound to proteins in the PDB Alpha-Amino acids Bodybuilding supplements Dietary supplements Ergogenic aids Guanidines Myostatin inhibitors
Creatine
[ "Chemistry", "Biology" ]
3,974
[ "Biochemistry", "Guanidines", "Functional groups", "Exercise biochemistry" ]
331,448
https://en.wikipedia.org/wiki/Color%20depth
Color depth, also known as bit depth, is either the number of bits used to indicate the color of a single pixel, or the number of bits used for each color component of a single pixel. When referring to a pixel, the concept can be defined as bits per pixel (bpp). When referring to a color component, the concept can be defined as bits per component, bits per channel, bits per color (all three abbreviated bpc), and also bits per pixel component, bits per color channel or bits per sample (bps). Modern standards tend to use bits per component, but historical lower-depth systems used bits per pixel more often. Color depth is only one aspect of color representation, expressing the precision with which the amount of each primary can be expressed; the other aspect is how broad a range of colors can be expressed (the gamut). The definition of both color precision and gamut is accomplished with a color encoding specification which assigns a digital code value to a location in a color space. The number of bits of resolved intensity in a color channel is also known as radiometric resolution, especially in the context of satellite images. Comparison Indexed color With the relatively low color depth, the stored value is typically a number representing the index into a color map or palette (a form of vector quantization). The colors available in the palette itself may be fixed by the hardware or modifiable by software. Modifiable palettes are sometimes referred to as pseudocolor palettes. Old graphics chips, particularly those used in home computers and video game consoles, often have the ability to use a different palette per sprites and tiles in order to increase the maximum number of simultaneously displayed colors, while minimizing use of then-expensive memory (and bandwidth). For example, in the ZX Spectrum the picture is stored in a two-color format, but these two colors can be separately defined for each rectangular block of 8×8 pixels. The palette itself has a color depth (number of bits per entry). While the best VGA systems only offered an 18-bit (262,144 color) palette from which colors could be chosen, all color Macintosh video hardware offered a 24-bit (16 million color) palette. 24-bit palettes are nearly universal on any recent hardware or file format using them. If instead the color can be directly figured out from the pixel values, it is "direct color". Palettes were rarely used for depths greater than 12 bits per pixel, as the memory consumed by the palette would exceed the necessary memory for direct color on every pixel. List of common depths 1-bit color 2 colors, often black and white direct color. Sometimes 1 meant black and 0 meant white, the inverse of modern standards. Most of the first graphics displays were of this type, the X Window System was developed for such displays, and this was assumed for a 3M computer. In the late 1980s there were professional displays with resolutions up to 300 dpi (the same as a contemporary laser printer) but color proved more popular. 2-bit color 4 colors, usually from a selection of fixed palettes. Gray-scale early NeXTstation, color Macintoshes, Atari ST medium resolution. 3-bit color 8 colors, almost always all combinations of full-intensity red, green, and blue. Many early home computers with TV displays, including the ZX Spectrum and BBC Micro. 4-bit color 16 colors, usually from a selection of fixed palettes. Used by IBM CGA (at the lowest resolution), EGA, and by the least common denominator VGA standard at higher resolution. 
Color Macintoshes, Atari ST low resolution, Commodore 64, and Amstrad CPCs also supported 4-bit color. 5-bit color 32 colors from a programmable palette, used by the Original Amiga chipset. 6-bit color 64 colors. Used by the Master System, Enhanced Graphics Adapter, GIME for TRS-80 Color Computer 3, Pebble Time smartwatch (64 color e-paper display), and Parallax Propeller using the reference VGA circuit. 8-bit color 256 colors, usually from a fully-programmable palette: Most early color Unix workstations, Super VGA, color Macintosh, Atari TT, Amiga AGA chipset, Falcon030, Acorn Archimedes. Both X and Windows provided elaborate systems to try to allow each program to select its own palette, often resulting in incorrect colors in any window other than the one with focus. Some systems placed a color cube in the palette for a direct-color system (and so all programs would use the same palette). Usually fewer levels of blue were provided than others, since the normal human eye is less sensitive to the blue component than to either red or green (two thirds of the eye's receptors process the longer wavelengths). Popular sizes were: 6×6×6 (web-safe colors), leaving 40 colors for a gray ramp, or for programmable palette entries. 8×8×4. 3 bits of R and G, 2 bits of B, the correct value can be computed from a color without using multiplication. Used, among others, in the MSX2 system series of computers. a 6×7×6 color cube, leaving 4 colors for a programmable palette or grays. a 6×8×5 cube, leaving 16 colors for a programmable palette or grays. 12-bit color 4,096 colors, usually from a fully-programmable palette (though it was often set to a 16×16×16 color cube). Some Silicon Graphics systems, Color NeXTstation systems, and Amiga systems in HAM mode have this color depth. RGBA4444, a related 16 bpp representation providing the color cube and 16 levels of transparency, is a common texture format in mobile graphics. High color (15/16-bit) In high-color systems, two bytes (16 bits) are stored for each pixel. Most often, each component (R, G, and B) is assigned 5 bits, plus one unused bit (or used for a mask channel or to switch to indexed color); this allows 32,768 colors to be represented. However, an alternate assignment which reassigns the unused bit to the G channel allows 65,536 colors to be represented, but without transparency. These color depths are sometimes used in small devices with a color display, such as mobile phones, and are sometimes considered sufficient to display photographic images. Occasionally 4 bits per color are used plus 4 bits for alpha, giving 4,096 colors. Among the first hardware to use the standard were the Sharp X68000 and IBM's Extended Graphics Array (XGA). The term "high color" has recently been used to mean color depths greater than 24 bits. 18-bit Almost all of the least expensive LCDs (such as typical twisted nematic types) provide 18-bit color (64×64×64 = 262,144 combinations) to achieve faster color transition times, and use either dithering or frame rate control to approximate 24-bit-per-pixel true color, or throw away 6 bits of color information entirely. More expensive LCDs (typically IPS) can display 24-bit color depth or greater. True color (24-bit) 24 bits almost always use 8 bits each of R, G, and B (8 bpc). As of 2018, 24-bit color depth is used by virtually every computer and phone display and the vast majority of image storage formats. Almost all cases of 32 bits per pixel assigns 24 bits to the color, and the remaining 8 are the alpha channel or unused. 
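As a concrete illustration of the bit-depth arithmetic above, the sketch below shows how the number of representable colors follows from bits per pixel, how an 8×8×4 ("RGB332") or high-color (5-6-5) value can be packed with shifts alone (no multiplication, as noted above), and why a palette stops paying off beyond roughly 12 bits per pixel for a typical image. It is a minimal sketch in plain Python; the function names are ours and nothing here is tied to a particular graphics API.

```python
# Illustrative sketch of the bit-depth arithmetic described above.

def num_colors(bits_per_pixel: int) -> int:
    """Number of distinct values representable at a given color depth."""
    return 2 ** bits_per_pixel

def pack_rgb332(r: int, g: int, b: int) -> int:
    """Pack 8-bit R, G, B into the 3-3-2 direct-color byte using only shifts."""
    return (r >> 5) << 5 | (g >> 5) << 2 | (b >> 6)

def pack_rgb565(r: int, g: int, b: int) -> int:
    """Pack 8-bit R, G, B into a 16-bit high-color word (5-6-5)."""
    return (r >> 3) << 11 | (g >> 2) << 5 | (b >> 3)

def palette_vs_direct(bits_per_pixel: int, width: int, height: int,
                      entry_bytes: int = 3) -> tuple[int, int]:
    """Bytes used by a full palette versus by direct color for a width x height image."""
    palette_bytes = num_colors(bits_per_pixel) * entry_bytes
    direct_bytes = width * height * entry_bytes
    return palette_bytes, direct_bytes

if __name__ == "__main__":
    print(num_colors(8), num_colors(16), num_colors(24))   # 256 65536 16777216
    print(hex(pack_rgb565(255, 255, 0)))                    # bright yellow -> 0xffe0
    # At 16 bpp the palette alone (65,536 entries) already costs more than
    # storing a small image directly, which is why deep palettes were rare:
    print(palette_vs_direct(16, 64, 64))                    # (196608, 12288)
```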
2^24 gives 16,777,216 color variations. The human eye can discriminate up to ten million colors, and since the gamut of a display is smaller than the range of human vision, this means this should cover that range with more detail than can be perceived. However, displays do not evenly distribute the colors in human perception space, so humans can see the changes between some adjacent colors as color banding. Monochromatic images set all three channels to the same value, resulting in only 256 different colors; some software attempts to dither the gray level into the color channels to increase this, although in modern software this is more often used for subpixel rendering to increase the space resolution on LCD screens where the colors have slightly different positions. The DVD-Video and Blu-ray Disc standards support a bit depth of 8 bits per color in YCbCr with 4:2:0 chroma subsampling. YCbCr can be losslessly converted to RGB. MacOS refers to 24-bit colour as "millions of colours". The term true colour is sometimes used to mean what this article is calling direct colour. It is also often used to refer to all color depths greater than or equal to 24. Deep color (30-bit) Deep color consists of a billion or more colors. 2^30 is 1,073,741,824. Usually this is 10 bits each of red, green, and blue (10 bpc). If an alpha channel of the same size is added then each pixel takes 40 bits. Some earlier systems placed three 10-bit channels in a 32-bit word, with 2 bits unused (or used as a 4-level alpha channel); the Cineon file format, for example, used this. Some SGI systems had 10- (or more) bit digital-to-analog converters for the video signal and could be set up to interpret data stored this way for display. BMP files define this as one of their formats, and it is called "HiColor" by Microsoft. Video cards with 10 bits per component started coming to market in the late 1990s. An early example was the Radius ThunderPower card for the Macintosh, which included extensions for QuickDraw and Adobe Photoshop plugins to support editing 30-bit images. Some vendors call their 24-bit color depth with FRC panels 30-bit panels; however, true deep color displays have 10-bit or more color depth without FRC. The HDMI 1.3 specification defines a bit depth of 30 bits (as well as 36 and 48 bit depths). In that regard, the Nvidia Quadro graphics cards manufactured after 2006 support 30-bit deep color, as do Pascal or later GeForce and Titan cards when paired with the Studio Driver, and some models of the Radeon HD 5900 series such as the HD 5970. The ATI FireGL V7350 graphics card supports 40- and 64-bit pixels (30 and 48 bit color depth with an alpha channel). The DisplayPort specification also supports color depths greater than 24 bpp in version 1.3 through "VESA Display Stream Compression, which uses a visually lossless low-latency algorithm based on predictive DPCM and YCoCg-R color space and allows increased resolutions and color depths and reduced power consumption." At WinHEC 2008, Microsoft announced that color depths of 30 bits and 48 bits would be supported in Windows 7, along with the wide color gamut scRGB. High Efficiency Video Coding (HEVC or H.265) defines the Main 10 profile, which allows for 8 or 10 bits per sample with 4:2:0 chroma subsampling. The Main 10 profile was added at the October 2012 HEVC meeting based on proposal JCTVC-K0109, which proposed that a 10-bit profile be added to HEVC for consumer applications. The proposal stated that this was to allow for improved video quality and to support the Rec.
2020 color space that will be used by UHDTV. The second version of HEVC has five profiles that allow for a bit depth of 8 bits to 16 bits per sample. As of 2020, some smartphones have started using 30-bit color depth, such as the OnePlus 8 Pro, Oppo Find X2 & Find X2 Pro, Sony Xperia 1 II, Xiaomi Mi 10 Ultra, Motorola Edge+, ROG Phone 3 and Sharp Aquos Zero 2. 36-bit Using 12 bits per color channel produces 36 bits, 68,719,476,736 colors. If an alpha channel of the same size is added then there are 48 bits per pixel. 48-bit Using 16 bits per color channel produces 48 bits, 281,474,976,710,656 colors. If an alpha channel of the same size is added then there are 64 bits per pixel. Image editing software such as Adobe Photoshop started using 16 bits per channel fairly early in order to reduce the quantization on intermediate results (i.e. if an operation is divided by 4 and then multiplied by 4, it would lose the bottom 2 bits of 8-bit data, but if 16 bits were used it would lose none of the 8-bit data). In addition, digital cameras are able to produce 10 or 12 bits per channel in their raw data; as 16 bits is the smallest addressable unit larger than that, using it would make it easier to manipulate the raw data. Expansions High dynamic range and wide gamut Some systems started using those bits for numbers outside the 0–1 range rather than for increasing the resolution. Numbers greater than 1 were for colors brighter than the display could show, as in high-dynamic-range imaging (HDRI). Negative numbers can increase the gamut to cover all possible colors, and for storing the results of filtering operations with negative filter coefficients. The Pixar Image Computer used 12 bits to store numbers in the range [-1.5, 2.5), with 2 bits for the integer portion and 10 for the fraction. The Cineon imaging system used 10-bit professional video displays with the video hardware adjusted so that a value of 95 was black and 685 was white. The amplified signal tended to reduce the lifetime of the CRT. Linear color space and floating point More bits also encouraged the storage of light as linear values, where the number directly corresponds to the amount of light emitted. Linear levels makes calculation of computer graphics much easier. However, linear color results in disproportionately more samples near white and fewer near black, so the quality of 16-bit linear is about equal to 12-bit sRGB. Floating point numbers can represent linear light levels spacing the samples semi-logarithmically. Floating point representations also allow for drastically larger dynamic ranges as well as negative values. Most systems first supported 32-bit per channel single-precision, which far exceeded the accuracy required for most applications. In 1999, Industrial Light & Magic released the open standard image file format OpenEXR which supported 16-bit-per-channel half-precision floating-point numbers. At values near 1.0, half precision floating point values have only the precision of an 11-bit integer value, leading some graphics professionals to reject half-precision in situations where the extended dynamic range is not needed. More than three primaries Virtually all television displays and computer displays form images by varying the strength of just three primary colors: red, green, and blue. For example, bright yellow is formed by roughly equal red and green contributions, with no blue contribution. 
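The quantization argument made above for 16-bit-per-channel editing (divide by 4, then multiply by 4) is easy to demonstrate. The short sketch below is a minimal illustration under the stated assumption of integer channel values; it is not drawn from any particular imaging library.

```python
# Demonstrates why intermediate results keep more detail at 16 bits per channel.

def halve_then_double_8bit(value_8bit: int) -> int:
    """Divide by 4 and multiply by 4 using 8-bit integer intermediates."""
    return (value_8bit // 4) * 4

def halve_then_double_16bit(value_8bit: int) -> int:
    """Same operation, but promoted to a 16-bit scale first and converted back."""
    value_16bit = value_8bit * 257          # 0..255 -> 0..65535
    result_16bit = (value_16bit // 4) * 4   # bits lost here are below 8-bit precision
    return round(result_16bit / 257)

if __name__ == "__main__":
    sample = 203
    print(halve_then_double_8bit(sample))   # 200 -- the bottom 2 bits are lost
    print(halve_then_double_16bit(sample))  # 203 -- the original 8-bit value survives
```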
For storing and manipulating images, alternative ways of expanding the traditional triangle exist: One can convert image coding to use fictitious primaries, that are not physically possible but that have the effect of extending the triangle to enclose a much larger color gamut. An equivalent, simpler change is to allow negative numbers in color channels, so that the represented colors can extend out of the color triangle formed by the primaries. However these only extend the colors that can be represented in the image encoding; neither trick extends the gamut of colors that can actually be rendered on a display device. Supplementary colors can widen the color gamut of a display, since it is no longer limited to the interior of a triangle formed by three primaries at its corners, e.g. the CIE 1931 color space. Recent technologies such as Texas Instruments's BrilliantColor augment the typical red, green, and blue channels with up to three other primaries: cyan, magenta, and yellow. Cyan would be indicated by negative values in the red channel, magenta by negative values in the green channel, and yellow by negative values in the blue channel, validating the use of otherwise fictitious negative numbers in the color channels. Mitsubishi and Samsung (among others) use BrilliantColor in some of their TV sets to extend the range of displayable colors. The Sharp Aquos line of televisions has introduced Quattron technology, which augments the usual RGB pixel components with a yellow subpixel. However, formats and media that allow or make use of the extended color gamut are at present extremely rare. Because humans are overwhelmingly trichromats or dichromats one might suppose that adding a fourth "primary" color could provide no practical benefit. However humans can see a broader range of colors than a mixture of three colored lights can display. The deficit of colors is particularly noticeable in saturated shades of bluish green (shown as the left upper grey part of the horseshoe in the diagram) of RGB displays: Most humans can see more vivid blue-greens than any color video screen can display. See also Footnotes References Television technology
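As a footnote to the deep color formats described earlier, the sketch below shows one plausible way to pack three 10-bit channels into a 32-bit word with 2 bits left unused, as in the Cineon-style layout mentioned above. The exact bit order chosen is an assumption for illustration, not the layout of any particular file format.

```python
# Packs three 10-bit channels into one 32-bit word, leaving the top 2 bits unused.
# The channel order chosen here (R high, B low) is an illustrative assumption.

def pack_rgb30(r10: int, g10: int, b10: int) -> int:
    assert all(0 <= c < 1024 for c in (r10, g10, b10)), "channels must be 10-bit"
    return (r10 << 20) | (g10 << 10) | b10

def unpack_rgb30(word: int) -> tuple[int, int, int]:
    return (word >> 20) & 0x3FF, (word >> 10) & 0x3FF, word & 0x3FF

if __name__ == "__main__":
    w = pack_rgb30(1023, 512, 0)
    print(hex(w))            # 0x3ff80000
    print(unpack_rgb30(w))   # (1023, 512, 0)
```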
Color depth
[ "Technology" ]
3,566
[ "Information and communications technology", "Television technology" ]
331,535
https://en.wikipedia.org/wiki/Nucleic%20acid%20sequence
A nucleic acid sequence is a succession of bases within the nucleotides forming alleles within a DNA (using GACT) or RNA (GACU) molecule. This succession is denoted by a series of a set of five different letters that indicate the order of the nucleotides. By convention, sequences are usually presented from the 5' end to the 3' end. For DNA, with its double helix, there are two possible directions for the notated sequence; of these two, the sense strand is used. Because nucleic acids are normally linear (unbranched) polymers, specifying the sequence is equivalent to defining the covalent structure of the entire molecule. For this reason, the nucleic acid sequence is also termed the primary structure. The sequence represents genetic information. Biological deoxyribonucleic acid represents the information which directs the functions of an organism. Nucleic acids also have a secondary structure and tertiary structure. Primary structure is sometimes mistakenly referred to as "primary sequence". However there is no parallel concept of secondary or tertiary sequence. Nucleotides Nucleic acids consist of a chain of linked units called nucleotides. Each nucleotide consists of three subunits: a phosphate group and a sugar (ribose in the case of RNA, deoxyribose in DNA) make up the backbone of the nucleic acid strand, and attached to the sugar is one of a set of nucleobases. The nucleobases are important in base pairing of strands to form higher-level secondary and tertiary structures such as the famed double helix. The possible letters are A, C, G, and T, representing the four nucleotide bases of a DNA strand – adenine, cytosine, guanine, thymine – covalently linked to a phosphodiester backbone. In the typical case, the sequences are printed abutting one another without gaps, as in the sequence AAAGTCTGAC, read left to right in the 5' to 3' direction. With regards to transcription, a sequence is on the coding strand if it has the same order as the transcribed RNA. One sequence can be complementary to another sequence, meaning that they have the base on each position in the complementary (i.e., A to T, C to G) and in the reverse order. For example, the complementary sequence to TTAC is GTAA. If one strand of the double-stranded DNA is considered the sense strand, then the other strand, considered the antisense strand, will have the complementary sequence to the sense strand. Notation While A, T, C, and G represent a particular nucleotide at a position, there are also letters that represent ambiguity which are used when more than one kind of nucleotide could occur at that position. The rules of the International Union of Pure and Applied Chemistry (IUPAC) are as follows: For example, W means that either an adenine or a thymine could occur in that position without impairing the sequence's functionality. These symbols are also valid for RNA, except with U (uracil) replacing T (thymine). Apart from adenine (A), cytosine (C), guanine (G), thymine (T) and uracil (U), DNA and RNA also contain bases that have been modified after the nucleic acid chain has been formed. In DNA, the most common modified base is 5-methylcytidine (m5C). In RNA, there are many modified bases, including pseudouridine (Ψ), dihydrouridine (D), inosine (I), ribothymidine (rT) and 7-methylguanosine (m7G). Hypoxanthine and xanthine are two of the many bases created through mutagen presence, both of them through deamination (replacement of the amine-group with a carbonyl-group). 
Hypoxanthine is produced from adenine, and xanthine is produced from guanine. Similarly, deamination of cytosine results in uracil. Example of comparing and determining the % difference between two nucleotide sequences AATCCGCTAG AAACCCTTAG Given the two 10-nucleotide sequences, line them up and compare the differences between them. Calculate the percent difference by taking the number of differences between the DNA bases divided by the total number of nucleotides. In this case there are three differences in the 10 nucleotide sequence. Thus there is a 30% difference. Biological significance In biological systems, nucleic acids contain information which is used by a living cell to construct specific proteins. The sequence of nucleobases on a nucleic acid strand is translated by cell machinery into a sequence of amino acids making up a protein strand. Each group of three bases, called a codon, corresponds to a single amino acid, and there is a specific genetic code by which each possible combination of three bases corresponds to a specific amino acid. The central dogma of molecular biology outlines the mechanism by which proteins are constructed using information contained in nucleic acids. DNA is transcribed into mRNA molecules, which travel to the ribosome where the mRNA is used as a template for the construction of the protein strand. Since nucleic acids can bind to molecules with complementary sequences, there is a distinction between "sense" sequences which code for proteins, and the complementary "antisense" sequence, which is by itself nonfunctional, but can bind to the sense strand. Sequence determination DNA sequencing is the process of determining the nucleotide sequence of a given DNA fragment. The sequence of the DNA of a living thing encodes the necessary information for that living thing to survive and reproduce. Therefore, determining the sequence is useful in fundamental research into why and how organisms live, as well as in applied subjects. Because of the importance of DNA to living things, knowledge of a DNA sequence may be useful in practically any biological research. For example, in medicine it can be used to identify, diagnose and potentially develop treatments for genetic diseases. Similarly, research into pathogens may lead to treatments for contagious diseases. Biotechnology is a burgeoning discipline, with the potential for many useful products and services. RNA is not sequenced directly. Instead, it is copied to a DNA by reverse transcriptase, and this DNA is then sequenced. Current sequencing methods rely on the discriminatory ability of DNA polymerases, and therefore can only distinguish four bases. An inosine (created from adenosine during RNA editing) is read as a G, and 5-methyl-cytosine (created from cytosine by DNA methylation) is read as a C. With current technology, it is difficult to sequence small amounts of DNA, as the signal is too weak to measure. This is overcome by polymerase chain reaction (PCR) amplification. Digital representation Once a nucleic acid sequence has been obtained from an organism, it is stored in silico in digital format. Digital genetic sequences may be stored in sequence databases, be analyzed (see Sequence analysis below), be digitally altered and be used as templates for creating new actual DNA using artificial gene synthesis. Sequence analysis Digital genetic sequences may be analyzed using the tools of bioinformatics to attempt to determine its function. 
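The comparison procedure just described, together with the complement rule mentioned earlier (A pairs with T, C with G, with the complementary sequence read in reverse), is simple to express in code. The sketch below is a minimal illustration in Python; the function names are ours, and it ignores the IUPAC ambiguity codes discussed above.

```python
# Minimal sketch of the two operations described in the text:
# reverse complement and percent difference between equal-length sequences.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq: str) -> str:
    """Complement each base and reverse the order (DNA alphabet only)."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def percent_difference(seq1: str, seq2: str) -> float:
    """Differences divided by total positions, for two aligned sequences of equal length."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be the same length")
    mismatches = sum(a != b for a, b in zip(seq1, seq2))
    return 100.0 * mismatches / len(seq1)

if __name__ == "__main__":
    print(reverse_complement("TTAC"))                      # GTAA, as in the example above
    print(percent_difference("AATCCGCTAG", "AAACCCTTAG"))  # 30.0
```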
Genetic testing The DNA in an organism's genome can be analyzed to diagnose vulnerabilities to inherited diseases, and can also be used to determine a child's paternity (genetic father) or a person's ancestry. Normally, every person carries two variations of every gene, one inherited from their mother, the other inherited from their father. The human genome is believed to contain around 20,000–25,000 genes. In addition to studying chromosomes to the level of individual genes, genetic testing in a broader sense includes biochemical tests for the possible presence of genetic diseases, or mutant forms of genes associated with increased risk of developing genetic disorders. Genetic testing identifies changes in chromosomes, genes, or proteins. Usually, testing is used to find changes that are associated with inherited disorders. The results of a genetic test can confirm or rule out a suspected genetic condition or help determine a person's chance of developing or passing on a genetic disorder. Several hundred genetic tests are currently in use, and more are being developed. Sequence alignment In bioinformatics, a sequence alignment is a way of arranging the sequences of DNA, RNA, or protein to identify regions of similarity that may be due to functional, structural, or evolutionary relationships between the sequences. If two sequences in an alignment share a common ancestor, mismatches can be interpreted as point mutations and gaps as insertion or deletion mutations (indels) introduced in one or both lineages in the time since they diverged from one another. In sequence alignments of proteins, the degree of similarity between amino acids occupying a particular position in the sequence can be interpreted as a rough measure of how conserved a particular region or sequence motif is among lineages. The absence of substitutions, or the presence of only very conservative substitutions (that is, the substitution of amino acids whose side chains have similar biochemical properties) in a particular region of the sequence, suggest that this region has structural or functional importance. Although DNA and RNA nucleotide bases are more similar to each other than are amino acids, the conservation of base pairs can indicate a similar functional or structural role. Computational phylogenetics makes extensive use of sequence alignments in the construction and interpretation of phylogenetic trees, which are used to classify the evolutionary relationships between homologous genes represented in the genomes of divergent species. The degree to which sequences in a query set differ is qualitatively related to the sequences' evolutionary distance from one another. Roughly speaking, high sequence identity suggests that the sequences in question have a comparatively young most recent common ancestor, while low identity suggests that the divergence is more ancient. This approximation, which reflects the "molecular clock" hypothesis that a roughly constant rate of evolutionary change can be used to extrapolate the elapsed time since two genes first diverged (that is, the coalescence time), assumes that the effects of mutation and selection are constant across sequence lineages. Therefore, it does not account for possible differences among organisms or species in the rates of DNA repair or the possible functional conservation of specific regions in a sequence. 
(In the case of nucleotide sequences, the molecular clock hypothesis in its most basic form also discounts the difference in acceptance rates between silent mutations that do not alter the meaning of a given codon and other mutations that result in a different amino acid being incorporated into the protein.) More statistically accurate methods allow the evolutionary rate on each branch of the phylogenetic tree to vary, thus producing better estimates of coalescence times for genes. Sequence motifs Frequently the primary structure encodes motifs that are of functional importance. Some examples of sequence motifs are: the C/D and H/ACA boxes of snoRNAs, Sm binding site found in spliceosomal RNAs such as U1, U2, U4, U5, U6, U12 and U3, the Shine-Dalgarno sequence, the Kozak consensus sequence and the RNA polymerase III terminator. Sequence entropy In bioinformatics, a sequence entropy, also known as sequence complexity or information profile, is a numerical sequence providing a quantitative measure of the local complexity of a DNA sequence, independently of the direction of processing. The manipulations of the information profiles enable the analysis of the sequences using alignment-free techniques, such as for example in motif and rearrangements detection. See also Gene structure Nucleic acid structure determination Quaternary numeral system Single-nucleotide polymorphism (SNP) References External links A bibliography on features, patterns, correlations in DNA and protein texts DNA Molecular biology Nucleic acids RNA
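The notion of a sequence entropy or information profile described above can be sketched as a sliding-window Shannon entropy over the four bases. The following is one minimal, assumption-laden way to compute such a profile; the window size and the use of plain Shannon entropy are illustrative choices, and real alignment-free tools use more sophisticated complexity measures.

```python
# A minimal information profile: Shannon entropy (in bits) of base frequencies
# inside a sliding window along the sequence.
from collections import Counter
from math import log2

def window_entropy(window: str) -> float:
    """Shannon entropy of the base composition of one window, in bits."""
    counts = Counter(window)
    total = len(window)
    return -sum((n / total) * log2(n / total) for n in counts.values())

def information_profile(seq: str, window: int = 8) -> list[float]:
    """Entropy of every window of the given size, left to right."""
    return [round(window_entropy(seq[i:i + window]), 3)
            for i in range(len(seq) - window + 1)]

if __name__ == "__main__":
    # A low-complexity run scores near 0 bits; a mixed region approaches 2 bits.
    print(information_profile("AAAAAAAAGTCAGTCA", window=8))
```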
Nucleic acid sequence
[ "Chemistry", "Biology" ]
2,458
[ "Biochemistry", "Biomolecules by chemical classification", "Molecular biology", "Nucleic acids" ]
331,560
https://en.wikipedia.org/wiki/St.%20Elmo%27s%20fire
St. Elmo's fire (also called witchfire or witch's fire) is a weather phenomenon in which luminous plasma is created by a corona discharge from a rod-like object such as a mast, spire, chimney, or animal horn in an atmospheric electric field. It has also been observed on the leading edges of aircraft, as in the case of British Airways Flight 009, and by US Air Force pilots. The intensity of the effect, a blue or violet glow around the object, often accompanied by a hissing or buzzing sound, is proportional to the strength of the electric field and therefore noticeable primarily during thunderstorms or volcanic eruptions. St. Elmo's fire is named after St. Erasmus of Formia (also known as St. Elmo), the patron saint of sailors. The phenomenon, which can warn of an imminent lightning strike, was regarded by sailors with awe and sometimes considered to be a good omen. Cause St. Elmo's fire is a reproducible and demonstrable form of plasma. The electric field around the affected object causes ionization of the air molecules, producing a faint glow easily visible in low-light conditions. Conditions that can generate St. Elmo's fire are present during thunderstorms, when high-voltage differentials are present between clouds and the ground underneath. A sufficiently strong local electric field is required to begin a discharge in moist air. The magnitude of the electric field depends greatly on the geometry (shape and size) of the object. Sharp points lower the necessary voltage because electric fields are more concentrated in areas of high curvature, so discharges preferentially occur and are more intense at the ends of pointed objects. The nitrogen and oxygen in the Earth's atmosphere cause St. Elmo's fire to fluoresce with blue or violet light; this is similar to the mechanism that causes neon lights to glow, albeit at a different colour due to the different gas involved. In 1751, Benjamin Franklin hypothesized that a pointed iron rod would light up at the tip during a lightning storm, similar in appearance to St. Elmo's fire. In an August 2020 paper, researchers in MIT's Department of Aeronautics and Astronautics demonstrated that St. Elmo's fire behaves differently on airborne objects than on grounded structures. They showed that electrically isolated structures accumulate charge more effectively in high wind, in contrast to the corona discharge observed in grounded structures. Research Vacuum ultraviolet light Researchers at Rutgers University have devised a method to generate vacuum ultraviolet light by employing sharp conductive needles placed within a dense gas, such as xenon, contained in a cell. They achieve this by applying a high negative voltage to the needles in the xenon-filled cell, resulting in the efficient production of vacuum ultraviolet light. Because St. Elmo's fire is a similar discharge, they believe it could likewise be used as a light source given a more powerful supply, increasing efficiency by over 50%. In history and culture In ancient Greece, a single instance of St. Elmo's fire was known by a name literally meaning "torch", with two instances referred to as Castor and Pollux, names of the mythological twin brothers of Helen. After the medieval period, St. Elmo's fire was sometimes associated with the Greek element of fire, such as with one of Paracelsus's elementals, specifically the salamander, or, alternatively, with a similar creature referred to as an acthnici. Welsh mariners referred to St.
Elmo's fire by names meaning "candles of the Holy Ghost" or "candles of St. David". Russian sailors also historically documented instances of St. Elmo's fire, known as "Saint Nicholas" or "Saint Peter's lights", also sometimes called St. Helen's or St. Hermes' fire, perhaps through linguistic confusion. St. Elmo's fire is reported to have been seen during the Siege of Constantinople by the Ottoman Empire in 1453, reportedly emanating from the top of the Hippodrome. The Byzantines took it as a sign that the Christian God would soon come and destroy the conquering Muslim army. According to George Sphrantzes, it disappeared just days before Constantinople fell, ending the Byzantine Empire. Accounts of Magellan's first circumnavigation of the globe refer to St. Elmo's fire (calling it the body of St. Anselm) being seen around the fleet's ships multiple times off the coast of South America. The sailors saw these as favourable omens. En route to Nagasaki with the Fat Man atom bomb on 9 August 1945, the B-29 Bockscar experienced an uncanny luminous blue plasma forming around the spinning propellers, "as though we were riding the whirlwind through space on a chariot of blue fire." St. Elmo's fire was seen during the 1955 Great Plains tornado outbreak in Kansas and Oklahoma. Among the phenomena experienced on British Airways Flight 9 on 24 June 1982 were glowing light flashes along the leading edges of the aircraft, including the wings and cockpit windscreen, which were seen by both passengers and crew. While the bright flashes of light shared similarities with St. Elmo's fire, the glow experienced was from the impact of ash particles on the leading edges of the aircraft, similar to that seen by operators of sandblasting equipment. St. Elmo's fire was observed and its optical spectrum recorded during a University of Alaska research flight over the Amazon in 1995 to study sprites. Ill-fated Air France Flight 447 from Rio de Janeiro–Galeão International Airport to Paris Charles de Gaulle Airport in 2009 is understood to have experienced St. Elmo's fire 23 minutes prior to crashing into the Atlantic Ocean; however, the phenomenon was not a factor in the disaster. Apoy ni San Elmo – commonly shortened to santelmo – is a bad omen or a flying spirit in Filipino folklore, although the description of santelmo is more similar to ball lightning than to St. Elmo's fire. Various indigenous names for the phenomenon existed before the term santelmo was coined during Spanish colonial rule in the Philippines. Notable observations Classical texts St. Elmo's fire is referenced in the works of Julius Caesar (De Bello Africo, 47), Pliny the Elder (Naturalis Historia, book 2, par. 101), and Alcaeus (fragment 34). Earlier, Xenophanes of Colophon had alluded to the phenomenon. Zheng He In 15th-century Ming China, Admiral Zheng He and his associates composed the Liujiagang and Changle inscriptions, the two epitaphs of the Ming treasure voyages, where they made a reference to St. Elmo's fire as a divine omen of Tianfei, the goddess of sailors and seafarers. Accounts associated with Magellan and da Gama Mention of St. Elmo's fire can be found in Antonio Pigafetta's journal of his voyage with Ferdinand Magellan. St. Elmo's fire, also known as "corposants" or "corpusants" from the Portuguese corpo santo ("holy body"), is also described in The Lusiads, the epic account of Vasco da Gama's voyages of discovery. Robert Burton Robert Burton wrote of St.
Elmo's fire in his Anatomy of Melancholy (1621): "Radzivilius, the Lithuanian duke, calls this apparition Sancti Germani sidus; and saith moreover that he saw the same after in a storm, as he was sailing, 1582, from Alexandria to Rhodes". This refers to the voyage made by Mikołaj Krzysztof "the Orphan" Radziwiłł in 1582–1584. John Davis On 9 May 1605, during the second voyage of John Davis, commanded by Sir Edward Michelborne to the East Indies, an unknown writer aboard the Tiger described the phenomenon: "In the extremity of our storm appeared to us in the night, upon our maine Top-mast head, a flame about the bigness of a great Candle, which the Portugals call Corpo Sancto, holding it a most divine token that when it appeareth the worst is past. As, thanked be God, we had better weather after it". Pierre Testu-Brissy Pierre Testu-Brissy was a pioneering French balloonist. On 18 June 1786, he flew for 11 hours and made the first electrical observations as he ascended into thunderclouds. He stated that he drew remarkable discharges from the clouds by means of an iron rod carried in the basket. He also experienced Saint Elmo's fire. William Bligh William Bligh recorded in his log on Sunday 4 May 1788, on board HMS Bounty of 'Mutiny on the Bounty' fame: 'Corpo-Sant. Some electrical Vapour seen about the Iron at the Yard Arms about the Size of the blaze of a Candle.' The event took place in the South Atlantic while sailing from Cape Horn (having failed to round the cape in the winter months), en route to the Cape of Good Hope and west of Tristan da Cunha. The log records the ship's location as Latd. 42°:34'S, Longd (by the time keeper K2) as 34°:38'W. Reference: Log of the Proceedings of His Majestys Ship Bounty in a Voyage to the South Seas, (to take the Breadfruit plant from the Society Islands to the West Indies,) under the Command of Lieutenant William Bligh, 1 December 1787 – 22 October 1788, Safe 1/46, Mitchell Library, State Library of NSW. William Noah William Noah, a silversmith convicted in London of stealing 2,000 pounds of lead, recorded two such observations in his detailed daily journal while en route to Sydney, New South Wales on a convict transport ship. The first was in the Southern Ocean midway between Cape Town and Sydney and the second was in the Tasman Sea, a day out of Port Jackson. While the exact nature of these weather phenomena cannot be certain, the entries appear to describe mostly St. Elmo's fire, with perhaps some ball lightning and even a direct lightning strike to the ship thrown into the mix. James Braid On 20 February 1817, during a severe electrical storm, James Braid, surgeon at Lord Hopetoun's mines at Leadhills, Lanarkshire, had an extraordinary experience whilst on horseback. Weeks earlier, reportedly on 17 January 1817, a luminous snowstorm occurred in Vermont and New Hampshire. Saint Elmo's fire appeared as static discharges on roof peaks, fence posts, and the hats and fingers of people. Thunderstorms prevailed over central New England. Charles Darwin Charles Darwin noted the effect while aboard the Beagle, writing of the episode in a letter to J. S. Henslow describing a night when the Beagle was anchored in the estuary of the Río de la Plata. He also describes the same night in his book The Voyage of the Beagle. Richard Henry Dana In Two Years Before the Mast, Richard Henry Dana Jr. (1815–1882) describes seeing a corposant in the horse latitudes of the northern Atlantic Ocean.
However, he may have been talking about ball lightning; as mentioned earlier, it is often erroneously identified as St. Elmo's fire: The observation by R. H. Dana of this phenomenon in Two Years Before the Mast is a straightforward description of an extraordinary experience apparently only known to mariners and airline pilots. Nikola Tesla Nikola Tesla created St. Elmo's fire in 1899 while testing a Tesla coil at his laboratory in Colorado Springs, Colorado, United States. St. Elmo's fire was seen around the coil and was said to have lit up the wings of butterflies with blue halos as they flew around. Mark Heald A minute before the crash of the Luftschiffbau Zeppelin's LZ 129 Hindenburg on 6 May 1937, Professor Mark Heald (1892–1971) of Princeton saw St. Elmo's Fire flickering along the airship's back. Standing outside the main gate to the Naval Air Station, he watched, together with his wife and son, as the airship approached the mast and dropped her bow lines. A minute thereafter, by Heald's estimation, he first noticed a dim "blue flame" flickering along the backbone girder about one-quarter the length abaft the bow to the tail. There was time for him to remark to his wife, "Oh, heavens, the thing is afire," for her to reply, "Where?" and for him to answer, "Up along the top ridge" – before there was a big burst of flaming hydrogen from a point he estimated to be about one-third the ship's length from the stern. William L. Laurence St. Elmo's fire was reported by The New York Times reporter William L. Laurence on 9 August 1945, as he was aboard a plane following Bockscar on the way to Nagasaki. In popular culture In literature One of the earliest references to the phenomenon appears in Alcaeus's Fragment 34a about the Dioscuri, or Castor and Pollux. It is also referenced in Homeric Hymn 33 to the Dioscuri who were from Homeric times associated with it. Whether the Homeric Hymn antedates the Alcaeus fragment is unknown. The phenomenon appears to be described first in the Gesta Herwardi, written around 1100 and concerning an event of the 1070s. However, one of the earliest direct references to St. Elmo's fire made in fiction can be found in Ludovico Ariosto's epic poem Orlando Furioso (1516). It is located in the 17th canto (19th in the revised edition of 1532) after a storm has punished the ship of Marfisa, Astolfo, Aquilant, Grifon, and others, for three straight days, and is positively associated with hope: In William Shakespeare's The Tempest (c. 1623), Act I, Scene II, St. Elmo's fire acquires a more negative association, appearing as evidence of the tempest inflicted by Ariel according to the command of Prospero: The fires are also mentioned as "death fires" in Samuel Taylor Coleridge's The Rime of the Ancient Mariner: Later in the 18th and 19th centuries, literature associated St. Elmo's fire with a bad omen or divine judgment, coinciding with the growing conventions of Romanticism and the Gothic novel. For example, in Ann Radcliffe's The Mysteries of Udolpho (1794), during a thunderstorm above the ramparts of the castle: In the 1864 novel Journey to the Centre of the Earth by Jules Verne, the author describes the fire occurring while sailing during a subterranean electrical storm (chapter 35, page 191): In Herman Melville's novel Moby-Dick, Starbuck points out "corpusants" during a thunder storm in the Japanese sea in chapter 119, "The Candles". St. Elmo's fire makes an appearance in The Adventures of Tintin comic, Tintin in Tibet, by Hergé. 
Tintin recognizes the phenomenon on Captain Haddock's ice-axe. The phenomenon appears in the first stanza of Robert Hayden's poem "The Ballad of Nat Turner"; it is also referred to with the term "corposant" in the first section of his long poem "Middle Passage". In Kurt Vonnegut's Slaughterhouse-Five, Billy Pilgrim sees the phenomenon on soldiers' helmets and on rooftops. Vonnegut's The Sirens of Titan also notes the phenomenon affecting Winston Niles Rumfoord's dog, Kazak, the Hound of Space, in conjunction with solar disturbances of the chrono-synclastic infundibulum. In Robert Aickman's story "Niemandswasser" (1975), the protagonist, Prince Albrecht von Allendorf, is "known as Elmo to his associates, because of the fire which to them emanated from him". "There was an inspirational force in Elmo of which the sensitive soon became aware, and which had led to his Spottname or nickname." In On the Banks of Plum Creek by Laura Ingalls Wilder, St. Elmo's fire is seen by the girls and Ma during one of the blizzards. It was described as coming down the stove pipe and rolling across the floor following Ma's knitting needles; it did not burn the floor (pages 309–310). The phenomenon as described, however, is more similar to ball lightning. In Voyager, the third major novel in Diana Gabaldon's popular Outlander series, the primary characters experience St. Elmo's fire while lost at sea in a thunderstorm between Hispaniola and coastal Georgia. St. Elmo's fire is also mentioned in the novel, Castaways of the Flying Dutchman by Brian Jacques. It is referenced multiple times in the novel Pet Sematary by Stephen King. It is referenced multiple times in the Urban-Fantasy series The Dresden Files by Jim Butcher, particularly when magical beings such as the protagonist's dog are exerting power, especially during conflict, or to describe the visual effects of magic being used. In television On the children's television series The Mysterious Cities of Gold (1982), episode four shows St. Elmo's fire affecting the ship as it sailed past the Strait of Magellan. The real-life footage at the end of the episode has snippets of an interview with Japanese sailor Fukunari Imada, whose comments were translated to: "Although I've never seen St. Elmo's fire, I'd certainly like to. It was often considered a bad omen, as it played havoc with compasses and equipment". The TV series also referred to St. Elmo's fire as being a bad omen during the cartoon. The footage was captured as part of his winning solo yacht race in 1981. On the American television series Rawhide, in a 1959 episode titled "Incident of the Blue Fire", cattle drovers on a stormy night see St. Elmo's fire glowing on the horns of their steers, which the men regard as a deadly omen. St. Elmo's fire is also referenced in a 1965 episode of Bonanza in which religious pilgrims staying on the Cartwright property believe an experience with St. Elmo's fire is the work of Satan. On The Waltons episode "The Grandchild" (1977), Mary Ellen witnesses St. Elmo's Fire while running through the woods. On the American animated television series Futurama episode titled "Möbius Dick", Turanga Leela refers to the phenomenon as "Tickle me Elmo's Fire." On the Netflix original Singaporean animated series Trese (2021), the Santelmo (St. Elmo's Fire) is one of the protagonist's, Alexandra Trese's, allies whom she contacts using her old Nokia phone, dialing the date of the Great Binondo fire, 0003231870. In film In Moby Dick (1956), St. 
Elmo's fire stops Captain Ahab from killing Starbuck. In The Last Sunset (1961), outlaw/cowhand Brendan "Bren" O'Malley (Kirk Douglas) rides in from the herd and leads the recently widowed Belle Breckenridge (Dorothy Malone) to an overview of the cattle. As he takes the rifle from her, he proclaims, "Something out there, you could live five lifetimes, and never see again," the audience is then shown a shot of the cattle with a blue or violet glow coming from their horns. "Look. St. Elmo's fire. Never seen it except on ships," O'Malley says as Belle says, "I've never seen it anywhere. What is it?" Trying to win her back, he says, "Well, a star fell and smashed and scattered its glow all over the place." In St. Elmo's Fire (1985), Rob Lowe's character Billy Hicks erroneously claims that the phenomenon is "not even a real thing." In the Western miniseries Lonesome Dove (1989–1990), lightning strikes a herd of cattle during a storm, causing their horns to glow blue. In The Hunt for Red October (film) (1990) during a scene where the USS Dallas, a Los-Angeles-class submarine, is attempting to evade a torpedo, the crew discusses the presence of St. Elmo's fire on the sub's periscope. In The Perfect Storm (film); based on the true story of the Andrea Gail fishing vessel, there is a scene where the crew encounters St. Elmo's fire during the height of a storm. In Lars von Trier's 2011 film Melancholia, the phenomenon features in the opening sequence and later in the film as the rogue planet Melancholia approaches the Earth for an impact event. In Robert Eggers's 2019 horror film The Lighthouse, it appears in reference to the mysterious salvation that lighthouse keeper Thomas Wake (Willem Dafoe) is hiding from Ephraim Winslow (Robert Pattinson) inside the Fresnel lens of the lantern. In music Brian Eno's third studio album Another Green World (1975) contains a song titled "St. Elmo's Fire" in which guesting King Crimson guitarist Robert Fripp (credited with playing "Wimshurst guitar" in the liner notes) improvises a lightning-fast solo that would imitate an electrical charge between two poles on a Wimshurst high-voltage generator. "St. Elmo's Fire (Man in Motion)" is a song recorded by John Parr. It hit number one on the Billboard Hot 100 on 7 September 1985, remaining there for two weeks. It was the main theme for Joel Schumacher's 1985 film St. Elmo's Fire. "St. Elmo's Fire" by Michael Franks. The Sammarinese entry for the 2017 Eurovision Song Contest in Kyiv "Spirit of the Night" contains references to St. Elmo's Fire. See also Earthquake light Foo fighter, WWII UFO observations Hessdalen lights Naga fireball, rising from Mekong River Plasma globe Stellar Corona Triboelectric effect Will-o'-the-wisp Notes References External links St. Elmo's fire photographed on the flight deck of an airliner Atmospheric ghost lights Terrestrial plasmas Electrical phenomena Light sources Castor and Pollux
St. Elmo's fire
[ "Physics", "Astronomy" ]
4,740
[ "Physical phenomena", "Electrical phenomena", "Astronomical myths", "Castor and Pollux" ]
331,569
https://en.wikipedia.org/wiki/Happened-before
In computer science, the happened-before relation (denoted: →) is a relation between the result of two events, such that if one event should happen before another event, the result must reflect that, even if those events are in reality executed out of order (usually to optimize program flow). This involves ordering events based on the potential causal relationship of pairs of events in a concurrent system, especially asynchronous distributed systems. It was formulated by Leslie Lamport. The happened-before relation is formally defined as the least strict partial order on events such that: If events a and b occur on the same process, a → b if the occurrence of event a preceded the occurrence of event b. If event a is the sending of a message and event b is the reception of the message sent in event a, then a → b. If two events happen in different isolated processes (that do not exchange messages directly or indirectly via third-party processes), then the two events are said to be concurrent, that is, neither a → b nor b → a is true. If there are other causal relationships between events in a given system, such as between the creation of a process and its first event, these relationships are also added to the definition. For example, in some programming languages such as Java, C, C++ or Rust, a happens-before edge exists if memory written to by statement A is visible to statement B, that is, if statement A completes its write before statement B starts its read. Like all strict partial orders, the happened-before relation is transitive, irreflexive (and vacuously, asymmetric), i.e.: for all events a, b, c, if a → b and b → c, then a → c (transitivity). This means that for any three events a, b, c, if a happened before b, and b happened before c, then a must have happened before c. For any event a, a → a does not hold (irreflexivity). This means that no event can happen before itself. If a → b then b → a does not hold (asymmetry). This means that for any two events a, b, if a happened before b then b cannot have happened before a. Let us observe that the asymmetry property directly follows from the previous properties: by contradiction, let us suppose that we have a → b and b → a. Then by transitivity we have a → a, which contradicts irreflexivity. The processes that make up a distributed system have no knowledge of the happened-before relation unless they use a logical clock, like a Lamport clock or a vector clock. This allows one to design algorithms for mutual exclusion, and tasks like debugging or optimising distributed systems. See also Race condition Java Memory Model Lamport timestamps Logical clock Citations References Logical clock algorithms Distributed computing problems Transitive relations
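In practice, the happened-before relation is usually recovered with vector clocks. The Python sketch below is illustrative only: the Process and Event classes, the merge rule, and the three-process scenario are assumptions made for the example rather than any standard library API. Each process increments its own component on a local event and takes the component-wise maximum on receive; a → b then holds exactly when a's clock is component-wise less than or equal to b's and strictly smaller in at least one component.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class Event:
    """An event stamped with a vector clock: process id -> logical time."""
    clock: Dict[str, int]

def happened_before(a: Event, b: Event) -> bool:
    """True iff a -> b: every component of a is <= the matching component
    of b, and at least one component is strictly smaller."""
    keys = set(a.clock) | set(b.clock)
    le = all(a.clock.get(k, 0) <= b.clock.get(k, 0) for k in keys)
    lt = any(a.clock.get(k, 0) < b.clock.get(k, 0) for k in keys)
    return le and lt

def concurrent(a: Event, b: Event) -> bool:
    """Neither a -> b nor b -> a."""
    return not happened_before(a, b) and not happened_before(b, a)

class Process:
    """A process that increments its own clock entry on local events and
    merges clocks (component-wise max) when it receives a message."""
    def __init__(self, name: str):
        self.name = name
        self.clock: Dict[str, int] = {name: 0}

    def local_event(self) -> Event:
        self.clock[self.name] += 1
        return Event(dict(self.clock))

    def send(self) -> Event:
        return self.local_event()

    def receive(self, msg: Event) -> Event:
        for k, v in msg.clock.items():
            self.clock[k] = max(self.clock.get(k, 0), v)
        return self.local_event()

if __name__ == "__main__":
    p, q = Process("P"), Process("Q")
    a = p.send()                    # P sends a message
    b = q.receive(a)                # Q receives it, so a -> b
    c = q.local_event()             # a later event on Q, so a -> c by transitivity
    d = Process("R").local_event()  # an isolated process: concurrent with the rest
    print(happened_before(a, b), happened_before(a, c))  # True True
    print(concurrent(a, d))                              # True
```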
Happened-before
[ "Physics", "Mathematics" ]
525
[ "Physical quantities", "Time", "Distributed computing problems", "Computational problems", "Spacetime", "Mathematical problems", "Logical clock algorithms" ]
331,579
https://en.wikipedia.org/wiki/Digital%20microfluidics
Digital microfluidics (DMF) is a platform for lab-on-a-chip systems that is based upon the manipulation of microdroplets. Droplets are dispensed, moved, stored, mixed, reacted, or analyzed on a platform with a set of insulated electrodes. Digital microfluidics can be used together with analytical procedures such as mass spectrometry, colorimetry, electrochemical detection, and electrochemiluminescence. Overview In analogy to digital microelectronics, digital microfluidic operations can be combined and reused within hierarchical design structures so that complex procedures (e.g. chemical synthesis or biological assays) can be built up step-by-step. And in contrast to continuous-flow microfluidics, digital microfluidics works much the same way as traditional bench-top protocols, only with much smaller volumes and much higher automation. Thus a wide range of established chemical procedures and protocols can be seamlessly transferred to a nanoliter droplet format. Electrowetting, dielectrophoresis, and immiscible-fluid flows are the three most commonly used principles, which have been used to generate and manipulate microdroplets in a digital microfluidic device. A digital microfluidic (DMF) device set-up depends on the substrates used, the electrodes, the configuration of those electrodes, the use of a dielectric material, the thickness of that dielectric material, the hydrophobic layers, and the applied voltage. A common substrate used in this type of system is glass. Depending on whether the system is open or closed, there would be either one or two layers of glass. The bottom layer of the device contains a patterned array of individually controllable electrodes. In a closed system, there is usually a continuous ground electrode running through the top layer, which is usually made of indium tin oxide (ITO). The dielectric layer is found around the electrodes in the bottom layer of the device and is important for building up charges and electrical field gradients on the device. A hydrophobic layer is applied to the top layer of the system to decrease the surface energy where the droplet will actually be in contact with the device. The applied voltage activates the electrodes and allows changes in the wettability of the droplet on the device's surface. In order to move a droplet, a control voltage is applied to an electrode adjacent to the droplet, and at the same time, the electrode just under the droplet is deactivated. By varying the electric potential along a linear array of electrodes, electrowetting can be used to move droplets along this line of electrodes. Modifications to this foundation can also be fabricated into the basic design structure. One example of this is the addition of electrochemiluminescence detectors within the indium tin oxide layer (the ground electrode in a closed system) which aid in the detection of luminophores in droplets. In general, different materials may also be used to replace basic components of a DMF system, such as the use of PDMS instead of glass for the substrate. Liquid materials can be added, such as oil or another substance, to a closed system to prevent evaporation of materials and decrease surface contamination. Also, DMF systems can be compatible with ionic liquid droplets with the use of an oil in a closed device or with the use of a catena (a suspended wire) over an open DMF device. Digital microfluidics can be light-activated. Optoelectrowetting can be used to transport sessile droplets around a surface containing patterned photoconductors. 
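The droplet-transport scheme described above (energize the electrode next to the droplet, de-energize the one beneath it, repeat down the array) can be pictured as a simple control loop. The Python sketch below is purely illustrative: the DMFController class, its set_voltage method, the drive voltage, and the dwell time are invented stand-ins for whatever switching hardware and timing a real device would use.

```python
import time

class DMFController:
    """Toy controller for a linear electrode array (indices 0..n-1).

    Hypothetical interface: set_voltage(i, volts) would talk to the
    device's switching hardware; here it just records the state.
    """
    def __init__(self, n_electrodes: int, drive_volts: float = 100.0):
        self.n = n_electrodes
        self.v = drive_volts
        self.state = [0.0] * n_electrodes

    def set_voltage(self, i: int, volts: float) -> None:
        self.state[i] = volts          # stand-in for a real driver call

    def move_droplet(self, start: int, stop: int, dwell_s: float = 0.1) -> None:
        """Walk a droplet from electrode `start` to `stop` by always
        energizing the next electrode and grounding the current one."""
        step = 1 if stop > start else -1
        pos = start
        while pos != stop:
            nxt = pos + step
            self.set_voltage(nxt, self.v)   # attract the droplet to the neighbour
            self.set_voltage(pos, 0.0)      # release it from the current pad
            time.sleep(dwell_s)             # give the droplet time to arrive
            pos = nxt

ctrl = DMFController(n_electrodes=8)
ctrl.move_droplet(start=0, stop=5)
```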
The photoelectrowetting effect can also be used to achieve droplet transport on a silicon wafer without the necessity of patterned electrodes. Working principle Droplets are formed using the surface tension properties of a liquid. For example, water placed on a hydrophobic surface such as wax paper will form spherical droplets to minimize its contact with the surface. Differences in surface hydrophobicity affect a liquid's ability to spread and 'wet' a surface by changing the contact angle. As the hydrophobicity of a surface increases, the contact angle increases, and the ability of the droplet to wet the surface decreases. The change in contact angle, and therefore wetting, is regulated by the Young–Lippmann equation: cos θ(V) = cos θ(0) + (ε_r ε_0 / (2 γ d)) V², where θ(V) is the contact angle with an applied voltage V; θ(0) is the contact angle with no voltage; ε_r is the relative permittivity of the dielectric; ε_0 is the permittivity of free space; γ is the liquid/filler media surface tension; and d is the dielectric thickness. In some cases, the hydrophobicity of a substrate can be controlled by using electrical fields. This refers to the phenomenon Electrowetting On Dielectric (EWOD). For example, when no electric field is applied to an electrode, the surface will remain hydrophobic and a liquid droplet will form a more spherical droplet with a greater contact angle. When an electric field is applied, a polarized hydrophilic surface is created. The water droplet then becomes flattened and the contact angle decreases. By controlling the localization of this polarization, an interfacial tension gradient can be created that allows controlled displacement of the droplet across the surface of the DMF device. Droplet formation There are two ways to make new droplets with a digital microfluidic device. Either an existing droplet can be split in two, or a new droplet can be made from a reservoir of material. Both processes are only known to work in closed devices, though this often is not a problem as the top plates of DMF devices are typically removable, so an open device can be made temporarily closed should droplet formation be necessary. From an existing droplet A droplet can be split by charging two electrodes on opposite sides of a droplet on an uncharged electrode. In the same way a droplet on an uncharged electrode will move towards an adjacent, charged electrode, this droplet will move towards both active electrodes. Liquid moves to either side, which causes the middle of the droplet to neck. For a droplet of the same size as the electrodes, splitting will occur approximately when the neck is at its thinnest; the controlling quantities are the radius of curvature of the menisci at the neck, which is negative for a concave curve, and the radius of curvature of the menisci at the elongated ends of the droplet. This process is simple and nominally results in two droplets of equal volume. The conventional method of splitting an existing droplet by simply turning the splitting electrodes on and off produces new droplets of relatively equal volume. However, the new droplets formed by the conventional method show considerable difference in volume. This difference is caused by local perturbations due to the rapid mass transport. Even though the difference is negligible in some applications, it can still pose a problem in applications that are highly sensitive to variations in volume, such as immunoassays and DNA amplification. 
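As a numerical illustration of the Young–Lippmann relation given under Working principle, the short Python function below evaluates the predicted contact angle for a hypothetical device; the material values (starting angle, dielectric constant and thickness, surface tension, applied voltage) are assumptions chosen only to show the size of the effect, not data from the article.

```python
import math

# Young-Lippmann equation: cos(theta_V) = cos(theta_0) + (eps_r * eps_0 / (2 * gamma * d)) * V**2
EPS_0 = 8.854e-12  # permittivity of free space, F/m

def contact_angle(theta_0_deg: float, V: float, eps_r: float, d: float, gamma: float) -> float:
    """Return the contact angle (degrees) under an applied voltage V.

    theta_0_deg: contact angle with no voltage, degrees
    V:           applied voltage, volts
    eps_r:       relative permittivity of the dielectric
    d:           dielectric thickness, m
    gamma:       liquid/filler-medium surface tension, N/m
    """
    cos_v = math.cos(math.radians(theta_0_deg)) + (eps_r * EPS_0 / (2 * gamma * d)) * V**2
    cos_v = min(cos_v, 1.0)  # clamp to keep acos in range (real devices show contact-angle saturation)
    return math.degrees(math.acos(cos_v))

# Illustrative (assumed) numbers: ~1 um dielectric, water in air, 50 V applied.
print(round(contact_angle(theta_0_deg=110, V=50, eps_r=2.0, d=1e-6, gamma=0.072), 1))
# prints roughly 92 degrees, i.e. the surface has become markedly more wettable
```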
To overcome the limitation of the conventional method, an existing droplet can be split by gradually changing the potential of the electrodes at the splitting region instead of simply switching them on and off. Using this method, a noticeable improvement in droplet volume variation, from around 10% variation in volume to less than 1% variation in volume, has been reported. From a reservoir Creating a new droplet from a reservoir of liquid can be done in a similar fashion to splitting a droplet. In this case, the reservoir remains stationary while a sequence of electrodes are used to draw liquid out of the reservoir. This drawn liquid and the reservoir form a neck of liquid, akin to the neck of a splitting droplet but longer, and the collapsing of this neck forms a dispensed droplet from the drawn liquid. In contrast to splitting, though, dispensing droplets in this manner is inconsistent in scale and results. There is no reliable distance liquid will need to be pulled from the reservoir for the neck to collapse, if it even collapses at all. Because this distance varies, the volumes of dispensed droplets will also vary within the same device. Due to these inconsistencies, alternative techniques for dispensing droplets have been used and proposed, including drawing liquid out of reservoirs in geometries that force a thinner neck, using a continuous and replenishable electrowetting channel, and moving reservoirs into corners so as to cut the reservoir down the middle. Multiple iterations of the latter can produce droplets of more manageable sizes. Droplet manipulation Droplet merging As an existing droplet can be split to form discrete droplets using electrodes (see From an existing droplet), droplets can be merged into one droplet by electrodes as well. Utilizing the same concept applied for creating new droplets through splitting an existing droplet with electrodes, an aqueous droplet resting on an uncharged electrode can move towards a charged electrode where droplets will join and merge into one droplet. However, the merged droplet might not always form a circular shape even after the merging process is over due to surface tension. This problem can be solved by implementing a superhydrophobic surface between the droplets and the electrodes. Oil droplets can be merged in the same way as well, but oil droplets will move towards uncharged electrodes unlike aqueous droplets. Droplet transportation Discrete droplets can be transported in a highly controlled way using an array of electrodes. In the same way droplets move from an uncharged electrode to a charged electrode, or vice versa, droplets can be continuously transported along the electrodes by sequentially energizing the electrodes. Since droplet transportation involves an array of electrodes, multiple electrodes can be programmed to selectively apply a voltage to each electrode for a better control over transporting multiple droplets. Displacement by electrostatic actuation Three-dimensional droplet actuation has been made possible by implementing a closed system; this system contains a μL sized droplet in immiscible fluid medium. The droplet and medium are then sandwiched between two electromagnetic plates, creating an EM field between the two plates. The purpose of this method is to transfer the droplet from a lower planar surface to an upper parallel planar surface and back down via electrostatic forces. The physics behind such particle actuation and perpendicular movement can be understood from early works of N. N. Lebedev and I. P. 
Skal'skaya. In their research, they attempted to model the Maxwell electrical charge acquired by a perfectly round conducting particle in the presence of a uniform magnetic field caused by a perfectly-conducting and infinitely-stretching surface. Their model helps to predict the Z-direction motion of the microdroplets within the device as it points to the magnitude and direction of forces acting upon a micro droplet. This can be used to help accurately predict and correct for unwanted and uncontrollable particle movement. The model explains why failing to employ a dielectric coating on one of the two surfaces causes reversal of charge within the droplet upon contact with each electrode and in turn causes the droplets to bounce uncontrollably between electrodes. Digital microfluidics (DMF) has already been readily adapted in many biological fields. By enabling three-dimensional movement within DMF, the technology can be used even more extensively in biological applications, as it could more accurately mimic 3-D microenvironments. A large benefit of employing this type of method is that it allows for two different environments to be accessible by the droplet, which can be taken advantage of by splitting the microfluidic tasks among the two surfaces. For example, while the lower plane can be used to move droplets, the upper plate can carry out the necessary chemical and/or biological processes. This advantage can be translated into practical experiment protocols in the biological community, such as coupling with DNA amplification. This also allows for the chip to be smaller, and to give researchers more freedom in designing platforms for microdroplet analysis. All-terrain droplet actuation (ATDA) All-terrain microfluidics is a method used to transport liquid droplets over non-traditional surface types. Unlike traditional microfluidics platforms, which are generally restricted to planar and horizontal surfaces, ATDA enables droplet manipulation over curved, non-horizontal, and inverted surfaces. This is made possible by incorporating flexible thin sheets of copper and polyimide into the surface via a rapid prototyping method. This device works very well with many liquids, including aqueous buffers, solutions of proteins and DNA, and undiluted bovine serum. ATDA is compatible with silicone oil or Pluronic additives, such as F-68, which reduce non-specific absorption and biofouling when dealing with biological fluids such as proteins, biological serums, and DNA. A drawback of a setup like this is accelerated droplet evaporation. ATDA is a form of open digital microfluidics, and as such the device needs to be encapsulated in a humidified environment in order to minimize droplet evaporation. Implementation In one of various embodiments of EWOD-based microfluidic biochips, investigated first by Cytonix in 1987 and subsequently commercialized by Advanced Liquid Logic, there are two parallel glass plates. The bottom plate contains a patterned array of individually controllable electrodes and the top plate is coated with a continuous grounding electrode. A dielectric insulator coated with a hydrophobic layer is added to the plates to decrease the wettability of the surface and to add capacitance between the droplet and the control electrode. The droplet containing biochemical samples and the filler medium, such as silicone oil, a fluorinated oil, or air, are sandwiched between the plates and the droplets travel inside the filler medium. 
In order to move a droplet, a control voltage is applied to an electrode adjacent to the droplet, and at the same time, the electrode just under the droplet is deactivated. By varying the electric potential along a linear array of electrodes, electrowetting can be used to move droplets along this line of electrodes. Applications Laboratory automation In research fields such as synthetic biology, where highly iterative experimentation is common, considerable efforts have been made to automate workflows. Digital microfluidics is often touted as a laboratory automation solution, with a number of advantages over alternative solutions such as pipetting robots and droplet microfluidics. These stated advantages often include a reduction in the required volume of experimental reagents, a reduction in the likelihood of contamination and cross-contamination, potential improvements in reproducibility, increased throughput, individual droplet addressability, and the ability to integrate with sensor and detector modules to perform end-to-end or even closed-loop workflow automation. Reduced experimental footprint One of the core advantages of digital microfluidics, and of microfluidics in general, is the use and actuation of picoliter to microliter scale volumes. Workflows adapted from the bench to a DMF system are miniaturized, meaning working volumes are reduced to fractions of what is normally required for conventional methods. For example, Thaitrong et al. developed a DMF system with a capillary electrophoresis (CE) module with the purpose of automating the process of next generation sequencing (NGS) library characterization. Compared to an Agilent BioAnalyzer (an instrument commonly used to measure sequencing library size distribution), the DMF-CE system consumed ten-fold less sample volume. Reducing volumes for a workflow can be especially beneficial if the reagents are expensive or when manipulating rare samples such as circulating tumor cells and prenatal samples. Miniaturization also means a reduction in waste product volumes. Reduced probability of contamination DMF-based workflows, particularly those using a closed configuration with a top-plate ground electrode, have been shown to be less susceptible to outside contamination compared to some conventional laboratory workflows. This can be attributed to minimal user interaction during automated steps, and the fact that the smaller volumes are less exposed to environmental contaminants than larger volumes which would need to be exposed to open air during mixing. Ruan et al. observed minimal contamination from exogenous nonhuman DNA and no cross-contamination between samples while using their DMF-based digital whole genome sequencing system. Improved reproducibility Overcoming issues of reproducibility has become a topic of growing concern across scientific disciplines. Reproducibility can be especially salient when multiple iterations of the same experimental protocol need to be repeated. Liquid handling robots that can minimize volume loss between experimental steps are often used to reduce error rates and improve reproducibility. An automated DMF system for CRISPR-Cas9 genome editing was described by Sinha et al., and was used to culture and genetically modify H1299 lung cancer cells. The authors noted that no variation in knockout efficiencies across loci was observed when cells were cultured on the DMF device, whereas cells cultured in well-plates showed variability in upstream loci knockout efficiencies. 
This reduction in variability was attributed to culturing on a DMF device being more homogeneous and reproducible compared with well plate methods. Increased throughput While DMF systems cannot match the same throughput achieved by some liquid handling pipetting robots, or by some droplet-based microfluidic systems, there are still throughput advantages when compared to conventional methods carried out manually. Individual droplet addressability DMF allows for droplet-level addressability, meaning individual droplets can be treated as spatially distinct microreactors. This level of droplet control is important for workflows where reactions are sensitive to the order of reagent mixing and incubation times, but where the optimal values of these parameters may still need to be determined. These types of workflows are common in cell-free biology, and Liu et al. were able to demonstrate a proof-of-concept DMF-based strategy for carrying out remote-controlled cell-free protein expression on an OpenDrop chip. Detector module integration for end-to-end and closed-loop automation An often-cited advantage DMF platforms have is their potential to integrate with on-chip sensors and off-chip detector modules. In theory, real-time and end-point data can be used in conjunction with machine learning methods to automate the process of parameter optimization. Separation and extraction Digital microfluidics can be used for separation and extraction of target analytes. These methods include the use of magnetic particles, liquid-liquid extraction, optical tweezers, and hydrodynamic effects. Magnetic particles For magnetic particle separations a droplet of solution containing the analyte of interest is placed on a digital microfluidics electrode array and moved by the changes in the charges of the electrodes. The droplet is moved to an electrode with a magnet on one side of the array with magnetic particles functionalized to bind to the analyte. Then it is moved over the electrode, the magnetic field is removed and the particles are suspended in the droplet. The droplet is swirled on the electrode array to ensure mixing. The magnet is reintroduced and the particles are immobilized and the droplet is moved away. This process is repeated with wash and elution buffers to extract the analyte. Magnetic particles coated with antihuman serum albumin antibodies have been used to isolate human serum albumin, as proof-of-concept work for immunoprecipitation using digital microfluidics. DNA extraction from a whole blood sample has also been performed with digital microfluidics. The procedure follows the same general methodology as the magnetic particle separation, but includes pre-treatment on the digital microfluidic platform to lyse the cells prior to DNA extraction. Liquid-liquid extraction Liquid-liquid extractions can be carried out on a digital microfluidic device by taking advantage of immiscible liquids. Two droplets, one containing the analyte in aqueous phase, and the other an immiscible ionic liquid, are present on the electrode array. The two droplets are mixed and the ionic liquid extracts the analyte, and the droplets are easily separable. Optical tweezers Optical tweezers have also been used to separate cells in droplets. Two droplets are mixed on an electrode array, one containing the cells, and the other with nutrients or drugs. The droplets are mixed and then optical tweezers are used to move the cells to one side of the larger droplet before it is split. 
For a more detailed explanation on the underlying principles, see Optical tweezers. Hydrodynamic separation Particles have also been separated without magnets, using hydrodynamic forces to separate particles from the bulk of a droplet. This is performed on electrode arrays with a central electrode and 'slices' of electrodes surrounding it. Droplets are added onto the array and swirled in a circular pattern, and the hydrodynamic forces from the swirling cause the particles to aggregate onto the central electrode. Chemical synthesis Digital microfluidics (DMF) allows for precise manipulation and coordination in small-scale chemical synthesis reactions due to its ability to control microscale volumes of liquid reagents, allowing for overall less reagent use and waste. This technology can be used in the synthesis of compounds such as peptidomimetics and PET tracers. PET tracers require nanogram quantities and as such, DMF allows for automated and rapid synthesis of tracers with 90–95% efficiency compared to conventional macro-scale techniques. Organic reagents are not commonly used in DMF because they tend to wet the DMF device and cause flooding; however, synthesis of organic reagents can be achieved through DMF techniques by carrying the organic reagents through an ionic liquid droplet, thus preventing the organic reagent from flooding the DMF device. Droplets are combined together by inducing opposite charges, thus attracting them to each other. This allows for automated mixing of droplets. Mixing of droplets is also used to deposit MOF crystals for printing by delivering reagents into wells and evaporating the solutions for crystal deposition. This method of MOF crystal deposition is relatively cheap and does not require extensive robotic equipment. Chemical synthesis using digital microfluidics (DMF) has been applied to many noteworthy biological reactions. These include polymerase chain reaction (PCR), as well as the formation of DNA and peptides. Reduction, alkylation, and enzymatic digestion have also shown robustness and reproducibility utilizing DMF, indicating potential in the synthesis and manipulation of proteomics. Spectra obtained from the products of these reactions are often identical to their library spectra, while only utilizing a small fraction of bench-scale reactants. Thus, conducting these syntheses on the microscale has the benefit of limiting money spent on purchasing reagents and waste products produced while yielding desirable experimental results. However, numerous challenges need to be overcome to push these reactions to completion through DMF. There have been reports of reduced efficiency in chemical reactions as compared to bench-scale versions of the same syntheses, as lower product yields have been observed. Furthermore, since picoliter and nanoliter size samples must be analyzed, any instrument used in analysis needs to be high in sensitivity. In addition, system setup is often difficult due to extensive amounts of wiring and pumps that are required to operate microchannels and reservoirs. Finally, samples are often subject to solvent evaporation, which leads to changes in volume and concentration of reactants, and in some cases reactions do not go to completion. The composition and purity of molecules synthesized by DMF are often determined utilizing classic analytical techniques. Nuclear magnetic resonance (NMR) spectroscopy has been successfully applied to analyze corresponding intermediates, products, and reaction kinetics. 
A potential issue that arises through the use of NMR is low mass sensitivity; however, this can be corrected for by employing microcoils that assist in distinguishing molecules of differing masses. This is necessary since the signal-to-noise ratio of sample sizes in the microliter to nanoliter range is dramatically reduced compared to bench-scale sample sizes, and microcoils have been shown to resolve this issue. Mass spectrometry (MS) and high-performance liquid chromatography (HPLC) have also been used to overcome this challenge. Although MS is an attractive analytical technique for distinguishing the products of reactions accomplished through DMF, it poses its own weaknesses. Matrix-assisted laser desorption ionization (MALDI) and electrospray ionization (ESI) MS have recently been paired with analyzing microfluidic chemical reactions. However, crystallization and dilution associated with these methods often lead to unfavorable side effects, such as sample loss and side reactions occurring. The use of MS in DMF is discussed in more detail in a later section. Cell culture Connecting the DMF chip for use in the field, or world-to-chip interfacing, has been accomplished by means of manual pumps and reservoirs which deliver microbes, cells, and media to the device. The lack of extensive pumps and valves allows elaborate multi-step applications involving cells to be performed in a simple and compact system. In one application, microbial cultures have been transferred onto the chip and allowed to grow with the use of sterile procedures and the temperature required for microbial incubation. To validate that this was a viable space for microbial growth, a transformation assay was carried out in the device. This involves exposing E. coli to a vector and heat shocking the bacteria until they take up the DNA. This is then followed by running a DNA gel to assure that the wanted vector was taken up by the bacteria. This study found that the DNA indeed was taken up by the bacteria and expressed as predicted. Human cells have also been manipulated in Digital Microfluidic Immunocytochemistry in Single Cells (DISC), where DMF platforms were used to culture cells and use antibodies to label phosphorylated proteins in the cell. Cultured cells are then removed and taken off chip for screening. Another technique synthesizes hydrogels within DMF platforms. This process uses electrodes to deliver reagents to produce the hydrogel, and delivery of cell culture reagents for absorption into the gel. The hydrogels are an improvement over 2D cell culture because 3D cell cultures have increased cell-cell interactions and cell-extracellular matrix interactions. Spherical cell cultures are another method developed around the ability of DMF to deliver droplets to cells. Application of an electric potential allows for automation of droplet transfer directly to the hanging cell culture. This is beneficial as three-dimensional cell cultures and spheroids better mimic in vivo tissue by allowing for more biologically relevant cultures that have cells growing in an extracellular matrix similarly resembling that in the human body. Another use of DMF platforms in cell culture is its ability to conduct in vitro cell-free cloning using single-molecule PCR inside droplets. PCR-amplified products are then validated by transfection into yeast cells and a Western blot protein identification. Problems arising from cell culture applications using DMF include protein adsorption to the device floor, and cytotoxicity to cells. 
To prevent adsorption of protein to the platform's floor, a surfactant-stabilized silicone oil or hexane was used to coat the surface of the device, and droplets were manipulated atop the oil or hexane. Hexane was later rapidly evaporated from cultures to prevent a toxic effect on cell cultures. Another approach to solve protein adhesion is the addition of Pluronic additives to droplets in the device. Pluronic additives are generally not cytotoxic, but some have been shown to be harmful to cell cultures. Biocompatibility of the device setup is important for biological analyses. Along with finding Pluronic additives that are not cytotoxic, creating a device whose voltage and disruptive movement would not affect cell viability was accomplished. Through the readout of live/dead assays it was shown that neither the voltage required to move droplets nor the motion of moving cultures affected cell viability. Biological extraction Biological separations usually involve low-concentration, high-volume samples. This can pose an issue for digital microfluidics due to the small sample volume necessary. Digital microfluidic systems can be combined with a macrofluidic system designed to decrease sample volume, in turn increasing analyte concentration. It follows the same principles as the magnetic particles for separation, but includes pumping of the droplet to cycle a larger volume of fluid around the magnetic particles. Extraction of drug analytes from dried urine samples has also been reported. A droplet of extraction solvent, in this case methanol, is repeatedly flowed over a dried urine sample, then moved to a final electrode where the liquid is extracted through a capillary and then analyzed using mass spectrometry. Immunoassays The advanced fluid handling capabilities of digital microfluidics (DMF) allow for the adoption of DMF as an immunoassay platform, as DMF devices can precisely manipulate small quantities of liquid reagents. Both heterogeneous immunoassays (antigens interacting with immobilized antibodies) and homogeneous immunoassays (antigens interacting with antibodies in solution) have been developed using a DMF platform. With regards to heterogeneous immunoassays, DMF can simplify the extended and intensive procedural steps by performing all delivery, mixing, incubation, and washing steps on the surface of the device (on-chip). Further, existing immunoassay techniques and methods, such as magnetic bead-based assays, ELISAs, and electrochemical detection, have been incorporated onto DMF immunoassay platforms. The incorporation of magnetic bead-based assays onto a DMF immunoassay platform has been demonstrated for the detection of multiple analytes, such as human insulin, IL-6, cardiac marker Troponin I (cTnI), thyroid stimulating hormone (TSH), sTNF-RI, and 17β-estradiol. For example, a magnetic bead-based approach has been used for the detection of cTnI from whole blood in less than 8 minutes. Briefly, magnetic beads containing primary antibodies were mixed with labeled secondary antibodies, incubated, and immobilized with a magnet for the washing steps. The droplet was then mixed with a chemiluminescent reagent and detection of the accompanying enzymatic reaction was measured on-chip with a photomultiplier tube. The ELISA template, commonly used for performing immunoassays and other enzyme-based biochemical assays, has been adapted for use with the DMF platform for the detection of analytes such as IgE and IgG. 
In one example, a series of bioassays were conducted to establish the quantification capabilities of DMF devices, including an ELISA-based immunoassay for the detection of IgE. Superparamagnetic nanoparticles were immobilized with anti-IgE antibodies and fluorescently labeled aptamers to quantify IgE using an ELISA template. Similarly, for the detection of IgG, IgG can be immobilized onto a DMF chip, conjugated with horseradish-peroxidase (HRP)-labeled IgG, and then quantified through measurement of the color change associated with product formation of the reaction between HRP and tetramethylbenzidine. To further expand the capabilities and applications of DMF immunoassays beyond colorimetric detection (i.e., ELISA, magnetic bead-based assays), electrochemical detection tools (e.g., microelectrodes) have been incorporated into DMF chips for the detection of analytes such as TSH and rubella virus. For example, Rackus et al. integrated microelectrodes onto a DMF chip surface and substituted a previously reported chemiluminescent IgG immunoassay with an electroactive species, enabling detection of rubella virus. They coated magnetic beads with rubella virus, anti-rubella IgG, and anti-human IgG coupled with alkaline phosphatase, which in turn catalyzed an electron transfer reaction that was detected by the on-chip microelectrodes. Mass spectrometry The coupling of digital microfluidics (DMF) and mass spectrometry can largely be categorized into indirect off-line analysis, direct off-line analysis, and in-line analysis, and the main advantages of this coupling are decreased solvent and reagent use, as well as decreased analysis times. Indirect off-line analysis is the usage of DMF devices to combine reactants and isolate products, which are then removed and manually transferred to a mass spectrometer. This approach takes advantage of DMF for the sample preparation step but also introduces opportunities for contamination as manual intervention is required to transfer the sample. In one example of this technique, a Grieco three-component condensation was carried out on chip and was taken off the chip by micropipette for quenching and further analysis. Direct off-line analysis is the usage of DMF devices that have been fabricated and incorporated partially or totally into a mass spectrometer. This process is still considered off-line, however, as some post-reaction procedures may be carried out manually (but on chip), without the use of the digital capabilities of the device. Such devices are most often used in conjunction with MALDI-MS. In MALDI-based direct off-line devices, the droplet must be dried and recrystallized along with matrix – operations that oftentimes require vacuum chambers. The chip with crystallized analyte is then placed into the MALDI-MS for analysis. One issue raised with MALDI-MS coupling to DMF is that the matrix necessary for MALDI-MS can be highly acidic, which may interfere with the on-chip reactions. Inline analysis is the usage of devices that feed directly into mass spectrometers, thereby eliminating any manual manipulation. Inline analysis may require specially fabricated devices and connecting hardware between the device and the mass spectrometer. Inline analysis is often coupled with electrospray ionization. In one example, a DMF chip was fabricated with a hole that led to a microchannel. This microchannel was, in turn, connected to an electrospray ionizer that emitted directly into a mass spectrometer. 
Integration of ambient ionization techniques, where ions are formed outside of the mass spectrometer with little or no treatment, pairs well with the open or semi-open microfluidic nature of DMF and allows easy inline coupling between DMF and MS systems. Ambient ionization techniques such as surface acoustic wave (SAW) ionization generate surface waves on a flat piezoelectric surface that impart enough acoustic energy on the liquid interface to overcome surface tension and desorb ions off the chip into the mass analyzer. Some couplings utilize an external high-voltage pulse source at the physical inlet to the mass spectrometer, but the true role of such additions is uncertain. A significant barrier to the widespread integration of DMF with mass spectrometry is biological contamination, often termed bio-fouling. High-throughput analysis is a significant advantage in the use of DMF systems, but means that they are particularly susceptible to cross contamination between experiments. As a result, the coupling of DMF with mass spectrometry often requires the integration of a variety of methods to prevent cross contamination, such as multiple washing steps, biologically compatible surfactants, and/or superhydrophobic surfaces to prevent droplet adsorption. In one example, a reduction in cross contaminant signal during the characterization of an amino acid required 4–5 wash steps between each sample droplet for the contamination intensity to fall below the limit of detection. Miniature mass spectrometers Conventional mass spectrometers are often large as well as prohibitively expensive and complex in their operation, which has led to the increased attractiveness of miniature mass spectrometers (MMS) for a variety of applications. MMS are optimized towards affordability and simple operation, often forgoing the need for experienced technicians, having a low cost of manufacture, and being small enough in size to allow for the transfer of data collection from the laboratory into the field. These advantages often come at the cost of reduced performance, where MMS resolution, as well as the limits of detection and quantitation, are often barely adequate to perform specialized tasks. The integration of DMF with MMS has the potential for significant improvement of MMS systems by increasing throughput, resolution, and automation, while decreasing solvent cost, enabling lab-grade analysis at a much reduced cost. In one example, the use of a custom DMF system for urine drug testing enabled the creation of an instrument weighing only 25 kg with performance comparable to standard laboratory analysis. Nuclear magnetic resonance spectroscopy Nuclear magnetic resonance (NMR) spectroscopy can be used in conjunction with digital microfluidics (DMF) through the use of NMR microcoils, which are electromagnetic conducting coils that are less than 1 mm in size. Due to their size, these microcoils have several limitations, directly influencing the sensitivity of the machinery they operate within. Microchannel/microcoil interfaces, previous to digital microfluidics, had several drawbacks, such as that many created large amounts of solvent waste and were easily contaminated. In this way, the use of digital microfluidics and its capability to manipulate singlet droplets is promising. 
The interface between digital microfluidics and NMR relaxometry has led to the creation of systems such as those used to detect and quantify the concentrations of specific molecules on microscales, with some such systems using two-step processes in which DMF devices guide droplets to the NMR detection site. Introductory systems of high-field NMR and 2D NMR in conjunction with microfluidics have also been developed. These systems use single-plate DMF devices with NMR microcoils in place of the second plate. Recently, a further modified version of this interface included pulsed field gradient (PFG) units that enabled this platform to perform more sophisticated NMR measurements (e.g. NMR diffusometry, gradient-encoded pulse measurements). This system has been successfully applied to monitoring rapid organic reactions. References Biotechnology Microfluidics
Digital microfluidics
[ "Materials_science", "Biology" ]
8,074
[ "Biotechnology", "nan", "Microfluidics", "Microtechnology" ]
331,597
https://en.wikipedia.org/wiki/Kuratowski%20closure%20axioms
In topology and related branches of mathematics, the Kuratowski closure axioms are a set of axioms that can be used to define a topological structure on a set. They are equivalent to the more commonly used open set definition. They were first formalized by Kazimierz Kuratowski, and the idea was further studied by mathematicians such as Wacław Sierpiński and António Monteiro, among others. A similar set of axioms can be used to define a topological structure using only the dual notion of interior operator. Definition Kuratowski closure operators and weakenings Let be an arbitrary set and its power set. A Kuratowski closure operator is a unary operation with the following properties: A consequence of preserving binary unions is the following condition: In fact if we rewrite the equality in [K4] as an inclusion, giving the weaker axiom [K4''] (subadditivity): then it is easy to see that axioms [K4'] and [K4''] together are equivalent to [K4] (see the next-to-last paragraph of Proof 2 below). includes a fifth (optional) axiom requiring that singleton sets should be stable under closure: for all , . He refers to topological spaces which satisfy all five axioms as T1-spaces in contrast to the more general spaces which only satisfy the four listed axioms. Indeed, these spaces correspond exactly to the topological T1-spaces via the usual correspondence (see below). If requirement [K3] is omitted, then the axioms define a Čech closure operator. If [K1] is omitted instead, then an operator satisfying [K2], [K3] and [K4'] is said to be a Moore closure operator. A pair is called Kuratowski, Čech or Moore closure space depending on the axioms satisfied by . Alternative axiomatizations The four Kuratowski closure axioms can be replaced by a single condition, given by Pervin: Axioms [K1]–[K4] can be derived as a consequence of this requirement: Choose . Then , or . This immediately implies [K1]. Choose an arbitrary and . Then, applying axiom [K1], , implying [K2]. Choose and an arbitrary . Then, applying axiom [K1], , which is [K3]. Choose arbitrary . Applying axioms [K1]–[K3], one derives [K4]. Alternatively, had proposed a weaker axiom that only entails [K2]–[K4]: Requirement [K1] is independent of [M] : indeed, if , the operator defined by the constant assignment satisfies [M] but does not preserve the empty set, since . Notice that, by definition, any operator satisfying [M] is a Moore closure operator. A more symmetric alternative to [M] was also proven by M. O. Botelho and M. H. Teixeira to imply axioms [K2]–[K4]: Analogous structures Interior, exterior and boundary operators A dual notion to Kuratowski closure operators is that of Kuratowski interior operator, which is a map satisfying the following similar requirements: For these operators, one can reach conclusions that are completely analogous to what was inferred for Kuratowski closures. For example, all Kuratowski interior operators are isotonic, i.e. they satisfy [K4'], and because of intensivity [I2], it is possible to weaken the equality in [I3] to a simple inclusion. The duality between Kuratowski closures and interiors is provided by the natural complement operator on , the map sending . This map is an orthocomplementation on the power set lattice, meaning it satisfies De Morgan's laws: if is an arbitrary set of indices and , By employing these laws, together with the defining properties of , one can show that any Kuratowski interior induces a Kuratowski closure (and vice versa), via the defining relation (and ). 
Every result obtained concerning may be converted into a result concerning by employing these relations in conjunction with the properties of the orthocomplementation . further provides analogous axioms for Kuratowski exterior operators and Kuratowski boundary operators, which also induce Kuratowski closures via the relations and . Abstract operators Notice that axioms [K1]–[K4] may be adapted to define an abstract unary operation on a general bounded lattice , by formally substituting set-theoretic inclusion with the partial order associated to the lattice, set-theoretic union with the join operation, and set-theoretic intersections with the meet operation; similarly for axioms [I1]–[I4]. If the lattice is orthocomplemented, these two abstract operations induce one another in the usual way. Abstract closure or interior operators can be used to define a generalized topology on the lattice. Since neither unions nor the empty set appear in the requirement for a Moore closure operator, the definition may be adapted to define an abstract unary operator on an arbitrary poset . Connection to other axiomatizations of topology Induction of topology from closure A closure operator naturally induces a topology as follows. Let be an arbitrary set. We shall say that a subset is closed with respect to a Kuratowski closure operator if and only if it is a fixed point of said operator, or in other words it is stable under , i.e. . The claim is that the family of all subsets of the total space that are complements of closed sets satisfies the three usual requirements for a topology, or equivalently, the family of all closed sets satisfies the following: Notice that, by idempotency [K3], one may succinctly write . [T1] By extensivity [K2], and since closure maps the power set of into itself (that is, the image of any subset is a subset of ), we have . Thus . The preservation of the empty set [K1] readily implies . [T2] Next, let be an arbitrary set of indices and let be closed for every . By extensivity [K2], . Also, by isotonicity [K4'], if for all indices , then for all , which implies . Therefore, , meaning . [T3] Finally, let be a finite set of indices and let be closed for every . From the preservation of binary unions [K4], and using induction on the number of subsets of which we take the union, we have . Thus, . Induction of closure from topology Conversely, given a family satisfying axioms [T1]–[T3], it is possible to construct a Kuratowski closure operator in the following way: if and is the inclusion upset of , then defines a Kuratowski closure operator on . [K1] Since , reduces to the intersection of all sets in the family ; but by axiom [T1], so the intersection collapses to the null set and [K1] follows. [K2] By definition of , we have that for all , and thus must be contained in the intersection of all such sets. Hence follows extensivity [K2]. [K3] Notice that, for all , the family contains itself as a minimal element w.r.t. inclusion. Hence , which is idempotence [K3]. [K4'] Let : then , and thus . Since the latter family may contain more elements than the former, we find , which is isotonicity [K4']. Notice that isotonicity implies and , which together imply . [K4] Finally, fix . Axiom [T2] implies ; furthermore, axiom [T2] implies that . By extensivity [K2] one has and , so that . But , so that all in all . Since then is a minimal element of w.r.t. inclusion, we find . Point 4. ensures additivity [K4]. 
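To make the closure-to-topology direction concrete, the following Python sketch checks the four Kuratowski axioms for one specific operator and then lists the induced closed and open sets, the closed sets being exactly the fixed points of the operator. The operator chosen, c(A) = A ∪ {p} for non-empty A, is simply an instance of the "fixed point" example listed in the Examples section below; the set X = {1, 2, 3} and the point p = 2 are arbitrary choices made for illustration.

```python
from itertools import combinations

X = frozenset({1, 2, 3})
p = 2  # the distinguished point of the example

def closure(A: frozenset) -> frozenset:
    """c(A) = A ∪ {p} for non-empty A, and c(∅) = ∅."""
    return A | {p} if A else A

def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

subsets = powerset(X)

# Check the four Kuratowski axioms on every subset (and pair of subsets).
assert closure(frozenset()) == frozenset()                      # [K1] preserves the empty set
assert all(A <= closure(A) for A in subsets)                     # [K2] extensive
assert all(closure(closure(A)) == closure(A) for A in subsets)   # [K3] idempotent
assert all(closure(A | B) == closure(A) | closure(B)
           for A in subsets for B in subsets)                    # [K4] preserves binary unions

# Closed sets are the fixed points of c; open sets are their complements.
closed_sets = [A for A in subsets if closure(A) == A]
open_sets = [X - A for A in closed_sets]
print(sorted(map(set, closed_sets), key=len))  # the empty set and every subset containing p
print(sorted(map(set, open_sets), key=len))    # the induced topology
```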
Exact correspondence between the two structures In fact, these two complementary constructions are inverse to one another: if is the collection of all Kuratowski closure operators on , and is the collection of all families consisting of complements of all sets in a topology, i.e. the collection of all families satisfying [T1]–[T3], then such that is a bijection, whose inverse is given by the assignment . First we prove that , the identity operator on . For a given Kuratowski closure , define ; then if its primed closure is the intersection of all -stable sets that contain . Its non-primed closure satisfies this description: by extensivity [K2] we have , and by idempotence [K3] we have , and thus . Now, let such that : by isotonicity [K4'] we have , and since we conclude that . Hence is the minimal element of w.r.t. inclusion, implying . Now we prove that . If and is the family of all sets that are stable under , the result follows if both and . Let : hence . Since is the intersection of an arbitrary subfamily of , and the latter is complete under arbitrary intersections by [T2], then . Conversely, if , then is the minimal superset of that is contained in . But that is trivially itself, implying . We observe that one may also extend the bijection to the collection of all Čech closure operators, which strictly contains ; this extension is also surjective, which signifies that all Čech closure operators on also induce a topology on . However, this means that is no longer a bijection. Examples As discussed above, given a topological space we may define the closure of any subset to be the set , i.e. the intersection of all closed sets of which contain . The set is the smallest closed set of containing , and the operator is a Kuratowski closure operator. If is any set, the operators such that are Kuratowski closures. The first induces the indiscrete topology , while the second induces the discrete topology . Fix an arbitrary , and let be such that for all . Then defines a Kuratowski closure; the corresponding family of closed sets coincides with , the family of all subsets that contain . When , we once again retrieve the discrete topology (i.e. , as can be seen from the definitions). If is an infinite cardinal number such that , then the operator such thatsatisfies all four Kuratowski axioms. If , this operator induces the cofinite topology on ; if , it induces the cocountable topology. Properties Since any Kuratowski closure is isotonic, and so is obviously any inclusion mapping, one has the (isotonic) Galois connection , provided one views as a poset with respect to inclusion, and as a subposet of . Indeed, it can be easily verified that, for all and , if and only if . If is a subfamily of , then If , then . Topological concepts in terms of closure Refinements and subspaces A pair of Kuratowski closures such that for all induce topologies such that , and vice versa. In other words, dominates if and only if the topology induced by the latter is a refinement of the topology induced by the former, or equivalently . For example, clearly dominates (the latter just being the identity on ). Since the same conclusion can be reached substituting with the family containing the complements of all its members, if is endowed with the partial order for all and is endowed with the refinement order, then we may conclude that is an antitonic mapping between posets. 
In any induced topology (relative to the subset A) the closed sets induce a new closure operator that is just the original closure operator restricted to A: , for all . Continuous maps, closed maps and homeomorphisms A function is continuous at a point iff , and it is continuous everywhere iff for all subsets . The mapping is a closed map iff the reverse inclusion holds, and it is a homeomorphism iff it is both continuous and closed, i.e. iff equality holds. Separation axioms Let be a Kuratowski closure space. Then is a T0-space iff implies ; is a T1-space iff for all ; is a T2-space iff implies that there exists a set such that both and , where is the set complement operator. Closeness and separation A point is close to a subset if This can be used to define a proximity relation on the points and subsets of a set. Two sets are separated iff . The space is connected iff it cannot be written as the union of two separated subsets. See also Notes References . . . . . External links Alternative Characterizations of Topological Spaces Closure operators Mathematical axioms
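In symbols, the standard closure-operator formulation of the continuity criteria above reads as follows, writing \( f : (X, \mathbf{c}) \to (Y, \mathbf{c}') \) for a map between closure spaces (an assumed notation for readability):

\begin{align*}
&\text{continuity at a point } p: && p \in \mathbf{c}(A) \implies f(p) \in \mathbf{c}'(f(A)) \quad \text{for all } A \subseteq X,\\
&\text{continuity everywhere:} && f(\mathbf{c}(A)) \subseteq \mathbf{c}'(f(A)) \quad \text{for all } A \subseteq X,\\
&\text{closed map:} && \mathbf{c}'(f(A)) \subseteq f(\mathbf{c}(A)) \quad \text{for all } A \subseteq X,\\
&\text{homeomorphism (for a bijection):} && f(\mathbf{c}(A)) = \mathbf{c}'(f(A)) \quad \text{for all } A \subseteq X.
\end{align*}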
Kuratowski closure axioms
[ "Mathematics" ]
2,703
[ "Mathematical logic", "Order theory", "Closure operators", "Mathematical axioms" ]
7,037,059
https://en.wikipedia.org/wiki/Nudity%20and%20sexuality
The relationship between nudity and sexuality can be complicated. When people are nude, this often leads to sexual arousal, which is why indecent exposure is often considered a crime. There are also social movements to promote a greater degree of nudity, such as the topfreedom movement to promote female toplessness, as well as the movement to promote breastfeeding in public. Furthermore, some psychiatric disorders that can lead to greater nudity include exhibitionistic disorder, voyeuristic disorder, and gymnophobia. Background Nudity Nudity is one of the physiological characteristics of humans, who alone among primates evolved to be effectively hairless. Human sexuality includes the physiological, psychological, and social aspects of sexual feelings and behaviors. In many societies, a strong link between nudity and sexuality is taken for granted. Other societies maintain their traditional practices of being completely or partially naked in social as well as private situations, such as going to a beach or spa. The meaning of nudity and sexuality remains ambivalent, often leading to cultural misunderstandings and psychological problems. Sexualization The American Psychological Association (APA) defines sexualization as limiting a person's value to sexual appeal to the exclusion of other characteristics, and equating physical attractiveness with being sexual. A person may also be sexually objectified, made into a object for others' sexual use, rather than seen as a person with the capacity for independent action and decision making; or sexuality is inappropriately imposed upon a person. Being sexualized is particularly damaging to young people who are in the process of developing their own self-image. Girls may have sexualized expectations imposed upon them, or internalize norms that lead to self-sexualization. Sexualization of girls includes both age-inappropriate "sexy" attire for girls, and adult models dressing as girls. In movies and television, women are shown nude much more frequently than men, and generally in the context of sexual behavior. Some see the APA position as viewing sexual images as uniformly negative, and overestimating the influence of these images on young people by assuming that exposure leads directly to negative effects, as if it were a disease. Studies also fail to address the effect of sexual images on boys which influences how they view their own masculinity and appropriate sexual relationships. While there has been considerable media and political discussion of sexualization, there has been little psychological research on what effect media images actually have on the well-being of young people, for example how and to what degree sexual objectification is internalized, becoming self-evaluation. In interviews with Dutch pre-teens, the effects are complicated given the general liberal attitudes toward sexuality, including the legalization of prostitution, which is highly visible. Researchers see the cultural force of commodification (or "pornification") as resulting in the sexualization of athletic bodies, negating the naturalness and beauty of nudity. This is in contrast to the sacredness of the nude athlete in the ancient world, particularly Greece; and the aesthetic appreciation of the nude in art. Sexual response to social nudity The link between the nude body and a sexual response is reflected in the legal prohibition of indecent exposure in the majority of societies. 
Worldwide, some societies recognize certain places and activities that, although public, are appropriate for partial or complete nudity. These include societies that maintain traditional norms regarding nudity, as well as modern societies that have large numbers of people who have adopted naturism in recreational activities. Naturists typically adopt a number of behaviors, such as refraining from touch, in order to avoid sexual responses while participating in nude activities. Many nude beaches serve as an example of this. Studies have shown that humans are biologically disposed to react more strongly to nudity than to any other presentation of the body. This biological reaction includes sexual arousal, although sexuality presents, and is wired, differently in each sex. Because of this, discussion often separates the sexes or focuses on one group's perspective of sexuality with less emphasis on another's, and ideas can become skewed. There are also cultural differences in attitudes toward nudity and sexuality, which often come from a society's history, religious views, or region. Naturism and sex Some naturists do not maintain this non-sexual atmosphere. In a 1991 article in Off Our Backs, Nina Silver presents an account of mainstream sexual culture's intrusion into some American naturist groups. Nudist resorts may attract misogynists or pedophiles who are not always dealt with properly, and some resorts may cater to "swingers" or have sexually provocative events to generate revenue or attract members. Nudity movements In many societies, the breast continues to be associated with both nurturing babies and sexuality. The "topfreedom" movement promotes equal rights for women to be naked above the waist in public under the same circumstances that are considered socially acceptable for men to do so. Breastfeeding in public is forbidden in some jurisdictions, not regulated in others, and protected as a legal right in still others. Where public breastfeeding is a legal right, some mothers may be reluctant to breastfeed, and some people may object to the practice. Psychological disorders of bodily display Exhibitionistic disorder is a condition marked by the urge, fantasy, or act of exposing one's genitals to non-consenting people, particularly strangers; and voyeuristic disorder is a sexual interest in, or practice of, spying on people engaged in intimate behaviors like undressing or sexual activity. While similar terms may be used loosely to refer to everyday activity, these feelings and behaviors are indicative of a mental disorder only if they interfere with normal functioning or well-being, or involve causing discomfort or alarm to others. Much rarer is gymnophobia, an abnormal and persistent fear of nudity. See also Nudity in film Sex in film References Sources Books Journal articles News Websites Nudity Sexuality
Nudity and sexuality
[ "Biology" ]
1,193
[ "Behavior", "Sexuality", "Sex" ]
7,037,175
https://en.wikipedia.org/wiki/Ordem%20dos%20Engenheiros
The Ordem dos Engenheiros (OE) is the regulatory and licensing body for the engineering profession in Portugal. It is headquartered in Lisbon, and has several regional branches in other Portuguese cities. The OE was established by law in 1936. It succeeded the Portuguese Association of Civil Engineers, founded nearly 70 years earlier. The OE is a member of many international engineering organizations, including general engineering ones (e.g. FEANI) and those for specific engineering disciplines (e.g. ECCE, EUREL, EFCE). The OE's mission is to contribute to the progress of engineering by supporting the efforts of its members in scientific, professional and social areas, as well as to ensure compliance with professional regulations and ethics. It is illegal to provide engineering services or sign engineering projects in Portugal without being a member of the OE. However, many other professionals in engineering (such as technical engineers, short-cycle degree engineers, or engineers graduating from unaccredited courses) are allowed to work in the field as long as they do not provide engineering services or sign engineering projects, and they cannot officially use the title "engineer". The OE is the entity responsible for the accreditation of engineering degrees and engineering courses in Portugal. Engineers graduating with an accredited degree are exempt from the licensing exams conducted by the Order. According to the chairman of the OE, only 30 to 50 percent of the candidates with an unaccredited degree pass the licensing exams, depending on the particular engineering field. Over three hundred engineering degrees are awarded in Portugal by public universities, public polytechnic schools, and private institutions. However, only about one hundred of these are accredited degrees. Accreditation A fully chartered engineer (Engenheiro) in Portugal formerly had to complete a compulsory five-year course known as the licenciatura (licentiate), which was granted exclusively by universities. Only engineers holding a licenciatura diploma granted by a university were qualified to develop any kind of engineering project and were universally recognized by the Engineers Association of Portugal (Ordem dos Engenheiros). The polytechnic engineering institutions, created after 1974, used to award the professional title of (Technical Engineer), conferred after a three-year course; the degree was known as the bacharelato. Polytechnic institutions conferred three-year bacharelato degrees in several technical engineering specializations until the late 1990s. At that time new legal decrees were adopted by the Portuguese State (Administrative Rule 413A/98 of 17 July 1998), and polytechnic institutions started to award 3 + 2 licenciaturas bietápicas (a bacharelato plus one or two extra years, conferring the licenciatura degree - a degree that had previously been awarded exclusively by the universities). In the mid-2000s those institutions adopted new, more selective admission rules, imposed on every Portuguese higher education institution by the State, which excluded for the first time in their history applicants with negative (less than 95/200) admission marks (in Portugal, admission marks to higher education institutions are based on a combination of high school marks and entrance exam results, and competition is based on a numerus clausus system).
However, in many cases, polytechnic courses at several institutions across the country started to accept entrance exams in fields not directly related to the course (for instance, an electrical engineering or computer engineering course allows a biology entrance exam instead of mathematics and/or physics, unlike what is seen in most universities for the same engineering fields). This is the main reason why many engineering courses offered by several Portuguese polytechnic institutions and a few universities are not currently accredited by the Ordem dos Engenheiros. This is not exclusive to polytechnic engineering: in other polytechnic fields, such as accountancy and management institutes or schools, history, geography, or even Portuguese language entrance exams are accepted instead of mathematics and economics, unlike what is allowed for university courses in similar fields, although some departments of certain universities have adopted the same criteria to counter the increasing number of places left vacant every year. Today, after the many reforms and changes in higher education that occurred from 1998 through the 2000s, the formal differences between polytechnic and university licenciatura degrees in engineering have largely disappeared, and due to the Bologna process both types of graduate should be recognized equally across Europe. However, there are many engineering courses whose degrees are still not recognized by the Ordem dos Engenheiros (the highest Portuguese authority in accreditation of professional engineers), especially engineering courses conferred by several polytechnical institutes and many private institutions. Among the oldest recognized and most extensively accredited engineering courses in Portugal are the engineering degrees awarded by the state-run universities. After the large reforms and upgrades of 1998 to the 2000s, some polytechnic engineering licenciatura degrees offered by the largest state-run polytechnic institutes have been accredited in the same way, with official recognition by the Ordem dos Engenheiros. See also Educational accreditation Higher education in Portugal Ordem dos Advogados Ordem dos Biólogos References External links WEC 2008 – World Engineers' Convention - in Brazil (Congresso Mundial de Engenheiros 2008 - no Brasil) Professional associations based in Portugal Engineering societies
Ordem dos Engenheiros
[ "Engineering" ]
1,079
[ "Engineering societies" ]
7,038,870
https://en.wikipedia.org/wiki/Recombination-activating%20gene
The recombination-activating genes (RAGs) encode parts of a protein complex that plays important roles in the rearrangement and recombination of the genes encoding immunoglobulin and T cell receptor molecules. There are two recombination-activating genes RAG1 and RAG2, whose cellular expression is restricted to lymphocytes during their developmental stages. The enzymes encoded by these genes, RAG-1 and RAG-2, are essential to the generation of mature B cells and T cells, two types of lymphocyte that are crucial components of the adaptive immune system. Function In the vertebrate immune system, each antibody is customized to attack one particular antigen (foreign proteins and carbohydrates) without attacking the body itself. The human genome has at most 30,000 genes, and yet it generates millions of different antibodies, which allows it to be able to respond to invasion from millions of different antigens. The immune system generates this diversity of antibodies by shuffling, cutting and recombining a few hundred genes (the VDJ genes) to create millions of permutations, in a process called V(D)J recombination. RAG-1 and RAG-2 are proteins at the ends of VDJ genes that separate, shuffle, and rejoin the VDJ genes. This shuffling takes place inside B cells and T cells during their maturation. RAG enzymes work as a multi-subunit complex to induce cleavage of a single double stranded DNA (dsDNA) molecule between the antigen receptor coding segment and a flanking recombination signal sequence (RSS). They do this in two steps. They initially introduce a ‘nick’ in the 5' (upstream) end of the RSS heptamer (a conserved region of 7 nucleotides) that is adjacent to the coding sequence, leaving behind a specific biochemical structure on this region of DNA: a 3'-hydroxyl (OH) group at the coding end and a 5'-phosphate (PO4) group at the RSS end. The next step couples these chemical groups, binding the OH-group (on the coding end) to the PO4-group (that is sitting between the RSS and the gene segment on the opposite strand). This produces a 5'-phosphorylated double-stranded break at the RSS and a covalently closed hairpin at the coding end. The RAG proteins remain at these junctions until other enzymes (notably, TDT) repair the DNA breaks. The RAG proteins initiate V(D)J recombination, which is essential for the maturation of pre-B and pre-T cells. Activated mature B cells also possess two other remarkable, RAG-independent phenomena of manipulating their own DNA: so-called class-switch recombination (AKA isotype switching) and somatic hypermutation (AKA affinity maturation). Current studies have indicated that RAG-1 and RAG-2 must work in a synergistic manner to activate VDJ recombination. RAG-1 was shown to inefficiently induce recombination activity of the VDJ genes when isolated and transfected into fibroblast samples. When RAG-1 was cotransfected with RAG-2, recombination frequency increased by a 1000-fold. This finding has fostered the newly revised theory that RAG genes may not only assist in VDJ recombination, but rather, directly induce the recombinations of the VDJ genes. Structure As with many enzymes, RAG proteins are fairly large. For example, mouse RAG-1 contains 1040 amino acids and mouse RAG-2 contains 527 amino acids. The enzymatic activity of the RAG proteins is concentrated largely in a core region; Residues 384–1008 of RAG-1 and residues 1–387 of RAG-2 retain most of the DNA cleavage activity. 
The RAG-1 core contains three acidic residues (D600, D708, and E962) in what is called the DDE motif, the major active site for DNA cleavage. These residues are critical for nicking the DNA strand and for forming the DNA hairpin. Residues 384–454 of RAG-1 comprise a nonamer-binding region (NBR) that specifically binds the conserved nonamer (9 nucleotides) of the RSS and the central domain (amino acids 528–760) of RAG-1 binds specifically to the RSS heptamer. The core region of RAG-2 is predicted to form a six-bladed beta-propeller structure that appears less specific than RAG-1 for its target. Cryo-electron microscopy structures of the synaptic RAG complexes reveal a closed dimer conformation with generation of new intermolecular interactions between two RAG1-RAG2 monomers upon DNA binding, compared to the Apo-RAG complex which constitutes as an open conformation. Both RAG1 molecules in the closed dimer are involved in the cooperative binding of the 12-RSS and 23-RSS intermediates with base specific interactions in the heptamer of the signal end. The first base of the heptamer in the signal end is flipped out to avoid the clash in the active center. Each coding end of the nicked-RSS intermediate is stabilized exclusively by one RAG1-RAG2 monomer with non-specific protein-DNA interactions. The coding end is highly distorted with one base flipped out from the DNA duplex in the active center, which facilitates the hairpin formation by a potential two-metal ion catalytic mechanism. The 12-RSS and 23-RSS intermediates are highly bent and asymmetrically bound to the synaptic RAG complex with the nonamer binding domain dimer tilts towards the nonamer of the 12-RSS but away from the nonamer of the 23-RSS, which emphasizes the 12/23 rule. Two HMGB1 molecules bind at each side of 12-RSS and 23-RSS to stabilize the highly bent RSSs. These structures elaborate the molecular mechanisms for DNA recognition, catalysis and the unique synapsis underlying the 12/23 rule, provide new insights into the RAG-associated human diseases, and represent a most complete set of complexes in the catalytic pathways of any DDE family recombinases, transposases or integrases. Evolution Based on core sequence homology, it is believed that RAG1 evolved from a transposase from the Transib superfamily. No Transib family members include an N-terminal sequence found in RAG1 suggesting the N-terminal of RAG1 came from a separate element. The N-terminal region of RAG1 has been found in the transposable element N-RAG-TP in the sea slug, Aplysia californica, which contains the entire RAG1 N-terminal. It is likely that the full RAG1 structure was derived from the recombination between a Transib and the N-RAG-TP transposon. A transposon with RAG2 arranged next to RAG1 has been identified in the purple sea urchin. Active Transib transposons with both RAG1 and RAG2 ("ProtoRAG") has been discovered in B. belcheri (Chinese lancelet) and Psectrotarsia flava (a moth). The terminal inverted repeats (TIR) in lancelet ProtoRAG have a heptamer-spacer-nonamer structure similar to that of RSS, but the moth ProtoRAG lacks a nonamer. The nonamer-binding regions and the nonamer sequences of lancelet ProtoRAG and animal RAG are different enough to not recognize each other. The structure of the lancelet protoRAG has been solved (), providing some understanding on what changes lead to the domestication of RAG genes. 
Although the transposon origins of these genes are well-established, there is still no consensus on when the ancestral RAG1/2 locus became present in the vertebrate genome. Because agnathans (a class of jawless fish) lack a core RAG1 element, it was traditionally assumed that RAG1 invaded after the agnathan/gnathostome split 1001 to 590 million years ago (MYA). However, the core sequence of RAG1 has been identified in the echinoderm Strongylocentrotus purpuratus (purple sea urchin), the amphioxi Branchiostoma floridae (Florida lancelet). Sequences with homology to RAG1 have also been identified in Lytechinus veriegatus (green sea urchin), Patiria minata (sea star), the mollusk Aplysia californica, and protostomes including oysters, mussels, ribbon worms, and the non-bilaterian cnidarians. These findings indicate that the Transib family transposon invaded multiple times in non-vertebrate species, and invaded the ancestral jawed vertebrate genome about 500 MYA. It is hypothesized that the absence of RAG-like genes in jawless vertebrates and urochordates is due to horizontal gene transfer or gene loss in certain phylogenetic groups due to conventional vertical transmission. Recent analysis has shown the RAG phylogeny to be gradual and directional, suggesting an evolutionary path that relies on vertical transmission. This hypothesis suggests that the RAG1/2-like pair may have been present in its current form in most metazoan lineages and was lost in the jawless vertebrate and urochordate lineages. There is no evidence that the V(D)J recombination system arose earlier than the vertebrate lineage. It is currently hypothesized that the invasion of RAG1/2 is the most important evolutionary event in terms of shaping the gnathostome adaptive immune system vs. the agnathan variable lymphocyte receptor system. Selective pressure It is still unclear what forces led to the development of a RAG1/2-mediated immune system exclusively in jawed vertebrates and not in any invertebrate species that also acquired the RAG1/2-containing transposon. Current hypotheses include two whole-genome duplication events in vertebrates, which would provide the genetic raw material for the development of the adaptive immune system, and the development of endothelial tissue, greater metabolic activity, and a decreased blood volume-to-body weight ratio, all of which are more specialized in vertebrates than invertebrates and facilitate adaptive immune responses. See also Omenn syndrome Severe combined immunodeficiency References Further reading External links A simple explanation of recombination activating gene for the general reader. Immune system Lymphocytes
Recombination-activating gene
[ "Biology" ]
2,254
[ "Immune system", "Organ systems" ]
7,039,259
https://en.wikipedia.org/wiki/History%20of%20metallurgy%20in%20China
Metallurgy in China has a long history, with the earliest metal objects in China dating back to around 3,000 BCE. The majority of early metal items found in China come from the North-Western Region (mainly Gansu and Qinghai, 青海). China was the earliest civilization to use the blast furnace and produce cast iron. Copper Archaeological evidence indicates that the earliest metal objects in China were made in the late fourth millennium BCE. Copper was generally the earliest metal to be used by humanity, and was used in China since at least 3000 BCE. Early metal-using communities have been found at the Qijia and Siba sites in Gansu. The metal knives and axes recovered in Qijia apparently point to some interactions with Siberian and Central Asian cultures, in particular with the Seima-Turbino complex, or the Afanasievo culture. Archeological evidence points to plausible early contact between the Qijia culture and Central Asia. Similar sites have been found in Xinjiang in the west and Shandong, Liaoning and Inner Mongolia in the east and north. The Central Plain sites associated with the Erlitou culture also contain early metalworks. Copper manufacturing, more complex than jade working, gradually appeared in the Yangshao period (5000–3000 BCE). Jiangzhai is the only place where copper artifacts were found in the Banpo culture. Archaeologists have found remains of copper metallurgy in various cultures from the late fourth to the early third millennia BCE. These include the copper-smelting remains and copper artifacts of the Hongshan culture (4700–2900) and copper slag at the Yuanwozhen site. This indicates that inhabitants of the Yellow River valley had already learned how to make copper artifacts by the later Yangshao period. The Qijia culture (c. 2500–1900) of Qinghai, Gansu, and western Shaanxi produced copper and bronze utilitarian items and gold, copper, and bronze ornaments. The earliest metalworks in this region are found at a Majiayao site at Linjia, Dongxiang, Gansu. "Their dates range from 2900 to 1600 BCE. These metal objects represent the Majiayao 馬家窯 type of the Majiayao culture (c. 3100–2700 BCE), Zongri 宗日 Culture (c. 3600–2050 BCE), Machang 馬廠 Type (c. 2300–2000 BCE), Qijia 齊家 Culture (c. 2050–1915 BCE), and Siba 四壩 Culture (c. 2000–1600 BCE)." At Dengjiawan, in the Shijiahe site complex in Hubei, some pieces of copper were discovered; they are the earliest copper objects discovered in southern China. The Linjia site (林家遺址, Línjiā yízhǐ) has the earliest evidence for bronze in China, dating to c. 3000 BCE. Bronze Bronze technology was imported to China from the steppes. The oldest bronze object found in China was a knife found at a Majiayao culture site in Dongxiang, Gansu, and dated to 2900–2740 BC. Further copper and bronze objects have been found at Machang-period sites in Gansu. Metallurgy spread to the middle and lower Yellow River region in the late 3rd millennium BC. Contacts between the Afanasievo culture and the Majiayao culture and the Qijia culture have been considered for the transmission of bronze technology. From around 2000 BCE, cast bronze objects such as the socketed spear with single side hook were imported and adapted from the Seima-Turbino culture. The Erlitou culture (c. 1900 – 1500 BCE), Shang dynasty (c. 1600 – 1046 BCE) and Sanxingdui culture (c. 1250 – 1046 BCE) of early China used bronze vessels for rituals (see Chinese ritual bronzes) as well as farming implements and weapons. 
By 1500 BCE, excellent bronzes were being made in China in large quantities, partly as a display of status, and as many as 200 large pieces were buried with their owner for use in the afterlife, as in the Tomb of Fu Hao, a Shang queen. In the tomb of the first Qin Emperor and multiple Warring States period tombs, extremely sharp swords and other weapons were found, coated with chromium oxide, which made the weapons rust resistant. The layer of chromium oxide used on these swords was 10 to 15 micrometers thick and has left them in pristine condition to this day. Chromium was first scientifically attested in the 18th century. New breakthroughs in metallurgy, such as gilt-bronze swords, appeared south of the Yangzi River in China's southeastern region during the Warring States period. Section-mold casting There were two bronze casting techniques in early China, namely the section-mold process and the lost-wax process. The earliest bronze ware found in China is the bronze knife (F20:18) unearthed at the Majiayao site in Linjia, Dongxiang, Gansu, and dated to about 3000 BC. This knife was made with the section-mold process, being cast from two joined mold sections. The section-mold process was the bronze casting method commonly used in the Shang dynasty. Clay is selected and, after sorting, filtering, washing, settling and other procedures, is left to firm to a moderate hardness for later use; the clay is then shaped according to the form of the vessel to be made. There are two types of mold: the inner mold and the outer mold. The inner mold carries only the shape of the bronze ware, without decoration; the outer mold must take into account how it will later be divided into sections for casting, that is, how the clay model is blocked out, and the decoration and inscriptions of the bronze ware are also engraved on it. After the clay molds are finished, they are left to dry in a cool, shaded place and are then fired in a furnace; once fired, they become the pottery molds unearthed in modern archaeological excavations. After firing, the pottery mold is not removed from the furnace immediately. Once the copper furnace has melted the required copper, the still-warm pottery mold is taken out and the molten copper is poured in. In this way the temperature difference between the molten copper and the pottery mold is small, the mold is less likely to crack, and the quality of the finished product is relatively high. After the copper has been poured, the pottery molds are removed section by section, following the blocks from which they were made; if they cannot be removed, they are broken off with a hammer. The bronze is then released and, after grinding, becomes the finished product. Lost-wax casting According to some scholars, lost-wax casting was used in China already during the Spring and Autumn period (770 – 476 BCE), although this is often disputed. The lost-wax method is used in most parts of the world. As the name suggests, the lost-wax method uses wax as the model, which is melted away by heating in order to cast the bronze ware: a wax model is made and its outer layer is coated with clay, the wax is "lost" by heating it so that it flows out, and molten copper is then poured in to fill the cavity left by the wax model. The development and spread of the lost-wax method in the West never stopped, but the main bronze casting method of the Bronze Age in China was the section-mold process.
When the lost-wax method was introduced into China is also a topic of academic discussion, but there is no doubt that the lost-wax method already existed in China during the Spring and Autumn period. In 1978, the Bronze Zun-Pan unearthed from the tomb of Marquis Yi of Zeng in Leigudun, Suixian County, Hubei Province, was found to have been made using a mixed process of the section-mold method and the lost-wax method. Iron Introduction The early Iron Age in China began before 1000 BCE, with the introduction of ironware, such as knives, swords, and arrowheads, from the west into Xinjiang, before it further diffused to Qinghai and Gansu. In 2008, two iron fragments were excavated at the Mogou site, in Gansu. They have been dated to the 14th century BCE, belonging to the period of the Siwa culture. One of the fragments was made of bloomery iron rather than meteoritic iron. Cast iron Cast iron farm tools and weapons were widespread in China by the 5th century BC, employing workforces of over 200 men in iron smelters from the 3rd century onward. The earliest known blast furnaces are attributed to the Han dynasty in the 1st century AD. These early furnaces had clay walls and used phosphorus-containing minerals as a flux. Chinese blast furnaces ranged from around two to ten meters in height, depending on the region. The largest ones were found in modern Sichuan and Guangdong, while the "dwarf" blast furnaces were found in Dabieshan. In construction, both are at around the same level of technological sophistication. There is no evidence of the bloomery in China after the appearance of the blast furnace and cast iron. In China, blast furnaces produced cast iron, which was then either converted into finished implements in a cupola furnace, or turned into wrought iron in a fining hearth. If iron ores are heated with carbon to 1420–1470 K, a molten liquid is formed, an alloy of about 96.5% iron and 3.5% carbon. This product is strong and can be cast into intricate shapes, but is too brittle to be worked unless the product is decarburized to remove most of the carbon. The vast majority of Chinese iron manufacture, from the late Zhou dynasty onward, was of cast iron. However, forged swords began to be made in the Warring States period: "Earliest iron and steel Jian also appear, made by the earliest and most basic forging and folding techniques." Iron would become, by around 300 BCE, the preferred metal for tools and weapons in China. The primary advantage of the early blast furnace was in large-scale production, making iron implements more readily available to peasants. Cast iron is more brittle than wrought iron or steel, which required additional fining and then cementation or co-fusion to produce, but for menial activities such as farming it sufficed. By using the blast furnace, it was possible to produce larger quantities of tools such as ploughshares more efficiently than with the bloomery. In areas where quality was important, such as warfare, wrought iron and steel were preferred. Nearly all Han period weapons are made of wrought iron or steel, with the exception of axe-heads, of which many are made of cast iron. The effectiveness of the Chinese human- and horse-powered blast furnaces was enhanced during this period by the engineer Du Shi (c. AD 31), who applied the power of waterwheels to piston-bellows used in forging cast iron. Early water-driven reciprocators for operating blast furnaces were built according to the structure of horse-powered reciprocators that already existed.
That is, the circular motion of the wheel, be it horse driven or water driven, was transferred by the combination of a belt drive, a crank-and-connecting-rod, other connecting rods, and various shafts, into the reciprocal motion necessary to operate a push bellow. Donald Wagner suggests that early blast furnace and cast iron production evolved from furnaces used to melt bronze. Certainly, though, iron was essential to military success by the time the State of Qin had unified China (221 BC). Usage of the blast and cupola furnace remained widespread during the Song and Tang dynasties. By the 11th century, the Song dynasty Chinese iron industry made a switch of resources from charcoal to coke in casting iron and steel, sparing thousands of acres of woodland from felling. This may have happened as early as the 4th century AD. Blast furnaces were also later used to produce gunpowder weapons such as cast iron bomb shells and cast iron cannons during the Song dynasty. Middle Ages Shen Kuo's written work of 1088 contains, among other early descriptions of inventions, a method of repeated forging of cast iron under a cold blast similar to the modern Bessemer process. Chinese metallurgy was widely practiced during the Middle Ages; during the 11th century, the growth of the iron industry caused vast deforestation due to the use of charcoal in the smelting process. To remedy the problem of deforestation, the Song Chinese discovered how to produce coke from bituminous coal as a substitute for charcoal. Although hydraulic-powered bellows for heating the blast furnace had been written about since Du Shi's (d. 38) invention of them in the 1st century CE, the first known illustration of a bellows in operation is found in a book written in 1313 by Wang Zhen (fl. 1290–1333). Gold and silver Gold-crafting technology developed in Northwest China during the early Iron Age, following the arrival of new technological skills from the Central Asian steppes, even before the establishment of the Xiongnu (209 BCE-150 CE). These technological and artistic exchanges attest to the magnitude of communication networks between China and the Mediterranean, even before the establishment of the Silk Road. The sites of Dongtalede (Ch: 东塔勒德, 9th–7th century BCE) in Xinjiang, or Xigoupan (Ch:西沟畔, 4th–3rd century BCE) in the Ordos region of Inner Mongolia, are known for numerous artifacts reminiscent of the Scytho-Siberian art of Central Asia. During the Qing dynasty the gold and silver smiths of Ningbo were noted for the delicacy and tastefulness of their work. Cultural significance Chinese mythology generally reflects a time when metallurgy had long been practiced. According to the Romanian anthropologist, orientalist, and philosopher Mircea Eliade, the Iron Age produced a large number of rites, myths and symbols; the blacksmith was the main agent of diffusion of mythology, rites and metallurgical mysteries. The secret knowledge of metallurgists and their powers made them founders of the human world and masters of the spirit world. This metallurgical model was reinterpreted again by Taoist alchemists. Some metalworkers illustrate the close relationship between Chinese mystical and sovereign power and the mining and metallurgy industries. Although the name Huangdi is absent from Shang or Zhou inscriptions, it appears in the Spring and Autumn period's Guoyu and Zuo zhuan. 
According to Mitarai (1984), Huangdi may have lived in early antiquity and led a regional ethnic group who worshiped him as a deity; "The Yellow Emperor fought Chiyou at Mount Kunwu whose summit was covered with a large quantity of red copper". "The seventy-two brothers of Chiyou had copper heads and iron fronts; they ate iron and stones [...] In the province of Ji where Chiyou is believed to have lived (Chiyou shen), when we dig the earth and we find skulls that seem to be made of copper and iron, they are identified as the bones of Chiyou." Chiyou was the leader of the indigenous Sanmiao (or Jiuli) tribes who defeated Xuanyuan, the future Yellow Emperor. Chiyou, a rival of the Yellow Emperor, belonged to a clan of blacksmiths. The advancement of weaponry is sometimes attributed to the Yellow Emperor and Chiyou, and Chiyou reportedly discovered the process of casting. Kunwu is associated with a people, a royal blacksmith, a mountain which produces metals, and a sword. Kui, a master of music and dance cited by Shun, was succeeded by Yu the Great. Yu the Great, reported founder of the Xia dynasty (China's first), spent many years working on flood control and is credited with casting the Nine Tripod Cauldrons. Helped by dragons descended from heaven, he died on Mount Xianglu in Zhejiang. In these myths and legends, mines and forges are associated with leadership. See also Economy of China Economic history of China before 1912 Economic history of China (1912–49) References Citations Sources Public domain Economic history of China History of science and technology in China China
History of metallurgy in China
[ "Chemistry", "Materials_science" ]
3,370
[ "Metallurgy", "History of metallurgy" ]
7,039,617
https://en.wikipedia.org/wiki/Grand%20Prix%20de%20l%27urbanisme
The Grand prix de l'urbanisme is awarded for urban planning in France by the Ministry for Ecology, Energy, Sustainable Development and Planning. The prize has been awarded annually since 1989, except during the period from 1994 until 1998, when it was not awarded. A book is published each year, detailing the work of the award winner and other nominees. Prize winners References External links Grand prix de l'urbanisme Architecture awards French awards Urban planning in France
Grand Prix de l'urbanisme
[ "Engineering" ]
94
[ "Architecture stubs", "Architecture" ]
7,040,363
https://en.wikipedia.org/wiki/Resource%20breakdown%20structure
In project management, the resource breakdown structure (RBS) is a hierarchical list of resources related by function and resource type that is used to facilitate planning and controlling of project work. The Resource Breakdown Structure includes, at a minimum, the personnel resources needed for successful completion of a project, and preferably contains all resources on which project funds will be spent, including personnel, tools, machinery, materials, equipment and fees and licenses. Money is not considered a resource in the RBS; only those resources that will cost money are included. Definition Assignable resources, such as personnel, are typically defined from a functional point of view: "who" is doing the work is identified based on their role within the project, rather than their department or role within the parent companies. In some cases, a geographic division may be preferred. Each descending (lower) level represents an increasingly detailed description of the resource until small enough to be used in conjunction with the work breakdown structure (WBS) to allow the work to be planned, monitored and controlled. Example In common practice, only non-expendable (i.e., durable goods) resources are listed in an RBS. Example of a hierarchy of resources:

1. Engineering
  1.1 Mr. Fred Jones, Manager
    1.1.2 Ms. Jane Wagner, Architectural Lead
    1.1.3 Software Design Team and Resources
      1.1.3.1 Mr. Gary Neimi, Software Engineer
      1.1.3.2 Ms. Jackie Toms, UI Designer
      1.1.3.3 Standard Time Timesheet (timesheet and project tracking software)
      1.1.3.4 Microsoft Project (project scheduling)
      1.1.3.5 SQL Server (database)
    1.1.4 Hardware Architecture Team and Resources
      1.1.4.1 Ms. Korina Johannes, Resource Manager
      1.1.4.2 Mr. Yan Xu, Testing Lead
      1.1.4.3 Test Stand A
        1.1.4.3.1 SAN Group A
        1.1.4.3.2 Server A1
      1.1.4.4 Test Stand B
        1.1.4.4.1 SAN Group B
        1.1.4.4.2 Server B1

Both human and physical resources, such as software and test instruments, are listed in the example above. The nomenclature is a numbered, hierarchical list of indented layers, with each level adding an additional digit to the label. For example, the numeric labels (1.1, 1.1.2) make each resource uniquely identifiable. Use in Microsoft Project The RBS (also known as the User Breakdown Structure) fields in a Project file are specifically coded by the administrator of that project, usually the Project Manager. Sometimes a PM Administrator is designated in larger projects who will manage the Project tool itself. This field is called the Enterprise Resource Outline Code and it falls into one of two categories, RBS (resource field) and RBS (assignment field). These are high-level fields that require managers who know how they will be used within the organization. See also Business architecture List of project management topics Microsoft Project Project planning References Schedule (project management) Enterprise architecture
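To make the hierarchical numbering described above concrete, here is a minimal sketch in Python of how such a structure and its codes can be represented and generated; the resource names are hypothetical examples, and the code is not taken from Microsoft Project or any project-management standard.

# Minimal illustrative sketch of a resource breakdown structure (RBS).
# All resource names are hypothetical examples.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Resource:
    name: str                                    # functional description of the resource
    children: List["Resource"] = field(default_factory=list)

def print_rbs(node: Resource, code: str = "1") -> None:
    """Print every resource with its hierarchical code (1, 1.1, 1.1.2, ...)."""
    print(f"{code} {node.name}")
    for i, child in enumerate(node.children, start=1):
        print_rbs(child, f"{code}.{i}")

rbs = Resource("Engineering", [
    Resource("Engineering Manager"),
    Resource("Software Design Team", [
        Resource("Software Engineer"),
        Resource("Scheduling-software licence"),
    ]),
    Resource("Test Stand A", [
        Resource("SAN Group A"),
        Resource("Server A1"),
    ]),
])

print_rbs(rbs)

Each additional tree level appends one more numbered segment to the parent's code, which is what makes every resource in the structure uniquely identifiable.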
Resource breakdown structure
[ "Physics" ]
656
[ "Spacetime", "Physical quantities", "Time", "Schedule (project management)" ]
7,040,504
https://en.wikipedia.org/wiki/1026%20Ingrid
1026 Ingrid, provisional designation , is a stony Florian asteroid and long-lost minor planet (1923–1986) from the inner regions of the asteroid belt, approximately 7 kilometers in diameter. It was discovered by Karl Reinmuth at Heidelberg in 1923, and later named after Ingrid, niece and godchild of astronomer Albrecht Kahrstedt. Discovery and recovery Ingrid was discovered on 13 August 1923, by German astronomer Karl Reinmuth at the Heidelberg-Königstuhl State Observatory in southwest Germany. The asteroid was observed for only a few days during August 1923, before it became a lost minor planet for nearly 63 years until its recovery by Japanese astronomer Syuichi Nakano in 1986. Nakano was able to show that Ingrid had been observed and provisionally designated several times during its lost period: as at the discovering Heidelberg Observatory in October 1957, possibly as at Goethe Link Observatory in April 1963, as at the Crimean Astrophysical Observatory in November 1981, and as at Palomar Observatory in March 1986. With the recovery of Ingrid in 1986, and the almost simultaneously recovered asteroid 1179 Mally, the list of long-lost numbered asteroids was reduced to four. The last remaining lost asteroid, 69230 Hermes, was recovered in 2003. Orbit and classification Ingrid is a member of the Flora family (), a giant asteroid family and the largest family of stony asteroids. It orbits the Sun in the inner main-belt at a distance of 1.8–2.7 AU once every 3 years and 5 months (1,237 days). Its orbit has an eccentricity of 0.18 and an inclination of 5° with respect to the ecliptic. The body's observation arc begins at Heidelberg, one night after its official discovery observation in 1923. Physical characteristics Ingrid is an assumed S-type asteroid, in-line with the Flora family's spectral type. Rotation period A rotational lightcurve of Ingrid was obtained from photometric observations by a group of Hungarian astronomers. The 2005-published lightcurve analysis gave a rotation period of 5 hours with a brightness variation of 0.5 magnitude (). Diameter and albedo According to the surveys carried out by the Japanese Akari satellite and the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Ingrid measures between 5.73 and 7.67 kilometers in diameter and its surface has an albedo between 0.1441 and 0.43. The Collaborative Asteroid Lightcurve Link assumes an albedo of 0.24 – derived from 8 Flora, the largest member and namesake of the Flora family – and calculates a diameter of 8.19 kilometers based an absolute magnitude of 12.6. Naming This minor planet was named after Ingrid, niece and godchild of Albrecht Kahrstedt (1897–1971), a German astronomer at ARI and director of the institute's Potsdam division, who requested the naming of this asteroid and 984 Gretia (mother of Ingrid) in a personal letter to the discoverer in February 1926. Kahrstedt himself was honored with the naming of . The official naming citation was mentioned in The Names of the Minor Planets by Paul Herget in 1955 (). Lutz Schmadel quoted an excerpt of Kahrstedt's letter in his Dictionary of Minor Planet Names (LDS). References External links Asteroid Lightcurve Database (LCDB), query form (info ) Dictionary of Minor Planet Names, Google books Asteroids and comets rotation curves, CdR – Observatoire de Genève, Raoul Behrend Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center 001026 Discoveries by Karl Wilhelm Reinmuth Named minor planets 19230813 Recovered astronomical objects
1026 Ingrid
[ "Astronomy" ]
762
[ "Recovered astronomical objects", "Astronomical objects" ]
7,041,246
https://en.wikipedia.org/wiki/Pigou%20Club
The Pigou Club is described by its creator, economist Gregory Mankiw, as a "group of economists and pundits with the good sense to have publicly advocated higher Pigouvian taxes, such as gasoline taxes or carbon taxes." A Pigouvian tax is a tax levied to correct the negative externalities (negative side-effects) of a market activity. These ideas are also known as ecotaxes or green tax shifts. Members Supports The Economist has expressed support for Pigouvian policies, as have The Washington Post Editorial Board, NPR's "Planet Money" and The New York Times. References External links The Pigou Club Manifesto (Greg Mankiw's Blog) Smart Taxes: An Open Invitation to Join the Pigou Club Rogoff joins the Pigou Club (Greg Mankiw's Blog) Raise the Gasoline Tax? Funny, It Doesn't Sound Republican (New York Times) Talk of Raising Gas Tax Is Just That (Washington Post) The Nopigou Club (National Post) How Many Taxes Will it Take? (National Post) Economic policy Environmental tax Environmental economics
Pigou Club
[ "Environmental_science" ]
229
[ "Environmental economics", "Environmental social science" ]
7,041,409
https://en.wikipedia.org/wiki/Basidiobolomycosis
Basidiobolomycosis is a fungal disease caused by Basidiobolus ranarum. It may appear as one or more painless firm nodules in the skin which becomes purplish with an edge that appears to be slowly growing outwards. A serious but less common type affects the stomach and intestine, which usually presents with abdominal pain, fever and a mass. B. ranarum, can be found in soil, decaying vegetables and has been isolated from insects, some reptiles, amphibians, and mammals. The disease results from direct entry of the fungus through broken skin such as an insect bite or trauma, or eating contaminated food. It generally affects people who are well. Diagnosis is by medical imaging, biopsy, microscopy, culture and histopathology. Treatment usually involves amphotericin B and surgery. Although B. ranarum is found around the world, the disease Basidiobolomycosis is generally reported in tropical and subtropical areas of Africa, South America, Asia and Southwestern United States. It is rare. The first case in a human was reported from Indonesia in 1956 as a skin infection. Signs and symptoms Basidiobolomycosis may appear as a firm nodule in the skin which becomes purplish with an edge that appears to be slowly growing outwards. It is generally painless but may feel itchy or burning. There can be one lesion or several, and usually on the arms or legs of children. Pus may be present if a bacterial infection also occurs. The infection can spread to nearby structures such as muscles, bones and lymph nodes. A serious but less common type affects the stomach and intestine, which usually presents with tummy ache, fever and a lump. Lymphoedema may occur. Mechanism Basidiobolomycosis is a type of Entomophthoromycosis, the other being conidiobolomycosis, and is caused by Basidiobolus ranarum, a fungus belonging to the order Entomophthorales. B. ranarum has been found in soil, decaying vegetables and has been isolated from insects some reptiles, amphibians, and mammals. The disease results from direct entry of the fungus through broken skin such as an insect bite or trauma, or eating contaminated food. Diabetes may be a risk factor. The exact way in which infection results is not completely understood. Diagnosis Diagnosis is by culture and biopsy. A review in 2015 showed that the most common finding on imaging of the abdomen was a mass in the bowel, the liver, or multiple sites and bowel wall thickening. Initially, many were considered to have either a cancer of the bowel or Crohns disease. Treatment Treatment usually involves itraconazole or amphotericin B, combined with surgical debridement. Bowel involvement may be better treated with voriconazole. Epidemiology The condition is rare but emerging. Men and children are affected more than females. The disease is generally reported in tropical and subtropical areas of Africa, South America, Asia and several cases in Southwestern United States. History The first case in a human was reported from Indonesia as a skin infection in 1956. In 1964, the first case involving stomach and intestine was reported. Society and culture Cases among gardeners in Arizona, US, may indicate an occupational hazard, but is unproven. Other animals Basidiobolomycosis has been reported in a dog. References External links Animal fungal diseases Fungal diseases
Basidiobolomycosis
[ "Biology" ]
728
[ "Fungi", "Fungal diseases" ]
7,041,469
https://en.wikipedia.org/wiki/Biochar
Biochar is charcoal, sometimes modified, that is intended for organic use, as in soil. It is the lightweight black remnants remaining after the pyrolysis of biomass, consisting of carbon and ashes; and is a form of charcoal. Despite its name, immediately following production biochar is sterile and only gains biological life following assisted or incidental exposure to biota. Biochar is defined by the International Biochar Initiative as the "solid material obtained from the thermochemical conversion of biomass in an oxygen-limited environment". Biochar is mainly used in soils to increase soil aeration, reduce soil emissions of greenhouse gases, reduce nutrient leaching and reduce soil acidity and can increase soil water content in coarse soils. Biochar application may increase soil fertility and agricultural productivity. Biochar soil amendments, when applied at excessive rates or with unsuitable soil type and biochar feedstock combinations, also have the potential for negative effects, including harming soil biota, reducing available water content, altering soil pH and increasing salinity. Beyond soil application, biochar can be used for slash-and-char farming, for water retention in soil, and as an additive for animal fodder. There is an increasing focus on the potential role of biochar application in global climate change mitigation. Due to its refractory stability, biochar can stay in soils or other environments for thousands of years. This has given rise to the concept of Biochar Carbon Removal, i.e. carbon sequestration in the form of biochar. Carbon removal can be achieved when high-quality biochar is applied to soils, or added as a substitute material to construction materials such as concrete and tar. Etymology The word "biochar" is a late 20th century English neologism derived from the Greek word , bios, "life" and "char" (charcoal produced by carbonization of biomass). It is recognized as charcoal that participates in biological processes found in soil, aquatic habitats and in animal digestive systems. History Pre-Columbian Amazonians produced biochar by smoldering agricultural waste (i.e., covering burning biomass with soil) in pits or trenches. It is not known if they intentionally used biochar to enhance soil productivity. European settlers called it terra preta de Indio. Following observations and experiments, one research team working in French Guiana hypothesized that the Amazonian earthworm Pontoscolex corethrurus was the main agent of fine powdering and incorporation of charcoal debris in the mineral soil. Production Biochar is a high-carbon, fine-grained residue that is produced via pyrolysis. It is the direct thermal decomposition of biomass in the absence of oxygen, which prevents combustion, and produces a mixture of solids (biochar), liquid (bio-oil), and gas (syngas) products. Gasification Gasifiers produce most of the biochar sold in the United States. The gasification process consists of four main stages: oxidation, drying, pyrolysis, and reduction. Temperature during pyrolysis in gasifiers is , in the reduction zone and in the combustion zone. The specific yield from pyrolysis, the step of gasification that produces biochar, is dependent on process conditions such as temperature, heating rate, and residence time. These parameters can be tuned to produce either more energy or more biochar. Temperatures of produce more char, whereas temperatures above favor the yield of liquid and gas fuel components. 
Pyrolysis occurs more quickly at higher temperatures, typically requiring seconds rather than hours. The increasing heating rate leads to a decrease in biochar yield, while the temperature is in the range of . Typical yields are 60% bio-oil, 20% biochar, and 20% syngas. By comparison, slow pyrolysis can produce substantially more char (≈35%); this contributes to soil fertility. Once initialized, both processes produce net energy. For typical inputs, the energy required to run a "fast" pyrolyzer is approximately 15% of the energy that it outputs. Pyrolysis plants can use the syngas output and yield 3–9 times the amount of energy required to run. The Amazonian pit/trench method, in contrast, harvests neither bio-oil nor syngas, and releases , black carbon, and other greenhouse gases (GHGs) (and potentially, toxicants) into the air, though less greenhouse gasses than captured during the growth of the biomass. Commercial-scale systems process agricultural waste, paper byproducts, and even municipal waste and typically eliminate these side effects by capturing and using the liquid and gas products. The 2018 winner of the X Prize Foundation for atmospheric water generators harvests potable water from the drying stage of the gasification process. The production of biochar as an output is not a priority in most cases. Small-scale methods Smallholder farmers in developing countries easily produce their own biochar without special equipment. They make piles of crop waste (e.g., maize stalks, rice straw, or wheat straw), light the piles on the top, and quench the embers with dirt or water to make biochar. This method greatly reduces smoke compared to traditional methods of burning crop waste. This method is known as the top-down burn or conservation burn. Alternatively, more industrial methods can be used on small scales. While in a centralized system, unused biomass is brought to a central plant for processing into biochar, it is also possible for each farmer or group of farmers can operate a kiln. In this scenario, a truck equipped with a pyrolyzer can move from place to place to pyrolyze biomass. Vehicle power comes from the syngas stream, while the biochar remains on the farm. The biofuel is sent to a refinery or storage site. Factors that influence the choice of system type include the cost of transportation of the liquid and solid byproducts, the amount of material to be processed, and the ability to supply the power grid. Various companies in North America, Australia, and England also sell biochar or biochar production units. In Sweden, the 'Stockholm Solution' is an urban tree planting system that uses 30% biochar to support urban forest growth. At the 2009 International Biochar Conference, a mobile pyrolysis unit with a specified intake of was introduced for agricultural applications. Crops used Common crops used for making biochar include various tree species, as well as various energy crops. Some of these energy crops (i.e. Napier grass) can store much more carbon on a shorter timespan than trees do. For crops that are not exclusively for biochar production, the Residue-to-Product Ratio (RPR) and the collection factor (CF), the percent of the residue not used for other things, measure the approximate amount of feedstock that can be obtained. For instance, Brazil harvests approximately 460 million tons (MT) of sugarcane annually, with an RPR of 0.30, and a CF of 0.70 for the sugarcane tops, which normally are burned in the field. 
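The residue figures quoted in the next sentences follow from multiplying production by the RPR and the collection factor; the short sketch below is a worked illustration of that arithmetic only, written for this article with the sugarcane numbers just given, and is not code from any biochar tool.

# Worked illustration: collectable residue = production x RPR x collection factor (CF).
def residue_mt(production_mt: float, rpr: float, cf: float) -> float:
    """Collectable crop residue in million tonnes (MT)."""
    return production_mt * rpr * cf

tops = residue_mt(460, 0.30, 0.70)     # sugarcane tops: ~97 MT, normally burned in the field
bagasse = residue_mt(460, 0.29, 1.00)  # bagasse: ~133 MT, otherwise burned in boilers
print(round(tops), round(bagasse), round(tops + bagasse))   # approx. 97, 133 and 230 MT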
This translates into approximately 100 MT of residue annually, which could be pyrolyzed to create energy and soil additives. Adding in the bagasse (sugarcane waste) (RPR=0.29 CF=1.0), which is otherwise burned (inefficiently) in boilers, raises the total to 230 MT of pyrolysis feedstock. Some plant residue, however, must remain on the soil to avoid increased costs and emissions from nitrogen fertilizers. Hydrochar Besides pyrolysis, torrefaction and hydrothermal carbonization processes can also thermally decompose biomass to the solid material. However, these products cannot be strictly defined as biochar. The carbon product from the torrefaction process contains some volatile organic components, thus its properties are between that of biomass feedstock and biochar. Furthermore, even the hydrothermal carbonization could produce a carbon-rich solid product, the hydrothermal carbonization is evidently different from the conventional thermal conversion process. Therefore, the solid product from hydrothermal carbonization is defined as "hydrochar" rather than "biochar". Thermo-catalytic depolymerization Thermo-catalytic depolymerization is another method to produce biochar, which utilizes microwaves. It has been used to efficiently convert organic matter to biochar on an industrial scale, producing ≈50% char. Properties The physical and chemical properties of biochars as determined by feedstocks and technologies are crucial. Characterization data explain their performance in a specific use. For example, guidelines published by the International Biochar Initiative provide standardized evaluation methods. Properties can be categorized in several respects, including the proximate and elemental composition, pH value, and porosity. The atomic ratios of biochar, including H/C and O/C, correlate with the properties that are relevant to organic content, such as polarity and aromaticity. A van-Krevelen diagram can show the evolution of biochar atomic ratios in the production process. In the carbonization process, both the H/C and O/C atomic ratios decrease due to the release of functional groups that contain hydrogen and oxygen. Production temperatures influence biochar properties in several ways. The molecular carbon structure of the solid biochar matrix is particularly affected. Initial pyrolysis at 450–550 °C leaves an amorphous carbon structure. Temperatures above this range will result in the progressive thermochemical conversion of amorphous carbon into turbostratic graphene sheets. Biochar conductivity also increases with production temperature. Important to carbon capture, aromaticity and intrinsic recalcitrance increases with temperature. Applications Carbon sink The refractory stability of biochar leads to the concept of Biochar Carbon Removal, i.e. carbon sequestration in the form of biochar. It may be a means to mitigate climate change due to its potential of sequestering carbon with minimal effort. Biomass burning and natural decomposition releases large amounts of carbon dioxide and methane to the Earth's atmosphere. The biochar production process also releases (up to 50% of the biomass); however, the remaining carbon content becomes indefinitely stable. Biochar carbon remains in the ground for centuries, slowing the growth in atmospheric greenhouse gas levels. Simultaneously, its presence in the earth can improve water quality, increase soil fertility, raise agricultural productivity, and reduce pressure on old-growth forests. 
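As a rough, purely illustrative calculation, the sugarcane figures quoted above can be combined with the char yield of slow pyrolysis to gauge the scale of carbon removal involved. In the sketch below the harvest tonnage, residue-to-product ratios, collection factors and ~35% char yield are taken from the figures given earlier, while the assumed 80% carbon content of the char and the 44/12 conversion from carbon to CO2 are generic approximations rather than values from this article.

```python
# Illustrative estimate of biochar carbon removal from Brazilian sugarcane residue.
# Harvest, RPR and CF figures come from the text; char carbon content is assumed.

harvest_mt = 460                        # annual sugarcane harvest, million tonnes (MT)
tops = harvest_mt * 0.30 * 0.70         # sugarcane tops: RPR 0.30, collection factor 0.70
bagasse = harvest_mt * 0.29 * 1.0       # bagasse: RPR 0.29, collection factor 1.0
feedstock = tops + bagasse              # ~230 MT of pyrolysis feedstock

char_yield = 0.35                       # slow-pyrolysis char yield (~35%, per the text)
carbon_fraction = 0.80                  # assumed carbon content of the char (illustrative)
co2_per_c = 44.0 / 12.0                 # molecular-weight ratio of CO2 to C

char = feedstock * char_yield
co2_removed = char * carbon_fraction * co2_per_c

print(f"feedstock ≈ {feedstock:.0f} MT, char ≈ {char:.0f} MT, "
      f"CO2 removed ≈ {co2_removed:.0f} MT per year")
# feedstock ≈ 230 MT, char ≈ 80 MT, CO2 removed ≈ 236 MT per year
```

Such an estimate ignores process emissions and decay of the labile char fraction, so it should be read only as an order-of-magnitude illustration.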
Biochar can sequester carbon in the soil for hundreds to thousands of years, like coal. Early works proposing the use of biochar for carbon dioxide removal to create a long-term stable carbon sink were published in the early 2000s. This technique is advocated by scientists including James Hansen and James Lovelock. A 2010 report estimated that sustainable use of biochar could reduce the global net emissions of carbon dioxide (), methane, and nitrous oxide by up to 1.8  billion tonnes carbon dioxide equivalent (e) per year (compared to the about 50 billion tonnes emitted in 2021), without endangering food security, habitats, or soil conservation. However a 2018 study doubted enough biomass would be available to achieve significant carbon sequestration. A 2021 review estimated potential removal from 1.6 to 3.2 billion tonnes per year, and by 2023 it had become a lucrative business renovated by carbon credits. As of 2023, the significance of biochar's potential as a carbon sink is widely accepted. Biochar is found to have the technical potential to sequester 7% of carbon dioxide in average of all countries, with twelve nations able to sequester over 20% of their greenhouse gas emissions. Bhutan leads this proportion (68%), followed by India (53%). In 2021 the cost of biochar ranged around European carbon prices, but was not yet included in the EU or UK Emissions Trading Scheme. Biochar adsorption of can be limited by the surface area of the material, which can be improved by using resonant acoustic mixing. In developing countries, biochar derived from improved cookstoves for home-use can contribute to lower carbon emissions if use of original cookstove is discontinued, while achieving other benefits for sustainable development. Soil health Biochar offers multiple soil health benefits in degraded tropical soils but is less beneficial in temperate regions. Its porous nature is effective at retaining both water and water-soluble nutrients. Soil biologist Elaine Ingham highlighted its suitability as a habitat for beneficial soil micro organisms. She pointed out that when pre-charged with these beneficial organisms, biochar promotes good soil and plant health. Biochar reduces leaching of E-coli through sandy soils depending on application rate, feedstock, pyrolysis temperature, soil moisture content, soil texture, and surface properties of the bacteria. For plants that require high potash and elevated pH, biochar can improve yield. Biochar can improve water quality, reduce soil emissions of greenhouse gases, reduce nutrient leaching, reduce soil acidity, and reduce irrigation and fertilizer requirements. Under certain circumstances biochar induces plant systemic responses to foliar fungal diseases and improves plant responses to diseases caused by soilborne pathogens. Biochar's impacts are dependent on its properties as well as the amount applied, although knowledge about the important mechanisms and properties is limited. Biochar impact may depend on regional conditions including soil type, soil condition (depleted or healthy), temperature, and humidity. Modest additions of biochar reduce nitrous oxide () emissions by up to 80% and eliminate methane emissions, which are both more potent greenhouse gases than . Studies reported positive effects from biochar on crop production in degraded and nutrient–poor soils. The application of compost and biochar under FP7 project FERTIPLUS had positive effects on soil humidity, crop productivity and quality in multiple countries. 
Biochar can be adapted with specific qualities to target distinct soil properties. In Colombian savanna soil, biochar reduced leaching of critical nutrients, created a higher nutrient uptake, and provided greater nutrient availability. At 10% levels biochar reduced contaminant levels in plants by up to 80%, while reducing chlordane and DDX content in the plants by 68 and 79%, respectively. However, because of its high adsorption capacity, biochar may reduce pesticide efficacy. High-surface-area biochars may be particularly problematic. Biochar may be plowed into soils in crop fields to enhance their fertility and stability and for medium- to long-term carbon sequestration in these soils. It has meant a remarkable improvement in tropical soils showing positive effects in increasing soil fertility and improving disease resistance in West European soils. Gardeners taking individual action on climate change add biochar to soil, increasing plant yield and thereby drawing down more carbon. The use of biochar as a feed additive can be a way to apply biochar to pastures and to reduce methane emissions. Application rates of appear required to improve plant yields significantly. Biochar costs in developed countries vary from $300–7000/tonne, which is generally impractical for the farmer/horticulturalist and prohibitive for low-input field crops. In developing countries, constraints on agricultural biochar relate more to biomass availability and production time. A compromise is to use small amounts of biochar in lower-cost biochar-fertilizer complexes. Biochar soil amendments, when applied at excessive rates or with unsuitable soil type and biochar feedstock combinations, also have the potential for negative effects, including harming soil biota, reducing available water content, altering soil pH and increasing salinity. Biochar can remove heavy metals from the soil. Slash-and-char Switching from slash-and-burn to slash-and-char farming techniques in Brazil can decrease both deforestation of the Amazon basin and carbon dioxide emission, as well as increase crop yields. Slash-and-burn leaves only 3% of the carbon from the organic material in the soil. Slash-and-char can retain up to 50%. Biochar reduces the need for nitrogen fertilizers, thereby reducing cost and emissions from fertilizer production and transport. Additionally, by improving soil's till-ability, fertility, and productivity, biochar-enhanced soils can indefinitely sustain agricultural production. This is unlike slash/burn soils, which quickly become depleted of nutrients, forcing farmers to abandon fields, producing a continuous slash and burn cycle. Using pyrolysis to produce bio-energy does not require infrastructure changes the way, for example, processing biomass for cellulosic ethanol does. Additionally, biochar can be applied by the widely used machinery. Water retention Biochar is hygroscopic due to its porous structure and high specific surface area. As a result, fertilizer and other nutrients are retained for plants' benefit. Stock fodder Biochar has been used in animal feed for centuries. Doug Pow, a Western Australian farmer, explored the use of biochar mixed with molasses as stock fodder. He asserted that in ruminants, biochar can assist digestion and reduce methane production. He also used dung beetles to work the resulting biochar-infused dung into the soil without using machinery. 
The nitrogen and carbon in the dung were both incorporated into the soil rather than staying on the soil surface, reducing the production of nitrous oxide and carbon dioxide. The nitrogen and carbon added to soil fertility. On-farm evidence indicates that the fodder led to improvements of liveweight gain in Angus-cross cattle. Doug Pow won the Australian Government Innovation in Agriculture Land Management Award at the 2019 Western Australian Landcare Awards for this innovation. Pow's work led to two further trials on dairy cattle, yielding reduced odour and increased milk production. Concrete additive Ordinary Portland cement (OPC), an essential component of concrete mix, is energy- and emissions-intensive to produce; cement production accounts for around 8% of global CO2 emissions. The concrete industry has increasingly shifted to using supplementary cementitious materials (SCMs), additives that reduce the volume of OPC in a mix while maintaining or improving concrete properties. Biochar has been shown to be an effective SCM, reducing concrete production emissions while maintaining required strength and ductility properties. Studies have found that a 1-2% weight concentration of biochar is optimal for use in concrete mixes, from both a cost and strength standpoint. A 2 wt.% biochar solution has been shown to increase concrete flexural strength by 15% in a three-point bending test conducted after 7 days, compared to traditional OPC concrete. Biochar concrete also shows promise in high-temperature resistance and permeability reduction. A cradle-to-gate life cycle assessment of biochar concrete showed decreased production emissions with higher concentrations of biochar, which tracks with a reduction in OPC. Compared to other SCMs from industrial waste streams (such as fly ash and silica fume), biochar also showed decreased toxicity. Fuel slurry Biochar mixed with liquid media such as water or organic liquids (ethanol, etc.) is an emerging fuel type known as biochar-based slurry. Adapting slow pyrolysis in large biomass fields and installations enables the generation of biochar slurries with unique characteristics. These slurries are becoming promising fuels in countries with regional areas where biomass is abundant, and power supply relies heavily on diesel generators. This type of fuel resembles a coal slurry, but with the advantage that it can be derived from biochar from renewable resources. Water treatment Biochar, also can have applications in the field of water treatment. Its porosity and properties can be modified using different methods to increase the efficiency of removal. Several contaminants such as heavy metals, dyes, organic pollutants are reported to be removed by biochar. Research Research into aspects involving pyrolysis/biochar is underway around the world, but was still in its infancy. From 2005 to 2012, 1,038 articles included the word "biochar" or "bio-char" in the topic indexed in the ISI Web of Science. Research is in progress by the University of Edinburgh, the University of Georgia, the Volcani Center, and the Swedish University of Agricultural Sciences. Research is also ongoing on the application of biochar to coarse soils in semi-arid and degraded ecosystems. In Namibia biochar is under exploration as climate change adaptation effort, strengthening local communities' drought resilience and food security through the local production and application of biochar from abundant encroacher biomass. 
Similar solutions for rangeland affected by woody plant encroachment have been explored in Australia. In recent years, biochar has attracted interest as a wastewater filtration medium as well as for its adsorbing capacity for the wastewater pollutants, such as pharmaceuticals, personal care products, and per- and polyfluoroalkyl substances. In some areas, citizen interest and support for biochar motivates government research into the uses of biochar. Studies Long-term effects of biochar on carbon sequestration have been examined using soil from arable fields in Belgium with charcoal-enriched black spots dating from before 1870 from charcoal production mound kilns. This study showed that soil treated over a long period with charcoal showed a higher proportion of maize-derived carbon and decreased respiration, attributed to physical protection, C saturation of microbial communities, and, potentially, slightly higher annual primary production. Overall, this study evidences the capacity of biochar to enhance C sequestration through reduced C turnover. Biochar sequesters carbon (C) in soils because of its prolonged residence time, ranging from years to millennia. In addition, biochar can promote indirect C-sequestration by increasing crop yield while potentially reducing C-mineralization. Laboratory studies have evidenced effects of biochar on C-mineralization using signatures. Fluorescence analysis of biochar-amended soil dissolved organic matter revealed that biochar application increased a humic-like fluorescent component, likely associated with biochar-carbon in solution. The combined spectroscopy-microscopy approach revealed the accumulation of aromatic carbon in discrete spots in the solid phase of microaggregates and its co-localization with clay minerals for soil amended with raw residue or biochar. The co-localization of aromatic-C: polysaccharides-C was consistently reduced upon biochar application. These findings suggested that reduced C metabolism is an important mechanism for C stabilization in biochar-amended soils. See also Activated carbon Charring Dark earth Pellet fuel Soil carbon Soil ecology References 118. Biochar, Activated Biochar & Application By: Prof. Dr. H. Ghafourian (Author) Book Amazon Sources * External links Practical Guidelines for Biochar Producers, Southern Africa Biochar Production in Namibia (Video) International Biochar Initiative Biochar-us.org Carbon dioxide removal Charcoal Environmental soil science Soil improvers Wildfire ecology Climate engineering
Biochar
[ "Engineering", "Environmental_science" ]
4,845
[ "Planetary engineering", "Geoengineering", "Environmental soil science" ]
7,041,590
https://en.wikipedia.org/wiki/MPT-1327
MPT 1327 is an industry standard for trunked radio communications networks. First published in January 1988 by the British Radiocommunications Agency, and is primarily used in the United Kingdom, Europe, South Africa, Australia, New Zealand and China. Many countries had their own version of numbering/user interface, including MPT1343 in the UK, Chekker (Regionet 43) in Germany, 3RP (CNET2424) in France, Multiax in Australia, and Gong An in China. MPT systems are still being built in many areas of the world, due to their cost-effectiveness. Digital alternatives The TETRA trunked radio standard was developed by the European Telecommunications Standards Institute (ETSI), as a digital alternative to analogue trunked systems. However, TETRA, with its enhanced encryption capability, has developed into a higher tier (public safety) product, currently mainly used by governments, some larger airports and government-owned utilities. DMR (digital mobile radio), and dPMR (digital private mobile radio) are more recent ETSI-standards for digital mobile radio using two-slot TDMA and FDMA respectively. The Tier 3 standard for these systems defines a trunking protocol very similar to MPT1327 and is intended as a potential migration path for existing and perhaps future trunking customers. Tier 3 equipment is (late 2011) now becoming available, so the impact on TETRA and MPT 1327 is yet to be seen, but may well be significant. However, it is unlikely that in terms of cost that the complicated new DMR/dPMR equipment will be able to compete with the simpler MPT1327 equipment for some time, if ever. It is worth noting that whilst many comparisons are made between Digital and Analog radio technologies, when it comes to applying these arguments to MPT1327, many of the distinctions become blurred, since MPT1327 with its digital control channel, already offers most of the features being offered by the DMR/dPMR/TETRA counterparts. Furthermore, most MPT1327 systems are engineered to a far higher standard than conventional FM systems, partially due to the lack of CTCSS within the standard. As such arguments with regards to "noisy FM audio quality", can become misleading, since the squelch levels tend to be set rather high on MPT1327 systems, such that weak/noisy signals do not generally open the mute. MPT1327 advantages and features The advantage of MPT 1327 over TETRA is the increased availability, lower cost of equipment, the ease of installation, the familiarity with the equipment, and many believe that MPT 1327 is superior to TETRA, due to its uncompressed FM audio, and greater receiver sensitivity. MPT1327 control channel signalling is more resilient, since the TETRA protocol uses a complex modulation scheme that requires a far higher Signal to Noise ratio to function than 1,200 bit/s FFSK signalling. Systems based on MPT 1327 only require one, but usually use two or more radio channels per site. Channels can be 12.5 or 25 kHz bandwidth, and can be any variety of channel spacings, with 6.25 kHz or 12.5 kHz being typical. At least one of these channels is defined as the control channel (CCH) and all other channels are traffic channels (TCs) used for speech calls. A typical installation will have around 6–10 channels. A 7-channel trunk, correctly engineered, is capable of handling in excess of 3,000 mobile units. The capacity of the system increases dramatically with the number of traffic channels. For example, 1 traffic channel with queuing can not handle many customers, perhaps 2 minicabs with 20 mobiles. 
In effect this would be a CBS with queuing. However, a 7 channel trunked system can handle 40 minicabs with 20 mobiles with ease. The Erlang formulas are typically used for calculating system capacity. Spectrum efficiency Whilst MPT 1327 systems, unlike DMR or dPMR, do not employ digital speech compression to gain any Spectral Efficiency (voice channels per 6.25 kHz), there are several methods used that increase the Spectrum Efficiency (Erlangs per square kilometre, per 6.25 kHz). A spectrum efficiency advantage over a 4-slot TDMA system like TETRA is in areas where low-bandwidth channels are required. The absolute minimum TETRA installation would require a 25 kHz bandwidth in order to carry a control slot and three traffic slots. The absolute minimum MPT1327 assignment is a single non-dedicated control channel, utilising 12.5 kHz, in most cases. A non-dedicated control channel can be used as a traffic channel when all the other traffic channels are busy. This can be useful if the site is part of a multi-site network and has a very low traffic profile as the site could have a single channel rather than at least two freeing up one channel for use elsewhere. The disadvantage is loss of queuing, and data cannot be sent on the control channel whilst it is in traffic mode. A non-dedicated CCH should not be used as a "reserve tank" for a busy site as the lack of signalling will seriously affect the operation of the site. Time-shared control channels and channel pooling Some MPT 1327 networks can also time-share control channels, which can be useful if the network has limited frequency availability, as it frees up channels for use as traffic channels, which can also be pooled across sites so the network capacity follows the traffic. This is another advantage of MPT 1327 (and dPMR) over TDMA-based systems such as TETRA and DMR, which cannot pool traffic channels so efficiently (if at all). The disadvantage of using a time-shared CCH is that it slows down registration and calls and requires some customization of the registration process, so is only useful if the network has a patient user community! Speech and data Speech is sent as narrowband frequency modulation. Data messages between mobiles and the network are exchanged on the control channel at 1,200 bits per second using FFSK signalling, or a specific "modem call", known as "non-prescribed data" can be established, whereby free-form 1,200 baud data can be exchanged on a traffic channel without tying up the control channel. With the use of special modems, the speed can be 19,200 bit/s. This, along with Short Data Messaging and Status Messaging via the control channel makes an MPT1327 network ideal for managing AVL for asset management, meter reading and SCADA networks, the advantage being that the network can be used for this sort of application whilst still carrying voice traffic. Numbering Each subscriber in an MPT-1327 trunked radio network has a unique call number. This call number (address) is a compound number consisting of a prefix (three digits), the fleet number and the subscriber's call number within the fleet. Different numbering schemes work differently, for example Zetron uses the first two digits of the Ident as the fleet number, and the last two digits as the unit number. Idents in the 6,000–6,999 range are typically used to establish group calls. After it has been entered the call number will be converted in the mobile to a 20-bit address. 
The numbering rules are clarified and expanded by supplementary standards such as MPT-1343. For the duration of his call a subscriber is exclusively allocated a traffic channel from the available pool. If all channels are occupied, the call will be queued. If the control channel has become a traffic channel, like in the case of a non-dedicated control channel, the call will be queued in the radio, although radio queuing loses the first come, first served effect, so if there are seven units queuing, the last unit to queue may get a traffic channel first. The different types of communications on an MPT-1327 network and their definitions Traffic types Mobile-mobile in a cell Mobile-mobile in different cells Mobile-line access unit via landline or radio Mobile-dispatcher station via landline or radio Mobile-PABX, Mobile-PSTN Data communication Status messages on the control channel (5-bit data length) Short data messages on the control channel (186-bit data length) Transparent data transmission on the TC (data communication). Calls Point to point connections Group calls with talk entitlement Group calls without talk entitlement (broadcast calls) Broadcast calls can be used to make announcements to the entire customer base on a trunked system. For example, if work is to be carried out on the trunked system, the owner of the system can initiate a Broadcast Call which calls every mobile on the system. However, the mobiles may not have talk entitlement, so the PTT may not work. By this means the owner can announce to all customers a short period of inoperation due to maintenance. Notes Trunked radio systems Mobile telecommunications standards
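The Erlang formulas mentioned above in connection with system capacity are straightforward to evaluate. The sketch below implements the standard Erlang B recursion (blocked calls cleared); since MPT 1327 systems normally queue calls rather than drop them, Erlang C would be the more precise model, and the traffic figure used here is an arbitrary example rather than a value from the standard.

```python
# Erlang B blocking probability: the chance that a call arriving at a trunked
# site finds all traffic channels busy (no queuing assumed).
def erlang_b(traffic_erlangs: float, channels: int) -> float:
    b = 1.0
    for k in range(1, channels + 1):
        b = (traffic_erlangs * b) / (k + traffic_erlangs * b)
    return b

# Example: a 7-channel site offered 4 erlangs of traffic.
print(f"{erlang_b(4.0, 7):.3%} of call attempts blocked")  # ≈ 6.3% for this example
```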
MPT-1327
[ "Technology" ]
1,870
[ "Mobile telecommunications", "Mobile telecommunications standards" ]
7,041,824
https://en.wikipedia.org/wiki/Glossary%20of%20shapes%20with%20metaphorical%20names
Many shapes have metaphorical names, i.e., their names are metaphors: these shapes are named after a most common object that has it. For example, "U-shape" is a shape that resembles the letter U, a bell-shaped curve has the shape of the vertical cross section of a bell, etc. These terms may variously refer to objects, their cross sections or projections. Types of shapes Some of these names are "classical terms", i.e., words of Latin or Ancient Greek etymology. Others are English language constructs (although the base words may have non-English etymology). In some disciplines, where shapes of subjects in question are a very important consideration, the shape naming may be quite elaborate, see, e.g., the taxonomy of shapes of plant leaves in botany. Astroid Aquiline, shaped like an eagle's beak (as in a Roman nose) Bell-shaped curve Biconic shape, a shape in a way opposite to the hourglass: it is based on two oppositely oriented cones or truncated cones with their bases joined; the cones are not necessarily the same Bowtie shape, in two dimensions Atmospheric reentry apparatus Centerbody of an inlet cone in ramjets Bow shape Bow curve Bullet Nose an open-ended hourglass Butterfly curve (algebraic) Cocked hat curve, also known as Bicorn Cone (from the Greek word for « pine cone ») Doughnut shape Egg-shaped, see "Oval", below Geoid (From Greek Ge (γη) for "Earth"), the term specifically introduced to denote the approximation of the shape of the Earth, which is approximately spherical, but not exactly so Heart shape, long been used for its varied symbolism Horseshoe-shaped, resembling a horseshoe, cf. horseshoe (disambiguation). In botany, also called lecotropal (see below) Hourglass shape or hourglass figure, the one that resembles an hourglass; nearly symmetric shape wide at its ends and narrow in the middle; some flat shapes may be alternatively compared to the figure eight or hourglass Dog bone shape, an hourglass with rounded ends Hourglass corset Ntama Engraved Hourglass Nebula Inverted bell Kite Lecotropal, in botany, shaped like a horseshoe (see horseshoe-shaped, above). From Greek λέκος dish + -τροπος turning Lens or Vesica shape (the latter taking its name from the shape of the lentil seed); see also mandorla, almond-shaped Lune, from the Latin word for the Moon Maltese Cross curve Mandorla, almond-shaped (Italian for "almond"), often used as a frame in mediaeval Christian iconography. 
Mushroom shape, which became infamous as a result of the mushroom cloud Oval (from the Latin "ovum" for egg), a descriptive term applied to several kinds of "rounded" shapes, including the egg shape Pear shaped, in reference to the shape of a pear, i.e., a generally rounded shape, tapered towards the top and more spherical/circular at the bottom Rod, a 3-dimensional, solid (filled) cylinder Rod shaped bacteria Scarabaeus curve resembling a scarab Serpentine, shaped like a snake Stadium, two half-circles joined by straight sides Stirrup curve Star a figure with multiple sharp points Sunburst Tomahawk Ungula, shaped like a horse's hoof Numbers and letters A-shape, the shape that resembles the capital letter A A-frame, the shape of a common structure that resembles the capital letter A A-frame house, a common style of house construction A-line skirt or dress B-shape, the shape that resembles the capital letter B C-shape, the shape that resembles the capital letter C D-shape, the shape that resembles the capital letter D D-ring Deltoid, the shape that resembles the Greek capital letter Δ Deltahedron Deltoid muscle River delta Delta wing E-shape, the shape that resembles the capital letter E Magnetic cores of transformers may be E-shaped A number of notable buildings have an E-shaped floorplan F-shape, the shape that resembles the capital letter F Figure 0, the shape that resembles the numeral 0 Figure 1, the shape that resembles the numeral 1 Figure 2, the shape that resembles the numeral 2 Figure 3, the shape that resembles the numeral 3 Figure 4, the shape that resembles the numeral 4 Figure 5, the shape that resembles the numeral 5 Figure 6, the shape that resembles the numeral 6 Figure 7, the shape that resembles the numeral 7 Figure 8, the shape that resembles the numeral 8 Figure 9, the shape that resembles the numeral 9 G-shape, the shape that resembles the capital letter G H-shape, the shape that resembles the capital letter H H-beam, a beam with H-shaped section Goals in several sports (gridiron football (old style), Gaelic football, rugby, hurling) are described as "H-shaped" H topology in electronic filter design Also see Balbis I-shape, the shape that resembles the capital letter in a serif font, i.e., with horizontal strokes -beam, a beam with an -shaped section The court in the Mesoamerican ballgame is I-shaped J-shape, the shape that resembles the capital letter J K-shape, the shape that resembles the capital letter K K-shaped recession K turn L-shape, the shape that resembles the capital letter L L-beam, a beam with an L-shaped section The L-Shaped Room L game L-shaped recession Lemniscate, the shape that resembles the infinity symbol M-shape, the shape that resembles the capital letter M (interchangeable with the W-shape) N-shape, the shape that resembles the capital letter N (interchangeable with the Z-shape) O-shape, the shape that resembles the capital letter O O-ring P-shape, the shape that resembles the capital letter P P-trap, a P-shaped pipe under a sink or basin Pi-shape, the shape that resembles the Greek capital letter Π Π topology in electronic filter design Q-shape, the shape that resembles the capital letter Q R-shape, the shape that resembles the capital letter R S-shape, the shape that resembles the capital letter S The sigmoid colon, an S-shaped bend in the human intestine S-twist, contrasted with Z-twist for yarn T-shape, the shape that resembles the capital letter T T junction T topology in electronic filter design T-shaped (chemistry) T-shaped skills, a 
format for résumés T-shirt T-pose, used in computer animation models U-shape, the shape that resembles the capital letter U U-shaped valley U-turn U-shaped recession Hyoid, the shape that resembles the Greek letter υ Hyoid bone V-shape, the shape that resembles the letter V, also known as the Chevron (which includes the inverted-V shape) V-shaped valley V-shaped recession V-shaped body – male human body shape with broad shoulders V-shaped passage grave V sign V-tail W-shape, the shape that resembles the capital letter W (interchangeable with the M-shape) W-shaped recession X-shape, the shape that resembles the letter X Saltire X topology in electronic filter design Chiasm, crossings that resemble the Greek letter χ Chiasmus Chiastic structure Optic chiasm Y-shape, the shape that resembles the letter Y Y-front briefs Pall Z-shape, the shape that resembles the capital letter Z (interchangeable with the N-shape) Z-twist, contrasted with S-twist for yarn See also List of geometric shapes The :Category:Curves lists numerous metaphorical names, such as Bean curves, also called Nephroids, from the Greek word for kidney References Shapes Shapes Glossary Wikipedia glossaries using unordered lists
Glossary of shapes with metaphorical names
[ "Mathematics" ]
1,649
[ "Geometric shapes", "Mathematical objects", "Geometric objects" ]
7,041,908
https://en.wikipedia.org/wiki/Basidiobolus%20ranarum
Basidiobolus ranarum is a filamentous fungus with worldwide distribution. The fungus was first isolated by Eidam in 1886. It can saprophytically live in the intestines of mainly cold-blooded vertebrates and on decaying fruits and soil. The fungus prefers glucose as a carbon source and grows rapidly at room temperature. Basidiobolus ranarum is also known as a cause of subcutaneous zygomycosis, usually causing granulomatous infections on a host's limbs. Infections are generally geographically limited to tropical and subtropical regions such as East and West Africa. Subcutaneous zygomycosis caused by B. ranarum is a rare disease and predominantly affects children and males. Common subcutaneous zygomycosis shows characteristic features and is relatively easy to diagnose, while certain rare cases may show non-specific clinical features that make identification difficult. Although disease caused by this fungus is known to resolve spontaneously, a number of treatments are available. History In 1886, the fungus was first isolated from the dung and intestinal contents of frogs by Eidam. In 1927, it was found in the intestines of toads, slowworms, and salamanders by Levisohn. In 1956, Joe et al. reported and described the first four cases of zygomycosis in Indonesia. Since then, hundreds of cases of this infection have been reported. In 1955, Drechsler isolated it from decaying plant material in North America. In 1971, it was first isolated by Nickerson and Hutchison from aquatic animals, suggesting that B. ranarum can survive in a wide range of ecological situations. Physiology At room temperature (25–30 °C), colonies of B. ranarum show very rapid growth and are able to reach a diameter of 75–80 mm in a week on suitable growth media. The favored carbohydrate source of this fungus is glucose, which stimulates the growth of its mycelium. Generally, asexual reproduction is favored by glucose and sexual reproduction is favored by acid amines. Primary asexual spores are singly formed on the apices of unbranched hyphae and are then discharged as ballistic spores. Secondary asexual spores are singly developed from a hypha generated by a germinated ballistic spore. Also, sporangiospores can be generated by internal cleavage of the cytoplasm and can then be dispersed when the sporangial wall is dissolved. As a result, the ejected asexual spores can form satellite colonies at a distance. After around 10 days of growth, sexual spores (zygospores, 20–50 μm in diameter) can also be produced. This fungus is believed to have significant protease and lipase activity. Its lipase has maximum activity at 35 °C and pH 6.0, while its protease has maximum activity at 30 °C and pH 5.5. Both enzymes might be involved in pathogenesis. Light does not affect hyphal growth but may influence certain aspects of physiology. First, light may stimulate the production of asexual spores, and certain blue light (wavelengths of 440 nm and 480 nm) may further stimulate the discharge of those spores. Second, light may also stimulate the induction of aerial hyphae and favor the unicellular configuration of the hyphae, while darkness may favor their bicellular configuration. Morphology Colonies of B. ranarum are round, flat, waxy, glabrous and radially folded. Their color ranges from yellowish-grey to whitish-grey. A one-week-old colony can reach 75–80 mm in diameter. A white bloom, consisting of mycelia and sporangiospores, covers the colonies. 
Under microscope, younger hyphae are wide and have few septa. Older cultures have colorless zygospores (20–50 μm) with smooth, thick walls and abundant large, spherical, darkly coloured chlamydospores. The colonies commonly produce a strong Streptomyces-like or benzene hexachloride-like odour. Habitat and ecology Basidiobolus ranarum has a worldwide distribution and is capable of living saprotrophically in a broad range of ecological situations, indicating its great ecological and physiological tolerance as well as its ubiquity. Basidiobolus ranarum was widely reported from all parts of the world, especially Asia and Africa. It can saprophytically live in the intestines of vertebrates including amphibians (e.g. frogs, toads, salamanders, mudpuppy), reptiles (e.g. chameleons, wall geckoes, snakes, lizards, turtles), and fishes (e.g. sturgeon). In addition, studies also reported occasional presence of B. ranarum in the intestinal contents of mammals such as one bat in India and the kangaroos in Australia. Moreover, other habitats including compost heaps, decaying plant material and soil can also be their place to live. However, the habitat for B. ranarum is not fixed and a life-cycle illustration of it might provide a better idea of the variation of its habitats. First, insects might eat feces and decaying plant materials in which B. ranarum might be present, or insects might have physical contact with the strains so that the strains can attach to the insects externally. Then, those insects might be devoured by predators, such as frogs. Next, the fungi will travel through the predator's gastrointestinal tract and might either stay a little bit longer (as long as 18 days) at or leave from the intestine along with the feces. Eventually, the strains in those feces will end up in the soil and some of them will be further transported to decaying plant materials or other organic contents. Also, the tissues that the pathogenic strains of B. ranarum infect can also be considered as its habitats, B. ranarum can also live in both human and non-human animal (e.g. horses, frogs) tissues. However, instead of a worldwide distribution, the pathogenic lifestyle of B. ranarum only exists in tropical and subtropical regions. Pathology Subcutaneous zygomycosis (also known as "entomophthoromycosis basidiobolae", subcutaneous phycomycosis, and basidiobolomycosis) is a both human and non-human animal disease or lesion caused by the granulomatous infection of subcutaneous tissue by B. ranarum. Several enzymes produced by B. ranarum, including lipase and protease, might hydrolyze and utilize the fatty tissues of the host and contribute to the pathogenesis of the infection. Prevalence, mode of transmission Considering the broad-range distribution of B. ranarum and its high ubiquity, subcutaneous zygomycosis is not really prevalent. In addition, the fact that infections were only reported at tropical and subtropical regions further limits its prevalence. Currently, the reason why the infections were limited to those regions is not fully understood. However, the low prevalence might be explained by the speculations that the widespread immunity of other species was developed against its infection or the number of the B. ranarum strains with pathogenic characteristics is much lower than the saprophytic strains. Its transmission mode has not been fully understood though certain general ideas about its transmission are widely accepted. Ingestion of B. 
ranarum is thought to help disperse the agent through the deposition of feces at distant places where humans and other animals might be exposed. The agent may also be transmitted through skin trauma or insect bites. Vulnerable groups Most of the reported cases were from Nigeria and Uganda in Africa as well as Indonesia, and residents of these regions might therefore be considered a vulnerable group. Over 90% of the reported infections occurred in people under 20 years old; thus the young are thought to be a particularly vulnerable group for this agent. Based on the skewed male to female ratio of infection reported in Nigeria (3:1) and Uganda (3:2), males are substantially more vulnerable to infection. One explanation that has been offered for this observation is that male children in endemic areas were likely to use decayed leaves, which might carry pathogenic B. ranarum strains, as toilet paper following defecation. Although rare, gastrointestinal disease caused by the agent shows no specific vulnerable groups or risk factors. Clinical features and diagnosis In general, the clinical presentation of subcutaneous zygomycosis is characteristic and the diagnosis fairly straightforward. Human infection is characterized by the formation of a single, enlarging, painless, firm swelling in soft tissues of the extremities, buttocks, thighs, perineum or trunk. However, as the infection worsens, symptoms such as a burning sensation or itchiness may develop in the swollen region. In addition to these general symptoms, one unusual case reported that a severe perineal infection led to acute large intestinal obstruction. Other rare cases have reported infections at other anatomical sites, such as the colon in gastrointestinal basidiobolomycosis. Infections may be associated with a diffuse bluish pigmentation of the swollen area. Joint function is often not affected; however, a few cases have reported the subcutaneous infection involving local muscle tissue and lymph nodes. Definitive diagnosis requires laboratory investigation. Culture, histopathology and immunology can be used for the diagnosis. First, a portion of the infected tissue is surgically removed for biopsy. Since the fungus cannot tolerate refrigeration, the biopsied material needs to be incubated immediately once it is collected. The examination then investigates the presence of thin-walled, wide, hyaline, coenocytic hyphae and internal cleavage producing sporangiospores in H&E (haematoxylin and eosin) stained sections. Other characteristics of its appearance mentioned in the morphology section might also be used to identify the species. Histopathology is expected to show a granuloma consisting of a variety of immune cells, in which hyphae or hyphal fragments (4–10 μm diameter) often stain bright pink in H&E sections. When a biopsy is not available, an immunofluorescence test can also be used to identify B. ranarum strains. Five specific antigens have been identified that can be measured in the sera of infected patients using antibodies conjugated to fluorescein dye. The diagnosis of rare cases, such as gastrointestinal basidiobolomycosis, is challenging given the nonspecific clinical presentation as well as the need for surgical biopsy. Treatment Many cases are thought to resolve spontaneously, although surgical intervention may help to debulk the infected tissue. 
The most common treatment is daily potassium iodide (KI) taken for a period of six months to a year. For patients who do not respond to KI, successful outcomes have also been reported with other medications, including cotrimoxazole, amphotericin B, itraconazole, and ketoconazole. In addition, because Conidiobolus coronatus infection causes a disease similar to B. ranarum infection, and fluconazole has shown good results in treating C. coronatus infection, fluconazole may also prove effective in treating B. ranarum infection. References Animal fungal diseases Zygomycota Entomophthorales Fungi described in 1886 Fungal pathogens of humans Fungus species
Basidiobolus ranarum
[ "Biology" ]
2,506
[ "Fungi", "Fungus species" ]
7,042,220
https://en.wikipedia.org/wiki/Amir%20Caldeira
Amir Ordacgi Caldeira (born 1950 in Rio de Janeiro) is a Brazilian physicist. He received his bachelor's degree in 1973 from the Pontifícia Universidade Católica do Rio de Janeiro, his M.Sc. degree in 1976 from the same university, and his Ph.D. in 1980 from University of Sussex. His Ph.D. advisor was the Physics Nobel Prize winner Anthony James Leggett. He joined the faculty at Universidade Estadual de Campinas (UNICAMP) in 1980. In 1984 he did post-doctoral work at the Kavli Institute for Theoretical Physics (KITP) at University of California, Santa Barbara and at the Thomas J. Watson Research Laboratory at IBM. In 1994–1995 he spent a sabbatical at the University of Illinois at Urbana-Champaign. He is currently a full professor at Universidade Estadual de Campinas. He was the recipient of the Wataghin Prize, from Universidade Estadual de Campinas, for his contributions to theoretical physics in 1986. Caldeira's research interests are in theoretical condensed matter physics, in particular quantum dissipation and strongly correlated electron systems. His best known work is on the Caldeira–Leggett model, which is one of the first and most important treatments of decoherence in quantum mechanical systems. Selected Scientific Articles See also Cristiane de Morais Smith References 1950 births 20th-century Brazilian physicists Living people Members of the Brazilian Academy of Sciences Academic staff of the State University of Campinas Theoretical physicists Pontifical Catholic University of Rio de Janeiro alumni Fellows of the American Physical Society
Amir Caldeira
[ "Physics" ]
337
[ "Theoretical physics", "Theoretical physicists" ]
7,043,609
https://en.wikipedia.org/wiki/Enterprise%20master%20patient%20index
An enterprise master patient index or enterprise-wide master patient index (EMPI) is a patient database used by healthcare organizations to maintain accurate medical data across its various departments. Patients are assigned a unique identifier, so they are represented only once across all the organization's systems. Patient data can include name; gender; date of birth; race and ethnicity; social security number; current address and contact information; insurance information; current diagnoses; and most recent date of hospital admission and discharge (if applicable). EMPIs are intended to ensure patient data is correct and consistent throughout the organization regardless of which system is being updated. Non-healthcare organizations also face similar issues maintaining customer records across different departments. Many software vendors use EMPI and MPI (master patient index) synonymously, because an MPI is only workable if it is used by all software applications across an entire enterprise; that is, "master" implies enterprise-wide scope. EMPIs use match engines along with the technique of referential matching to more easily identify duplicate patient records. Overview In computing, an enterprise[-wide] master patient index is a form of customer data integration (CDI) specific to the healthcare industry. Healthcare organizations and groups use EMPI to identify, match, merge, de-duplicate, and cleanse patient records to create a master index that may be used to obtain a complete and single view of a patient. The EMPI will create a unique identifier for each patient and maintain a mapping to the identifiers used in each records' respective system. An EMPI will typically provide an application programming interface (API) for searching and querying the index to find patients and the pointers to their identifiers and records in the respective systems. It may also store some subset of the attributes for the patient so that it may be queried as an authoritative source of the "single most accurate record" or "source of truth" for the patient. Registration or other practice management applications may interact with the index when admitting new patients to have the single best record from the start, or may have the records indexed at a later time. An EMPI may additionally work with or include enterprise application integration (EAI) capabilities to update the originating source systems of the patient records with the cleansed and authoritative data. Even the best tuned EMPI will not be 100% accurate. Thus an EMPI will provide a data stewardship interface for reviewing the match engine results, handling records for which the engine does not definitively determine a match or not. This interface will provide for performing search, merge, unmerge, edit and numerous other operations. This interface may also be used to monitor the performance of the match engine and perform periodic audits on the quality of the data. EMPI can be used by organizations such as hospitals, medical centers, outpatient clinics, physician offices and rehabilitation facilities. Match engine A component of an EMPI is the match engine, the method by which different records can be identified as being for the same patient. A match engine may be deterministic, probabilistic, or naturalistic. The match engine must be configured and tuned for each implementation to minimize false matches and unmatches. The accuracy and performance of the match engine are a big factor in determining the value and ROI for an EMPI solution. 
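As a rough illustration of what such a match engine does, the sketch below scores a pair of patient records on a few demographic attributes and classifies the pair as a match, a possible match for data-steward review, or a non-match. The field names, weights and thresholds are hypothetical; real probabilistic engines use calibrated weights and far more sophisticated comparison functions, so this is only a minimal sketch of the idea.

```python
from difflib import SequenceMatcher

# Toy deterministic/probabilistic hybrid: exact match on strong identifiers,
# fuzzy string similarity on names to tolerate typos and transpositions.
WEIGHTS = {"ssn": 0.5, "dob": 0.2, "last_name": 0.2, "first_name": 0.1}

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(rec_a: dict, rec_b: dict) -> float:
    score = 0.0
    score += WEIGHTS["ssn"] * float(rec_a["ssn"] == rec_b["ssn"])
    score += WEIGHTS["dob"] * float(rec_a["dob"] == rec_b["dob"])
    score += WEIGHTS["last_name"] * similarity(rec_a["last_name"], rec_b["last_name"])
    score += WEIGHTS["first_name"] * similarity(rec_a["first_name"], rec_b["first_name"])
    return score

def classify(score: float) -> str:
    if score >= 0.85:
        return "match"           # link records automatically
    if score >= 0.60:
        return "possible match"  # route to data stewardship review
    return "no match"

a = {"ssn": "123-45-6789", "dob": "1980-02-01", "last_name": "Smith", "first_name": "Jon"}
b = {"ssn": "123-45-6789", "dob": "1980-02-01", "last_name": "Smyth", "first_name": "John"}
print(classify(match_score(a, b)))  # "match" despite the spelling differences
```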
The attributes a match engine is configured to use can typically include name, date of birth, sex, social security number, and address. The match engine must be able to give consideration to data challenges such as typos, misspellings, transpositions and aliases. Referential matching Referential matching involves taking third party patient demographic data containing unique identifiers and using it to better match patient records. Rather than compare incomplete records with each other to try to match them, the organization would compare each incomplete record with a more comprehensive referential database. This works across multiple organizations as long as they all use the same referential list of demographic data formatted the same way. Putting the EMPI on the cloud is one technique to ensure uniformity of the match engine. In 2018, Pennsylvania-based NGO The Pew Charitable Trusts identified referential matching using third party patient data as a good way to improve patient matching. Key benefits Correctly matching patient records from disparate systems and different organizations provides a more complete view of a patient. Additional benefits include: Better patient care can be provided. Improved customer service can be offered. In emergency or other critical care situations, medical staff can be more confident that they know medical conditions or other information that would be critical to providing proper care. Historical care related information can be obtained from across organizations. References Health informatics Patient
Enterprise master patient index
[ "Biology" ]
951
[ "Health informatics", "Medical technology" ]
7,043,631
https://en.wikipedia.org/wiki/Generalized%20inverse
In mathematics, and in particular, algebra, a generalized inverse (or, g-inverse) of an element x is an element y that has some properties of an inverse element but not necessarily all of them. The purpose of constructing a generalized inverse of a matrix is to obtain a matrix that can serve as an inverse in some sense for a wider class of matrices than invertible matrices. Generalized inverses can be defined in any mathematical structure that involves associative multiplication, that is, in a semigroup. This article describes generalized inverses of a matrix . A matrix is a generalized inverse of a matrix if A generalized inverse exists for an arbitrary matrix, and when a matrix has a regular inverse, this inverse is its unique generalized inverse. Motivation Consider the linear system where is an matrix and the column space of . If and is nonsingular then will be the solution of the system. Note that, if is nonsingular, then Now suppose is rectangular (), or square and singular. Then we need a right candidate of order such that for all That is, is a solution of the linear system . Equivalently, we need a matrix of order such that Hence we can define the generalized inverse as follows: Given an matrix , an matrix is said to be a generalized inverse of if The matrix has been termed a regular inverse of by some authors. Types Important types of generalized inverse include: One-sided inverse (right inverse or left inverse) Right inverse: If the matrix has dimensions and , then there exists an matrix called the right inverse of such that , where is the identity matrix. Left inverse: If the matrix has dimensions and , then there exists an matrix called the left inverse of such that , where is the identity matrix. Bott–Duffin inverse Drazin inverse Moore–Penrose inverse Some generalized inverses are defined and classified based on the Penrose conditions: where denotes conjugate transpose. If satisfies the first condition, then it is a generalized inverse of . If it satisfies the first two conditions, then it is a reflexive generalized inverse of . If it satisfies all four conditions, then it is the pseudoinverse of , which is denoted by and also known as the Moore–Penrose inverse, after the pioneering works by E. H. Moore and Roger Penrose. It is convenient to define an -inverse of as an inverse that satisfies the subset of the Penrose conditions listed above. Relations, such as , can be established between these different classes of -inverses. When is non-singular, any generalized inverse and is therefore unique. For a singular , some generalised inverses, such as the Drazin inverse and the Moore–Penrose inverse, are unique, while others are not necessarily uniquely defined. Examples Reflexive generalized inverse Let Since , is singular and has no regular inverse. However, and satisfy Penrose conditions (1) and (2), but not (3) or (4). Hence, is a reflexive generalized inverse of . One-sided inverse Let Since is not square, has no regular inverse. However, is a right inverse of . The matrix has no left inverse. Inverse of other semigroups (or rings) The element b is a generalized inverse of an element a if and only if , in any semigroup (or ring, since the multiplication function in any ring is a semigroup). 
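Written out explicitly, the defining relation, the Penrose conditions, and the semigroup condition referred to in this article take the following standard forms, stated here for reference with G denoting a candidate generalized inverse of the matrix A and * the conjugate transpose.

```latex
% G is a generalized inverse of A when
\[ A G A = A . \]
% The four Penrose conditions: a G satisfying (1) is a generalized inverse,
% (1)-(2) a reflexive generalized inverse, and (1)-(4) the unique
% Moore--Penrose pseudoinverse A^+.
\begin{align}
  A G A     &= A ,   \tag{1} \\
  G A G     &= G ,   \tag{2} \\
  (A G)^{*} &= A G , \tag{3} \\
  (G A)^{*} &= G A . \tag{4}
\end{align}
% If A is nonsingular, A G A = A forces G = A^{-1}, so the generalized
% inverse is unique and coincides with the ordinary inverse.
% In a semigroup or ring, b is a generalized inverse of a if and only if
\[ a b a = a . \]
```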
The generalized inverses of the element 3 in the ring are 3, 7, and 11, since in the ring : The generalized inverses of the element 4 in the ring are 1, 4, 7, and 10, since in the ring : If an element a in a semigroup (or ring) has an inverse, the inverse must be the only generalized inverse of this element, like the elements 1, 5, 7, and 11 in the ring . In the ring , any element is a generalized inverse of 0, however, 2 has no generalized inverse, since there is no b in such that . Construction The following characterizations are easy to verify: A right inverse of a non-square matrix is given by , provided has full row rank. A left inverse of a non-square matrix is given by , provided has full column rank. If is a rank factorization, then is a g-inverse of , where is a right inverse of and is left inverse of . If for any non-singular matrices and , then is a generalized inverse of for arbitrary and . Let be of rank . Without loss of generality, letwhere is the non-singular submatrix of . Then,is a generalized inverse of if and only if . Uses Any generalized inverse can be used to determine whether a system of linear equations has any solutions, and if so to give all of them. If any solutions exist for the n × m linear system , with vector of unknowns and vector of constants, all solutions are given by , parametric on the arbitrary vector , where is any generalized inverse of . Solutions exist if and only if is a solution, that is, if and only if . If A has full column rank, the bracketed expression in this equation is the zero matrix and so the solution is unique. Generalized inverses of matrices The generalized inverses of matrices can be characterized as follows. Let , and be its singular-value decomposition. Then for any generalized inverse , there exist matrices , , and such that Conversely, any choice of , , and for matrix of this form is a generalized inverse of . The -inverses are exactly those for which , the -inverses are exactly those for which , and the -inverses are exactly those for which . In particular, the pseudoinverse is given by : Transformation consistency properties In practical applications it is necessary to identify the class of matrix transformations that must be preserved by a generalized inverse. For example, the Moore–Penrose inverse, satisfies the following definition of consistency with respect to transformations involving unitary matrices U and V: . The Drazin inverse, satisfies the following definition of consistency with respect to similarity transformations involving a nonsingular matrix S: . The unit-consistent (UC) inverse, satisfies the following definition of consistency with respect to transformations involving nonsingular diagonal matrices D and E: . The fact that the Moore–Penrose inverse provides consistency with respect to rotations (which are orthonormal transformations) explains its widespread use in physics and other applications in which Euclidean distances must be preserved. The UC inverse, by contrast, is applicable when system behavior is expected to be invariant with respect to the choice of units on different state variables, e.g., miles versus kilometers. See also Block matrix pseudoinverse Regular semigroup Citations Sources Textbook Publication Matrices Mathematical terminology
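As a small computational illustration of the constructions and uses described above, the following NumPy sketch builds the standard full-row-rank right inverse, checks that it is a generalized inverse, and uses the Moore–Penrose pseudoinverse to solve a consistent linear system. The matrix and right-hand side are arbitrary illustrative values.

```python
import numpy as np

# An arbitrary 2x3 matrix with full row rank (illustrative values).
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0]])

# Standard right-inverse construction for a full-row-rank matrix:
# R = A^T (A A^T)^{-1}, so that A @ R is the identity.
R = A.T @ np.linalg.inv(A @ A.T)
assert np.allclose(A @ R, np.eye(2))

# Any right inverse is in particular a generalized inverse: A @ R @ A == A.
assert np.allclose(A @ R @ A, A)

# The Moore-Penrose pseudoinverse satisfies all four Penrose conditions and
# gives the minimum-norm solution of a consistent system A x = b.
b = np.array([1.0, 2.0])
x = np.linalg.pinv(A) @ b
assert np.allclose(A @ x, b)
print(x)
```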
Generalized inverse
[ "Mathematics" ]
1,379
[ "Matrices (mathematics)", "Mathematical objects", "nan" ]
7,043,632
https://en.wikipedia.org/wiki/Vestigial%20response
A vestigial response or vestigial reflex in a species is a response that has lost its original function. In humans, vestigial responses include ear perking, goose bumps and the hypnic jerk.

In humans

Ear perking
It has been observed that some people have slight protrusions on the outer ear (also known as the auricle). These protrusions tend towards the top of the auricle and have been named Darwin's tubercle of the auricle. The accepted scientific explanation is that tubercles of the auricle in humans are vestigial structures testifying to our evolutionary past: a throwback to the pointed ears of many mammals and one more vestigial trace of human evolutionary history. This anatomical observation has since been joined by a much later behavioural one, which concerns an automatic ear-perking response seen, for example, in dogs when startled by a sudden noise. This response, though faint, fleeting and hardly discernible in humans, nonetheless manifests itself. It is an automatic-response mechanism that activates even before a human becomes consciously aware that a startling, unexpected or unknown sound has been "heard". That this vestigial response occurs even before the listener becomes consciously aware of a startling noise would explain why the function of ear-perking evolved in animals. The mechanism serves to give a split-second advantage to a startled animal, possibly an animal being stalked and hunted, and that advantage could spell the difference between life and death. The perking response serves to gather and focus additional auditory information that is fed into the brain and analyzed even before the animal becomes consciously aware of the sound. This fraction-of-a-second advantage would explain the evolutionary selection for the response.

Goose bumps
The pilomotor reflex, more commonly known as goose bumps, was originally a reflex that assured the raising of fur for additional insulation against cold. When scared, this response also made the frightened animal seem bigger and a more formidable enemy.

Hypnic jerk
The sudden startled arm-jerking response sometimes experienced when on the verge of sleeping is known as the hypnic jerk. The evolutionary explanation for the existence of the hypnic jerk is unclear, but a possibility is that it is a vestigial reflex humans evolved when they usually slept in trees. Experiencing a hypnic jerk prior to falling asleep may have been selected so that the individual would be able to readjust their sleeping position in the tree with a branch-grabbing response to avoid falling, much as orangutans grasp upper branches of trees while sleeping.

See also
Exaptation
Human vestigiality

References
Evolutionary biology
Vestigial response
[ "Biology" ]
580
[ "Evolutionary biology" ]
7,043,646
https://en.wikipedia.org/wiki/Quantum%20game%20theory
Quantum game theory is an extension of classical game theory to the quantum domain. It differs from classical game theory in three primary ways: Superposed initial states, Quantum entanglement of initial states, Superposition of strategies to be used on the initial states. This theory is based on the physics of information much like quantum computing. History In 1969, John Clauser, Michael Horne, Abner Shimony, and Richard Holt (often referred to collectively as "CHSH") wrote an often-cited paper describing experiments which could be used to prove Bell's theorem. In one part of this paper, they describe a game where a player could have a better chance of winning by using quantum strategies than would be possible classically. While game theory was not explicitly mentioned in this paper, it is an early outline of how quantum entanglement could be used to alter a game. In 1999, a professor in the math department at the University of California at San Diego named David A. Meyer first published Quantum Strategies which details a quantum version of the classical game theory game, matching pennies. In the quantum version, players are allowed access to quantum signals through the phenomenon of quantum entanglement. Since Meyer's paper, many papers have been published exploring quantum games and the way that quantum strategies could be used in games that have been commonly studied in classical game theory. Superposed initial states The information transfer that occurs during a game can be viewed as a physical process. In the simplest case of a classical game between two players with two strategies each, both the players can use a bit (a '0' or a '1') to convey their choice of strategy. A popular example of such a game is the prisoners' dilemma, where each of the convicts can either cooperate or defect: withholding knowledge or revealing that the other committed the crime. In the quantum version of the game, the bit is replaced by the qubit, which is a quantum superposition of two or more base states. In the case of a two-strategy game this can be physically implemented by the use of an entity like the electron which has a superposed spin state, with the base states being +1/2 (plus half) and −1/2 (minus half). Each of the spin states can be used to represent each of the two strategies available to the players. When a measurement is made on the electron, it collapses to one of the base states, thus conveying the strategy used by the player. Entangled initial states The set of qubits which are initially provided to each of the players (to be used to convey their choice of strategy) may be entangled. For instance, an entangled pair of qubits implies that an operation performed on one of the qubits, affects the other qubit as well, thus altering the expected pay-offs of the game. A simple example of this is a quantum version of the Two-up coin game in which the coins are entangled. Superposition of strategies to be used on initial states The job of a player in a game is to choose a strategy. In terms of bits this means that the player has to choose between 'flipping' the bit to its opposite state or leaving its current state untouched. When extended to the quantum domain this implies that the player can rotate the qubit to a new state, thus changing the probability amplitudes of each of the base states. Such operations on the qubits are required to be unitary transformations on the initial state of the qubit. 
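As a concrete illustration of the paragraph above, the sketch below (an illustrative example added here, not taken from the article) represents a single-qubit strategy as a unitary rotation acting on the base state |0>, and shows how the rotation changes the probability amplitudes that a measurement would observe. The parametrization U(theta, phi) is one common choice; the labels "cooperate"/"defect" are only for orientation.

```python
import numpy as np

ket0 = np.array([1.0, 0.0], dtype=complex)   # base state |0>, e.g. "cooperate" / spin +1/2
ket1 = np.array([0.0, 1.0], dtype=complex)   # base state |1>, e.g. "defect"  / spin -1/2

def strategy(theta, phi):
    """A single-qubit strategy: a unitary rotation U(theta, phi).
    theta = 0 leaves the state alone; theta = pi flips it."""
    return np.array([[np.exp(1j * phi) * np.cos(theta / 2), np.sin(theta / 2)],
                     [-np.sin(theta / 2), np.exp(-1j * phi) * np.cos(theta / 2)]])

# A "half flip" puts the qubit into a superposition of the two base strategies
psi = strategy(np.pi / 2, 0.0) @ ket0
print(np.abs(psi) ** 2)                       # [0.5 0.5] -> Born rule: each base state
                                              # is observed half of the time

# A full flip (theta = pi) reproduces the classical "change your move" choice
print(np.abs(strategy(np.pi, 0.0) @ ket0) ** 2)   # [0. 1.]
```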
Rotating the qubit in this way differs from the classical procedure, which chooses between the available strategies with some statistical probabilities.

Multiplayer games
Introducing quantum information into multiplayer games allows a new type of "equilibrium strategy" which is not found in traditional games. The entanglement of players' choices can have the effect of a contract by preventing players from profiting from other players' betrayal.

Quantum Prisoner's Dilemma
The classical Prisoner's Dilemma is a game played between two players with a choice to cooperate with or betray their opponent. Classically, the dominant strategy is to always choose betrayal. When both players choose this strategy every turn, they each ensure a suboptimal profit, but cannot lose, and the game is said to have reached a Nash equilibrium. Profit would be maximized for both players if each chose to cooperate every turn, but this is not the rational choice, so a suboptimal solution is the dominant outcome. In the Quantum Prisoner's Dilemma, both parties choosing to betray each other is still an equilibrium; however, there can also exist multiple Nash equilibria that vary based on the entanglement of the initial states. In the case where the states are only slightly entangled, there exists a certain unitary operation for Alice so that if Bob chooses betrayal every turn, Alice will actually gain more profit than Bob, and vice versa. Thus, a profitable equilibrium can be reached in two additional ways. The case where the initial state is maximally entangled shows the most change from the classical game. In this version of the game, Alice and Bob each have an operator Q that allows for a payout equal to mutual cooperation with no risk of betrayal. This is a Nash equilibrium that also happens to be Pareto optimal. Additionally, the quantum version of the Prisoner's Dilemma differs greatly from the classical version when the game is of unknown or infinite length. Classically, the infinite Prisoner's Dilemma has no defined fixed strategy, but in the quantum version it is possible to develop an equilibrium strategy.

Quantum Volunteer's Dilemma
The Volunteer's dilemma is a well-known game in game theory that models the conflict players face when deciding whether to volunteer for a collective benefit, knowing that volunteering incurs a personal cost. One significant variant of the volunteer's dilemma, introduced by Weesie and Franzen in 1998, involves cost-sharing among volunteers. In this variant, if there is no volunteer, all players receive a payoff of 0. If there is at least one volunteer, the reward of b units is distributed to all players, while the total cost of c units incurred by volunteering is divided equally among all the volunteers. It has been shown that in the classical mixed-strategy setting there is a unique symmetric Nash equilibrium, obtained by setting each player's probability of volunteering to the unique root in the open interval (0,1) of an associated degree-n polynomial. In 2024, a quantum variant of the classical volunteer's dilemma with b = 2 and c = 1 was introduced, generalizing the classical setting by allowing players to utilize quantum strategies. This is achieved by employing the Eisert–Wilkens–Lewenstein quantization framework. In this setting, the players receive an entangled n-qubit state, with each player controlling one qubit. The decision of each player can be viewed as determining two angles.
Symmetric Nash equilibria at which every player volunteers have been exhibited, and these Nash equilibria are Pareto optimal. Furthermore, the Nash-equilibrium payoff in the quantum setting is shown to be higher than the Nash-equilibrium payoff in the classical setting.

Quantum Card Game
A classically unfair card game can be played as follows: There are two players, Alice and Bob. Alice has three cards: one has a star on both sides, one has a diamond on both sides, and one has a star on one side and a diamond on the other side. Alice places the three cards in a box and shakes it up, then Bob draws a card so that both players can only see one side of the card. If the card has the same markings on both sides, Alice wins; if the card has different markings on each side, Bob wins. Clearly, this is an unfair game, in which Alice wins with probability 2/3 and Bob with probability 1/3. Alice gives Bob one chance to "operate" on the box and then allows him to withdraw from the game if he would like, but he can only classically obtain information on one card from this operation, so the game is still unfair. However, Alice and Bob can play a version of this game adjusted to allow for quantum strategies. If the state of a card with a diamond facing up is written as |0> and the state with a star facing up as |1>, then after shaking the box the face-up sides of the three cards can be described by a state |k1 k2 k3>, where each k is either 0 or 1. Now, Bob can take advantage of his ability to operate on the box by constructing a machine as follows: for each card he applies a unitary that is equal to the identity if that card's face-up value k is 0 and to the phase flip Z if it is 1, and he places this unitary between two Hadamard gates. This machine operating on the state |000> gives |k1 k2 k3>, so if Bob inputs |000> to his machine, he obtains |k1 k2 k3> and he knows the state (i.e. the mark facing up) of all three of the cards. From here, Bob can draw one card and then choose either to withdraw or to keep playing the game. Based on the first card that he draws, he knows from the face-up values of the cards whether he has drawn a card that gives him even chances of winning going forward (in which case he can continue to play a fair game) or the card that guarantees he loses. In this way, he can make the game fair for himself. This is an example of a game where a quantum strategy can make a game fair for one player when it would be unfair for them with classical strategies.

Quantum Chess
Quantum Chess was first developed by a graduate student at the University of Southern California named Chris Cantwell. His motivation in developing the game was to expose non-physicists to the world of quantum mechanics. The game uses the same pieces as classical chess (8 pawns, 2 knights, 2 bishops, 2 rooks, 1 queen, 1 king) and is won in the same manner (by capturing the opponent's king). However, the pieces are allowed to obey laws of quantum mechanics such as superposition. By allowing the introduction of superposition, it becomes possible for pieces to occupy more than one square at once. The movement rules for each piece are the same as in classical chess. The biggest difference between quantum chess and classical chess is the check rule. Check is not included in quantum chess because it is possible for the king, as well as all other pieces, to occupy multiple spots on the grid at once.
Another difference is the concept of movement to occupied space. Superposition also allows two pieces to share a space or move through each other. Capturing an opponent's piece is also slightly different in quantum chess than in classical chess. Quantum chess uses quantum measurement as a method of capturing: when attempting to capture an opponent's piece, a measurement is made to determine the probability that the space is occupied and whether the path is blocked. If the probability is favorable, a move can be made to capture.

PQ Penny Flip Game
The PQ penny flip game involves two players: Captain Picard and Q. Q places a penny in a box, then they take turns (Q, then Picard, then Q) either flipping or not flipping the penny without revealing its state to either player. After these three moves have been made, Q wins if the penny is heads up, and Picard wins if it is tails up. The classical Nash equilibrium has both players taking a mixed strategy, with each move having a 50% chance of either flipping or not flipping the penny, and Picard and Q each win the game 50% of the time using classical strategies. Allowing Q to use quantum strategies changes this: applying a Hadamard gate to the state of the penny places it into a superposition of heads and tails, represented by the quantum state (|heads> + |tails>)/sqrt(2). In this state, if Picard does not flip the penny, the state remains unchanged, and flipping the penny merely exchanges |heads> and |tails>, which leaves this particular superposition unchanged as well. Then, no matter Picard's move, Q can once again apply a Hadamard gate to the superposition, which returns the penny to heads up. In this way the quantization of Q's strategy guarantees a win against a player constrained by classical strategies. This game is exemplary of how applying quantum strategies to classical games can shift an otherwise fair game in favor of the player using quantum strategies.

Quantum minimax theorems
The concepts of a quantum player, a zero-sum quantum game and the associated expected payoff were defined by A. Boukas in 1999 (for finite games) and in 2020 by L. Accardi and A. Boukas (for infinite games), within the framework of the spectral theorem for self-adjoint operators on Hilbert spaces. Quantum versions of Von Neumann's minimax theorem were proved.

Paradoxes
Quantum game theory also offers a solution to Newcomb's Paradox. Take the two boxes offered in Newcomb's game to be coupled, in that the contents of box 2 depend on whether the ignorant player takes box 1. Quantum game theory enables a situation in which foreknowledge by the otherwise omniscient player is not required in order to achieve the outcome the paradox describes. The otherwise omniscient player operates on the state of the two boxes using a Hadamard gate, then sets up a device that operates on the state defined by the two boxes with a Hadamard gate again after the ignorant player's choice. Then, no matter the pure or mixed strategy that the ignorant player uses, the ignorant player's choice will lead to its corresponding outcome as defined by the premise of the game. Choosing a strategy for the game and then changing it to fool the otherwise omniscient player (corresponding to operating on the game state using a NOT gate) cannot give the ignorant player an additional advantage, because the two Hadamard operations ensure that the only two outcomes are those defined by the chosen strategy. In this way, the expected situation is achieved no matter the ignorant player's strategy, without requiring a system knowledgeable about that player's future.
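The PQ penny flip argument above (and the double-Hadamard device in the Newcomb discussion) can be checked with a few lines of linear algebra. This is an illustrative sketch added here, not material from the article; the penny is encoded as a qubit with |heads> = (1, 0) and |tails> = (0, 1).

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate: Q's quantum "move"
X = np.array([[0, 1], [1, 0]])                  # NOT gate: a classical flip
I = np.eye(2)

heads = np.array([1.0, 0.0])                    # |heads>

for picard_move, name in [(I, "no flip"), (X, "flip")]:
    # Q plays H, Picard plays a classical move, Q plays H again
    final = H @ picard_move @ H @ heads
    p_heads = abs(final[0]) ** 2
    print(f"Picard: {name:7s} -> probability penny ends heads up = {p_heads:.1f}")

# Both lines print 1.0: Q wins regardless of Picard's classical strategy, because
# X leaves the superposition (|heads> + |tails>)/sqrt(2) unchanged, so the second
# Hadamard always returns the penny to |heads>.
```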
See also Quantum tic-tac-toe: not a quantum game in the sense above, but a pedagogical tool based on metaphors for quantum mechanics Quantum pseudo-telepathy Quantum refereed game CHSH game Jan Sładkowski Jens Eisert References Further reading Danaci, Onur; Zhang, Wenlei; Coleman, Robert; Djakam, William; Amoo, Michaela; Glasser, Ryan T.; Kirby, Brian T.; N'Gom, Moussa; Searles, Thomas A. (2023-02-28), ManQala: Game-Inspired Strategies for Quantum State Engineering, doi:10.48550/arXiv.2302.14582, retrieved 2024-12-06 Quantum information science Game theory
Quantum game theory
[ "Mathematics" ]
3,077
[ "Quantum game theory", "Game theory" ]
7,043,844
https://en.wikipedia.org/wiki/Ideally%20hard%20superconductor
An ideally hard superconductor is a type II superconductor material with an infinite pinning force. In an external magnetic field it behaves like an ideal diamagnet if the field is switched on while the material is already in the superconducting state, the so-called "zero field cooled" (ZFC) regime. In the field-cooled (FC) regime, the ideally hard superconductor perfectly screens changes in the magnetic field rather than the magnetic field itself. Its magnetization behavior can be described by Bean's critical state model. The ideally hard superconductor is a good approximation for the melt-textured high-temperature superconductors (HTSC) used in large-scale HTSC applications such as flywheels, HTSC bearings, and HTSC motors.

See also
Frozen mirror image method
Bean's critical state model

References
Superconductivity Magnetism
Ideally hard superconductor
[ "Physics", "Materials_science", "Engineering" ]
191
[ "Physical quantities", "Superconductivity", "Materials science", "Condensed matter physics", "Electrical resistance and conductance" ]
7,044,083
https://en.wikipedia.org/wiki/Electrical%20system%20of%20the%20International%20Space%20Station
The electrical system of the International Space Station is a critical part of the International Space Station (ISS) as it allows the operation of essential life-support systems, safe operation of the station, and operation of science equipment, as well as improving crew comfort. The ISS electrical system uses solar cells to directly convert sunlight to electricity. Large numbers of cells are assembled in arrays to produce high power levels. This method of harnessing solar power is called photovoltaics. The process of collecting sunlight, converting it to electricity, and managing and distributing this electricity builds up excess heat that can damage spacecraft equipment. This heat must be eliminated for reliable operation of the space station in orbit. The ISS power system uses radiators to dissipate the heat away from the spacecraft. The radiators are shaded from sunlight and aligned toward the cold void of deep space.

Solar array wing
Each ISS solar array wing (often abbreviated "SAW") consists of two retractable "blankets" of solar cells with a mast between them. Each wing is the largest ever deployed in space, weighing over 2,400 pounds and using nearly 33,000 solar cells, each measuring 8 cm square, with 4,100 diodes. When fully extended, each is in length and wide. Each SAW is capable of generating nearly 31 kilowatts (kW) of direct current power. When retracted, each wing folds into a solar array blanket box just high and in length. Altogether, the eight solar array wings can generate about 240 kilowatts in direct sunlight, or about 84 to 120 kilowatts average power (cycling between sunlight and shade). The solar arrays normally track the Sun, with the "alpha gimbal" used as the primary rotation to follow the Sun as the space station moves around the Earth, and the "beta gimbal" used to adjust for the angle of the space station's orbit to the ecliptic. Several different tracking modes are used in operations, ranging from full Sun-tracking, to the drag-reduction mode (night glider and Sun slicer modes), to a drag-maximization mode used to lower the altitude. Over time, the photovoltaic cells on the wings have degraded gradually, having been designed for a 15-year service life. This is especially noticeable with the first arrays to launch, with the P6 and P4 Trusses in 2000 (STS-97) and 2006 (STS-115). STS-117 delivered the S4 truss and solar arrays in 2007. STS-119 (ISS assembly flight 15A) delivered the S6 truss along with the fourth set of solar arrays and batteries to the station during March 2009. To augment the oldest wings, NASA launched three pairs of large-scale versions of the ISS Roll Out Solar Array (iROSA) aboard three SpaceX Dragon 2 cargo launches from early June 2021 to early June 2023: SpaceX CRS-22, CRS-26 and CRS-28. These arrays were deployed along the central part of the wings, covering up to two thirds of their length. Work to install iROSA support brackets on the truss mast cans holding the Solar Array Wings was initiated by the crew members of Expedition 64 in late February 2021. After the first pair of arrays were delivered in early June, a spacewalk on 16 June by Shane Kimbrough and Thomas Pesquet of Expedition 65 to place one iROSA on the 2B power channel and mast can of the P6 truss ended early due to technical difficulties with the array's deployment. The 20 June spacewalk saw the first iROSA's successful deployment and connection to the station's power system.
The 25 June spacewalk saw the astronauts successfully install and deploy the second iROSA on the 4B mast can opposite the first iROSA. The next pair of panels were launched on 26 November 2022. Astronauts Josh Cassada and Frank Rubio of Expedition 68 installed each one on the 3A power channel and mast can on the S4 segment, and the 4A power channel and mast can on the P4 truss segments, on 3 and 22 December 2022, respectively. The third pair of panels were launched on 5 June 2023. On 9 June, astronauts Steve Bowen and Warren Hoburg of Expedition 69 installed the fifth iROSA on the 1A power channel and mast can on the S4 truss segment. On 15 June, Bowen and Hoburg installed the sixth iROSA on the 1B power channel and mast can on the S6 truss segment. The last pair of iROSAs, the seventh and eighth, are planned to be installed on the 2A and 3B power channels on the P4 and S6 truss segments in 2025. Batteries Since the station is often not in direct sunlight, it relies on rechargeable lithium-ion batteries (initially nickel-hydrogen batteries) to provide continuous power during the "eclipse" part of the orbit (35 minutes of every 90 minute orbit). Each battery assembly, situated on the S4, P4, S6, and P6 Trusses, consists of 24 lightweight lithium-ion battery cells and associated electrical and mechanical equipment. Each battery assembly has a nameplate capacity of 110 Ah ( C) (originally 81 Ah) and . This power is fed to the ISS via the BCDU and DCSU respectively. The batteries ensure that the station is never without power to sustain life-support systems and experiments. During the sunlight part of the orbit, the batteries are recharged. The nickel-hydrogen batteries and the battery charge/discharge units were manufactured by Space Systems/Loral (SS/L), under contract to Boeing. Ni-H2 batteries on the P6 truss were replaced in 2009 and 2010 with more Ni-H2 batteries brought by Space Shuttle missions. The nickel-hydrogen batteries had a design life of 6.5 years and could exceed 38,000 charge/discharge cycles at 35% depth of discharge. They were replaced multiple times during the expected 30-year life of the station. Each battery measured and weighed . From 2017 to 2021, the nickel-hydrogen batteries were replaced by lithium-ion batteries. On January 6, 2017, Expedition 50 members Shane Kimbrough and Peggy Whitson began the process of converting some of the oldest batteries on the ISS to the new lithium-ion batteries. Expedition 64 members Victor J. Glover and Michael S. Hopkins concluded the campaign on February 1, 2021. There are a number of differences between the two battery technologies. One difference is that the lithium-ion batteries can handle twice the charge, so only half as many lithium-ion batteries were needed during replacement. Also, the lithium-ion batteries are smaller than the older nickel-hydrogen batteries. Although Li-ion batteries typically have shorter lifetimes than Ni-H2 batteries as they cannot sustain as many charge/discharge cycles before suffering notable degradation, the ISS Li-ion batteries have been designed for 60,000 cycles and ten years of lifetime, much longer than the original Ni-H2 batteries' design life span of 6.5 years. Power management and distribution The power management and distribution subsystem operates at a primary bus voltage set to Vmp, the peak power point of the solar arrays. , Vmp was 160 volts DC (direct current). It can change over time as the arrays degrade from ionizing radiation. 
Microprocessor-controlled switches control the distribution of primary power throughout the station. The battery charge/discharge units (BCDUs) regulate the amount of charge put into the battery. Each BCDU can regulate discharge current from two battery ORUs (each with 38 series-connected Ni-H2 cells), and can provide up to 6.6 kW to the Space Station. During insolation, the BCDU provides charge current to the batteries and controls the amount of battery overcharge. Each day, the BCDU and batteries undergo sixteen charge/discharge cycles. The Space Station has 24 BCDUs, each weighing 100 kg. The BCDUs are provided by SS/L Sequential shunt unit (SSU) Eighty-two separate solar array strings feed a sequential shunt unit (SSU) that provides coarse voltage regulation at the desired Vmp. The SSU applies a "dummy" (resistive) load that increases as the station's load decreases (and vice versa) so the array operates at a constant voltage and load. The SSUs are provided by SS/L. DC-to-DC conversion DC-to-DC converter units supply the secondary power system at a constant 124.5 volts DC, allowing the primary bus voltage to track the peak power point of the solar arrays. Thermal control The thermal control system regulates the temperature of the main power distribution electronics and the batteries and associated control electronics. Details on this subsystem can be found in the article External Active Thermal Control System. Station to shuttle power transfer system From 2007 the Station-to-Shuttle Power Transfer System (SSPTS; pronounced spits) allowed a docked Space Shuttle to make use of power provided by the International Space Station's solar arrays. Use of this system reduced usage of a shuttle's on-board power-generating fuel cells, allowing it to stay docked to the space station for an additional four days. SSPTS was a shuttle upgrade that replaced the Assembly Power Converter Unit (APCU) with a new device called the Power Transfer Unit (PTU). The APCU had the capacity to convert shuttle 28 VDC main bus power to 124 VDC compatible with ISS's 120 VDC power system. This was used in the initial construction of the space station to augment the power available from the Russian Zvezda service module. The PTU adds to this the capability to convert the 120 VDC supplied by the ISS to the orbiter's 28 VDC main bus power. It is capable of transferring up to 8 kW of power from the space station to the orbiter. With this upgrade both the shuttle and the ISS were able to use each other's power systems when needed, though the ISS never again required the use of an orbiter's power systems. In December 2006, during mission STS-116, PMA-2 (then at the forward end of the Destiny module) was rewired to allow for the use of the SSPTS. The first mission to make actual use of the system was STS-118 with Space Shuttle Endeavour. Only Discovery and Endeavour were equipped with the SSPTS. Atlantis was the only surviving shuttle not equipped with the SSPTS, so it could only go on shorter length missions than the rest of the fleet. References External links NASA Glenn Contributions to the International Space Station (ISS) Electrical Power System https://ntrs.nasa.gov/citations/20110015485 Components of the International Space Station Electrical systems Solar power and space
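To put the figures quoted above in perspective, the following back-of-the-envelope sketch (added here for illustration; it is not from any NASA source, and the 100 kW load is simply an assumed value within the quoted 84 to 120 kW band) combines the 90-minute orbit and 35-minute eclipse with an assumed average station load to estimate what the batteries must supply each orbit.

```python
# Rough, illustrative numbers only; the assumed load is a placeholder, not an official figure.
orbit_min = 90.0            # duration of one orbit
eclipse_min = 35.0          # portion of each orbit spent in Earth's shadow
sunlit_fraction = (orbit_min - eclipse_min) / orbit_min
print(f"sunlit fraction of each orbit: {sunlit_fraction:.2f}")    # ~0.61

avg_load_kw = 100.0         # assumed average station load (within the quoted 84-120 kW range)
eclipse_energy_kwh = avg_load_kw * eclipse_min / 60.0
print(f"energy the batteries must supply per eclipse: {eclipse_energy_kwh:.0f} kWh")   # ~58 kWh
```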
Electrical system of the International Space Station
[ "Physics" ]
2,209
[ "Physical systems", "Electrical systems" ]
7,044,103
https://en.wikipedia.org/wiki/Theorem%20of%20the%20cube
In mathematics, the theorem of the cube is a condition for a line bundle over a product of three complete varieties to be trivial. It was a principle discovered, in the context of linear equivalence, by the Italian school of algebraic geometry. The final version of the theorem of the cube was first published by , who credited it to André Weil. A discussion of the history has been given by . A treatment by means of sheaf cohomology, and description in terms of the Picard functor, was given by .

Statement
The theorem states that for any complete varieties U, V and W over an algebraically closed field, and given points u, v and w on them, any invertible sheaf L which has a trivial restriction to each of U × V × {w}, U × {v} × W, and {u} × V × W, is itself trivial. (Mumford p. 55; the result there is slightly stronger, in that one of the varieties need not be complete and can be replaced by a connected scheme.)

Special cases
On a ringed space X, an invertible sheaf L is trivial if isomorphic to O_X, as an O_X-module. If the base X is a complex manifold, then an invertible sheaf is (the sheaf of sections of) a holomorphic line bundle, and trivial means holomorphically equivalent to a trivial bundle, not just topologically equivalent.

Restatement using biextensions
Weil's result has been restated in terms of biextensions, a concept now generally used in the duality theory of abelian varieties.

Theorem of the square
The theorem of the square is a corollary (also due to Weil) applying to an abelian variety A. One version of it states that the function φ_L taking x ∈ A to T_x*L ⊗ L^-1 is a group homomorphism from A to Pic(A) (where T_x is translation by x on line bundles).

References
Notes
Abelian varieties Algebraic varieties Cube
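Supplementing the section above, the theorem of the square can be written out explicitly. The following is the standard formulation, supplied here for reference (with $T_x$ denoting translation by a point $x$ of the abelian variety $A$): for any line bundle $L$ on $A$ and points $x, y \in A$,

\[ T_{x+y}^{*}L \otimes L \;\cong\; T_{x}^{*}L \otimes T_{y}^{*}L , \]

which is equivalent to the statement that
\[ \varphi_L : A \to \operatorname{Pic}(A), \qquad \varphi_L(x) = T_x^{*}L \otimes L^{-1} \]
is a group homomorphism.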
Theorem of the cube
[ "Mathematics" ]
419
[ "Mathematical theorems", "Mathematical problems", "Geometry", "Theorems in geometry" ]
7,044,296
https://en.wikipedia.org/wiki/Cold%20water%20pitting%20of%20copper%20tube
Cold water pitting of copper tube occurs in only a minority of installations. Copper water tubes are usually guaranteed by the manufacturer against manufacturing defects for a period of 50 years. The vast majority of copper systems far exceed this time period, but a small minority may fail after a comparatively short time. The majority of failures seen are the result of poor installation or operation of the water system. The most common failure seen in the last 20 years is pitting corrosion in cold water tubes, also known as Type 1 pitting. These failures are usually the result of poor commissioning practice, although a significant number are initiated by flux left in the bore after assembly of soldered joints. Prior to about 1970 the most common cause of Type 1 pitting was carbon films left in the bore by the manufacturing process. Research and manufacturing improvements in the 1960s virtually eliminated carbon as a cause of pitting, with the introduction of a clause in the 1971 edition of BS 2871 requiring tube bores to be free of deleterious films. Despite this, carbon is still regularly blamed for tube failures without proper investigation.

Copper water tubes
Copper tubes have been used to distribute potable water within buildings for many years, and hundreds of miles are installed throughout Europe every year. The long life of copper when exposed to natural waters is a result of its thermodynamic stability, its high resistance to reacting with the environment, and the formation of insoluble corrosion products that insulate the metal from the environment. The corrosion rate of copper in most potable waters is less than 2.5 μm/year; at this rate a 15 mm tube with a wall thickness of 0.7 mm would last for about 280 years. In some soft waters the general corrosion rate may increase to 12.5 μm/year, but even at this rate it would take over 50 years to perforate the same tube (this arithmetic is reproduced in the short sketch below). Despite the reliability of copper and copper alloys, in some cold hard waters pits may form in the bore of a tube. If these pits form, failures can be expected between 6 months and 2 years after initiation. The mechanism that leads to the pitting of copper in cold hard waters is complex: it requires a water with a specific chemistry that is capable of supporting pit growth and a mechanism for the initiation of the pits.

Pitting
The pits that penetrate the bore are usually covered in a hard pale green nodule of copper sulfate and copper hydroxide salts. If the nodule is removed, a hemispherical pit is revealed, filled with coarse crystals of red cuprous oxide and green cuprous chloride. The pits are often referred to as Type 1 pits and the form of attack as Type 1 pitting.

Water
The characteristics of waters capable of supporting Type 1 pits were determined empirically by Lucey after examining the compositions of waters in which the pitting behaviour was known. They should be cold, less than 30 °C, hard or moderately hard, with 170 to 300 mg/L carbonate hardness, and organically pure. Organically pure waters usually originate from deep wells, or boreholes. Surface waters from rivers or lakes contain naturally occurring organic compounds that inhibit the formation of Type 1 pits, unless a deflocculation treatment has been carried out that removes organic material. Type 1 pitting is relatively uncommon in North America, and this may be a result of the lower population density allowing a significant proportion of the potable water to be obtained from surface-derived sources.
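The wall-lifetime arithmetic quoted above (2.5 μm/year versus 12.5 μm/year against a 0.7 mm wall) is easy to reproduce. The snippet below is purely illustrative and assumes a constant, uniform corrosion rate; it is not a model of pitting, which is localized and much faster.

```python
def years_to_perforate(wall_thickness_mm: float, corrosion_rate_um_per_year: float) -> float:
    """Time for uniform corrosion to eat through a tube wall, assuming a constant rate."""
    return wall_thickness_mm * 1000.0 / corrosion_rate_um_per_year

wall_mm = 0.7                                  # wall thickness of a typical 15 mm copper tube
print(years_to_perforate(wall_mm, 2.5))        # 280.0 years in most potable waters
print(years_to_perforate(wall_mm, 12.5))       # 56.0 years in some soft waters (still over 50)
```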
In addition to being cold hard and organically pure, the water needs a specific chemistry. The effect of the water chemistry can be empirically determined though use of the Pitting Propensity Rating (PPR) a number that takes into account the sulfate, chloride, nitrate and sodium ion concentrations of the water as well as its acidity or pH. A water with a positive PPR has been shown to be capable of propagating Type 1 pits. Initiation Many waters in both the UK and Europe are capable of supporting Type 1 pitting but no problems will be experienced unless a pit is initiated in the wall of the tube. When a copper tube is initially filled with a hard water salts deposit on the wall and the copper slowly reacts with the water producing a thin protective layer of mixed corrosion products and hardness scale. If any pitting of the tube is to occur then this film must be locally disrupted. Three mechanisms allow the disruption of the protective deposits. The most well known, although now the least common, is the presence of carbon films on the bore. Stagnation and flux residues are the most common initiation mechanisms that have led to Type 1 pitting failures in the last ten years. Carbon films Copper tubes are made from the large billets of copper that are gradually worked and drawn down to the required size. As the tubes are drawn they are heat treated to produce the correct mechanical properties. The organic oils and greases used to lubricate the tubes during the drawing processes are broken down during the heat treatment and gradually coat the tube with a film of carbon. If the carbon is left in the bore of the tube then it disrupts the formation of the protective scale and allows the initiation of pits in the wall. The presence of deleterious films, such as carbon, has been prohibited by the British Standards in copper tubes since 1969. All copper tubes for water service are treated, usually by sand (or other nonferrous medium) blasting or acid pickling, to remove any films produced during manufacture with the result that Type 1 pitting initiated by carbon films is now rare. Stagnation If water is left to stand in a tube for an extended period, the chemical characteristics of the water change as the mixed scale and corrosion products are deposited. In addition any loose scale that is not well adhered to the wall will not be flushed away and air dissolved in the water will form bubbles, producing air pockets. These processes can lead to a number of problems mainly on horizontal tube runs. Particles of scale that do not adhere to the walls and are not washed away tend to fall into the bottom of the tube producing a coarse porous deposit. Air pockets that develop in horizontal runs disrupt the formation of protective scales in two areas: the water lines at the sides, and the air space at the top of the tube. In each of the areas that the scale has been disrupted, Type 1 pitting can be initiated. Then, even after the tube has been put back into service, the pit will continue to develop until the wall has perforated. This form of attack is often associated with the commissioning of a system. Once a system has been commissioned it should be either put immediately into service or drained down and dried by flushing with compressed air otherwise pitting may initiate. If either of these options is not possible then the system should be flushed through regularly until it is put into use. Flux In plumbing systems fluxes are used to keep the mating surfaces clean during soldering operations. 
The fluxes often consist of corrosive chemicals such as ammonium chloride and zinc chloride in a binder such as petroleum jelly. If too much is applied to the joint, then the excess flux will melt and run down the bore of a vertical tube or pool in the bottom of a horizontal tube. Where the bore of the tube is covered in a layer of flux it may be locally protected from corrosion, but at the edges of the flux pits often initiate. If the tube is put into service in a water that supports Type 1 pitting then these pits will develop and eventually perforate the sides of the tube.

Good working practice
In most cases Type 1 pitting can be avoided by good working practices. Always use tubes that have been manufactured to BS EN 1057. Tubes greater than 10 mm in diameter made to this standard will always be marked with the number of the standard, the nominal size, wall thickness and temper of the tube, the manufacturer's identification mark and the date of production at least every 600 mm. Tubes less than 10 mm in diameter will be similarly marked at each end. Once a system has been commissioned it should be either put immediately into service or drained down and dried. If either of these options is not possible then the system should be flushed through regularly until it is put into use. It should not be left to stand for more than a week. At present stagnation is the most common cause of Type 1 pitting. Flux should be used sparingly. A small quantity should be painted over the areas to be joined and any excess removed after the joint has been made. Some fluxes are marked as water-soluble, but under some circumstances they are not removed before pitting has initiated.

See also
Erosion corrosion of copper water tubes

References

External links
NACE International - Professional society for corrosion engineers (NACE)
Copper Pipe Corrosion Theory and information on Corrosion of Copper Pipe
Corrosion Copper Water
Cold water pitting of copper tube
[ "Chemistry", "Materials_science", "Environmental_science" ]
1,773
[ "Hydrology", "Metallurgy", "Corrosion", "Electrochemistry", "Water", "Materials degradation" ]
7,044,318
https://en.wikipedia.org/wiki/Terminal%20controller
A terminal controller is a device that collects traffic from a set of terminals and directs it to a concentrator.

References
Telecommunications equipment
Terminal controller
[ "Technology" ]
28
[ "Computing stubs" ]