Dataset fields: id (int64, 39 to 79M), url (string, 31–227 chars), text (string, 6–334k chars), source (string, 1–150 chars), categories (list, 1–6 items), token_count (int64, 3–71.8k), subcategories (list, 0–30 items).
2,139,649
https://en.wikipedia.org/wiki/Moore%20space%20%28algebraic%20topology%29
In algebraic topology, a branch of mathematics, Moore space is the name given to a particular type of topological space that is the homology analogue of the Eilenberg–MacLane spaces of homotopy theory, in the sense that it has only one nonzero homology (rather than homotopy) group. The study of Moore spaces was initiated by John Coleman Moore in 1954. Formal definition Given an abelian group G and an integer n ≥ 1, let X be a CW complex such that H_n(X) ≅ G and H̃_i(X) = 0 for i ≠ n, where H_n(X) denotes the n-th singular homology group of X and H̃_i(X) is the i-th reduced homology group. Then X is said to be a Moore space, often written M(G, n). It is also sensible to require (as Moore did) that X be simply-connected if n > 1. Examples The sphere S^n is a Moore space M(ℤ, n) for n ≥ 1. The real projective plane ℝP^2 is a Moore space M(ℤ/2, 1). See also Eilenberg–MacLane space, the homotopy analog. Homology sphere References Hatcher, Allen. Algebraic Topology, Cambridge University Press (2002). For further discussion of Moore spaces, see Chapter 2, Example 2.40. A free electronic version of this book is available on the author's homepage. Algebraic topology
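To make the reconstructed defining conditions above easier to scan, here is the same standard definition set in display form; this is a restatement of Hatcher, Example 2.40, not new material, and the M(G, n) notation follows that source:

```latex
% A Moore space M(G, n): a CW complex X, simply-connected when n > 1, with
\[
  H_n(X;\mathbb{Z}) \cong G,
  \qquad
  \widetilde{H}_i(X;\mathbb{Z}) = 0 \quad \text{for all } i \neq n.
\]
% Standard examples: M(\mathbb{Z}, n) = S^n and M(\mathbb{Z}/2, 1) = \mathbb{RP}^2.
```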
Moore space (algebraic topology)
[ "Mathematics" ]
252
[ "Topology stubs", "Fields of abstract algebra", "Topology", "Algebraic topology" ]
2,139,702
https://en.wikipedia.org/wiki/Product%20term
In Boolean logic, a product term is a conjunction of literals, where each literal is either a variable or its negation. Examples Examples of product terms include A·B, A·¬B·C, and ¬A. Origin The terminology comes from the similarity of AND to multiplication as in the ring structure of Boolean rings. Minterms For a Boolean function of n variables x_1, …, x_n, a product term in which each of the n variables appears exactly once (in either its complemented or uncomplemented form) is called a minterm. Thus, a minterm is a logical expression of n variables that employs only the complement operator and the conjunction operator. References Fredrick J. Hill and Gerald R. Peterson, 1974, Introduction to Switching Theory and Logical Design, Second Edition, John Wiley & Sons, NY. Boolean algebra
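As a quick illustration of the minterm idea (my own sketch, not from the article; the variable naming x1…xn and the ¬/· rendering are arbitrary choices), here is a small Python routine that enumerates the minterms of any Boolean function by testing every assignment:

```python
from itertools import product

def minterms(f, n):
    # Enumerate the minterms of an n-variable Boolean function f:
    # every assignment where f is true yields one product term in
    # which each variable appears exactly once, plain or negated.
    terms = []
    for bits in product([0, 1], repeat=n):
        if f(*bits):
            terms.append("·".join(f"x{i+1}" if b else f"¬x{i+1}"
                                  for i, b in enumerate(bits)))
    return terms

# Example: f(x1, x2, x3) = x1 AND (x2 OR x3)
print(minterms(lambda a, b, c: a and (b or c), 3))
# ['x1·¬x2·x3', 'x1·x2·¬x3', 'x1·x2·x3']
```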
Product term
[ "Mathematics" ]
160
[ "Boolean algebra", "Fields of abstract algebra", "Mathematical logic" ]
2,139,794
https://en.wikipedia.org/wiki/Paratrooper%20%28ride%29
The Paratrooper, also known as the "Parachute Ride" or "Umbrella Ride", is a type of fairground ride. It is a ride where seats suspended below a wheel rotate at an angle. The seats are free to rock sideways and swing out under centrifugal force as the wheel rotates. Invariably, the seats on the Paratrooper ride have a round umbrella or other canopy above them. In contrast to modern thrill rides, the Paratrooper is a ride suitable for almost all ages. Most Paratrooper rides require the rider to be at least 36 inches (91.44 cm) tall to ride accompanied by an adult, and over 48 inches (121.92 cm) to ride alone. Older Paratrooper rides have a rotating wheel which is permanently raised, which has the disadvantage that riders can only load two at a time as each seat is brought to hang vertically at the lowest point of the wheel. Some models have a lower platform, slightly raised at the ends, that permits the loading of up to three seats at a time. Most of these rides were made by the manufacturing companies Bennett, Watkins or Hrubetz. The German manufacturer Heintz-Fahtze also made larger models of the Paratrooper under the name of the Twister. Modern Paratrooper rides use a hydraulic lifting piston to raise the wheel to its riding angle while spinning the seats. In its lowered position, all the seats hang vertically near the ground and can be loaded simultaneously. The above manufacturers also made these types, and the height requirements to ride them remain the same. Variations The Force 10 is a ride made by Tivoli Enterprises that features some of the same motion as the Paratrooper. The Star Trooper is a variant created by Dartron Industries that features seats facing both ways. The Star Trooper's initial design eventually evolved into the Cliffhanger, also made by Dartron Industries. The same seats are used in the Swift-O-Plane, and the height requirement is the same as for the Enterprise. In the 1980s, British amusement manufacturer David Ward developed the Super Trooper, in which the wheel rises horizontally up a central column. Once at the top, the wheel slants up to 45 degrees in either direction. He built two 12-seat versions and a 10-seat version. In 2018, PWS Rides Ltd. acquired the plans from Ward to build a new version, with the first example due to be delivered in early 2019. References Amusement rides
Paratrooper (ride)
[ "Physics", "Technology" ]
508
[ "Physical systems", "Machines", "Amusement rides" ]
2,139,847
https://en.wikipedia.org/wiki/Keyhole%20Markup%20Language
Keyhole Markup Language (KML) is an XML notation for expressing geographic annotation and visualization within two-dimensional maps and three-dimensional Earth browsers. KML was developed for use with Google Earth, which was originally named Keyhole Earth Viewer. It was created by Keyhole, Inc., which was acquired by Google in 2004. KML became an international standard of the Open Geospatial Consortium in 2008. Google Earth was the first program able to view and graphically edit KML files, but KML support is now available in many GIS software applications, such as Marble, QGIS, and ArcGIS. Structure The KML file specifies a set of features (place marks, images, polygons, 3D models, textual descriptions, etc.) that can be displayed on maps in geospatial software implementing the KML encoding. Every place has a longitude and a latitude. Other data can make a view more specific, such as tilt, heading, or altitude, which together define a "camera view" along with a timestamp or timespan. KML shares some of the same structural grammar as Geography Markup Language (GML). Some KML information cannot be viewed in Google Maps or Google Maps for mobile. KML files are very often distributed as KMZ files, which are zipped KML files with a .kmz extension. The contents of a KMZ file are a single root KML document and optionally any overlays, images, icons, and COLLADA 3D models referenced in the KML, including network-linked KML files. The root KML document is by convention a file named "doc.kml" at the root directory level, which is the file loaded upon opening; referenced files are placed in subdirectories (e.g. images for overlays). An example KML document is:
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>New York City</name>
      <description>New York City</description>
      <Point>
        <coordinates>-74.006393,40.714172,0</coordinates>
      </Point>
    </Placemark>
  </Document>
</kml>
The MIME type associated with KML is application/vnd.google-earth.kml+xml; the MIME type associated with KMZ is application/vnd.google-earth.kmz. Geodetic reference systems in KML For its reference system, KML uses 3D geographic coordinates: longitude, latitude, and altitude, in that order, with negative values for west, south, and below mean sea level. The longitude/latitude components (decimal degrees) are as defined by the World Geodetic System of 1984 (WGS84). Altitude, the vertical component, is measured in meters from the WGS84 EGM96 Geoid vertical datum. If altitude is omitted from a coordinate string, e.g. (-77.03647, 38.89763), then the default value of 0 (approximately sea level) is assumed for the altitude component, i.e. (-77.03647, 38.89763, 0). A formal definition of the coordinate reference system (encoded as GML) used by KML is contained in the OGC KML 2.2 Specification. This definition references well-known EPSG CRS components. OGC standard process The KML 2.2 specification was submitted to the Open Geospatial Consortium to assure its status as an open standard for all geobrowsers. In November 2007 a new KML 2.2 Standards Working Group was established within OGC to formalize KML 2.2 as an OGC standard. Comments were sought on the proposed standard until January 4, 2008, and it became an official OGC standard on April 14, 2008. The OGC KML Standards Working Group finished working on change requests to KML 2.2 and incorporated accepted changes into the KML 2.3 standard.
The official OGC KML 2.3 standard was published on August 4, 2015. See also Packet radio protocols Brian McClendon CityGML GeoJSON Geospatial content management system GPS eXchange Format Keyhole satellite series NASA WorldWind Point of interest SketchUp file formats The Blue Marble Waypoint Wikimapia References External links OGC KML 2.2 Standard OGC Official KML 2.2 Schema Google's KML Documentation Articles with example code GIS file formats Google Open formats Open Geospatial Consortium XML-based standards XML markup languages Data compression
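To make the KMZ packaging convention concrete, here is a minimal Python sketch (my own illustration, not part of the article or the OGC standard text) that writes the example Placemark to doc.kml inside a .kmz archive using the standard-library zipfile module; the output file name nyc.kmz is arbitrary:

```python
import zipfile

# The example Placemark from above, to be stored as the KMZ's root document.
KML = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>New York City</name>
      <description>New York City</description>
      <Point>
        <coordinates>-74.006393,40.714172,0</coordinates>
      </Point>
    </Placemark>
  </Document>
</kml>
"""

with zipfile.ZipFile("nyc.kmz", "w", zipfile.ZIP_DEFLATED) as kmz:
    # By convention the root document is named doc.kml at the archive root;
    # overlays, icons, and models referenced by the KML go in subdirectories.
    kmz.writestr("doc.kml", KML)
```

Because a KMZ is an ordinary ZIP archive, any ZIP tool can inspect it; the only KML-specific conventions are the doc.kml root document and the subdirectory layout for referenced resources.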
Keyhole Markup Language
[ "Technology" ]
1,015
[ "Computer standards", "XML-based standards" ]
2,139,849
https://en.wikipedia.org/wiki/Mass%20ratio
In aerospace engineering, mass ratio is a measure of the efficiency of a rocket. It describes how much more massive the vehicle is with propellant than without; that is, the ratio of the rocket's wet mass (vehicle plus contents plus propellant) to its dry mass (vehicle plus contents). A more efficient rocket design requires less propellant to achieve a given goal, and would therefore have a lower mass ratio; however, for any given efficiency a higher mass ratio typically permits the vehicle to achieve higher delta-v. The mass ratio is a useful quantity for back-of-the-envelope rocketry calculations: it is an easy number to derive from either the propellant mass fraction or from rocket and propellant mass, and therefore serves as a handy bridge between the two. It is also useful for getting an impression of the size of a rocket: while two rockets with mass fractions of, say, 92% and 95% may appear similar, the corresponding mass ratios of 12.5 and 20 clearly indicate that the latter system requires much more propellant. Typical multistage rockets have mass ratios in the range from 8 to 20. The Space Shuttle, for example, has a mass ratio around 16. Derivation The definition arises naturally from Tsiolkovsky's rocket equation: Δv = v_e ln(m_0/m_1), where Δv is the desired change in the rocket's velocity, v_e is the effective exhaust velocity (see specific impulse), m_0 is the initial mass (rocket plus contents plus propellant), and m_1 is the final mass (rocket plus contents). This equation can be rewritten in the following equivalent form: m_0/m_1 = e^(Δv/v_e). The fraction on the left-hand side of this equation is the rocket's mass ratio by definition. This equation indicates that a Δv of n times the exhaust velocity requires a mass ratio of e^n. For instance, for a vehicle to achieve a Δv of 2.5 times its exhaust velocity would require a mass ratio of e^2.5 (approximately 12.2). One could say that a "velocity ratio" of Δv/v_e requires a mass ratio of e^(Δv/v_e). Alternative definition Sutton defines the mass ratio inversely, as the final mass divided by the initial mass: MR = m_1/m_0. In this case, the values for the mass ratio are always less than 1. See also Rocket fuel Propellant mass fraction Payload fraction References Astrodynamics Mass Ratios
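The arithmetic above is easy to check in code. The following is a minimal Python sketch (my own illustration; the function names are arbitrary) of the two routes to the mass ratio described in the text, via the rocket equation and via the propellant mass fraction:

```python
import math

def mass_ratio(delta_v, v_e):
    # Tsiolkovsky rocket equation, delta_v = v_e * ln(m0/m1),
    # solved for the mass ratio m0/m1.
    return math.exp(delta_v / v_e)

def mass_ratio_from_fraction(propellant_fraction):
    # Mass ratio implied by a propellant mass fraction:
    # m0/m1 = 1 / (1 - fraction).
    return 1.0 / (1.0 - propellant_fraction)

print(round(mass_ratio(2.5, 1.0), 1))            # 12.2, i.e. e**2.5
print(round(mass_ratio_from_fraction(0.92), 1))  # 12.5
print(round(mass_ratio_from_fraction(0.95), 1))  # 20.0
```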
Mass ratio
[ "Physics", "Mathematics", "Engineering" ]
445
[ "Scalar physical quantities", "Astrodynamics", "Physical quantities", "Quantity", "Mass", "Ratios", "Size", "Arithmetic", "Aerospace engineering", "Wikipedia categories named after physical quantities", "Matter" ]
2,139,915
https://en.wikipedia.org/wiki/Mason%27s%20mark
A mason's mark is an engraved symbol often found on dressed stone in buildings and other public structures. In stonemasonry Regulations issued in Scotland in 1598 by James VI's Master of Works, William Schaw, stated that on admission to the guild, every mason had to enter his name and his mark in a register. There are three types of marks used by stonemasons. Banker marks were made on stones before they were sent to be used by the walling masons. These marks served to identify to their paymaster the banker mason who had prepared the stones. This system was employed only when the stone was paid for by measure, rather than by time worked. For example, the 1306 contract between Richard of Stow, mason, and the Dean and Chapter of Lincoln Cathedral, specified that the plain walling would be paid for by measure, and indeed banker marks are found on the blocks of walling in this cathedral. Conversely, the masons responsible for walling the eastern parts of Exeter Cathedral were paid by the week, and consequently few banker marks are found on this part of the cathedral. Banker marks make up the majority of masons' marks, and are generally what are meant when the term is used without further specification. Assembly marks were used to ensure the correct installation of important pieces of stonework. For example, the stones on the window jambs in the chancel of North Luffenham church in Rutland are each marked with a Roman numeral, directing the order in which the stones were to be installed. Quarry marks were used to identify the source of a stone, or occasionally its quality. In Freemasonry Freemasonry, a fraternal order that uses an analogy to stonemasonry for much of its structure, also makes use of marks. A Freemason who takes the degree of Mark Master Mason will be asked to create his own Mark, as a type of unique signature or identifying badge. Some of these can be quite elaborate. Gallery of mason's marks See also Benchmark (surveying) Builder's signature Carpenter's mark House mark Merchant's mark References Further reading External links Examples of Mason's marks Site detailing Mason's Marks in Scotland Freemasonry Masonic symbolism Stonemasonry Symbols Inscriptions
Mason's mark
[ "Mathematics", "Engineering" ]
463
[ "Construction", "Symbols", "Stonemasonry" ]
15,833,970
https://en.wikipedia.org/wiki/Flip%20Video
The Flip Video cameras are an American series of pocket video cameras for digital video created by Pure Digital Technologies, a company bought by Cisco Systems in March 2009; variants include the UltraHD, the MinoHD, and the SlideHD. Flip Video cameras were known for their simple interface with few buttons, minimal menus and built-in USB plugs (from which the Flip name derived), and were marketed as making video "simple to shoot, simple to share". Production of the line of Flip video cameras ran from 2006 until April 2011, when Cisco Systems discontinued them so as to "exit aspects of [its] consumer businesses". Flip cameras contributed to an increase in the popularity of similar small tapeless camcorders, although the inclusion of HD video cameras in many smartphones has since made them a more niche product. Features Flip cameras' video quality was unusually good for their prices and sizes. They could record video at different resolutions. FlipHD camcorders digitally recorded high-definition video at 1280 × 720 resolution using H.264 video compression, Advanced Audio Coding (AAC) audio compression and the MP4 file format, while the older models used a 640 × 480 resolution. The MinoHD and SlideHD models have an internal lithium-ion rechargeable battery included, while the Ultra series included a removable battery that can be interchanged with standard AA or AAA batteries. All models lack memory card extension slots, though the Flip UltraHD (2 hr) can record to a storage device via FlipPort. Models can be connected to a computer with a flip-out USB connector, without the need for a USB cable. Flip cameras record monaural sound, and use a simple clip-navigation interface with a D-pad and two control buttons which allow for viewing of recorded videos, starting and stopping recording, and digital zoom. The third and final generation of Flip UltraHD cameras retailed for $149.99 and $199.99 for the 4 GB (1 hour) and 8 GB (2 hour) models respectively, incorporated digital stabilization, and increased the frame rate from 30 to 60 frames per second. With FlipPort, users can plug in external accessories. All Flip cameras included the required video player and 3ivx codec software, FlipShare, on the camera's internal storage. On all models after 2010, videos can be streamed to TV screens over an HDMI cable. Later Flip Video models came in a variety of colors, and could be custom ordered with designs digitally painted on. Accessories for the Flip Video camera include an underwater case, a mini-tripod, a bicycle helmet attachment, a wool case (Mino camcorders) or soft pouch (Flip UltraHD), rechargeable battery replacements for the UltraHD series, and an extension cable. Flip Video's accompanying software is called FlipShare, which facilitates downloads of videos, basic editing, and uploading to various websites. After the release of version 5.6, FlipShare no longer included a function to convert video to WMV format. History The first version was originally released as the "Pure Digital Point & Shoot" video camcorder on May 1, 2006 as a reusable follow-on to the popular CVS One-Time-Use Camcorder, a Pure Digital product sold through CVS/pharmacy stores that was designed for direct conversion to DVD media. The CVS product was a line extension of previous digital disposable camera products, sold initially through Ritz Camera and associated brands under the Dakota Digital name. The camcorder was renamed as the Flip Video a year later. On September 12, 2007, the Flip Ultra was released.
The Flip Ultra was the best-selling camcorder on Amazon.com after its debut, capturing about 13% of the camcorder market. Flip products received an unusually large advertising campaign during their introduction, including product placement, celebrity endorsements, and sponsorship of events such as concert tours. From 2009, and through the Cisco takeover, the Flip range was sold in Europe by Widget UK. Models
Pure Digital One-Time-Use Camcorder (20 minutes – model 200)
Pure Digital Point & Shoot Video Camcorder (30 minutes – 225), Codenamed: Saturn 2.5
Pure Digital Point & Shoot Video Camcorder (30 minutes – PSV-351; 60 minutes – PSV-352), Codenamed: Saturn 3.5
Pure Digital Flip Video (30 minutes – F130/PSV-451; 60 minutes – F160/PSV-452), Codenamed: Austin
Flip Video Ultra (30 minutes – F230/PSV-551; 60 minutes – F260/PSV-552), Codenamed: Chicago
Flip Video Ultra II (2 hours – U1120), Codenamed: Phoenix SD
Flip Video UltraHD (2 hours – U2120), Codenamed: Phoenix HD
Flip Video UltraHD II (1 hour – U260)
Flip Video UltraHD III (2 hours – U32120)
Flip Video Mino (1 hour – F360), Codenamed: Fremont
Flip Video MinoHD (1 hour – F460), Codenamed: Newton
Flip Video MinoHD II (2 hours – M2120), Codenamed: Quantico
Cisco Flip MinoPro (4 hours – MP2240)
Flip Video MinoHD III (1 hour – M3160; 2 hours – M31120)
Flip Video SlideHD (4 hours – S1240), Codenamed: Jamestown
Flip Video Ultra Live (2 hours) – supposed to be launched April 12, 2011; only a limited number were produced
Mino A smaller version of the Flip, the Flip Video Mino, was released on June 4, 2008. The Mino captures video in 640 × 480 resolution at 30 frames per second. On launch it retailed for about US$180 in the United States, providing about 60 minutes of video recording capability with 2 GB flash memory capacity. The third and final Flip MinoHD was released on September 20, 2010. It featured HD recording in the same dimensions as the second-generation MinoHD (1280 × 720 at 30 fps); the only major change in the third generation was image stabilization. Also released on September 20, 2010 was a 4 GB MinoHD with one hour of recording capability. The one-hour version retailed for $179 and the two-hour version retailed for $229. Free Minos were made available to all audience members at YouTube Live due to Flip Video's sponsorship of the event. A station was even set up so attendees could upload their videos to YouTube. FlipShare TV FlipShare TV was an accessory for the third-generation Flip UltraHD camera, allowing users to connect the TV base to their TV, plug a USB transmitter key into their computer, and view their FlipShare library. Acquisition and shutdown by Cisco On May 21, 2009, Cisco Systems acquired Pure Digital Technologies for US$590 million in stock. On April 12, 2011, Cisco announced that it "will exit aspects of its consumer business", including shutting down the Flip Video division. Some observers suggested that the Flip was facing growing competition from camera phones, particularly smartphones (which disrupted consumer electronics trade such as point-and-shoot cameras, wristwatches, alarm clocks, portable music players and GPS devices) that had recently begun incorporating HD video cameras. David Pogue of The New York Times disagreed with the camera phone-competition theory. He said that smartphones made up only a small fraction of overall worldwide sales of cell phones in 2011, and the Flip was still selling strongly when its discontinuation was announced.
Other potential causes of the shutdown include the fact that consumer hardware was not part of Cisco's core businesses of services and software, and that its profit margins on consumer electronics were narrow. CNET reported that Flip's Christmas 2010 sales disappointed Cisco. Cisco shut down the Flip business instead of divesting it, retaining its technology. It is possible that Cisco always intended the opposite of an acqui-hire: closing the company while keeping Flip's patents and other intellectual property for Cisco's videoconferencing business, but not the consumer business or its employees. References External links Specifications of all 3 video cameras of the final generation of flips Cisco buys Flip Video maker Cisco to shutter Flip camera business Video hardware Camcorders Cisco Systems Cisco Systems acquisitions Cameras introduced in 2006 Products and services discontinued in 2011 Cisco products
Flip Video
[ "Engineering" ]
1,761
[ "Electronic engineering", "Video hardware" ]
15,837,161
https://en.wikipedia.org/wiki/Fluoride%20varnish
Fluoride varnish is a highly concentrated form of fluoride that is applied to the tooth's surface by a dentist, dental hygienist or other dental professional, as a type of topical fluoride therapy. It is not a permanent varnish but, due to its adherent nature, it is able to stay in contact with the tooth surface for several hours. It may be applied to the enamel, dentine or cementum of the tooth and can be used to help prevent decay, remineralise the tooth surface and treat dentine hypersensitivity. There are more than 30 fluoride-containing varnish products on the market today, and they have varying compositions and delivery systems. These compositional differences lead to widely variable pharmacokinetics, the effects of which remain largely untested clinically. Fluoride varnishes are relatively new in the United States, but they have been widely used in western Europe, Canada, South Africa and the Scandinavian countries since the 1980s as a dental caries prevention therapy. They are recognised by the Food and Drug Administration for use as desensitising agents, but not, currently, as anti-decay agents. Both Canadian and European studies have reported that fluoride varnish is as effective in preventing tooth decay as professionally applied fluoride gel; however, it is not in widespread use for this purpose. Fluoride varnish is composed of a high concentration of fluoride as a salt or silane-based preparation in a fast-drying, alcohol- and resin-based solution. The concentration, form of fluoride, and dispensing method may vary depending on the manufacturer. While most fluoride varnishes contain 5% sodium fluoride, at least one brand of fluoride varnish contains 1% difluorsilane in a polyurethane base, and one brand contains 2.5% sodium fluoride in a shellac base that has been milled to perform similarly to 5% sodium fluoride products. Clinical recommendations A panel of experts convened by the American Dental Association (ADA) Council on Scientific Affairs presents evidence-based clinical recommendations regarding professionally applied, prescription-strength and home-use topical fluoride agents for caries prevention. The panel recommends the use of 2.26 percent fluoride varnish for people at risk of developing dental caries. As part of the evidence-based approach to care, these clinical recommendations should be integrated with a practitioner's professional judgment and the patient's needs and preferences. United Kingdom Fluoride varnish is widely used in the United Kingdom, following guidelines from multiple sources backing its efficacy. Public Health England, a UK government organisation sponsored by the Department of Health, released guidance in 2014 recommending fluoride varnish application at least twice yearly for children and young adults. Similarly, the Scottish Intercollegiate Guidelines Network and the Scottish Dental Clinical Effectiveness Programme have both released independent guidance recommending at least twice yearly fluoride varnish application, citing a strong clinical evidence base. SIGN recommends fluoride varnish at a concentration of 2.2%, while SDCEP recommends 15%. Types of varnish Different varnish products release varying amounts of calcium, inorganic phosphate, and fluoride ions. MI varnish releases the largest amounts of calcium and fluoride ions. Enamel Pro varnish releases the most inorganic phosphate ions. Each type of varnish is designed to be used in specific situations.
To date, there have been no studies that show that altering the basic formulation recommended by the FDA will result in greater caries reduction. Effectiveness There is some evidence that fluoride varnish treatment has a better outcome at preventing cavities at a lower cost compared to other fluoride treatments such as fluoride mouth rinsing. For fluoride varnish treatment the benefit-to-cost ratio is 1.8:1, whereas for fluoride mouth rinsing it is 0.9:1. With fluoride varnish treatments, one can save by preventing future restorations. Fluoride varnish also requires fewer treatments for measurable effectiveness; therefore, in the long run it is cost-effective when compared to other treatments. A 2020 Cochrane systematic review found that while varnish may be effective at preventing cavities when applied to first permanent molars, there is no evidence to suggest whether varnish is superior to resin-based fissure sealants. There is low-quality evidence suggesting that sealing tooth surfaces in addition to applying fluoride varnish may have an advantage over fluoride varnish alone. Advantages and disadvantages Advantages
Fluoride varnishes are available in different flavours, which can be advantageous when treating younger patients.
They dry rapidly and will set even in the presence of saliva.
Because they do not require the use of fluoride trays, they are suitable for use in patients with a strong gag reflex.
Due to the small amounts used and the rapid setting time, there is only a small or negligible amount of fluoride ingested.
The varnish has a sticky consistency which helps it adhere to the tooth's surface, thereby allowing the fluoride to stay in contact with the tooth for several hours.
Based on published findings, professionally applied fluoride varnish does not appear to be a risk factor for dental fluorosis, even in children under the age of 6. This is due to the reduction in the amount of fluoride which may potentially be swallowed during the fluoride treatment, because of the small quantities used and the adherence of the varnish to the teeth.
Fluoride varnish treatments are shown to reduce the number of the cariogenic bacteria S. mutans by over ten-fold.
Fluoride varnish has a higher fluoride concentration than foam or gel. There was not a significant difference in the amount of remineralization between gels, foams, and varnish; a study with a larger sample size and a longer time frame could show differing results.
They can be applied easily and quickly.
Disadvantages
Due to the color and adherence of some brands of fluoride varnish, they may cause a temporary change in the surface color of teeth, as well as of some filling materials. As the varnish is worn away by eating and brushing, the yellowish colour fades.
Varnish costs more than gel and requires a prescription, unlike the gel, which is available over the counter.
Varnishes do not have the bitter taste of some fluoride gels, but in some patients the taste of the varnish can cause nausea, especially when consuming food within the 24 hours post treatment.
Indications and contraindications Indications for use
Use as a topical fluoride agent on moderate- and high-risk patients, especially children aged 5 and younger.
Desensitizing agent for exposed root surfaces.
Fluoridated cavity varnish.
When a higher concentration of fluoride is needed for high caries-risk patients.
In the elderly, to prevent increasingly prevalent root dentin lesions, which may require a higher concentration of fluoride.
On advanced enamel carious lesions, which may also require a higher fluoride concentration for remineralization.
Fluoride treatment for institutionalized patients, or in other situations where setting, equipment and patient management might preclude the use of other fluoride delivery methods.
Caries prevention on exposed root surfaces.
Remineralization of lesions in root dentin.
Fluoride application around orthodontic bands and brackets.
Fluoride treatment on patients when there is a concern that a fluoride rinse, gel or foam might be swallowed.
Contraindications for use
Areas with open cavities.
Patients that are at low risk or are decay-free and live in an area where the water is fluoridated.
Treatment of areas where discoloration after treatment may be an aesthetic concern.
Fluoride varnish application is also contraindicated in patients with ulcerative gingivitis and stomatitis.
See also Dental caries Fluoride therapy Xerostomia Dental fluorosis Dentin hypersensitivity Dental restoration Dental surgery References External links The Canadian Dental Association The American Dental Association Canadian Dental Hygienists Association American Dental Hygienists' Association Centers for Disease Control and Prevention Dental materials
Fluoride varnish
[ "Physics" ]
1,694
[ "Materials", "Dental materials", "Matter" ]
15,838,199
https://en.wikipedia.org/wiki/Thymosin%20beta-4
Thymosin beta-4 is a protein that in humans is encoded by the TMSB4X gene. The recommended INN (International Nonproprietary Name) for thymosin beta-4 is 'timbetasin', as published by the World Health Organization (WHO). The protein consists (in humans) of 43 amino acids (sequence: SDKPDMAEI EKFDKSKLKK TETQEKNPLP SKETIEQEKQ AGES) and has a molecular weight of 4921 g/mol. Thymosin β4 is a major cellular constituent in many tissues. Its intracellular concentration may reach as high as 0.5 mM. Following thymosin α1, β4 was the second of the biologically active peptides from Thymosin Fraction 5 to be completely sequenced and synthesized. Function This gene encodes an actin-sequestering protein, which plays a role in the regulation of actin polymerization. The protein is also involved in cell proliferation, migration, and differentiation. This gene escapes X inactivation and has a homolog on chromosome Y (TMSB4Y). Biological activities of thymosin β4 Any concepts of the biological role of thymosin β4 must inevitably be coloured by the demonstration that total ablation of the thymosin β4 gene in the mouse allows apparently normal embryonic development of mice which are fertile as adults. Actin binding Thymosin β4 was initially perceived as a thymic hormone. However, this changed when it was discovered that it forms a 1:1 complex with G (globular) actin, and is present at high concentration in a wide range of mammalian cell types. When appropriate, G-actin monomers polymerize to form F (filamentous) actin, which, together with other proteins that bind to actin, comprises cellular microfilaments. The formation by G-actin of a complex with β-thymosin ("sequestration") opposes this. Due to its profusion in the cytosol and its ability to bind G-actin but not F-actin, thymosin β4 is regarded as the principal actin-sequestering protein in many cell types. Thymosin β4 functions like a buffer for monomeric actin, as represented in the following reaction: F-actin ↔ G-actin + Thymosin β4 ↔ G-actin/Thymosin β4. Release of G-actin monomers from thymosin β4 occurs as part of the mechanism that drives actin polymerization in the normal function of the cytoskeleton in cell morphology and cell motility. The sequence LKKTET, which starts at residue 17 of the 43-amino-acid sequence of thymosin beta-4 and is strongly conserved between all β-thymosins, together with a similar sequence in WH2 domains, is frequently referred to as "the actin-binding motif" of these proteins, although modelling based on X-ray crystallography has shown that essentially the entire length of the β-thymosin sequence interacts with actin in the actin-thymosin complex. "Moonlighting" In addition to its intracellular role as the major actin-sequestering molecule in cells of many multicellular animals, thymosin β4 shows a remarkably diverse range of effects when present in the fluid surrounding animal tissue cells. Taken together, these effects suggest that thymosin has a general role in tissue regeneration. This has suggested a variety of possible therapeutic applications, and several have now been extended to animal models and human clinical trials. It is considered unlikely that thymosin β4 exerts all these effects via intracellular sequestration of G-actin. This would require its uptake by cells, and moreover, in most cases the cells affected already have substantial intracellular concentrations.
The diverse activities related to tissue repair may depend on interactions with receptors quite distinct from actin and possessing extracellular ligand-binding domains. Such multi-tasking by, or "partner promiscuity" of, proteins has been referred to as protein moonlighting. Proteins such as thymosins, which lack a stable folded structure in aqueous solution, are known as intrinsically unstructured proteins (IUPs). Because IUPs acquire specific folded structures only on binding to their partner proteins, they offer special possibilities for interaction with multiple partners. A candidate extracellular receptor of high affinity for thymosin β4 is the β subunit of cell surface-located ATP synthase, which would allow extracellular thymosin to signal via a purinergic receptor. Some of the multiple activities of thymosin β4 unrelated to actin may be mediated by a tetrapeptide enzymatically cleaved from its N-terminus, N-acetyl-ser-asp-lys-pro, brand names Seraspenide or Goralatide, best known as an inhibitor of the proliferation of haematopoietic (blood-cell precursor) stem cells of bone marrow. Tissue regeneration Work with cell cultures and experiments with animals have shown that administration of thymosin β4 can promote migration of cells, formation of blood vessels, maturation of stem cells, survival of various cell types and lowering of the production of pro-inflammatory cytokines. These multiple properties have provided the impetus for a worldwide series of ongoing clinical trials of the potential effectiveness of thymosin β4 in promoting repair of wounds in skin, cornea and heart. Such tissue-regenerating properties of thymosin β4 may ultimately contribute to repair of human heart muscle damaged by heart disease and heart attack. In mice, administration of thymosin β4 has been shown to stimulate formation of new heart muscle cells from otherwise inactive precursor cells present in the outer lining of adult hearts, to induce migration of these cells into heart muscle, and to recruit new blood vessels within the muscle. Anti-inflammatory role for sulfoxide In 1999, researchers at Glasgow University found that an oxidised derivative of thymosin β4 (the sulfoxide, in which an oxygen atom is added to the methionine near the N-terminus) exerted several potentially anti-inflammatory effects on neutrophil leucocytes. It promoted their dispersion from a focus, inhibited their response to a small peptide (F-Met-Leu-Phe) which attracts them to sites of bacterial infection, and lowered their adhesion to endothelial cells. (Adhesion to the endothelial cells of blood vessel walls is a prerequisite for these cells to leave the bloodstream and invade infected tissue.) A possible anti-inflammatory role for the β4 sulfoxide was supported by the group's finding that it counteracted artificially induced inflammation in mice. The group had first identified the thymosin sulfoxide as an active factor in culture fluid of cells responding to treatment with a steroid hormone, suggesting that its formation might form part of the mechanism by which steroids exert anti-inflammatory effects. Extracellular thymosin β4 would be readily oxidised to the sulfoxide in vivo at sites of inflammation, by the respiratory burst. Terminal deoxynucleotidyl transferase Thymosin β4 induces the activity of the enzyme terminal deoxynucleotidyl transferase in populations of thymocytes (thymus-derived lymphocytes). This suggests that the peptide may contribute to the maturation of these cells.
Clinical significance Tβ4 has been studied in a number of clinical trials. In phase 2 trials with patients having pressure ulcers, venous ulcers, and epidermolysis bullosa, Tβ4 accelerated the rate of repair. It was also found to be safe and well tolerated. In human clinical trials, Tβ4 improved the conditions of dry eye and neurotrophic keratopathy, with effects lasting long after the end of treatment. Doping in sports Thymosin beta-4 is considered a performance-enhancing substance and is banned in sports by the World Anti-Doping Agency due to its effect of aiding soft-tissue recovery and enabling higher training loads. It was central to two controversies in Australia in the 2010s, in which large proportions of the playing lists of two professional football clubs – the Cronulla-Sutherland Sharks of the National Rugby League and the Essendon Football Club of the Australian Football League – were found guilty of doping and suspended from playing; in both cases, the players were administered thymosin beta-4 in a program organised by sports scientist Stephen Dank. Interactions TMSB4X has been shown to interact with ACTA1 and ACTG1. See also Beta thymosins Thymosin beta-4, Y-chromosomal Thymosins References Further reading Peptides
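As a quick sanity check on the length and molecular weight stated at the top of this article, here is a hedged Python sketch of my own (not from the article); the residue masses are rounded average values, and the unmodified peptide is assumed:

```python
# Approximate average residue masses in daltons (amino acid minus water).
RESIDUE_MASS = {
    'A': 71.08, 'D': 115.09, 'E': 129.12, 'F': 147.18, 'G': 57.05,
    'I': 113.16, 'K': 128.17, 'L': 113.16, 'M': 131.19, 'N': 114.10,
    'P': 97.12, 'Q': 128.13, 'S': 87.08, 'T': 101.10,
}
WATER = 18.02  # one water molecule per peptide chain

# Human thymosin beta-4 sequence as given in the text (spaces removed).
seq = "SDKPDMAEIEKFDKSKLKKTETQEKNPLPSKETIEQEKQAGES"

mw = sum(RESIDUE_MASS[aa] for aa in seq) + WATER
print(len(seq), round(mw))  # 43 4921 — matching the stated values
```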
Thymosin beta-4
[ "Chemistry" ]
1,829
[ "Biomolecules by chemical classification", "Peptides", "Molecular biology" ]
15,839,432
https://en.wikipedia.org/wiki/Thracian%20horseman
The Thracian horseman (also "Thracian Rider" or "Thracian Heros") is a recurring motif depicted in reliefs of the Hellenistic and Roman periods in the Balkans—mainly Thrace, Macedonia, Thessaly and Moesia—roughly from the 3rd century BC to the 3rd century AD. Inscriptions found in Romania identify the horseman as Heros and Eros (Latin transcriptions of Ἥρως) and also Herron and Eron (Latin transcriptions of Ἥρων), apparently the word hērōs used as a proper name. He is sometimes addressed in inscriptions merely as κύριος, δεσπότης or ἥρως. The Thracian horseman is depicted as a hunter on horseback, riding from left to right. Between the horse's hooves is depicted either a hunting dog or a boar. In some instances, the dog is replaced by a lion. Its depiction is in the tradition of the funerary steles of Roman cavalrymen, with the addition of syncretistic elements from Hellenistic and Paleo-Balkanic religious or mythological tradition. Name The original Palaeo-Balkan word for 'horseman' has been reconstructed as *Me(n)zana-, with the root *me(n)za- 'horse'. It is based on evidence provided by: Albanian: mëz or mâz 'foal', with the original meaning of 'horse' that underwent a later semantic shift 'horse' > 'foal' after the loan from Latin caballus into Albanian kalë 'horse'; the same root is also found in Albanian: mazrek 'horse breeder'; Messapic: menzanas, appearing as an epithet in Zis Menzanas, found in votive inscriptions, and in Iuppiter Menzanas, mentioned in a passage written by Festus in relation to a Messapian horse sacrifice; Romanian: mânz; Thracian: ΜΕΖΗΝΑ̣Ι mezēnai, found in the inscription of the Duvanli gold ring also bearing the image of a horseman. Iconography Images of the Thracian Horseman appear in Thrace and in Lower Moesia, but also in Upper Moesia among Thracian populations and Thracian soldiers. According to Vladimir Toporov (1990), some 1,500 such iconographic monuments are known, found in modern Bulgaria and in Yugoslavia. Interpretation The horseman was a common Palaeo-Balkan hero. The motif depicted on reliefs most likely represents a composite figure, a Thracian hero possibly based on Rhesus, the Thracian king mentioned in the Iliad, to which Scythian, Hellenistic and possibly other elements had been added. Late Roman syncretism The cult of the Thracian horseman was especially important in Philippi, where the Heros had the epithets of Hero Auloneites, soter ('saviour') and epekoos ('answerer of prayers'). Funerary stelae depicting the horseman belong to the middle or lower classes (while the upper classes preferred the depiction of banquet scenes). Under the Roman Emperor Gordian III the god on horseback appears on coins minted at Tlos, in neighboring Lycia, and at Istrus, in the province of Lower Moesia, between Thrace and the Danube. In the Roman era, the "Thracian horseman" iconography is further syncretised. The rider is now sometimes shown as approaching a tree entwined by a serpent, or as approaching a goddess. These motifs are partly of Greco-Roman and partly of possible Scythian origin. The motif of a horseman with his right arm raised advancing towards a seated female figure is related to Scythian iconographic tradition. It is frequently found in Bulgaria, associated with Asclepius and Hygeia. Stelai dedicated to the Thracian Heros Archegetas have been found at Selymbria. Inscriptions from Bulgaria give the names Salenos and Pyrmerula/Pirmerula.
Epithets Apart from syncretism with other deities (such as Asclepios, Apollo, Sabatius), the figure of the Thracian Horseman was also found with several epithets: Karabasmos, Keilade(i)nos, Manimazos, Aularchenos, Aulosadenos, Pyrmeroulas. One in particular was found in Avren, dating from the 3rd century CE, with a designation that seems to refer to horsemanship: Outaspios, and variations Betespios, Ephippios and Ouetespios. Bulgarian linguist Vladimir I. Georgiev proposed the following interpretations of these epithets:
Ouetespios (Betespios) - related to Albanian vetë 'own, self' and Avestan aspa- 'horse', meaning 'he who is himself a horse' ("der selbst Pferd ist").
Outaspios - corresponds to Greek epihippios 'on a horse'.
Manimazos - related to Latin mani 'good' and Romanian mînz; meaning 'the good horse'.
Karabasmos - related to Old Bulgarian gora 'mountain' and Greek phasma 'phantom'; meaning 'mountain-phantom' ("Berg-geist", in German).
The following theonym has also been interpreted:
Pyrumērulas (variations: Pyrmērulas, Pyrymērulas, Pirmerulas) - linked to Greek pyrós 'wheat' and the PIE stem *mer 'great'.
Related imagery Twin horsemen Related to the Dioscuri motif is the so-called "Danubian Horsemen" motif of two horsemen flanking a standing goddess. These "Danubian horsemen" are thus called because their reliefs are found in the Roman provinces along the Danube. However, some reliefs have also been found in Roman Dacia - which gives the alternate name for the motif: "Dacian Horseman". Scholarship locates its diffusion across Moesia, Dacia, Pannonia and the Danube region, and, to a lesser degree, in Dalmatia and Thracia. The motif of a standing goddess flanked by two horsemen, identified as Artemis flanked by the Dioscuri, and a tree entwined by a serpent flanked by the Dioscuri on horseback was transformed into a motif of a single horseman approaching the goddess or the tree. Madara Rider The Madara Rider is a large early medieval rock relief carved on the Madara Plateau east of Shumen, in northeastern Bulgaria. The monument is dated to c. the 7th/8th century, during the reign of Bulgar Khan Tervel. In 1979 it was inscribed as a UNESCO World Heritage Site. The relief incorporates elements of the autochthonous Thracian cult. Legacy The motif of the Thracian horseman was continued in Christianised form in the equestrian iconography of both Saint George and Saint Demetrius. The motif of the Thracian horseman is not to be confused with the depiction of a rider slaying a barbarian enemy on funerary stelae, as on the Stele of Dexileos, interpreted as depictions of a heroic episode from the life of the deceased. Gallery (images grouped as: hunter motif; serpent-and-tree; rider and goddess; Greco-Roman comparanda; medieval comparanda) See also Uastyrdzhi Tetri Giorgi Sabazios Medaurus Bellerophon Jupiter Column Pahonia Heros Peninsula in Antarctica is named after the Thracian Horseman. Castor and Pollux, sometimes linked to the Danubian Rider. References Bibliography Dimitrova, Nora. "Inscriptions and Iconography in the Monuments of the Thracian Rider." Hesperia: The Journal of the American School of Classical Studies at Athens 71, no. 2 (2002): 209-29. Accessed June 26, 2020. www.jstor.org/stable/3182007. Hoddinott, R. F. (1963). Early Byzantine Churches in Macedonia & Southern Serbia. Palgrave Macmillan, 1963. pp. 58–62. Irina Nemeti, Sorin Nemeti, Heros Equitans in the Funerary Iconography of Dacia Porolissensis. Models and Workshops. In: Dacia LVIII, 2014, pp.
241–255, http://www.daciajournal.ro/pdf/dacia_2014/art_10_nemeti_nemeti.pdf Further reading Fol, Valeria. "Culte héroïque dans la Thrace – images littéraires grecques ou images réelles du chevalier-héros thrace". In: Ancient Thrace: Myth and Reality: Proceedings of the Thirteenth International Congress of Thracology, September 3 - 7, 2017. Volume 2. Sofia: St. Kliment Ohridski University Press, 2022. pp. 94–98. Kirov, Slavtcho. "Sur la datation du culte du Cavalier thrace" [On the dating of the cult of the Thracian horseman]. In: Studia Academica Šumenensia 7 (2020): 172-186. Mackintosh, Majorie Carol (1992). The divine horseman in the art of the western Roman Empire. PhD thesis. The Open University. pp. 132–159. Oppermann, Manfred (2006). Der thrakische Reiter des Ostbalkanraumes im Spannungsfeld von Graecitas, Romanitas und lokalen Traditionen [The Thracian horseman of the Eastern Balkan region in the tension between Graecitas, Romanitas and local traditions]. Langenweißbach: Beier & Beran. On the epigraphy of the Thracian Horseman Boteva, Diliana. "Further considerations on the votive reliefs of the Thracian Horseman". In: Moesica et Christiana. Studies in honour of professor Alexandru Barnea. Eds. Adriana Panaite, Romeo Cîrjan. Brăila: Istros, 2016. pp. 309–320. Bottez, Valentin; Topoleanu, Florin. "A New Relief of the Thracian Horseman from Halmyris". In: Peuce (Serie Nouă) - Studii şi cercetari de istorie şi arheologie n. 19, XIX/2021, pp. 135–142. Dimitrova, Nora; Clinton, Kevin. "Chapter 2. A new bilingual votive monument with a "Thracian rider" relief". In: Studies in Greek epigraphy and history in honor of Stefen V. Tracy [online]. Pessac: Ausonius Éditions, 2010 (generated 29 June 2021). Available online: <http://books.openedition.org/ausonius/2108>. DOI: https://doi.org/10.4000/books.ausonius.2108. Krykin, S.M. "A Votive Bas-Relief of a Thracian Horseman From the Poltava Museum". In: Ancient Civilizations from Scythia to Siberia 2, 3 (1996): 283-288. doi: https://doi.org/10.1163/157005795X00164 Proeva, Nade. "Les représentations du «cavalier thrace» sur les monuments funéraires en Haute Macédoine". In: Ancient Thrace: Myth and Reality: Proceedings of the Thirteenth International Congress of Thracology, September 3 - 7, 2017. Volume 2. Sofia: St. Kliment Ohridski University Press, 2022. pp. 271–281. Szabó, Csaba. "Beyond Iconography. Notes on the Cult of the Thracian Rider in Apulum". In: Studia Universitatis Babeş-Bolyai - Historia n. 1, 61/2016, pp. 62–73. On the "Danubian Horsemen" or "Danubian Riders": Bondoc, Dorel. "The representation of Danubian Horsemen from Ciupercenii Vechi, Dolj County". In: La Dacie et l´Empire romain. Mélanges d´épigraphie et d´archéologie offerts à Constantin C. Petolescu. Eds. M. Popescu, I. Achim, F. Matei-Popescu. București: 2018, pp. 229–257. Gočeva, Zlatozara. "Encore une Fois sur la "Déesse de Razgrad" et les Plus Anciens des "Cavaliers Danubiens"" [Again on the "Goddess from Razgrad" and the Most Ancient "Danube Horsemen"]. In: Thracia 19 (2011): 149-157. Hadiji, Maria Vasinca. "Cultul cavalerilor danubieni: origini si denumire (I)" [The worship of the Danubian horsemen: origins and designation (I)]. In: Apulum n. 1, 43/2006, pp. 253–267. Kremer, Gabrielle. "Some remarks about Domnus/Domna and the ‚Danubian Riders'. In: S. Nemeti; E. Beu-Dachin; I. Nemeti; D. Dana (Eds.). The Roman Provinces. Mechanisms of Integration. Cluj-Napoca, 2019. pp. 275–290. Nemeti, Sorin; Cristean, Ștefana.
"New Reliefs Plaques from Pojejena (Caraș-Severin county) depicting the Danubian Riders". In: Ziridava. Studia Archaeologica n. 1, 34/2020. pp. 277-286. Strokova, Lyudmila; Vitalii Zubar, and Mikhail Yu Treister. "Two Lead Plaques with a Depiction of a Danubian Horseman from the Collection of the National Museum of the History of the Ukraine". In: Ancient Civilizations from Scythia to Siberia 10, 1-2 (2004): 67-76. doi: https://doi.org/10.1163/1570057041963949 Szabó, Ádám. Domna et Domnus. CONTRIBUTIONS TO THE CULT-HISTORY OF THE ’DANUBIAN-RIDERS’ RELIGION. Hungarian Polis Studies 25, Phoibos Verlag, Wien, 2017. . Tudor, D. Corpus monumentorum religionis equitum danuvinorum (CMRED). Volume 1: Monuments. Leiden, The Netherlands: Brill. 24 Aug. 2015 [1969]. doi: https://doi.org/10.1163/9789004294745 Tudor, D. Corpus monumentorum religionis equitum danuvinorum (CMRED). Volume 2: Analysis and Interpretation of the Monuments; Leiden, The Netherlands: Brill, 24 Aug. 2015 [1976]. doi: https://doi.org/10.1163/9789004294752 3rd century BC in art Hellenistic art Greek war deities Horses in art Thracian religion Serbia in the Roman era Bulgaria in the Roman era Dacia Reliefs Iconography Supernatural beings identified with Christian saints Castor and Pollux Saint George (martyr)
Thracian horseman
[ "Astronomy" ]
3,277
[ "Castor and Pollux", "Astronomical myths" ]
15,840,012
https://en.wikipedia.org/wiki/Optical%20pulsar
An optical pulsar is a pulsar which can be detected in the visible spectrum. There are very few of these known: the Crab Pulsar was detected by stroboscopic techniques in 1969, shortly after its discovery in radio waves, at the Steward Observatory. The Vela Pulsar was detected in 1977 at the Anglo-Australian Observatory, and was the faintest star ever imaged at that time. Six known optical pulsars were listed by Shearer and Golden (2002). References External links "A Pulsar Discovery: First Optical Pulsar." Moments of Discovery, American Institute of Physics, 2007 (includes audio and teachers' guides). Star types
Optical pulsar
[ "Astronomy" ]
142
[ "Stellar astronomy stubs", "Star types", "Astronomy stubs", "Astronomical classification systems" ]
15,840,118
https://en.wikipedia.org/wiki/Hydrus%20%28software%29
Hydrus is a suite of Windows-based modeling software that can be used for analysis of water flow, heat and solute transport in variably saturated porous media (e.g., soils). The HYDRUS suite of software is supported by an interactive graphics-based interface for data pre-processing, discretization of the soil profile, and graphic presentation of the results. While HYDRUS-1D simulates water flow, solute and heat transport in one dimension, and is public domain software, HYDRUS 2D/3D extends the simulation capabilities to two and three dimensions, and is distributed commercially. History HYDRUS 1D HYDRUS-1D traces its roots to the early work of van Genuchten and his SUMATRA and WORM models, as well as later work by Vogel (1987) and Kool and van Genuchten (1989) and their SWMI and HYDRUS models, respectively. While Hermitian cubic finite element numerical schemes were used in SUMATRA and linear finite elements in WORM and the older HYDRUS code for solution of both the water flow and solute transport equations, SWMI used finite differences to solve the flow equation. Various features of these four early models were combined first in the DOS-based SWMI_ST model (Šimůnek et al., 1993), and later in the Windows-based HYDRUS-1D simulator (Šimůnek et al., 1998). After releasing versions 1 (for 16-bit Windows 3.1) and 2 (for 32-bit Windows 95), the next two major updates (versions 3 and 4) were released in 2005 and 2008. These last two versions included additional modules applicable to more complex biogeochemical reactions than the standard HYDRUS modules. While the standard modules of HYDRUS-1D can simulate the transport of solutes that are either fully independent or involved in sequential first-order degradation chains, the two new modules can consider mutual interactions between multiple solutes, such as cation exchange and precipitation/dissolution. Version 3 included the UNSATCHEM module (Suarez and Šimůnek, 1997) for simulating carbon dioxide transport as well as the multi-component transport of major ions. The UNSATCHEM major ion module was later also included in version 2 of HYDRUS (2D/3D) (Šimůnek et al., 2011). Version 4 of HYDRUS-1D now includes not only the UNSATCHEM module, but also the HP1 program (Jacques and Šimůnek, 2005), which resulted from coupling HYDRUS-1D with the biogeochemical program PHREEQC. HYDRUS 2D/3D The current HYDRUS (2D/3D) suite of software and its predecessors have a long history. The origin of these models can be traced back to the early work of Dr. Shlomo Neuman and collaborators (e.g., Neuman, 1972) who developed their UNSAT model at the Hydraulic Engineering Laboratory of Technion – Israel Institute of Technology, in Haifa, Israel, long before the introduction of personal computers. UNSAT was a finite element model simulating water flow in two-dimensional variably-saturated domains as described with the Richards equation. The model additionally considered root water uptake as well as a range of pertinent boundary conditions required to ensure wide applicability. UNSAT was later modified by Davis and Neuman (1983) at the University of Arizona, Tucson, such that the model could be run on personal computers. This last version of UNSAT formed the basis of the SWMII model developed by Vogel (1987) during his stay at Wageningen University, the Netherlands. SWMII significantly extended the capabilities and ease of use of UNSAT.
The code simulated variably-saturated water flow in two-dimensional transport domains, implemented the van Genuchten soil hydraulic functions (van Genuchten, 1980) and modifications thereof, considered root water uptake by taking advantage of some of the features of the SWATRE model (Feddes et al., 1978), and included scaling factors to enable simulations of flow in heterogeneous soils. The code also allowed the flow region to be composed of nonuniform soils having an arbitrary degree of local anisotropy. SWMII was a direct predecessor of the SWMS_2D model (Šimůnek et al., 1992) developed later at the US Salinity Laboratory. The SWMS_2D model (Šimůnek et al., 1992) considerably extended the capabilities of SWMII by including provisions for solute transport. Solute transport was described using the standard advection-dispersion equation that included linear sorption, first-order degradation in both the liquid and solid phases, and zero-order production in both phases. Several other numerical improvements were also implemented in SWMS_2D at the time. These included solution of the mixed form of the Richards equation as suggested by Celia et al. (1990), thus providing excellent mass balances in the water flow calculations. While SWMII could simulate water flow in either two-dimensional vertical or horizontal planes, SWMS_2D extended the range of applications also to three-dimensional axisymmetrical flow domains around a vertical axis of symmetry. Examples are flow to a well, infiltration from a surface ring or tension disc infiltrometer, and infiltration from a surface or subsurface dripper. The first major upgrade of SWMS_2D was released under the name CHAIN_2D (Šimůnek et al., 1994b). This model greatly expanded upon the capabilities of SWMS_2D by including, among other things, sequential first-order solute decay chains and heat transport. The temperature dependence of the soil hydraulic properties was included by considering the effects of temperature on surface tension, dynamic viscosity and the density of water. The heat transport equation in CHAIN_2D considered transport due to conduction and advection with flowing water. The solute transport equations considered advective-dispersive transport in the liquid phase, as well as diffusion in the gaseous phase. The transport equations also included provisions for nonlinear nonequilibrium reactions between the solid and liquid phases, linear equilibrium reactions between the liquid and gaseous phase, zero-order production, and two first-order degradation reactions: one which was independent of other solutes, and one which provided the coupling between solutes involved in the sequential first-order decay reactions. The SWMS_2D and CHAIN_2D models formed the bases of versions 1.0 (for 16-bit Windows 3.1) and 2.0 (for 32-bit Windows 95) of HYDRUS-2D (Šimůnek et al., 1999). A unique feature of HYDRUS-2D was that it used a Microsoft Windows-based graphical user interface (GUI) to manage the input data required to run the program, as well as for nodal discretization and editing, parameter allocation, problem execution, and visualization of results. It could handle flow regions delineated by irregular boundaries, as well as three-dimensional regions exhibiting radial symmetry about the vertical axis. The code included the MeshGen2D mesh generator, which was specifically designed for variably-saturated subsurface flow and transport problems.
The mesh generator may be used for defining very general domain geometries, and for discretizing the transport domain into an unstructured finite element mesh. HYDRUS-2D has recently been fully replaced with HYDRUS (2D/3D) as described below. The HYDRUS (2D/3D) (version 1) software package (Šimůnek et al., 2006; Šejna and Šimůnek, 2007) is an extension and replacement of HYDRUS-2D (version 2.0) and SWMS_3D (Šimůnek et al., 1995). This software package is a complete rewrite of HYDRUS-2D and its extensions for two- and three-dimensional geometries. In addition to the features and processes available in HYDRUS-2D and SWMS_3D, the new computational modules of HYDRUS (2D/3D) consider (a) water flow and solute transport in a dual-porosity system, thus allowing for preferential flow in fractures or macropores while storing water in the matrix, (b) root water uptake with compensation, (c) spatial root distribution functions, (d) the soil hydraulic property models of Kosugi and Durner, (e) the transport of viruses, colloids, and/or bacteria using an attachment/detachment model, filtration theory, and blocking functions, (f) a constructed wetland module (only in 2D), (g) a new hysteresis model that eliminates pumping by keeping track of historical reversal points, and many other options. Simulated processes Both HYDRUS models may be used to simulate the movement of water, heat, and multiple solutes in variably saturated media. Both programs use linear finite elements to numerically solve the Richards equation for saturated-unsaturated water flow and Fickian-based advection-dispersion equations for both heat and solute transport. The flow equation also includes a sink term to account for water uptake by plant roots as a function of both water and salinity stress. The unsaturated soil hydraulic properties can be described using van Genuchten, Brooks and Corey, modified van Genuchten, Kosugi, and Durner type analytical functions. The heat transport equation considers conduction as well as advection with flowing water. The solute transport equations assume advective-dispersive transport in the liquid phase, and diffusion in the gaseous phase. The transport equations further include provisions for nonlinear and/or non-equilibrium reactions between the solid and liquid phases, linear equilibrium reactions between the liquid and gaseous phases, zero-order production, and two first-order degradation reactions: one which is independent of other solutes, and one which provides the coupling between solutes involved in sequential first-order decay reactions. In addition, physical non-equilibrium solute transport can be accounted for by assuming a two-region, dual-porosity type formulation which partitions the liquid phase into mobile and immobile regions. HYDRUS models may be used to analyze water and solute movement in unsaturated, partially saturated, or fully saturated homogeneous or layered media. The codes incorporate hysteresis by assuming that drying scanning curves are scaled from the main drying curve, and wetting scanning curves from the main wetting curve. Root water uptake can be simulated as a function of both water and salinity stress, and can be either compensated or uncompensated. The HYDRUS software packages additionally implement a Marquardt–Levenberg type parameter estimation technique for inverse estimation of soil hydraulic and/or solute transport and reaction parameters from measured transient or steady-state flow and/or transport data.
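The analytical soil hydraulic functions named above have simple closed forms. The sketch below is a minimal illustration (not HYDRUS code) of the van Genuchten retention curve and the associated Mualem-type conductivity function; the parameter values are illustrative placeholders rather than entries from the HYDRUS soil catalog.

```python
import numpy as np

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """Water content theta(h) for pressure head h (negative when unsaturated)."""
    m = 1.0 - 1.0 / n
    h = np.asarray(h, dtype=float)
    se = np.where(h < 0, (1.0 + (alpha * np.abs(h)) ** n) ** (-m), 1.0)
    return theta_r + (theta_s - theta_r) * se

def mualem_k(h, k_s, alpha, n, l=0.5):
    """Unsaturated hydraulic conductivity K(h) under the Mualem model."""
    m = 1.0 - 1.0 / n
    h = np.asarray(h, dtype=float)
    se = np.where(h < 0, (1.0 + (alpha * np.abs(h)) ** n) ** (-m), 1.0)
    return k_s * se ** l * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

# Loam-like parameters, chosen only for demonstration
h = np.array([-1.0, -10.0, -100.0, -1000.0])   # pressure heads [cm]
print(van_genuchten_theta(h, theta_r=0.078, theta_s=0.43, alpha=0.036, n=1.56))
print(mualem_k(h, k_s=24.96, alpha=0.036, n=1.56))  # [cm/day]
```

The Marquardt–Levenberg inverse option works by minimising the misfit between simulated and measured quantities. The following toy example, again a sketch of the general idea rather than the HYDRUS implementation, fits retention parameters to synthetic observations with SciPy's Levenberg–Marquardt solver; the observations and starting values are invented. In HYDRUS itself the same idea is applied to full flow and transport simulations rather than to a closed-form curve.

```python
import numpy as np
from scipy.optimize import least_squares

def theta_vg(h, theta_r, theta_s, alpha, n):
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) * (1.0 + (alpha * np.abs(h)) ** n) ** (-m)

# Synthetic "measured" retention data (hypothetical)
h_obs = np.array([-5.0, -30.0, -100.0, -300.0, -1000.0, -5000.0])
rng = np.random.default_rng(0)
theta_obs = theta_vg(h_obs, 0.08, 0.43, 0.04, 1.6) + rng.normal(0.0, 0.005, h_obs.size)

def residuals(p):
    theta_r, theta_s, alpha, n = p
    return theta_vg(h_obs, theta_r, theta_s, alpha, n) - theta_obs

# method='lm' selects SciPy's Levenberg-Marquardt implementation
fit = least_squares(residuals, x0=[0.05, 0.40, 0.02, 1.3], method='lm')
print(dict(zip(['theta_r', 'theta_s', 'alpha', 'n'], fit.x)))
```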
For this purpose, the programs are written in such a way that almost any application that can be run in a direct mode can equally well be run in an inverse mode, and thus be used for model calibration and parameter estimation. The HYDRUS packages use a Microsoft Windows-based graphical user interface (GUI) to manage the input data required to run the program, as well as for nodal discretization and editing, parameter allocation, problem execution, and visualization of results. All spatially distributed parameters, such as those for various soil horizons, the root water uptake distribution, and the initial conditions for water, heat and solute movement, are specified in a graphical environment. The program offers graphs of the distributions of the pressure head, water content, water and solute fluxes, root water uptake, temperature and solute concentrations in the subsurface at pre-selected times. Also included is a small catalog of unsaturated soil hydraulic properties, as well as pedotransfer functions based on neural networks. Both HYDRUS models also include various provisions for simulating non-equilibrium flow and transport. The flow equation for the latter purpose can consider dual-porosity-type flow with a fraction of the water content being mobile, and a fraction immobile. The transport equations were additionally modified to allow consideration of kinetic attachment/detachment processes of solutes to the solid phase, and hence of solutes having a finite size. This attachment/detachment feature has recently been used by many researchers to simulate the transport of viruses, colloids, and bacteria. The HYDRUS models further include modules for simulating carbon dioxide transport (only HYDRUS-1D) and major ion chemistry, adopted from the UNSATCHEM program. HYDRUS-1D can thus be used in applications evaluating overall salinity, the concentration of individual soluble cations, as well as the Sodium Adsorption Ratio and the Exchangeable Sodium Percentage. Applications Both HYDRUS-1D and HYDRUS (2D/3D) have been used in hundreds, if not thousands, of applications referenced in peer-reviewed journal articles and many technical reports. Both software packages are also used in the classrooms of many universities in courses covering Soil Physics, Processes in the Vadose Zone, or Vadose Zone Hydrology. A selected list of hundreds of applications of both HYDRUS software packages is given at: http://www.pc-progress.com/en/Default.aspx?h3d-references http://www.pc-progress.com/en/Default.aspx?h1d-references The website also provides many specific applications in the libraries of HYDRUS projects at: http://www.pc-progress.com/en/Default.aspx?h1d-library http://www.pc-progress.com/en/Default.aspx?h3d-applications HYDRUS software also provides capabilities for simulating water flow and solute transport for specialized domains. Constructed Wetland Module Constructed wetlands (CWs) are engineered water treatment systems that optimize the treatment processes found in natural environments. CWs are popular systems which efficiently treat various types of polluted water and are therefore sustainable, environmentally friendly solutions. A large number of physical, chemical and biological processes are simultaneously active and mutually influence each other. HYDRUS offers two biokinetic model formulations: (a) the CW2D module (Langergraber and Šimůnek, 2005), and/or (b) the CW M1 (Constructed Wetland Model #1) biokinetic model (Langergraber et al., 2009b).
References External links HYDRUS 1D home page HYDRUS 2D home page HYDRUS 2D/3D home page Integrated hydrologic modelling Soil physics Public-domain software
Hydrus (software)
[ "Physics" ]
3,086
[ "Applied and interdisciplinary physics", "Soil physics" ]
15,841,082
https://en.wikipedia.org/wiki/Choice%20modelling
Choice modelling attempts to model the decision process of an individual or segment via revealed preferences or stated preferences made in a particular context or contexts. Typically, it attempts to use discrete choices (A over B; B over A, B & C) in order to infer positions of the items (A, B and C) on some relevant latent scale (typically "utility" in economics and various related fields). Indeed, many alternative models exist in econometrics, marketing, sociometrics and other fields, including utility maximization, optimization applied to consumer theory, and a plethora of other identification strategies which may be more or less accurate depending on the data, sample, hypothesis and the particular decision being modelled. In addition, choice modelling is regarded as the most suitable method for estimating consumers' willingness to pay for quality improvements in multiple dimensions. Related terms There are a number of terms which are considered to be synonyms with the term choice modelling. Some are accurate (although typically discipline or continent specific) and some are used in industry applications, although considered inaccurate in academia (such as conjoint analysis). These include the following: Stated preference discrete choice modeling Discrete choice Choice experiment Stated preference studies Conjoint analysis Controlled experiments Although disagreements in terminology persist, it is notable that the academic journal intended to provide a cross-disciplinary source of new and empirical research into the field is called the Journal of Choice Modelling. Theoretical background The theory behind choice modelling was developed independently by economists and mathematical psychologists. The origins of choice modelling can be traced to Thurstone's research into food preferences in the 1920s and to random utility theory. In economics, random utility theory was then developed by Daniel McFadden, and in mathematical psychology primarily by Duncan Luce and Anthony Marley. In essence, choice modelling assumes that the utility (benefit, or value) that an individual derives from item A over item B is a function of the frequency with which (s)he chooses item A over item B in repeated choices. Due to his use of the normal distribution, Thurstone was unable to generalise this binary choice into a multinomial choice framework (which required multinomial logistic regression rather than a probit link function), hence the method languished for over 30 years. However, in the 1960s through 1980s the method was axiomatised and applied in a variety of types of study. Distinction between revealed and stated preference studies Choice modelling is used in both revealed preference (RP) and stated preference (SP) studies. RP studies use the choices already made by individuals to estimate the value they ascribe to items – they "reveal their preferences, and hence values (utilities), by their choices". SP studies use the choices made by individuals under experimental conditions to estimate these values – they "state their preferences via their choices". McFadden successfully used revealed preferences (made in previous transport studies) to predict the demand for the Bay Area Rapid Transit (BART) before it was built. Luce and Marley had previously axiomatised random utility theory but had not used it in a real world application; furthermore, they spent many years testing the method in SP studies involving psychology students.
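To make the two link functions mentioned above concrete, the sketch below (an illustration with invented utilities, not a reproduction of Thurstone's or McFadden's work) computes the probability of choosing A over B under a random utility model, using both the normal (probit) link that Thurstone used and the logistic link that later permitted the multinomial generalisation. Note that the two links assume different error variances, so their outputs are not on directly comparable scales.

```python
import math
from statistics import NormalDist

def p_choose_a(v_a, v_b, link="logit"):
    """P(choose A over B) given deterministic utilities v_a and v_b."""
    diff = v_a - v_b
    if link == "logit":                # logistic errors -> binary logit
        return 1.0 / (1.0 + math.exp(-diff))
    return NormalDist().cdf(diff)      # normal errors -> Thurstone's probit

# Hypothetical utilities: A is preferred, but choice remains probabilistic
print(p_choose_a(1.0, 0.4, "logit"))   # ~0.65
print(p_choose_a(1.0, 0.4, "probit"))  # ~0.73
```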
History McFadden's work earned him the Nobel Memorial Prize in Economic Sciences in 2000. However, much of the work in choice modelling had for almost 20 years been proceeding in the field of stated preferences. Such work arose in various disciplines, originally transport and marketing, due to the need to predict demand for new products that were potentially expensive to produce. This work drew heavily on the fields of conjoint analysis and design of experiments, in order to: Present to consumers goods or services that were defined by particular features (attributes) that had levels, e.g. "price" with levels "$10, $20, $30"; "follow-up service" with levels "no warranty, 10 year warranty"; Present configurations of these goods that minimised the number of choices needed in order to estimate the consumer's utility function (decision rule). Specifically, the aim was to present the minimum number of pairs/triples etc. of (for example) mobile/cell phones in order that the analyst might estimate the value the consumer derived (in monetary units) from every possible feature of a phone. In contrast to much of the work in conjoint analysis, discrete choices (A versus B; B versus A, B & C) were to be made, rather than ratings on category rating scales (Likert scales). David Hensher and Jordan Louviere are widely credited with the first stated preference choice models. Together with others, including Joffre Swait and Moshe Ben-Akiva, they remained pivotal figures and over the next three decades helped develop and disseminate the methods in the fields of transport and marketing. However, many other figures, predominantly working in transport economics and marketing, contributed to theory and practice and helped disseminate the work widely. Relationship with conjoint analysis Choice modelling from the outset suffered from a lack of standardisation of terminology, and all the terms given above have been used to describe it. However, the largest disagreement has proved to be geographical: in the Americas, following industry practice there, the term "choice-based conjoint analysis" has come to dominate. This reflected a desire that choice modelling (1) reflect the attribute and level structure inherited from conjoint analysis, but (2) use discrete choices, rather than numerical ratings, as the outcome measure elicited from consumers. Elsewhere in the world, the term discrete choice experiment has come to dominate in virtually all disciplines. Louviere (marketing and transport) and colleagues in environmental and health economics came to disavow the American terminology, claiming that it was misleading and disguised a fundamental difference between discrete choice experiments and traditional conjoint methods: discrete choice experiments have a testable theory of human decision-making underpinning them (random utility theory), whilst conjoint methods are simply a way of decomposing the value of a good using statistical designs from numerical ratings that have no psychological theory to explain what the rating scale numbers mean.
Designing a choice model Designing a choice model or discrete choice experiment (DCE) generally involves the following steps: Identifying the good or service to be valued; Deciding on what attributes and levels fully describe the good or service; Constructing an experimental design that is appropriate for those attributes and levels, either from a design catalogue, or via a software program; Constructing the survey, replacing the design codes (numbers) with the relevant attribute levels; Administering the survey to a sample of respondents in any of a number of formats including paper and pen, but increasingly via web surveys; Analysing the data using appropriate models, often beginning with the multinomial logistic regression model, given its attractive properties in terms of consistency with economic demand theory. Identifying the good or service to be valued This is often the easiest task, typically defined by: the research question in an academic study, or the needs of the client (in the context of a consumer good or service) Deciding on what attributes and levels fully describe the good or service A good or service, for instance a mobile (cell) phone, is typically described by a number of attributes (features). Phones are often described by shape, size, memory, brand, etc. The attributes to be varied in the DCE must be all those that are of interest to respondents. Omitting key attributes typically causes respondents to make inferences (guesses) about those missing from the DCE, leading to omitted-variable problems. The levels must typically include all those currently available, and are often expanded to include those that are possible in future – this is particularly useful in guiding product development. Constructing an experimental design that is appropriate for those attributes and levels, either from a design catalogue, or via a software program A strength of DCEs and conjoint analyses is that they typically present a subset of the full factorial. For example, a phone with two brands, three shapes, three sizes and four amounts of memory has 2x3x3x4=72 possible configurations (these are enumerated in the code sketch following the list of consequences below). This is the full factorial and in most cases is too large to administer to respondents. Subsets of the full factorial can be produced in a variety of ways but in general they have the following aim: to enable estimation of a certain limited number of parameters describing the good: main effects (for example the value associated with brand, holding all else equal), two-way interactions (for example the value associated with this brand and the smallest size, that brand and the smallest size), etc. This is typically achieved by deliberately confounding higher-order interactions with lower-order interactions. For example, two-way and three-way interactions may be confounded with main effects. This has the following consequences: The number of profiles (configurations) is significantly reduced; A regression coefficient for a given main effect is unbiased if and only if the confounded terms (higher-order interactions) are zero; A regression coefficient is biased in an unknown direction and with an unknown magnitude if the confounded interaction terms are non-zero; No correction can be made at the analysis stage to solve the problem, should the confounded terms be non-zero.
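As a concrete illustration of the full factorial described above, the sketch below enumerates all 72 phone profiles; the attribute levels are hypothetical stand-ins, since the text does not name them. The design decisions discussed next determine which fraction of these profiles respondents actually see.

```python
from itertools import product

# Hypothetical levels for the phone example in the text
attributes = {
    "brand":  ["A", "B"],
    "shape":  ["bar", "slider", "flip"],
    "size":   ["small", "medium", "large"],
    "memory": ["8GB", "16GB", "32GB", "64GB"],
}

full_factorial = [dict(zip(attributes, combo))
                  for combo in product(*attributes.values())]
print(len(full_factorial))   # 2 * 3 * 3 * 4 = 72 profiles
print(full_factorial[0])     # {'brand': 'A', 'shape': 'bar', ...}

# A DCE would present only a carefully chosen fraction of these 72
# profiles, drawn from a design catalogue or design software.
```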
Thus, researchers have repeatedly been warned that design involves critical decisions to be made concerning whether two-way and higher-order interactions are likely to be non-zero; making a mistake at the design stage effectively invalidates the results, since the hypothesis that higher-order interactions are non-zero is untestable. Designs are available from catalogues and statistical programs. Traditionally they had the property of orthogonality, where all attribute levels can be estimated independently of each other. This ensures zero collinearity and can be explained using the following example. Imagine a car dealership that sells both luxury cars and used low-end vehicles. Using the utility maximisation principle and assuming an MNL model, we hypothesise that the decision to buy a car from this dealership is the sum of the individual contributions of each of the following to the total utility: Price Marque (BMW, Chrysler, Mitsubishi) Origin (German, American) Performance Using multinomial regression on the sales data, however, will not tell us what we want to know. The reason is that much of the data is collinear, since cars at this dealership are either: high performance, expensive German cars low performance, cheap American cars There is not enough information, nor will there ever be enough, to tell us whether people are buying cars because they are European, because they are a BMW or because they are high performance. This is a fundamental reason why RP data are often unsuitable and why SP data are required. In RP data these three attributes always co-occur and in this case are perfectly correlated. That is: all BMWs are made in Germany and are of high performance. The three attributes of origin, marque and performance are said to be collinear or non-orthogonal. Only in experimental conditions, via SP data, can performance and price be varied independently and their effects decomposed. An experimental design in a choice experiment is a strict scheme for controlling and presenting hypothetical scenarios, or choice sets, to respondents. For the same experiment, different designs could be used, each with different properties. The best design depends on the objectives of the exercise. It is the experimental design that drives the experiment and the ultimate capabilities of the model. Many very efficient designs exist in the public domain that allow near-optimal experiments to be performed. For example, the Latin square 16^17 design allows the estimation of all main effects of a product that could have up to 16^17 (approximately 295 followed by eighteen zeros) configurations. Furthermore, this could be achieved within a sample frame of only around 256 respondents. An example of a much smaller design is a 3^4 main-effects design. Such a design would allow the estimation of main-effects utilities from 81 (3^4) possible product configurations, assuming all higher-order interactions are zero. A sample of around 20 respondents could model the main effects of all 81 possible product configurations with statistically significant results. Some examples of other experimental designs commonly used: Balanced incomplete block designs (BIBD) Random designs Main effects Higher-order interaction designs Full factorial More recently, efficient designs have been produced. These typically minimise functions of the variance of the (unknown but estimated) parameters. A common function is the D-efficiency of the parameters.
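One widely used way to score a candidate design is its D-efficiency. The sketch below computes one common normalisation, det(X'X)^(1/p)/N, for a made-up effects-coded design matrix; normalisation conventions vary across the literature, so treat this as illustrative only.

```python
import numpy as np

def d_efficiency(X):
    """det(X'X)^(1/p) / N for an N x p model matrix X (one common convention)."""
    n_runs, n_params = X.shape
    return np.linalg.det(X.T @ X) ** (1.0 / n_params) / n_runs

# Made-up 4-run design with two effects-coded two-level attributes
# (columns: intercept, attribute 1, attribute 2)
X = np.array([[1, -1, -1],
              [1, -1,  1],
              [1,  1, -1],
              [1,  1,  1]], dtype=float)
print(d_efficiency(X))  # 1.0 -- an orthogonal design attains the maximum here
```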
The aim of these designs is to reduce the sample size required to achieve statistical significance of the estimated utility parameters. Such designs have often incorporated Bayesian priors for the parameters, to further improve statistical precision. Highly efficient designs have become extremely popular, given the costs of recruiting larger numbers of respondents. However, key figures in the development of these designs have warned of possible limitations, most notably the following. Design efficiency is typically maximised when good A and good B are as different as possible: for instance, every attribute (feature) defining the phone differs across A and B. This forces the respondent to trade across price, brand, size, memory, etc.; no attribute has the same level in both A and B. This may impose cognitive burden on the respondent, leading him/her to use simplifying heuristics ("always choose the cheapest phone") that do not reflect his/her true utility function (decision rule). Recent empirical work has confirmed that respondents do indeed have different decision rules when answering a less efficient design compared to a highly efficient design. It is worth reiterating, however, that small designs that estimate main effects typically do so by deliberately confounding higher-order interactions with the main effects. This means that unless those interactions are zero in practice, the analyst will obtain biased estimates of the main effects. Furthermore, (s)he has (1) no way of testing this, and (2) no way of correcting it in analysis. This emphasises the crucial role of design in DCEs. Constructing the survey Constructing the survey typically involves: Doing a "find and replace" in order that the experimental design codes (typically numbers as given in the example above) are replaced by the attribute levels of the good in question. Putting the resulting configurations (for instance, types of mobile/cell phones) into a broader survey that may include questions pertaining to the sociodemographics of the respondents. This may aid in segmenting the data at the analysis stage: for example, males may differ from females in their preferences. Administering the survey to a sample of respondents in any of a number of formats including paper and pen, but increasingly via web surveys Traditionally, DCEs were administered via paper and pen methods. Increasingly, with the power of the web, internet surveys have become the norm. These have advantages in terms of cost, randomising respondents to different versions of the survey, and using screening. An example of the latter would be to achieve balance in gender: if too many males answer, they can be screened out in order that the number of males matches that of females. Analysing the data using appropriate models, often beginning with the multinomial logistic regression model, given its attractive properties in terms of consistency with economic demand theory Analysing the data from a DCE requires the analyst to assume a particular type of decision rule – or functional form of the utility equation, in economists' terms. This is usually dictated by the design: if a main-effects design has been used, then two-way and higher-order interaction terms cannot be included in the model. Regression models are then typically estimated.
These often begin with the conditional logit model – traditionally, although slightly misleadingly, referred to as the multinomial logistic (MNL) regression model by choice modellers. The MNL model converts the observed choice frequencies (being estimated probabilities, on a ratio scale) into utility estimates (on an interval scale) via the logistic function. The utility (value) associated with every attribute level can be estimated, thus allowing the analyst to construct the total utility of any possible configuration (in this case, of car or phone). However, a DCE may alternatively be used to estimate non-market environmental benefits and costs. Strengths Forces respondents to consider trade-offs between attributes; Makes the frame of reference explicit to respondents via the inclusion of an array of attributes and product alternatives; Enables implicit prices to be estimated for attributes; Enables welfare impacts to be estimated for multiple scenarios; Can be used to estimate the level of customer demand for an alternative 'service product' in non-monetary terms; and Potentially reduces the incentive for respondents to behave strategically. Weaknesses Discrete choices provide only ordinal data, which provides less information than ratio or interval data; Inferences from ordinal data, to produce estimates on an interval/ratio scale, require assumptions about error distributions and the respondent's decision rule (functional form of the utility function); Fractional factorial designs used in practice deliberately confound two-way and higher-order interactions with lower-order (typically main-effects) estimates in order to make the design small: if the higher-order interactions are non-zero then main effects are biased, with no way for the analyst to know or correct this ex post; Non-probabilistic (deterministic) decision-making by the individual violates random utility theory: under a random utility model, utility estimates become infinite. There is one fundamental weakness of all limited dependent variable models such as logit and probit models: the means (true positions) and variances on the latent scale are perfectly confounded. In other words, they cannot be separated. The mean-variance confound Yatchew and Griliches first proved that means and variances were confounded in limited dependent variable models (where the dependent variable takes any of a discrete set of values rather than a continuous one as in conventional linear regression). This limitation becomes acute in choice modelling for the following reason: a large estimated beta from the MNL regression model or any other choice model can mean: Respondents place the item high up on the latent scale (they value it highly), or Respondents do not place the item high up on the scale BUT they are very certain of their preferences, consistently (frequently) choosing the item over others presented alongside, or Some combination of (1) and (2). This has significant implications for the interpretation of the output of a regression model. All statistical programs "solve" the mean-variance confound by setting the variance equal to a constant; all estimated beta coefficients are, in fact, an estimated beta multiplied by an estimated lambda (an inverse function of the variance). This tempts the analyst to ignore the problem. However, (s)he must consider whether a set of large beta coefficients reflects strong preferences (a large true beta) or consistency in choices (a large true lambda), or some combination of the two.
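The conditional logit probabilities, and the mean-variance confound just described, can both be seen in a few lines. In this sketch (all numbers invented), choice probabilities are a softmax of utilities; multiplying the true utilities by a scale lambda, which is exactly what estimated coefficients absorb, changes how deterministic the choices look without changing the preference ordering. Ratios of coefficients, however, are unaffected by lambda, which is the basis of the willingness-to-pay trick discussed next.

```python
import numpy as np

def mnl_probs(v, lam=1.0):
    """Conditional logit choice probabilities for utilities v at scale lam."""
    e = np.exp(lam * np.asarray(v, dtype=float))
    return e / e.sum()

v = [0.5, 0.2, -0.1]            # hypothetical utilities of options A, B, C
print(mnl_probs(v, lam=1.0))    # moderate preference for A
print(mnl_probs(v, lam=5.0))    # same ordering, near-deterministic choices
print(mnl_probs(v, lam=0.2))    # same ordering, close to random choices

# The scale cancels in ratios of coefficients:
beta_memory, beta_price, lam = 0.8, -0.02, 3.0
print(-(beta_memory * lam) / (beta_price * lam))  # 40.0
print(-beta_memory / beta_price)                  # 40.0, identical
```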
Dividing all estimates by one of them – typically that of the price variable – cancels the confounded lambda term from numerator and denominator. This solves the problem, with the added benefit that it provides economists with the respondent's willingness to pay for each attribute level. However, the finding that results estimated in "utility space" do not match those estimated in "willingness to pay space" suggests that the confound problem is not solved by this "trick": variances may be attribute-specific or some other function of the variables (which would explain the discrepancy). This is a subject of current research in the field. Versus traditional ratings-based conjoint methods Major problems with ratings questions that do not occur with choice models are: no trade-off information. A risk with ratings is that respondents tend not to differentiate between perceived 'good' attributes and rate them all as attractive. variant personal scales. Different individuals value a '2' on a scale of 1 to 5 differently. Aggregation of the frequencies of each of the scale measures has no theoretical basis. no relative measure. How does an analyst compare something rated a 1 to something rated a 2? Is one twice as good as the other? Again, there is no theoretical way of aggregating the data. Other types Ranking Rankings do tend to force the individual to indicate relative preferences for the items of interest. Thus the trade-offs between these can, as in a DCE, typically be estimated. However, ranking models must test whether the same utility function is being estimated at every ranking depth: e.g. the same estimates (up to variance scale) must result from the bottom-rank data as from the top-rank data. Best–worst scaling Best–worst scaling (BWS) is a well-regarded alternative to ratings and ranking. It asks people to choose their most and least preferred options from a range of alternatives. By subtracting or integrating across the choice probabilities, utility scores for each alternative can be estimated on an interval or ratio scale, for individuals and/or groups. Various psychological models may be utilised by individuals to produce best-worst data, including the MaxDiff model. Uses Choice modelling is particularly useful for: Predicting uptake and refining new product development Estimating the implied willingness to pay (WTP) for goods and services Product or service viability testing Estimating the effects of product characteristics on consumer choice Variations of product attributes Understanding brand value and preference, including for products like colleges Demand estimates and optimum pricing Transportation demand Evacuation and disaster investigations and forecasting The section on "Applications" of discrete choice provides further details on how this type of modelling can be applied in different fields. Occupational choice model In economics, an occupational choice model is a model that seeks to answer why people enter into different occupations. In the model, at each moment the person decides whether to work in the previous occupation, in some other occupation, or not to be employed. In some versions of the model, an individual chooses the occupation for which the present value of his expected income is a maximum. However, in other versions, risk aversion may drive people to work in the same occupation as before.
See also Consumer choice Discrete choice Outline of management References External links Curated bibliography at IDEAS/RePEc Economics models Econometric modeling Behavioral economics
Choice modelling
[ "Biology" ]
4,538
[ "Behavior", "Behavioral economics", "Behaviorism" ]
15,841,716
https://en.wikipedia.org/wiki/Siemens%20C65
The Siemens C65 is a mobile phone announced by Siemens. The phone belongs to Siemens' entry-level, consumer-oriented “C-Class” range. It was released in March 2004. It weighs 86 g and its dimensions are 100 x 45 x 16 mm (length x width x depth). Its display is a 130x130 pixel, 65K-color CSTN LCD. Its carrier-customised variants are the Siemens CT65, CV65 and CO65, produced exclusively for the mobile operators T-Mobile, Vodafone and O2, respectively. It is known in North America as the Siemens C66. Reviews GSM Arena praised it as good value but criticised the poor camera. CNet agreed that it provided good value. References Mobile phones introduced in 2004 Mobile phones with infrared transmitter C65
Siemens C65
[ "Technology" ]
165
[ "Mobile technology stubs", "Mobile phone stubs" ]
15,842,094
https://en.wikipedia.org/wiki/Boskalis
Boskalis Westminster N.V. is a Dutch dredging and heavylift company that provides services relating to the construction and maintenance of maritime infrastructure internationally. The company has one of the world's largest dredging fleets, a large stake in Smit International and owns Dockwise, a large heavylift shipping company. As of 2020, Boskalis has around 9,900 employees and 650 ships. It operates in over 75 countries across six continents. History Boskalis (Bos & Kalis) was founded as Johannes Kraaijeveld en van Noordenne in 1910 by Johannes Kraaijeveld and Eliza van Noordenne. During the 1930s, it was renamed NV Baggermaatschappij Bos & Kalis when Gerrit Jan Bos, Wilhelm Bos, Egbertus Dingeman Kalis and Kobus Kalis took over. Throughout much of the interwar period, Boskalis played a major role in the Zuiderzee project. In 1931, the company signed a contract for the dredging of Bromborough Dock. During 1933, Boskalis partnered with the Westminster Dredging Company (based in Fareham, England), which opened business opportunities with West Africa. In 1970, Boskalis became a public company. During 1978, Boskalis received the designation "Royal". In the 1980s, economic and political circumstances compelled Boskalis to concentrate on its core dredging business. Across the 1990s, the company embarked on a series of acquisitions, such as its purchase of a 40% interest in rival firm Archirodon Group. During this period, Boskalis was also involved in several major land reclamation projects. In Hong Kong, the company worked on the major land reclamation project for the new Chek Lap Kok airport, while in Singapore it cooperated on a multi-year development program. Additional work during this decade included its involvement in the construction of a gas and container port at Ras Laffan, Qatar. During 2000, Boskalis and the Dutch maritime construction firm Hollandsche Beton Groep (HBG) explored multiple avenues aimed at bringing together or merging the two businesses, ranging from a hostile takeover to agreeing terms for a friendly transaction. However, even though the European Commission cleared such a deal to proceed, it did not come to fruition, allegedly due to disagreements over the proposed combined enterprise's direction. It was speculated that such an arrangement would have created the market leader in the Benelux region (in terms of turnover) as well as one of the five largest European construction companies. Since 2000 By 2007, the company was engaged in two major contracts in Australia — a €300 million contract to deepen the shipping channels of Port Phillip in Melbourne, utilising its dredge the Queen of the Netherlands, and a €50 million contract to expand the harbour at Newcastle. The company was also undertaking a €1.1 billion contract to develop a new port in Abu Dhabi. On 15 September 2008, Boskalis offered €1.11 billion for fellow Dutch maritime company Smit International. Despite the offer being promptly rejected by Smit's board, Boskalis subsequently built a stake of over 25% in the firm and expressed a continuing desire to buy a number of its business units. A revised offer of €1.35 billion was accepted by Smit in January 2010, with Boskalis declaring its offer unconditional that March. During early 2011, Boskalis acquired the Dutch-based civil engineering firm MNO Vervat. In April 2013, Boskalis acquired the Dutch marine transport company Dockwise. That same year, Boskalis completed the sale of its 40 percent stake in Archirodon Group in exchange for $190 million.
In October 2014, Egypt signed a $1.5 billion contract with Boskalis, alongside five other multinational firms, to carry out dredging in connection with the expansion of the Suez Canal. During late March 2021, a pair of Boskalis tugboats assisted eleven Egyptian tugs in the dredging and towing operation to free the 400-metre-long ship Ever Given, which had run aground and become stuck diagonally in the Suez Canal, blocking the canal between 23 and 29 March 2021, during which time it was impassable. Boskalis has played a key role in the delivery of numerous offshore wind power generation schemes, in particular the use of cable-laying ships to connect such farms to land-based energy grids. By 2024, half of the company's offshore energy revenues were being generated from work related to offshore wind farms. During 2019, Boskalis announced its intention to divest its worldwide harbor towage interests. Accordingly, the firm sold its stakes in Saam Smit Towage (which operated primarily in Central and South America), Kotug Smit Towage (which operated in Northern Europe), and Keppel Smit Towage, a joint venture with Keppel Offshore in Singapore. In early 2022, HAL Investments approached Boskalis with an offer to purchase the latter; this deal valued the firm at €4.3 billion. As a result of the completion of this transaction, under which HAL Investments obtained in excess of 95 percent of all shares in Boskalis, the latter was delisted from Euronext Amsterdam. During the early 2020s, Boskalis has been one of several companies working on Malmporten, Sweden’s largest dredging project in recent decades. The New Manila International Airport, on the coast 35 km north of the capital Manila, has been the largest land reclamation project in Boskalis’ dredging history. On 15 September 2023, Boskalis’ Group Director, Pim van der Knaap, accepted the International Association of Dredging Companies Safety Award 2023 from IADC’s President Frank Verhoeven for the new and improved waterbox, used for sandfill areas. Controversies During the early 2010s, Boskalis was publicly accused of bribing Mauritian officials in order to obtain certain contracts in the nation. In October 2013, the company was fined by a Mauritian court. See also Shoalway Dockwise Vanguard References External links Companies based in South Holland Construction and civil engineering companies of the Netherlands Multinational companies headquartered in the Netherlands Dredging companies Dutch brands Papendrecht Companies listed on Euronext Amsterdam Construction and civil engineering companies established in 1910 Dutch companies established in 1910
Boskalis
[ "Engineering" ]
1,321
[ "Dredging companies", "Engineering companies" ]
15,842,342
https://en.wikipedia.org/wiki/Laboratory%20diagnosis%20of%20viral%20infections
In the diagnostic laboratory, virus infections can be confirmed by a myriad of methods. Diagnostic virology has changed rapidly due to the advent of molecular techniques and the increased clinical sensitivity of serological assays. Sampling A wide variety of samples can be used for virological testing. The type of sample sent to the laboratory often depends on the type of viral infection being diagnosed and the test required. Proper sampling technique is essential to avoid potential pre-analytical errors. For example, different types of samples must be collected in appropriate tubes to maintain the integrity of the sample and stored at appropriate temperatures (usually 4 °C) to preserve the virus and prevent bacterial or fungal growth. Sometimes multiple sites may also be sampled. Types of samples include the following: Nasopharyngeal swab Blood Skin Sputum, gargles and bronchial washings Urine Semen Faeces Cerebrospinal fluid Tissues (biopsies or post-mortem) Dried blood spots For example, a nasal mucus test may be done to diagnose rhinovirus. Virus isolation Viruses are often isolated from the initial patient sample. This allows the virus sample to be grown into larger quantities and allows a larger number of tests to be run on them. This is particularly important for samples that contain new or rare viruses for which diagnostic tests are not yet developed. Many viruses can be grown in cell culture in the lab. To do this, the virus sample is mixed with cells, a process called adsorption, after which the cells become infected and produce more copies of the virus. Although different viruses often only grow in certain types of cells, there are cells that support the growth of a large variety of viruses and are a good starting point, for example, the African green monkey kidney cell line (Vero cells), human lung fibroblasts (MRC-5), and human epidermoid carcinoma cells (HEp-2). One means of determining whether the cells are successfully replicating the virus is to check for a change in cell morphology or for the presence of cell death using a microscope. Other viruses may require alternative methods for growth, such as the inoculation of embryonated chicken eggs (e.g. avian influenza viruses) or the intracranial inoculation of virus into newborn mice (e.g. lyssaviruses). Nucleic acid based methods Molecular techniques are the most specific and sensitive diagnostic tests. They are capable of detecting either the whole viral genome or parts of the viral genome. In the past, nucleic acid tests were mainly used as a secondary test to confirm positive serological results. However, as they become cheaper and more automated, they are increasingly becoming the primary tool for diagnostics and can also be used for monitoring the treatment of virally infected individuals. Polymerase chain reaction Detection of viral RNA and DNA genomes can be performed using the polymerase chain reaction. This technique makes many copies of the virus genome using virus-specific primers. Variations of PCR, such as nested reverse transcriptase PCR and real-time PCR, can also be used to determine viral loads in patient serum. This is often used to monitor treatment success in HIV cases.
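Quantification by real-time PCR, mentioned above, typically relies on a log-linear standard curve relating the cycle threshold (Ct) to the input copy number. The sketch below is a generic illustration of that calculation only; the curve parameters and Ct values are invented, and real assays are calibrated per instrument and target.

```python
# Hypothetical standard curve: Ct = slope * log10(copies) + intercept
slope, intercept = -3.32, 38.0   # a slope near -3.32 implies ~100% PCR efficiency

def copies_from_ct(ct):
    """Estimated copies per reaction from an observed Ct value."""
    return 10 ** ((ct - intercept) / slope)

for ct in (18.0, 25.0, 32.0):
    print(f"Ct={ct}: ~{copies_from_ct(ct):.2e} copies/reaction")
```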
Sequencing Sequencing is the only diagnostic method that will provide the full sequence of a virus genome. Hence, it provides the most information about very small differences between two viruses that would look the same using other diagnostic tests. Currently, it is only used when this depth of information is required. For example, sequencing is useful when specific mutations in the patient's virus are tested for in order to determine antiviral therapy and susceptibility to infection. However, as the tests are getting cheaper, faster and more automated, sequencing will likely become the primary diagnostic tool in the future. Microscopy based methods Immunofluorescence or immunoperoxidase Immunofluorescence or immunoperoxidase assays are commonly used to detect whether a virus is present in a tissue sample. These tests are based on the principle that if the tissue is infected with a virus, an antibody specific to that virus will be able to bind to it. To do this, antibodies that are specific to different types of viruses are mixed with the tissue sample. Afterwards, the tissue is exposed to a specific wavelength of light or a chemical that allows the antibody to be visualized. These tests require specialized antibodies that are produced and purchased from commercial companies. These commercial antibodies are usually well characterized and are known to bind to only one specific type of virus. They are also conjugated to a special kind of tag that allows the antibody to be visualized in the lab, i.e., so that it will emit fluorescence or a color. Hence, immunofluorescence refers to the detection of a fluorescent antibody (immuno) and immunoperoxidase refers to the detection of a colored antibody (peroxidase produces a dark brown color). Electron microscopy Electron microscopy is a method that can take a picture of a whole virus and can reveal its shape and structure. It is not typically used as a routine diagnostic test as it requires a highly specialized type of sample preparation, microscope and technical expertise. However, electron microscopy is highly versatile due to its ability to analyze any type of sample and identify any type of virus. Therefore, it remains the gold standard for identifying viruses that do not show up on routine diagnostic tests or for which routine tests present conflicting results. Host antibody detection A person who has recently been infected by a virus will produce antibodies in their bloodstream that specifically recognize that virus. This is called humoral immunity. Two types of antibodies are important. The first, called IgM, is highly effective at neutralizing viruses but is only produced by the cells of the immune system for a few weeks. The second, called IgG, is produced indefinitely. Therefore, the presence of IgM in the blood of the host is used to test for acute infection, whereas IgG indicates an infection sometime in the past. Both types of antibodies are measured when tests for immunity are carried out. Antibody testing has become widely available. It can be done for individual viruses (e.g. using an ELISA assay), but automated panels that can screen for many viruses at once are becoming increasingly common. Hemagglutination assay Some viruses attach to molecules present on the surface of red blood cells, for example, influenza virus. A consequence of this is that – at certain concentrations – a viral suspension may bind together (agglutinate) the red blood cells, thus preventing them from settling out of suspension. See also Serology Molecular diagnostics References Diagnostic virology Laboratory medicine techniques Viral diseases
Laboratory diagnosis of viral infections
[ "Chemistry" ]
1,349
[ "Laboratory medicine techniques" ]
15,843,466
https://en.wikipedia.org/wiki/P24%20capsid%20protein
The p24 capsid protein is the most abundant HIV protein, with each virus containing approximately 1,500 to 3,000 p24 molecules. It is the major structural protein within the capsid, and it is involved in maintaining the structural integrity of the virus and facilitating various stages of the viral life cycle, including viral entry into host cells and the release of new virus particles. Detection of the p24 antigen can be used to identify the presence of HIV in a person's blood and to diagnose HIV/AIDS; however, more modern tests have largely taken its place. After approximately 50 days of infection, the p24 antigen is often cleared from the bloodstream entirely. Structure P24 has a molecular weight of 24 kDa and is encoded by the gag gene. The structure of the HIV capsid was determined by X-ray crystallography and cryo-electron microscopy. The p24 capsid protein consists of two domains, the N-terminal domain and the C-terminal domain, connected by flexible inter-domain linkers. The N-terminal domain (NTD) is made up of 7 α-helices and a β-hairpin. The C-terminal domain (CTD) has 4 α-helices and an 11-residue unstructured region. The N-terminal domain (NTD) facilitates contacts within the hexamer, while the C-terminal domain (CTD) forms dimers that bind to adjacent hexamers. Each hexamer contains a size-selective pore surrounded by six positively charged arginine residues, and the pore is covered by a β-hairpin that can undergo conformational changes between open and closed conformations. At the center of each hexamer lies an IP6 molecule which stabilizes the tertiary structure of the molecule. Additionally, the C-terminal domain includes a major homology region (MHR) spanning amino acids 153 to 172, with 20 highly conserved amino acids. Moreover, the N-terminal domain features a loop (amino acids 85–93) that interacts with the protein cyclophilin A (CypA). Function P24 is a structural protein that plays a crucial role in the formation and stability of the viral capsid, which protects the viral RNA. The p24 capsid protein's roles in the HIV replicative process are summarized as follows: Fusion: The HIV replication cycle begins when HIV fuses with the surface of the host cell. The capsid containing the virus's genome and proteins then enters the cell. Reverse transcription: The capsid ensures the secure transport of the viral genome and reverse-transcription machinery from the cytoplasm's periphery to transcriptionally active sites in the nucleus. It achieves this by shielding the viral genome from detection by restriction factors, while still allowing the necessary molecules to diffuse through the core, facilitating the process of reverse transcription. Assembly: It is involved in the assembly of new virus particles, facilitating the proper organization of viral components. Budding: P24 contributes to the viral budding process, ensuring the proper packaging and release of mature and infectious virus particles. p24 HIV capsid as a therapeutic target New antiretroviral therapy Cyclosporine, an immunosuppressant drug designed to prevent organ transplant rejection, has been shown to inhibit infection in HIV-1 positive people. Cyclosporine acts as a competitive inhibitor of the capsid protein's association with CypA, a cellular protein. CypA has been shown to be important for HIV's infectivity. The HIV-1 p24 capsid protein plays crucial roles throughout the replication cycle, making it an attractive therapeutic target.
Unlike the viral enzymes (protease, reverse transcriptase and integrase) that are currently targeted by small-molecule antiretroviral drugs, the p24 capsid protein operates through protein-protein interactions. Capsid inhibitors, such as lenacapavir (GS-6207), interfere with the activities of the HIV capsid protein and underwent evaluation in phase-1 clinical trials as monotherapies. They demonstrated anti-viral activity against all subtypes, with no cross-resistance to current antiretroviral drugs. These findings support therapies aimed at disrupting the functions of the HIV capsid protein. Vaccine design P24 can induce cellular immune responses and has been included in some vaccine strategies. Diagnosis Fourth-generation HIV test P24 is a target for the immune system, and antibodies against p24 are used in diagnostic tests to detect the presence of HIV. Fourth-generation HIV immunoassays detect viral p24 protein in the blood as well as patient antibodies against the virus. Previous generation tests relied on detecting patient antibodies alone; it takes about 3–4 weeks for the earliest antibodies to be detected. The p24 protein can be detected in a patient's blood as early as 2 weeks after infection, further reducing the window period needed to accurately detect the HIV status of the patient. See also HIV vaccine References Further reading Viral structural proteins HIV/AIDS
P24 capsid protein
[ "Chemistry" ]
1,050
[ "Biochemistry stubs", "Protein stubs" ]
15,843,635
https://en.wikipedia.org/wiki/Cartesian%20tree
In computer science, a Cartesian tree is a binary tree derived from a sequence of distinct numbers. To construct the Cartesian tree, set its root to be the minimum number in the sequence, and recursively construct its left and right subtrees from the subsequences before and after this number. It is uniquely defined as a min-heap whose symmetric (in-order) traversal returns the original sequence. Cartesian trees were introduced by Vuillemin in the context of geometric range searching data structures. They have also been used in the definition of the treap and randomized binary search tree data structures for binary search problems, in comparison sort algorithms that perform efficiently on nearly-sorted inputs, and as the basis for pattern matching algorithms. A Cartesian tree for a sequence can be constructed in linear time. Definition Cartesian trees are defined using binary trees, which are a form of rooted tree. To construct the Cartesian tree for a given sequence of distinct numbers, set its root to be the minimum number in the sequence, and recursively construct its left and right subtrees from the subsequences before and after this number, respectively. As a base case, when one of these subsequences is empty, there is no left or right child. It is also possible to characterize the Cartesian tree directly rather than recursively, using its ordering properties. In any tree, the subtree rooted at any node consists of all other nodes that can reach it by repeatedly following parent pointers. The Cartesian tree for a sequence of distinct numbers is defined by the following properties: The Cartesian tree for a sequence is a binary tree with one node for each number in the sequence. A symmetric (in-order) traversal of the tree results in the original sequence. Equivalently, for each node, the numbers in its left subtree are earlier than it in the sequence, and the numbers in the right subtree are later. The tree has the min-heap property: the parent of any non-root node has a smaller value than the node itself. These two definitions are equivalent: the tree defined recursively as described above is the unique tree that has the properties listed above. If a sequence of numbers contains repetitions, a Cartesian tree can be determined for it by following a consistent tie-breaking rule before applying the above construction. For instance, the first of two equal elements can be treated as the smaller of the two. History Cartesian trees were introduced and named by Jean Vuillemin, who used them as an example of the interaction between geometric combinatorics and the design and analysis of data structures. In particular, Vuillemin used these structures to analyze the average-case complexity of concatenation and splitting operations on binary search trees. The name is derived from the Cartesian coordinate system for the plane: in one version of this structure, as in the two-dimensional range searching application discussed below, a Cartesian tree for a point set has the sorted order of the points by their x-coordinates as its symmetric traversal order, and it has the heap property according to the y-coordinates of the points. Vuillemin described both this geometric version of the structure, and the definition here in which a Cartesian tree is defined from a sequence. Using sequences instead of point coordinates provides a more general setting that allows the Cartesian tree to be applied to non-geometric problems as well. Efficient construction A Cartesian tree can be constructed in linear time from its input sequence.
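The recursive definition translates directly into code. The sketch below is a naive version that takes quadratic time in the worst case, which is what motivates the linear-time constructions described next; the example sequence is an arbitrary one chosen to contain the subsequence (12,10,20,15,18) referred to later in the text.

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def cartesian_tree(seq):
    """Cartesian tree of a sequence of distinct numbers, per the recursive definition."""
    if not seq:
        return None
    i = min(range(len(seq)), key=seq.__getitem__)   # position of the minimum
    return Node(seq[i], cartesian_tree(seq[:i]), cartesian_tree(seq[i + 1:]))

def inorder(node):
    """Symmetric traversal; by definition this returns the original sequence."""
    return inorder(node.left) + [node.value] + inorder(node.right) if node else []

t = cartesian_tree([9, 3, 7, 1, 8, 12, 10, 20, 15, 18, 5])
print(t.value)     # 1: the sequence minimum is the root
print(inorder(t))  # the original sequence, recovered
```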
One method is to process the sequence values in left-to-right order, maintaining the Cartesian tree of the nodes processed so far, in a structure that allows both upwards and downwards traversal of the tree. To process each new value x, start at the node representing the value prior to x in the sequence and follow the path from this node to the root of the tree until finding a value y smaller than x. The node for x becomes the right child of y, and the previous right child of y becomes the new left child of x. The total time for this procedure is linear, because the time spent searching for the parent of each new node can be charged against the number of nodes that are removed from the rightmost path in the tree. An alternative linear-time construction algorithm is based on the all nearest smaller values problem. In the input sequence, define the left neighbor of a value x to be the value that occurs prior to x, is smaller than x, and is closer in position to x than any other smaller value. The right neighbor is defined symmetrically. The sequence of left neighbors can be found by an algorithm that maintains a stack containing a subsequence of the input. For each new sequence value x, the stack is popped until it is empty or its top element is smaller than x, and then x is pushed onto the stack. The left neighbor of x is the top element at the time x is pushed. The right neighbors can be found by applying the same stack algorithm to the reverse of the sequence. The parent of x in the Cartesian tree is either the left neighbor of x or the right neighbor of x, whichever exists and has a larger value. The left and right neighbors can also be constructed efficiently by parallel algorithms, making this formulation useful in efficient parallel algorithms for Cartesian tree construction. Another linear-time algorithm for Cartesian tree construction is based on divide-and-conquer. The algorithm recursively constructs the tree on each half of the input, and then merges the two trees. The merger process involves only the nodes on the left and right spines of these trees: the left spine is the path obtained by following left child edges from the root until reaching a node with no left child, and the right spine is defined symmetrically. As with any path in a min-heap, both spines have their values in sorted order, from the smallest value at their root to their largest value at the end of the path. To merge the two trees, apply a merge algorithm to the right spine of the left tree and the left spine of the right tree, replacing these two paths in the two trees by a single path that contains the same nodes. In the merged path, the successor in the sorted order of each node from the left tree is placed in its right child, and the successor of each node from the right tree is placed in its left child, the same position that was previously used for its successor in the spine. The left children of nodes from the left tree and right children of nodes from the right tree remain unchanged. The algorithm is parallelizable, since on each level of recursion each of the two sub-problems can be computed in parallel, and the merging operation can be efficiently parallelized as well.
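The left-to-right method above (equivalently, the nearest-smaller-values formulation) amounts to maintaining the rightmost path of the tree on a stack. A minimal sketch:

```python
class Node:
    def __init__(self, value):
        self.value, self.left, self.right = value, None, None

def cartesian_tree_linear(seq):
    """Stack-based linear-time construction; the stack holds the rightmost path."""
    stack = []                                 # root at the bottom of the stack
    for value in seq:
        node, last_popped = Node(value), None
        while stack and stack[-1].value > value:
            last_popped = stack.pop()          # popped run becomes the left subtree
        node.left = last_popped
        if stack:
            stack[-1].right = node             # right child of the nearest smaller value
        stack.append(node)
    return stack[0] if stack else None

root = cartesian_tree_linear([9, 3, 7, 1, 8, 12, 10, 20, 15, 18, 5])
print(root.value)  # 1, as with the recursive construction
```

Each element is pushed and popped at most once, which is where the linear time bound comes from.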
Yet another linear-time algorithm, using a linked list representation of the input sequence, is based on locally maximum linking: the algorithm repeatedly identifies a local maximum element, i.e., one that is larger than both its neighbors (or than its only neighbor, in case it is the first or last element in the list). This element is then removed from the list, and attached as the right child of its left neighbor, or the left child of its right neighbor, depending on which of the two neighbors has a larger value, breaking ties arbitrarily. This process can be implemented in a single left-to-right pass of the input, and it is easy to see that each element can gain at most one left child and at most one right child, and that the resulting binary tree is a Cartesian tree of the input sequence. It is possible to maintain the Cartesian tree of a dynamic input, subject to insertions of elements and lazy deletion of elements, in logarithmic amortized time per operation. Here, lazy deletion means that a deletion operation is performed by marking an element in the tree as deleted, but not actually removing it from the tree. When the number of marked elements reaches a constant fraction of the size of the whole tree, it is rebuilt, keeping only its unmarked elements. Applications Range searching and lowest common ancestors Cartesian trees form part of an efficient data structure for range minimum queries. An input to this kind of query specifies a contiguous subsequence of the original sequence; the query output should be the minimum value in this subsequence. In a Cartesian tree, this minimum value can be found at the lowest common ancestor of the leftmost and rightmost values in the subsequence. For instance, in the subsequence (12,10,20,15,18) of the example sequence, the minimum value of the subsequence (10) forms the lowest common ancestor of the leftmost and rightmost values (12 and 18). Because lowest common ancestors can be found in constant time per query, using a data structure that takes linear space to store and can be constructed in linear time, the same bounds hold for the range minimization problem. Bender and Farach-Colton reversed this relationship between the two data structure problems by showing that data structures for range minimization could also be used for finding lowest common ancestors. Their data structure associates with each node of the tree its distance from the root, and constructs a sequence of these distances in the order of an Euler tour of the (edge-doubled) tree. It then constructs a range minimization data structure for the resulting sequence. The lowest common ancestor of any two vertices in the given tree can be found as the minimum distance appearing in the interval between the initial positions of these two vertices in the sequence. Bender and Farach-Colton also provide a method for range minimization that can be used for the sequences resulting from this transformation, which have the special property that adjacent sequence values differ by one. As they describe, for range minimization in sequences that do not have this form, it is possible to use Cartesian trees to reduce the range minimization problem to lowest common ancestors, and then to use Euler tours to reduce lowest common ancestors to a range minimization problem with this special form.
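A sketch of this reduction, with the lowest common ancestor found by a simple parent-pointer walk (taking time proportional to the tree depth per query; the constant-time bounds cited above require a real LCA structure in place of the walk). The final lines assume the example sequence is (9, 3, 7, 1, 8, 12, 10, 20, 15, 18, 5), which is consistent with the subsequence quoted above but is an assumption, since the original figure is not reproduced here:

```python
def build_parent_map(root: Node) -> dict:
    """Map id(node) -> parent node, via one traversal from the root."""
    parent = {id(root): None}
    stack = [root]
    while stack:
        u = stack.pop()
        for child in (u.left, u.right):
            if child is not None:
                parent[id(child)] = u
                stack.append(child)
    return parent

def range_minimum(nodes: list, parent: dict, l: int, r: int) -> float:
    """Minimum of the subsequence from position l to r, located at the
    lowest common ancestor of the nodes at positions l and r."""
    ancestors = set()
    u = nodes[l]
    while u is not None:               # collect ancestors of one endpoint
        ancestors.add(id(u))
        u = parent[id(u)]
    v = nodes[r]
    while id(v) not in ancestors:      # climb from the other endpoint
        v = parent[id(v)]
    return v.value

# Positions 5..9 hold (12, 10, 20, 15, 18); their minimum is 10.
root, nodes = cartesian_tree_linear([9, 3, 7, 1, 8, 12, 10, 20, 15, 18, 5])
parent = build_parent_map(root)
assert range_minimum(nodes, parent, 5, 9) == 10
```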
The same range minimization problem may also be given an alternative interpretation in terms of two-dimensional range searching. A collection of finitely many points in the Cartesian plane can be used to form a Cartesian tree, by sorting the points by their x-coordinates and using the y-coordinates in this order as the sequence of values from which this tree is formed. If S is the subset of the input points within some vertical slab defined by the inequalities L ≤ x ≤ R, p is the leftmost point in S (the one with minimum x-coordinate), and q is the rightmost point in S (the one with maximum x-coordinate), then the lowest common ancestor of p and q in the Cartesian tree is the bottommost point in the slab. A three-sided range query, in which the task is to list all points within a region bounded by the three inequalities L ≤ x ≤ R and y ≤ T, can be answered by finding this bottommost point b, comparing its y-coordinate to T, and (if the point lies within the three-sided region) continuing recursively in the two slabs bounded between p and b and between b and q. In this way, after the leftmost and rightmost points in the slab are identified, all points within the three-sided region can be listed in constant time per point. The same construction, of lowest common ancestors in a Cartesian tree, makes it possible to construct a data structure with linear space that allows the distances between pairs of points in any ultrametric space to be queried in constant time per query. The distance within an ultrametric is the same as the minimax path weight in the minimum spanning tree of the metric. From the minimum spanning tree, one can construct a Cartesian tree, the root node of which represents the heaviest edge of the minimum spanning tree. Removing this edge partitions the minimum spanning tree into two subtrees, and Cartesian trees recursively constructed for these two subtrees form the children of the root node of the Cartesian tree. The leaves of the Cartesian tree represent points of the metric space, and the lowest common ancestor of two leaves in the Cartesian tree is the heaviest edge between those two points in the minimum spanning tree, which has weight equal to the distance between the two points. Once the minimum spanning tree has been found and its edge weights sorted, the Cartesian tree can be constructed in linear time. As a binary search tree The Cartesian tree of a sorted sequence is just a path graph, rooted at its leftmost endpoint. Binary searching in this tree degenerates to sequential search in the path. However, a different construction uses Cartesian trees to generate binary search trees of logarithmic depth from sorted sequences of values. This can be done by generating priority numbers for each value, and using the sequence of priorities to generate a Cartesian tree. This construction may equivalently be viewed in the geometric framework described above, in which the x-coordinates of a set of points are the values in a sorted sequence and the y-coordinates are their priorities. This idea was applied by Aragon and Seidel, who suggested the use of random numbers as priorities. The self-balancing binary search tree resulting from this random choice is called a treap, due to its combination of binary search tree and min-heap features. An insertion into a treap can be performed by inserting the new key as a leaf of an existing tree, choosing a priority for it, and then performing tree rotation operations along a path from the node to the root of the tree to repair any violations of the heap property caused by this insertion; a deletion can similarly be performed by a constant amount of change to the tree followed by a sequence of rotations along a single path in the tree.
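A minimal sketch of the rotation-based treap insertion just described, using the min-heap convention of this article; the names are illustrative, and deletion (rotating a node down to a leaf before unlinking it) is omitted:

```python
import random
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TreapNode:
    key: float
    priority: float = field(default_factory=random.random)
    left: Optional["TreapNode"] = None
    right: Optional["TreapNode"] = None

def rotate_right(y: TreapNode) -> TreapNode:
    x = y.left                      # x moves up, y becomes its right child
    y.left, x.right = x.right, y
    return x

def rotate_left(x: TreapNode) -> TreapNode:
    y = x.right                     # y moves up, x becomes its left child
    x.right, y.left = y.left, x
    return y

def treap_insert(root: Optional[TreapNode], key: float) -> TreapNode:
    """Insert key as in a plain binary search tree, then rotate the new
    node upward while its random priority violates the min-heap order."""
    if root is None:
        return TreapNode(key)
    if key < root.key:
        root.left = treap_insert(root.left, key)
        if root.left.priority < root.priority:
            root = rotate_right(root)
    else:
        root.right = treap_insert(root.right, key)
        if root.right.priority < root.priority:
            root = rotate_left(root)
    return root
```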
A variation on this data structure called a zip tree uses the same idea of random priorities, but simplifies the random generation of the priorities, and performs insertions and deletions in a different way, by splitting the sequence and its associated Cartesian tree into two subsequences and two trees and then recombining them. If the priorities of each key are chosen randomly and independently once whenever the key is inserted into the tree, the resulting Cartesian tree will have the same properties as a random binary search tree, a tree computed by inserting the keys in a randomly chosen permutation starting from an empty tree, with each insertion leaving the previous tree structure unchanged and inserting the new node as a leaf of the tree. Random binary search trees have been studied for much longer than treaps, and are known to behave well as search trees. The expected length of the search path to any given value is at most 2 ln n + O(1), and the whole tree has logarithmic depth (its maximum root-to-leaf distance) with high probability. More formally, there exists a constant C such that the depth is at most C log n with probability tending to one as the number of nodes tends to infinity. The same good behavior carries over to treaps. It is also possible, as suggested by Aragon and Seidel, to reprioritize frequently-accessed nodes, causing them to move towards the root of the treap and speeding up future accesses for the same keys. In sorting Levcopoulos and Petersson describe a sorting algorithm based on Cartesian trees. They describe the algorithm as based on a tree with the maximum at the root, but it can be modified straightforwardly to support a Cartesian tree with the convention that the minimum value is at the root. For consistency, it is this modified version of the algorithm that is described below. The Levcopoulos–Petersson algorithm can be viewed as a version of selection sort or heap sort that maintains a priority queue of candidate minima, and that at each step finds and removes the minimum value in this queue, moving this value to the end of an output sequence. In their algorithm, the priority queue consists only of elements whose parent in the Cartesian tree has already been found and removed. Thus, the algorithm consists of the following steps: Construct a Cartesian tree for the input sequence Initialize a priority queue, initially containing only the tree root While the priority queue is non-empty: Find and remove the minimum value in the priority queue Add this value to the output sequence Add the Cartesian tree children of the removed value to the priority queue As Levcopoulos and Petersson show, for input sequences that are already nearly sorted, the size of the priority queue will remain small, allowing this method to take advantage of the nearly-sorted input and run more quickly. Specifically, the worst-case running time of this algorithm is O(n log k), where n is the sequence length and k is the average, over all values in the sequence, of the number of consecutive pairs of sequence values that bracket the given value (meaning that the given value is between the two sequence values). They also prove a lower bound stating that, for any n and (non-constant) k, any comparison-based sorting algorithm must use Ω(n log k) comparisons for some inputs.
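The steps above translate into a short program. This sketch reuses the hypothetical cartesian_tree_linear function from earlier and Python's heapq module as the priority queue; the counter is only a tie-breaker so that heap entries never compare Node objects directly:

```python
import heapq

def cartesian_tree_sort(seq):
    """Levcopoulos-Petersson sorting via a Cartesian tree: the priority
    queue holds exactly the nodes whose parents have already been output."""
    root, _ = cartesian_tree_linear(seq)
    if root is None:
        return []
    output, counter = [], 0
    queue = [(root.value, counter, root)]
    while queue:
        value, _, node = heapq.heappop(queue)   # remove the current minimum
        output.append(value)
        for child in (node.left, node.right):   # its children become candidates
            if child is not None:
                counter += 1
                heapq.heappush(queue, (child.value, counter, child))
    return output
```

On a nearly sorted input the queue stays small, which is the source of the O(n log k) bound quoted above.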
In pattern matching The problem of Cartesian tree matching has been defined as a generalized form of string matching in which one seeks a substring (or in some cases, a subsequence) of a given string that has a Cartesian tree of the same form as a given pattern. Fast algorithms for variations of the problem with a single pattern or multiple patterns have been developed, as well as data structures analogous to the suffix tree and other text indexing structures. Notes References Binary trees Sorting algorithms
Cartesian tree
[ "Mathematics" ]
3,539
[ "Order theory", "Sorting algorithms" ]
15,843,784
https://en.wikipedia.org/wiki/Hoelite
Hoelite is a mineral, discovered in 1922 at Mt. Pyramide, Spitsbergen, Norway and named after Norwegian geologist Adolf Hoel (1879–1964). Its chemical formula is C14H8O2 (9,10-anthraquinone). It is a very rare organic mineral which occurs in coal fire environments in association with sal ammoniac and native sulfur. References Organic minerals Monoclinic minerals Minerals in space group 14 Minerals described in 1922
Hoelite
[ "Chemistry" ]
98
[ "Organic compounds", "Organic minerals" ]
15,844,313
https://en.wikipedia.org/wiki/In-gel%20digestion
The in-gel digestion step is a part of the sample preparation for the mass spectrometric identification of proteins in the course of proteomic analysis. The method was introduced in 1992 by Rosenfeld; numerous modifications and improvements of its basic elements have been made since. The in-gel digestion step primarily comprises four steps: destaining, reduction and alkylation (R&A) of the cysteines in the protein, proteolytic cleavage of the protein, and extraction of the generated peptides. Destaining Proteins which were separated by 1D or 2D PAGE are usually visualised by staining with dyes like Coomassie brilliant blue (CBB) or silver. Although the sensitivity of the method is significantly lower, the use of Coomassie is more common for samples destined for mass spectrometry, since silver staining impairs the analysis. After excision of the protein band of interest from the gel, most protocols require destaining of the proteins before proceeding. The destaining solution for CBB usually contains the buffer salt ammonium bicarbonate (NH4HCO3) and a fraction of 30%-50% organic solvent (mostly acetonitrile). The hydrophobic interactions between protein and CBB are reduced by the organic fraction of the solution. At the same time, the ionic part of the solution diminishes the electrostatic bonds between the dye and the positively charged amino acids of the protein. This combination destains more effectively than a simple mixture of water and organic solvent. An increase of temperature promotes the destaining process. To a certain degree (< 10%) the destaining procedure is accompanied by a loss of protein. Furthermore, the removal of CBB does not affect the yield of peptides in the mass spectrometric measurement. In the case of silver-stained protein bands, the destaining is accomplished by oxidation of the metallic silver attached to the protein by potassium ferricyanide or hydrogen peroxide (H2O2). The released silver ions are subsequently complexed by sodium thiosulfate. Reduction and alkylation (R & A) The staining and destaining of gels is often followed by the reduction and alkylation (R&A) of the cystines or cysteines in the proteins. In this step, the disulfide bonds of the proteins are irreversibly broken up and the optimal unfolding of the tertiary structure is obtained. The reduction to the thiol is accomplished by the reaction with chemicals containing sulfhydryl or phosphine groups, such as dithiothreitol (DTT) or tris-2-carboxyethylphosphine hydrochloride (TCEP). In the course of the subsequent irreversible alkylation of the SH groups with iodoacetamide, the cysteines are transformed to the stable S-carboxyamidomethylcysteine (CAM; adduct: -CH2-CONH2). The molecular weight of the cysteine amino-acid residue is thereby increased from 103.01 Da to 160.03 Da. Reduction and alkylation of cysteine residues improves peptide yield and sequence coverage and the identification of proteins with a high number of disulfide bonds. Because the amino acid cysteine is rare in most proteins, the R&A step does not improve the mass spectrometric analysis of those proteins. For the quantitative and homogeneous alkylation of cysteines, the position of the modification step in the sample-preparation process is crucial. With denaturing electrophoresis it is strongly recommended to perform the reaction before the electrophoresis, since free acrylamide monomers in the gel can modify cysteine residues irreversibly.
The resulting acrylamide adducts have a molecular weight of 174.05 Da. In-gel digestion Afterwards, the eponymous step of the method is performed: the in-gel digestion of the proteins. By this procedure, the protein is cut enzymatically into a limited number of shorter fragments. These fragments are called peptides and allow for the identification of the protein by their characteristic mass and pattern. The serine protease trypsin is the most common enzyme used in protein analytics. Trypsin cuts the peptide bond specifically at the carboxyl end of the basic amino acids arginine and lysine. If there is an acidic amino acid such as aspartic acid or glutamic acid in the direct neighborhood of the cutting site, the rate of hydrolysis is diminished; a proline C-terminal to the cutting site inhibits the hydrolysis completely. An undesirable side effect of the use of proteolytic enzymes is the self-digestion of the protease. To avoid this, Ca2+ ions were added to the digestion buffer in the past. Nowadays most suppliers offer modified trypsin, in which selective methylation of the lysines limits the autolytic activity to the arginine cutting sites. Unmodified trypsin has its highest activity between 35 °C and 45 °C. After the modification, the optimal temperature is shifted to the range of 50 °C to 55 °C. Other enzymes used for in-gel digestion are the endoproteases Lys-C, Glu-C, Asp-N and Lys-N. These proteases cut specifically at only one amino acid, e.g. Asp-N cuts N-terminal of aspartic acid; therefore, a smaller number of longer peptides is obtained. The analysis of the complete primary sequence of a protein using only one protease is usually not possible. In those cases, digestion of the target protein in several approaches with different enzymes is recommended. The resulting overlapping peptides permit the assembly of the complete sequence of the protein. For the digestion, the proteins fixed in the matrix of the gel have to be made accessible to the protease. The permeation of the enzyme into the gel is believed to be facilitated by the dehydration of the gel pieces by treatment with acetonitrile and subsequent swelling in the digestion buffer containing the protease. This procedure relies on the presumption that the protease permeates into the gel by the process of swelling. Different studies about the penetration of the enzymes into the gel showed the process to be almost completely driven by diffusion; the drying of the gel does not seem to support it. Therefore, improvement of the in-gel digestion has to be achieved by shortening the path of the enzyme to its substrate, e.g. by cutting the gel into pieces as small as possible. Usually, the in-gel digestion is run as an overnight process. For the use of trypsin as protease and a temperature of 37 °C, the incubation time found in most protocols is 12-15 h. However, experiments on the duration of the digestion process showed that after 3 h there is enough material for successful mass spectrometric analysis. Furthermore, optimisation of the conditions for the protease in temperature and pH allows the digestion of a sample to be completed in 30 min.
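The cleavage rule just described (cutting C-terminal to lysine, K, or arginine, R, except when the next residue is proline, P) is simple to apply in silico, which is how expected peptide sets are commonly predicted for database searching. A minimal Python sketch under that simplified rule (the function name is hypothetical, and real prediction tools model additional effects such as the reduced hydrolysis next to acidic residues):

```python
import re

def predict_tryptic_peptides(protein: str, missed_cleavages: int = 0) -> list:
    """Split a protein (one-letter amino acid code) at every K or R that
    is not followed by P, optionally rejoining adjacent fragments to
    model incompletely digested (missed-cleavage) peptides."""
    # Zero-width split points (Python 3.7+); drop the empty string that
    # re.split yields when the sequence itself ends in K or R.
    fragments = [f for f in re.split(r"(?<=[KR])(?!P)", protein) if f]
    peptides = []
    for i in range(len(fragments)):
        last = min(i + missed_cleavages, len(fragments) - 1)
        for j in range(i, last + 1):
            peptides.append("".join(fragments[i:j + 1]))
    return peptides

# The K before P is not a cleavage site; the R is.
assert predict_tryptic_peptides("AKPLRGK") == ["AKPLR", "GK"]
```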
Surfactants (detergents) can aid in the solubilization and denaturing of proteins in the gel and thereby shorten digestion times and increase protein cleavage and the number and amount of extracted peptides, especially for lipophilic proteins such as membrane proteins. Cleavable detergents are detergents that are cleaved after digestion, often under acidic conditions; this makes the addition of detergents compatible with mass spectrometry. Extraction After finishing the digestion, the peptides generated in this process have to be extracted from the gel matrix. This is accomplished by one or several extraction steps. The gel particles are incubated with an extraction solution and the supernatant is collected. In the first extraction, almost all of the peptide is recovered; repeating the extraction step increases the yield of the whole process by only 5-10%. To meet the requirements of peptides with different physical and chemical properties, an iterative extraction with basic and acidic solutions is performed. For the extraction of acidic peptides, a solution similar in concentration and composition to the digestion buffer is used; basic peptides are extracted, depending on the intended mass spectrometric method, with a dilute acidic solution of formic acid for ESI or trifluoroacetic acid for MALDI. Studies on model proteins showed a recovery of approximately 70–80% of the expected peptide yield by extraction from the gel. Many protocols add a fraction of acetonitrile to the extraction solution, which, in concentrations above 30% (v/v), is effective in reducing the adsorption of peptides to the surface of reaction tubes and pipette tips. The liquid of the pooled extracts is evaporated in a centrifugal evaporator. If the volatile salt ammonium bicarbonate was used for the basic extraction, it is partially removed in the drying process. The dried peptides can be stored at -20 °C for at least six months. Critical considerations and current trends Some major drawbacks of the common protocols for in-gel digestion are the extended time needed and the multiple processing steps, making the method error-prone with respect to contamination (especially keratin). These disadvantages were largely removed by the development of optimised protocols and specialised reaction tubes. More severe than the difficulties with handling are the losses of material while processing the samples. The mass spectrometric protein analysis is often performed at the limit of detection, so even small losses can dictate the success or failure of the whole analysis. These losses are due to washout during different processing steps, adsorption to the surface of reaction tubes and pipette tips, incomplete extraction of peptides from the gel and/or poor ionisation of single peptides in the mass spectrometer. Depending on the physicochemical properties of the peptides, losses can vary between 15 and 50%. Due to the inherent heterogeneity of the peptides, a universally valid solution for this major drawback of the method has not yet been found. Commercial implementations Commercial implementations of in-gel digestion can be divided into products for high- and for low-throughput laboratories. High-throughput Due to the highly time-consuming and work-intensive standard procedure, the method of in-gel digestion was limited to a relatively small number of protein spots that could be processed at a time. It has therefore been found to be an ideal object for automation efforts aiming to overcome these limitations in industrial and service laboratories. Today, in laboratories where in-gel digestion is performed in high-throughput quantities, the procedure is usually automated. The degree of automation varies from simple pipetting robots to highly sophisticated all-in-one solutions, offering an automated workflow from gel to mass spectrometry.
The systems usually consist of a spot picker, a digestion robot, and a spotter. The advantages of the automation, other than the larger number of spots that can be processed at a time, are the reduced manual work and the improved standardisation. Due to the many handling steps of the method, the results of the manual process can vary depending on the dexterity of the user, and the risk of contamination is high. Therefore, the quality of the results is described as one main advantage of the automated process. Drawbacks of automated solutions are the costs for robots, maintenance and consumables, as well as the complicated setup of the process. Since automated picking needs digitised information about the spot locations, the analysis of the gel image for relevant spots has to be done by software, requiring standardised imaging methods and special scanners. This lengthy procedure discourages the spontaneous identification of a few interesting spots from a single gel, as does the need to operate the systems at full capacity. The amount of data resulting from the subsequent automated MS analysis is another problem of high-throughput systems, as its quality is often questionable and the evaluation of these data takes significantly longer than their collection. Low-throughput The mentioned drawbacks limit the reasonable use of automated in-gel digestion systems to the routine laboratory, whereas research laboratories that need to make flexible use of the instruments of protein identification more often stay with the manual, low-throughput methods for in-gel digestion and MS analysis. This group of customers is targeted by the industry with several kit systems for in-gel digestion. Most of the kit systems are mere collections of the chemicals and enzymes needed for in-gel digestion, whereas the underlying protocol remains unchanged from the manual standard procedure described above. The advantage of these products for the inexperienced customer lies in the guaranteed functioning of the diverse solutions in combination with a ready-made protocol for the process. A few companies have tried to improve the handling process of in-gel digestion to allow, even with manual sample preparation, an easier and more standardised workflow. The Montage In-Gel Digest Kit from Millipore is based on the standard protocol, but enables processing of a large number of parallel samples by transferring the handling of the gel pieces to a modified 96-well microplate. The solutions for the diverse steps of in-gel digestion are pipetted into the wells of this plate, whereas the removal of liquids is performed through the bottom of the wells by a vacuum pump. This system simplifies the handling of the multiple pipetting steps by the use of multichannel pipettes and even pipetting robots. Indeed, some manufacturers of high-throughput systems have adapted the system to work with their robots. This illustrates the orientation of this kit solution towards laboratories with a larger number of samples. See also Zymography, an unrelated technique in molecular biology which also involves the digestion of proteins in an electrophoretic gel References External links Flash film illustrating the experimental procedure of the optimised in-gel digestion as described in Granvogl et al. Proteins Mass spectrometry
In-gel digestion
[ "Physics", "Chemistry" ]
2,938
[ "Biomolecules by chemical classification", "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Molecular biology", "Proteins", "Matter" ]
15,845,679
https://en.wikipedia.org/wiki/Southwestern%20Statistical%20Region
The Southwestern Statistical Region (; ) is one of eight statistical regions of North Macedonia. It is located in the southwestern part of the country and shares Lake Ohrid with its western neighbour, Albania. Internally, it borders the Pelagonia, Polog, Skopje, and Vardar statistical regions. Municipalities The Southwestern Statistical Region is divided into nine municipalities: Centar Župa Debar Debarca Kičevo Makedonski Brod Ohrid Plasnica Struga Vevčani Population The current population of the Southwestern Statistical Region is 177,398 citizens, according to the last population census in 2021. Ethnicities The largest ethnic group in the region is Macedonians, followed by Albanians and Turks. Religions Religious affiliation according to the 2002 and 2021 censuses: References Statistical regions of North Macedonia
Southwestern Statistical Region
[ "Mathematics" ]
162
[ "Statistical regions of North Macedonia", "Statistical concepts", "Statistical regions" ]
15,845,764
https://en.wikipedia.org/wiki/A%E2%88%9E-operad
In the theory of operads in algebra and algebraic topology, an A∞-operad is a parameter space for a multiplication map that is homotopy coherently associative. (An operad that describes a multiplication that is both homotopy coherently associative and homotopy coherently commutative is called an E∞-operad.) Definition In the (usual) setting of operads with an action of the symmetric group on topological spaces, an operad A is said to be an A∞-operad if all of its spaces A(n) are Σn-equivariantly homotopy equivalent to the discrete spaces Σn (the symmetric group) with its multiplication action (where n ∈ N). In the setting of non-Σ operads (also termed nonsymmetric operads, operads without permutation), an operad A is A∞ if all of its spaces A(n) are contractible. In other categories than topological spaces, the notions of homotopy and contractibility have to be replaced by suitable analogs, such as homology equivalences in the category of chain complexes. An-operads The letter A in the terminology stands for "associative", and the infinity symbol says that associativity is required up to "all" higher homotopies. More generally, there is a weaker notion of An-operad (n ∈ N), parametrizing multiplications that are associative only up to a certain level of homotopies. In particular, A1-spaces are pointed spaces; A2-spaces are H-spaces with no associativity conditions; and A3-spaces are homotopy associative H-spaces. A∞-operads and single loop spaces A space X is the loop space of some other space, denoted by BX, if and only if X is an algebra over an A∞-operad and the monoid π0(X) of its connected components is a group. An algebra over an A∞-operad is referred to as an A∞-space. There are three consequences of this characterization of loop spaces. First, a loop space is an A∞-space. Second, a connected A∞-space X is a loop space. Third, the group completion of a possibly disconnected A∞-space is a loop space. The importance of A∞-operads in homotopy theory stems from this relationship between algebras over A∞-operads and loop spaces. A∞-algebras An algebra over an A∞-operad is called an A∞-algebra. Examples include the Fukaya category of a symplectic manifold, when it can be defined (see also pseudoholomorphic curve). Examples The most obvious, if not particularly useful, example of an A∞-operad is the associative operad a, given by a(n) = Σn; this operad describes strictly associative multiplications. By definition, any other A∞-operad has a map to a which is a homotopy equivalence. A geometric example of an A∞-operad is given by the Stasheff polytopes or associahedra. A less combinatorial example is the operad of little intervals: the space A(n) consists of all embeddings of n disjoint intervals into the unit interval. See also Homotopy associative algebra operad E-infinity operad loop space References Abstract algebra Algebraic topology
A∞-operad
[ "Mathematics" ]
722
[ "Algebra", "Algebraic topology", "Fields of abstract algebra", "Topology", "Abstract algebra" ]
15,845,985
https://en.wikipedia.org/wiki/E%E2%88%9E-operad
In the theory of operads in algebra and algebraic topology, an E∞-operad is a parameter space for a multiplication map that is associative and commutative "up to all higher homotopies". (An operad that describes a multiplication that is associative but not necessarily commutative "up to homotopy" is called an A∞-operad.) Definition For the definition, it is necessary to work in the category of operads with an action of the symmetric group. An operad A is said to be an E∞-operad if all of its spaces A(n) are contractible; some authors also require the action of the symmetric group Σn on A(n) to be free. In other categories than topological spaces, the notion of contractibility has to be replaced by suitable analogs, such as acyclicity in the category of chain complexes. En-operads and n-fold loop spaces The letter E in the terminology stands for "everything" (meaning associative and commutative), and the infinity symbol says that commutativity is required up to "all" higher homotopies. More generally, there is a weaker notion of En-operad (n ∈ N), parametrizing multiplications that are commutative only up to a certain level of homotopies. In particular, E1-spaces are A∞-spaces; E2-spaces are homotopy commutative A∞-spaces. The importance of En- and E∞-operads in topology stems from the fact that iterated loop spaces, that is, spaces of continuous maps from an n-dimensional sphere to another space X starting and ending at a fixed base point, constitute algebras over an En-operad. (One says they are En-spaces.) Conversely, any connected En-space X is an n-fold loop space on some other space (called BnX, the n-fold classifying space of X). Examples The most obvious, if not particularly useful, example of an E∞-operad is the commutative operad c given by c(n) = *, a point, for all n. Note that according to some authors, this is not really an E∞-operad because the Σn-action is not free. This operad describes strictly associative and commutative multiplications. By definition, any other E∞-operad has a map to c which is a homotopy equivalence. The operad of little n-cubes or little n-disks is an example of an En-operad that acts naturally on n-fold loop spaces. See also operad A-infinity operad loop space References Abstract algebra Algebraic topology
E∞-operad
[ "Mathematics" ]
591
[ "Algebra", "Algebraic topology", "Fields of abstract algebra", "Topology", "Abstract algebra" ]
15,846,030
https://en.wikipedia.org/wiki/Veterans%20Way/College%20Avenue%20station
Veterans Way/College Ave, also known as the Tempe Transportation Center, is a regional transportation center on Valley Metro Rail in Tempe, Arizona, United States. As part of the regional transportation system, it is also the location of stops on multiple bus routes. A bike station is located here. This station has three names: Valley Metro calls the train platforms of this station Veterans Way/College Ave and the local bus bays the Tempe Transportation Center. Both are part of the same facility and immediately adjacent to Mountain America Stadium, which provides the station's third name, as shown on the train platform signs. Bus schedules, train maps, and local signage all refer variously to only one of the names. Tempe Transportation Center The Tempe Transportation Center facilities are a combination of a light rail station, bus transfer stations and a mixed-use building, all in the shadow of A Mountain. The main building is composed of three stories, with retail space, a transit information center and Arizona's first Bike Station all located on the first floor. The second floor is home to the offices of the City of Tempe Transportation Department and the signature element of the project, the Don Cassano Community Room, which is open on the ground level to provide shading for pedestrians passing by. On the third floor of the building are leasable office space and the City of Tempe's Transit Operations Center. The center was designed by the Tempe-based firm Architekton with Portland, OR-based OTAK Inc. and is currently under review for LEED v2.2 Platinum Certification. The majority of the outdoor area on the site is covered with water-permeable pavers for natural drainage. Solar panels on the green roof are designed to reduce the heat island effect, with local plants helping to insulate the building. Ridership Gallery Notable places nearby Tempe City Hall Arizona State University: Tempe Campus Mountain America Stadium Desert Financial Arena Gammage Memorial Auditorium (approx. ) Tempe Butte / A Mountain Mill Avenue / Downtown Tempe See also List of United States bike stations References External links Valley Metro map Valley Metro Rail stations Railway stations in the United States opened in 2008 2008 establishments in Arizona United States bike stations Leadership in Energy and Environmental Design certified buildings Buildings and structures in Tempe, Arizona Railway stations in the United States at university and college campuses
Veterans Way/College Avenue station
[ "Engineering" ]
484
[ "Building engineering", "Leadership in Energy and Environmental Design certified buildings" ]
15,846,091
https://en.wikipedia.org/wiki/Diglycolic%20acid
Diglycolic acid is an aliphatic dicarboxylic acid; its acidity lies between that of acetic acid and oxalic acid. It is formed in the oxidation of diethylene glycol in the body and can lead to severe complications with a fatal outcome. Preparation Oxidation of diethylene glycol with concentrated nitric acid was described by A. Wurtz in 1861. In parallel, W. Heintz reported the synthesis of diglycolic acid from chloroacetic acid by heating with sodium hydroxide solution. In a version with barium hydroxide solution as the alkaline medium, diglycolic acid is obtained in 68% yield after acidification. The yields of the described reactions are unsatisfactory for use on a technical scale. Even in the presence of an oxidation catalyst (vanadium(V) oxide), the single-stage nitric acid process gives yields of only 58-60%. In a multi-stage process of nitric acid oxidation at 70 °C with multiple crystallization steps, evaporation of the residues and return of the diethylene glycol-containing mother liquor, product yields of up to 99% (based on diethylene glycol) can be achieved. The oxidation of diethylene glycol with air, oxygen or ozone avoids the use of expensive nitric acid and prevents the inevitable formation of nitrous gases. In the presence of a platinum catalyst, yields of 90% can be obtained by air oxidation. On a bismuth-platinum contact catalyst, yields of 95% can be achieved under optimized reaction conditions. The oxidation of 1,4-dioxan-2-one (p-dioxanone, a lactone which is used as a comonomer in biodegradable polyesters) with nitric acid or dinitrogen tetroxide is also described, with yields of up to 75%. Properties Diglycolic acid is readily water-soluble and crystallizes from water in monoclinic prisms as a white, odorless solid. At an air humidity of more than 72% and 25 °C, the monohydrate is formed. The commercial product is the anhydrous form, sold as free-flowing flakes. Application Diesters of diglycolic acid with (branched) higher alcohols can be used as plasticizers (softeners) for polyvinyl chloride (PVC), with properties comparable to di-n-octyl phthalate (DOP). Basic solutions of diglycolic acid are described for the removal of limescale deposits in gas and oil bores, as well as in systems such as heat exchangers or steam boilers. Diglycolic acid can be used as a diester component in homo- and copolymeric polyesters (so-called polyalkylene diglycolates), which are biocompatible and biodegradable and can be used alone or in blends with aliphatic polyesters as tissue adhesives, cartilage substitutes or as implant materials. References Dicarboxylic acids Ethers
Diglycolic acid
[ "Chemistry" ]
650
[ "Organic compounds", "Functional groups", "Ethers" ]
15,846,235
https://en.wikipedia.org/wiki/XAM
XAM, or the eXtensible Access Method, is a standard for computer data storage developed by IBM and EMC and maintained by the Storage Networking Industry Association (SNIA). It was ratified as an ANSI standard by early 2011. XAM is an API for fixed content aware storage devices. XAM replaces the various proprietary interfaces that have been used for this purpose in the past. Content generating applications now have a standard means of saving and finding their content across a broad array of storage devices. XAM is similar in function to a file-system API such as the POSIX file and directory operations, in that it allows applications to store and retrieve their data. XAM stores application data in XSet objects that also contain metadata. See also Content-addressable storage References External links XAM Initiative – Provides good material both at the overview and detail level XAM SDK download – An open source reference implementation of the API XAM Developers Group – Provides information to assist developers working with XAM Computer standards Computer storage technologies
XAM
[ "Technology" ]
209
[ "Computer standards" ]
15,846,301
https://en.wikipedia.org/wiki/Adams%20filtration
In mathematics, especially in the area of algebraic topology known as stable homotopy theory, the Adams filtration and the Adams–Novikov filtration allow a stable homotopy group to be understood as built from layers, the nth layer containing just those maps which require at most n auxiliary spaces in order to be a composition of homologically trivial maps. These filtrations, named after Frank Adams and Sergei Novikov, are of particular interest because the Adams (or Adams–Novikov) spectral sequence converges to them. Definition The group of stable homotopy classes between two spectra X and Y can be given a filtration by saying that a map has filtration n if it can be written as a composite of n maps such that each individual map induces the zero map in some fixed homology theory E. If E is ordinary mod-p homology, this filtration is called the Adams filtration, otherwise the Adams–Novikov filtration. References Homotopy theory
Adams filtration
[ "Mathematics" ]
207
[ "Topology stubs", "Topology" ]
15,846,772
https://en.wikipedia.org/wiki/Nucleofection
Nucleofection is an electroporation-based transfection method which enables transfer of nucleic acids such as DNA and RNA into cells by applying a specific voltage and reagents. Nucleofection, also referred to as nucleofector technology, was invented by the biotechnology company Amaxa. "Nucleofector" and "nucleofection" are trademarks owned by Lonza Cologne AG, part of the Lonza Group. Applications Nucleofection is a method to transfer substrates into mammalian cells so far considered difficult or even impossible to transfect. Examples of such substrates are nucleic acids, like the DNA of an isolated gene cloned into a plasmid, or small interfering RNA (siRNA) for knocking down expression of a specific endogenous gene. Primary cells, for example stem cells, fall especially into this category, although many other cell lines are also difficult to transfect. Primary cells are freshly isolated from body tissue; the cells are thus unchanged, closely resembling the in-vivo situation, and are therefore of particular relevance for medical research purposes. In contrast, cell lines have often been cultured for decades and may differ significantly from their origin. Mechanism Based on the physical method of electroporation, nucleofection uses a combination of electrical parameters, generated by a device called the Nucleofector, with cell-type-specific reagents. The substrate is transferred directly into the cell nucleus and the cytoplasm. In contrast, other commonly used non-viral transfection methods rely on cell division for the transfer of DNA into the nucleus. Thus, nucleofection provides the ability to transfect even non-dividing cells, such as neurons and resting blood cells. Before the introduction of the Nucleofector technology, efficient gene transfer into primary cells had been restricted to the use of viral vectors, which typically involve disadvantages such as safety risks, lack of reliability, and high cost. The non-viral gene transfer methods available were not suitable for the efficient transfection of primary cells. Non-viral delivery methods may require cell division for completion of transfection, since the DNA enters the nucleus during breakdown of the nuclear envelope upon cell division or by a specific localization sequence. Optimal nucleofection conditions depend upon the individual cell type, not on the substrate being transfected. This means that identical conditions are used for the nucleofection of DNA, RNA, siRNAs, shRNAs, mRNAs and pre-mRNAs, BACs, peptides, morpholinos, PNA, or other biologically active molecules. See also Electroporation References Molecular biology Genetic engineering
Nucleofection
[ "Chemistry", "Engineering", "Biology" ]
549
[ "Biochemistry", "Biological engineering", "Genetic engineering", "Molecular biology" ]
15,847,654
https://en.wikipedia.org/wiki/TRA%20%28gene%29
T-cell receptor alpha locus is a protein that in humans is encoded by the TRA gene, also known as TCRA or TRA@. It contributes the alpha chain to the larger TCR protein (T-cell receptor). References Further reading Proteins
TRA (gene)
[ "Chemistry" ]
54
[ "Biomolecules by chemical classification", "Protein stubs", "Biochemistry stubs", "Molecular biology", "Proteins" ]
15,847,948
https://en.wikipedia.org/wiki/Affirmative%20prayer
Affirmative prayer is a form of prayer or a metaphysical technique that is focused on a positive outcome rather than a negative situation. For instance, a person who is experiencing some form of illness would focus the prayer on the desired state of perfect health and affirm this desired intention "as if it had already happened", rather than identifying the illness and then asking God for help to eliminate it. New Thought New Thought spirituality originated during the 1880s and has emphasized affirmative prayer as an essential part of its philosophy. Practitioners among the various New Thought denominations, such as Religious Science, Divine Science and Unity, may also refer to this form of prayer by such names as "scientific prayer," "spiritual mind treatment" or, simply, "treatment." Within New Thought organizations, centers, and churches, the foundational logic of this form of prayer is based on the belief that God is unlimited and plays no favorites, and that God has created spiritual laws that are both as mysterious and as constant as scientific principles like gravity; thus, if one's prayer is correctly and diligently focused, it will be answered consistently. Religious Science, Divine Science, and Unity Affirmative prayer is called "Spiritual Mind Treatment" by practitioners of Religious Science. Affirmative prayer with a Christian theme is a central practice of the Unity School of Christianity. Jewish Science In the early 1900s, some in the American Jewish community were attracted to the teachings of Christian Science and the New Thought Movement; by the 1920s they were referring to their study by the term Jewish Science. A major figure in this movement was Morris Lichtenstein, who, together with his wife Tehilla Lichtenstein, published the Jewish Science Interpreter, a periodical featuring much of his own writing. Lichtenstein found affirmative prayer to be particularly useful because he believed that it provided the personal benefits of prayer without requiring the belief in a supernatural God who could suspend the laws of nature. Lichtenstein considered affirmative prayer to be a method that can access an inner power that could be considered divine, but not supernatural. He taught that the origins of affirmative prayer can be found in the Old Testament book of Psalms, and that affirmations or affirmative prayers are best offered in silence. Spiritualism The well-known Theosophist, Spiritualist, and New Thought poet Ella Wheeler Wilcox popularized the power of affirmative prayer. After the death of her husband Robert Wilcox, she wrote that she had tried in vain to communicate with his spirit, but only after she composed and recited the affirmative prayer, "I am the living witness: The dead live: And they speak through us and to us: And I am the voice that gives this glorious truth to the suffering world: I am ready, God: I am ready, Christ: I am ready, Robert" was she able to contact him by means of a Ouija board, an event she described in her 1918 autobiography, The Worlds and I. Hoodoo Affirmative prayer is used by practitioners of African American hoodoo, usually in conjunction with its opposite, which is called a prayer of removal. In this folk magic application of the technique, the prayer of removal may be said during a waning moon or at sunset or at ebb tide ("As the sun goes down, this disease is removed from my body") and the affirmative prayer may be said during a waxing moon, at dawn, or at high tide ("As the sun rises, this day brings me perfect health").
The explanation for this application of affirmative prayer is that God has ordained laws of natural inflow and outflow and that by linking one's prayer to a natural condition that prevails at the time, the prayer is given the added power of God's planned natural event. Self-help William James described affirmative prayer as an element of the American metaphysical healing movement that he called the "mind-cure"; he described it as America's "only decidedly original contribution to the systematic philosophy of life." What sets affirmative prayer apart from secular affirmations of the autosuggestion type taught by the 19th-century self-help author Émile Coué (whose most famous affirmation was "Every day in every way, I am getting better and better") is that affirmative prayer addresses the practitioner to God, the Divine, the Creative Mind, emphasizing the seemingly practical aspects of religious belief. See also Affirmations (New Age) Jesus Prayer References New Thought beliefs Personal development Prayer
Affirmative prayer
[ "Biology" ]
885
[ "Personal development", "Behavior", "Human behavior" ]
15,848,507
https://en.wikipedia.org/wiki/Rim%20joist
In the framing of a deck or floor system, a rim joist is attached perpendicular to the joists, and provides lateral support for the ends of the joists while capping off the end of the floor or deck system. Rim joists are not to be confused with end joists, which are the first and last joists at the ends of a row of joists that make up a floor or deck frame. A rim joist's relationship to the joists is similar to what the top or bottom wall plate is to the studs. It is also confusingly called a header (header also refers to other framing components) or rim board. Collectively, the end joists and rim joists are called band joists, especially in regard to deck construction. In dimensioned lumber construction, the rim joists are the same depth, thickness and material as the joists themselves; in engineered wood construction, the rim joists may be oriented strand board (OSB), plywood or an engineered wood material varying in thickness from to as much as , though they are usually laminated veneer lumber (LVL) or laminated strand lumber (LSL) thick. In flooring construction, the rim joists sit on the sill plates; in deck construction, they are parallel to the support beams and sit on the beams or in some cases, cantilever away from the beams. A double thickness board in the position of a rim joist is called a flush beam and serves a dual purpose, providing primary support for the joist ends as well as capping the joists. References Architectural elements
Rim joist
[ "Technology", "Engineering" ]
330
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]
15,848,569
https://en.wikipedia.org/wiki/Fukuyama%20coupling
The Fukuyama coupling is a coupling reaction taking place between a thioester and an organozinc halide in the presence of a palladium catalyst. The reaction product is a ketone. This reaction was discovered by Tohru Fukuyama et al. in 1998. Advantages The reaction has gained considerable importance in synthetic organic chemistry due to its high chemoselectivity, mild reaction conditions, and the use of less-toxic reagents. In particular, the protocol is compatible with sensitive functional groups such as ketones, α-acetates, sulfides, aryl bromides, chlorides, and aldehydes. This excellent chemoselectivity is attributed to the fast rate of ketone formation compared to oxidative addition of palladium to aryl bromides or the nucleophilic addition of zinc reagents to aldehydes. Mechanism Although the Fukuyama cross-coupling reaction has been widely used in natural product synthesis, the reaction mechanism remains unclear. Various catalysts have been shown to promote reactivity, including Pd/C, Pd(OH)2/C, Pd(OAc)2, PdCl2, NiCl2, Ni(acac)2, etc. The proposed catalytic cycle using Pd(OH)2/C (Pearlman's catalyst) features the in situ generation of active Pd/C by reduction with a zinc reagent or zinc dust. The active Pd/C species then undergoes oxidative addition with a thioester, followed by transmetallation with a zinc reagent and reductive elimination, to afford the ketone coupling product. Reaction Conditions Pd-catalyzed Fukuyama Coupling Fukuyama et al. reported the PdCl2(PPh3)2-catalyzed coupling of ethyl thioesters with organozinc reagents in 1998. Remarkably, α-amino ketones can be synthesized from thioester derivatives of N-protected amino acids without racemization in good to excellent yields (58-88%). Ni-catalyzed Fukuyama Coupling Aside from the use of palladium catalysts, the first nickel-catalyzed Fukuyama coupling was reported by Shimizu and Seki in 2002. Ni(acac)2 was found to produce superior yields compared to other nickel catalysts. Pd/C-catalyzed Fukuyama Coupling Employing Dialkylzinc Reagents In 2004, the same group of researchers reported the Pd/C-catalyzed Fukuyama ketone synthesis. This reaction couples dialkylzinc reagents with various thioesters in the presence of zinc bromide, which is generated in situ from bromine and zinc dust. The authors proposed that the inactive zinc bromide is shifted to the active RZnBr species via the Schlenk equilibrium. Additionally, DMF can be used as an additive to increase reaction yields. Applications in Natural Product Total Synthesis Biotin The reaction has been used to shorten the synthesis of (+)-biotin. Previously, a lengthy sequence of six steps was required to install the C2 side chain of (+)-biotin on the thiolactone intermediate 1. Shimizu and Seki realized the efficient synthesis of (+)-biotin via the Fukuyama coupling of the thiolactone 1 and an easily prepared alkylzinc reagent 2 in the presence of catalytic PdCl2(PPh3)2. The reaction generated an alcohol 3, which was reacted directly, without purification, with PTSA to afford alkene 4 in 86% yield as a single isomer. Hydrogenation and subsequent benzyl deprotection of the alkene intermediate according to the reported procedure afforded (+)-biotin in 73% yield over two steps. This Fukuyama coupling sequence provided (+)-biotin in 63% overall yield in three steps from the thiolactone 1, thus allowing practical access to the vitamin due to the short sequence, high yield, mild conditions, and ready availability of the reagents.
Related Reactions The reaction is conceptually related to the Fukuyama reduction and the Fukuyama–Mitsunobu reaction. References Carbon-carbon bond forming reactions Name reactions
Fukuyama coupling
[ "Chemistry" ]
898
[ "Coupling reactions", "Name reactions", "Carbon-carbon bond forming reactions", "Organic reactions" ]
15,848,717
https://en.wikipedia.org/wiki/David%20F.%20Dinges
David F. Dinges is an American sleep researcher and teacher. He is professor of psychology in psychiatry, chief of the Division of Sleep and Chronobiology in the Department of Psychiatry, and associate director of the Center for Sleep and Respiratory Neurobiology in the University of Pennsylvania School of Medicine. Dinges earned his M.S. (1974) and Ph.D. (1976) degrees in experimental physiological psychology from Saint Louis University. Dinges has served as president of the Sleep Research Society, on the boards of directors of the American Academy of Sleep Medicine and the National Sleep Foundation, as president of the World Federation of Sleep Research and Sleep Medicine Societies and as editor-in-chief of SLEEP, the leading scientific journal on sleep research and sleep medicine. Life, research and work His laboratory studies the physiological, cognitive and functional changes resulting from sleep loss in humans. His research has primarily focused on the manner in which sleep homeostasis and circadian rhythmicity control cognitive, affective, behavioral, endocrine and immunological processes. Dinges' work has contributed to knowledge of the effects of sleep disorders, the recovery potential of naps, the nature of sleep inertia and the impact of cumulative sleep debt. He has developed technologies for monitoring human neurobehavioral capability, such as his patented Psychomotor Vigilance Test (PVT). He has consulted with many U.S. agencies, including the Department of Transportation, National Institutes of Health, NASA and the military, as well as several non-federal, private and foreign entities, on the physiological and behavioral effects of sleep deprivation and ways to mitigate these effects. Dinges' CV in November 2007 showed 130 peer-reviewed research publications dated 1972 to 2007 as well as more than 90 editorials, reviews, chapters and committee reports, 1978 to 2007. Together with R. J. Broughton, he edited Sleep and Alertness: Chronobiological, Behavioral and Medical Aspects of Napping, Raven Press, New York, 1989. Together with M. P. Szuba and J. D. Kloss, he edited Insomnia: Principles and Management, Cambridge University Press, New York, 2003. Together with his colleague Siobhan Banks he co-wrote the chapter on sleep deprivation in Principles and Practice of Sleep Medicine. He has been featured discussing sleep in several documentaries, including two 60 Minutes documentaries on sleep and a National Geographic documentary called Sleepless in America. Videos of his short Science Network lectures Behaving without sleep: Biological limits on our environmental demands and Napping and Recovery at the Salk Institute, 9 and 10 February 2007, can be viewed online. References Living people Sleep researchers University of Pennsylvania faculty Chronobiologists Year of birth missing (living people) 21st-century American biologists
David F. Dinges
[ "Biology" ]
574
[ "Sleep researchers", "Behavior", "Sleep" ]
15,848,842
https://en.wikipedia.org/wiki/Freddy%20II
Freddy (1969–1971) and Freddy II (1973–1976) were experimental robots built in the Department of Machine Intelligence and Perception (later Department of Artificial Intelligence, now part of the School of Informatics at the University of Edinburgh). Technology Technical innovations involving Freddy were at the forefront of the robotics field in the 1970s. Freddy was one of the earliest robots to integrate vision, manipulation and intelligent systems, and the system was notable for its versatility and its ease of retraining and reprogramming for new tasks. The idea of moving the table instead of the arm simplified the construction. Freddy also used a method of recognising the parts visually by using graph matching on the detected features. The system used an innovative collection of high-level procedures for programming the arm movements which could be reused for each new task. Lighthill controversy In the mid 1970s there was controversy about the utility of pursuing a general purpose robotics programme in both the USA and the UK. A BBC TV programme in 1973, referred to as the "Lighthill Debate", pitched James Lighthill, who had written a critical report for the science and engineering research funding agencies in the UK, against Donald Michie from the University of Edinburgh and John McCarthy from Stanford University. The Edinburgh Freddy II and Stanford/SRI Shakey robots were used to illustrate the state of the art at the time in intelligent robotics systems. Freddy I and II Freddy Mark I (1969–1971) was an experimental prototype, with 3 degrees of freedom created by a rotating platform driven by a pair of independent wheels. The other main components were a video camera and bump sensors connected to a computer. The computer moved the platform so that the camera could see and then recognise the objects. Freddy II (1973–1976) was a manipulator with 5 degrees of freedom: a large vertical 'hand' that could move up and down, rotate about the vertical axis and rotate objects held in its gripper around one horizontal axis. The two remaining translational degrees of freedom were generated by a work surface that moved beneath the gripper. The gripper was a two-finger pinch gripper. A video camera was added, as well as, later, a light stripe generator. The Freddy and Freddy II projects were initiated and overseen by Donald Michie. The mechanical hardware and analogue electronics were designed and built by Stephen Salter (who also pioneered renewable energy from waves; see Salter's Duck), and the digital electronics and computer interfacing were designed by Harry Barrow and Gregan Crawford. The software was developed by a team led by Rod Burstall, Robin Popplestone and Harry Barrow which used the POP-2 programming language, one of the world's first functional programming languages. The computing hardware was an Elliott 4130 computer with 384KB (128K 24-bit words) RAM and a hard disk linked to a small Honeywell H316 computer with 16KB of RAM which directly performed sensing and control. Freddy was a versatile system which could be trained and reprogrammed to perform a new task in a day or two. The tasks included putting rings on pegs and assembling simple model toys consisting of wooden blocks of different shapes, a boat with a mast and a car with axles and wheels. Information about part locations was obtained using the video camera, and then matched to previously stored models of the parts.
It was soon realised in the Freddy project that the 'move here, do this, move there' style of robot behavior programming (actuator- or joint-level programming) was tedious and did not allow the robot to cope with variations in part position, part shape and sensor noise. Consequently, the RAPT robot programming language was developed by Pat Ambler and Robin Popplestone, in which robot behavior was specified at the object level. This meant that robot goals were specified in terms of desired position relationships between the robot, objects and the scene, leaving the details of how to achieve the goals to the underlying software system. Although developed in the 1970s, RAPT is still considerably more advanced than most commercial robot programming languages. The team of people who contributed to the project were leaders in the field at the time and included Pat Ambler, Harry Barrow, Ilona Bellos, Chris Brown, Rod Burstall, Gregan Crawford, Jim Howe, Donald Michie, Robin Popplestone, Stephen Salter, Austin Tate and Ken Turner. Also of interest in the project was the use of a structured-light 3D scanner to obtain the 3D shape and position of the parts being manipulated. The Freddy II robot is currently on display at the Royal Museum in Edinburgh, Scotland, with a segment of the assembly video shown in a continuous loop. References External links Edinburgh's Artificial Intelligence Applications Institute page on Freddy for more information. Freddy II A video (167 Mb WMV) from 1973 of Freddy II in action assembling a model car and ship simultaneously. Harry Barrow is the narrator. Pat Ambler, Harry Barrow, and Robin Popplestone appear briefly in the video. Historical overview of the Freddy project by Pat Ambler. Record of experiences: Harry Barrow writes on interfacing Freddy I to a computer. Freddy is mentioned in Aaron Sloman's presentation slides (slide 23, PDF) for the public symposium on 50 years of AI at the University of Bremen in June 2006. BBC Robotics Timeline list includes Freddy II. History of artificial intelligence History of computing in the United Kingdom Historical robots 1970s robots Robotic manipulators Robots of the United Kingdom Science and technology in Edinburgh University of Edinburgh School of Informatics 1973 robots
Freddy II
[ "Technology" ]
1,135
[ "History of computing", "History of computing in the United Kingdom" ]
958,052
https://en.wikipedia.org/wiki/Human%20interface%20guidelines
Human interface guidelines (HIG) are software development documents which offer application developers a set of recommendations. Their aim is to improve the experience for the users by making application interfaces more intuitive, learnable, and consistent. Most guides limit themselves to defining a common look and feel for applications in a particular desktop environment. The guides enumerate specific policies. Policies are sometimes based on usability studies of human–computer interaction, but most reflect the platform developers' preferences. The central aim of a HIG is to create a consistent experience across the environment (generally an operating system or desktop environment), including the applications and other tools being used. This means both applying the same visual design and creating consistent access to and behaviour of common elements of the interface – from simple ones such as buttons and icons up to more complex constructions, such as dialog boxes. HIGs are recommendations and advice meant to help developers create better applications. Developers sometimes intentionally choose to break them if they think that the guidelines do not fit their application, or usability testing reveals an advantage in doing so. But in turn, the organization publishing the HIG might withhold endorsement of the application. Mozilla Firefox's user interface, for example, goes against the GNOME project's HIG, which is one of the project's main arguments for including GNOME Web instead of Firefox in the GNOME distribution. Scope Human interface guidelines often describe the visual design rules, including icon and window design and style. Much less frequently, they specify how user input and interaction mechanisms work. Aside from the detailed rules, guidelines sometimes also make broader suggestions about how to organize and design the application and write user-interface text. HIGs are also written for individual applications. In this case the HIG will build on a platform HIG by adding the common semantics for a range of application functions. Cross-platform guidelines In contrast to platform-specific guidelines, cross-platform guidelines are not tied to a distinct platform. These guidelines make recommendations which should be true on any platform. Since this is not always possible, cross-platform guidelines may weigh compliance against the imposed workload.
Examples Linux, macOS, Unix-like Elementary OS Human Interface Guidelines (old link) GNOME Human Interface Guidelines KDE Human Interface Guidelines Apple Human Interface Guidelines OLPC Human Interface Guidelines Ubuntu App Design Guides Xfce UI Guidelines Motif and CDE 2.1 Style Guide (Classic) Macintosh Human Interface Guidelines Programming languages Java Look and Feel Design Guidelines, and Advanced Topics (2001 – no longer accessible directly, but archived in the Wayback Machine) Portable devices Android Design Designing for Apple watchOS Apple iOS Human Interface Guidelines Apple iPadOS Human Interface Guidelines Microsoft Windows Windows User Experience Interaction Guidelines (for Windows 7 and Windows Vista) Microsoft Fluent Design System (for Windows 10/11-based devices) Design library for Windows Phone Miscellaneous Eclipse User Interface Guidelines (2007) wyoGuide, a cross-platform HIG (wxWidgets) ELMER (guidelines for public forms on the internet) Haiku Human Interface Guidelines See also Common User Access Graphical user interface builder Human interface device Linux on the desktop Principle of least astonishment Principles of grouping Usability User interface Web accessibility References Human–computer interaction Graphical user interfaces
Human interface guidelines
[ "Engineering" ]
664
[ "Human–computer interaction", "Human–machine interaction" ]
958,117
https://en.wikipedia.org/wiki/Nitrogen%20assimilation
Nitrogen assimilation is the formation of organic nitrogen compounds like amino acids from inorganic nitrogen compounds present in the environment. Organisms like plants, fungi and certain bacteria that cannot fix nitrogen gas (N2) depend on the ability to assimilate nitrate or ammonia for their needs. Other organisms, like animals, depend entirely on organic nitrogen from their food. Nitrogen assimilation in plants Plants absorb nitrogen from the soil in the form of nitrate (NO3−) and ammonium (NH4+). In aerobic soils where nitrification can occur, nitrate is usually the predominant form of available nitrogen that is absorbed. However, this is not always the case, as ammonia can predominate in grasslands and in flooded, anaerobic soils like rice paddies. Plant roots themselves can affect the abundance of various forms of nitrogen by changing the pH and secreting organic compounds or oxygen. This influences microbial activities like the inter-conversion of various nitrogen species, the release of ammonia from organic matter in the soil and the fixation of nitrogen by non-nodule-forming bacteria. Ammonium ions are absorbed by the plant via ammonia transporters. Nitrate is taken up by several nitrate transporters that use a proton gradient to power the transport. Nitrogen is transported from the root to the shoot via the xylem in the form of nitrate, dissolved ammonia and amino acids. Usually (but not always) most of the nitrate reduction is carried out in the shoots, while the roots reduce only a small fraction of the absorbed nitrate to ammonia. Ammonia (both absorbed and synthesized) is incorporated into amino acids via the glutamine synthetase–glutamate synthase (GS-GOGAT) pathway. While nearly all the ammonia in the root is usually incorporated into amino acids at the root itself, plants may transport significant amounts of ammonium ions in the xylem to be fixed in the shoots. This may help avoid the transport of organic compounds down to the roots just to carry the nitrogen back as amino acids. Nitrate reduction is carried out in two steps. Nitrate is first reduced to nitrite (NO2−) in the cytosol by nitrate reductase using NADH or NADPH. Nitrite is then reduced to ammonia in the chloroplasts (plastids in roots) by a ferredoxin-dependent nitrite reductase. In photosynthesizing tissues, it uses an isoform of ferredoxin (Fd1) that is reduced by PSI, while in the root it uses a form of ferredoxin (Fd3) that has a less negative midpoint potential and can be reduced easily by NADPH. In non-photosynthesizing tissues, NADPH is generated by glycolysis and the pentose phosphate pathway. In the chloroplasts, glutamine synthetase incorporates this ammonia as the amide group of glutamine using glutamate as a substrate. Glutamate synthase (Fd-GOGAT and NADH-GOGAT) transfers the amide group onto a 2-oxoglutarate molecule, producing two glutamates. Further transaminations are carried out to make other amino acids (most commonly asparagine) from glutamine. While the enzyme glutamate dehydrogenase (GDH) does not play a direct role in the assimilation, it protects the mitochondrial functions during periods of high nitrogen metabolism and takes part in nitrogen remobilization. pH and ionic balance during nitrogen assimilation Every nitrate ion reduced to ammonia produces one OH− ion. To maintain a pH balance, the plant must either excrete it into the surrounding medium or neutralize it with organic acids. This results in the medium around the plant's roots becoming alkaline when they take up nitrate.
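As a compact summary of the two reduction steps just described (standard textbook stoichiometries, added here for illustration; the article itself gives no equations):

$$\mathrm{NO_3^- + NAD(P)H + H^+ \longrightarrow NO_2^- + NAD(P)^+ + H_2O} \qquad \text{(nitrate reductase, cytosol)}$$

$$\mathrm{NO_2^- + 6\,Fd_{red} + 8\,H^+ \longrightarrow NH_4^+ + 6\,Fd_{ox} + 2\,H_2O} \qquad \text{(nitrite reductase, plastid)}$$

Adding the two steps gives the overall reduction $\mathrm{NO_3^- + 8\,e^- + 10\,H^+ \longrightarrow NH_4^+ + 3\,H_2O}$, which consumes protons, leaving roughly the equivalent of one OH− behind per nitrate assimilated, consistent with the alkalinization described above.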
To maintain ionic balance, every NO3− taken into the root must be accompanied by either the uptake of a cation or the excretion of an anion. Plants like tomatoes take up metal ions like K+, Na+, Ca2+ and Mg2+ to exactly match every nitrate taken up, and store these as the salts of organic acids like malate and oxalate. Other plants like the soybean balance most of their NO3− intake with the excretion of OH− or HCO3−. Plants that reduce nitrates in the shoots and excrete alkali from their roots need to transport the alkali in an inert form from the shoots to the roots. To achieve this they synthesize malic acid in the leaves from neutral precursors like carbohydrates. The potassium ions brought to the leaves along with the nitrate in the xylem are then sent along with the malate to the roots via the phloem. In the roots, the malate is consumed. When malate is converted back to malic acid prior to use, an OH− is released and excreted (RCOO− + H2O → RCOOH + OH−). The potassium ions are then recirculated up the xylem with fresh nitrate. Thus the plants avoid having to absorb and store excess salts, and also avoid transporting the OH−. Plants like castor reduce much of their nitrate in the root itself and excrete the resulting base. Some of the base produced in the shoots is transported to the roots as salts of organic acids, while a small amount of the carboxylates is simply stored in the shoot itself. Nitrogen use efficiency Nitrogen use efficiency (NUE) is the proportion of the nitrogen available to a plant that it absorbs and uses. Improving nitrogen use efficiency, and thus fertilizer efficiency, is important for making agriculture more sustainable, by reducing pollution (fertilizer runoff) and production cost and increasing yield. Worldwide, crops generally have less than 50% NUE. Better fertilizers, improved crop management, selective breeding, and genetic engineering can increase NUE. Nitrogen use efficiency can be measured at various levels: the crop plant, the soil, by fertilizer input, by ecosystem productivity, etc. At the level of photosynthesis in leaves, it is termed photosynthetic nitrogen use efficiency (PNUE). References Assimilation Metabolism Plant physiology
Nitrogen assimilation
[ "Chemistry", "Biology" ]
1,276
[ "Plant physiology", "Plants", "Nitrogen cycle", "Cellular processes", "Biochemistry", "Metabolism" ]
958,301
https://en.wikipedia.org/wiki/Shasta%20Lake
Shasta Lake, also popularly known as Lake Shasta, is a reservoir in Shasta County, California, United States. It began to store water in 1944 due to the impounding of the Sacramento River by Shasta Dam, the ninth-tallest dam in the US. Shasta Lake is a key facility of the Central Valley Project and provides flood control for the Sacramento Valley, downstream of the dam. Water outflow generates power through the Shasta Powerplant and is subsequently used for irrigation and municipal purposes. The reservoir lies within the Whiskeytown–Shasta–Trinity National Recreation Area, operated by the Shasta-Trinity National Forest. The California Office of Environmental Health Hazard Assessment (OEHHA) has issued a safe-eating advisory for fish caught in the lake, based on levels of mercury and PCBs found in local species. The Shasta-Keswick Reservoir system is significantly contaminated with heavy metals, primarily due to contributions from four streams. Three of these streams contain acid mine drainage, with Spring Creek being the most notable contributor, releasing high concentrations of cadmium, copper and zinc into the water. At the points where these acid streams mix with lake water, localized toxicity occurs, posing an immediate threat to aquatic life. The synergistic effects of these metals further exacerbate the environmental impact, leading to concerns about the safety of consuming fish from this water source. Geography With a capacity of at full pool, the lake has an elevation of , and a surface area of , making it the state's largest reservoir, and its third-largest body of water after Lake Tahoe and the Salton Sea. Ten miles (16 km) north of the city of Redding, with the town of Lakehead on its northern shore, Shasta Lake is popular for boating, water skiing, camping, house boating and fishing. Formed by the damming of the Sacramento River, the lake has of mostly steep mountainous shoreline covered with tall evergreen trees and manzanita. The maximum depth is . The lake has four major arms, each created by an approaching river: the Sacramento River, the McCloud River, Sulanharas Creek, and the Pit River. The Sacramento River's source is the Klamath Mountains. The McCloud River's source is Mount Shasta. The Pit River flows from Alturas, and the waterfall Potem Falls is located on that arm of the lake. History Shasta Dam was constructed between 1935 and 1945 across the Sacramento River, and Shasta Lake was formed in 1948. The Pit River, McCloud River, and several smaller tributaries had their lower courses and confluences with the Sacramento River submerged by the reservoir. Also beneath the lake is the submerged town of Kennett and many village sites of the Wintun people, together with their traditional fishing, hunting, and gathering locations. Parts of the defunct tunnels and right of way of the Southern Pacific Transportation Company can be seen when the water level is low. Shasta Lake hosted the first "Boardstock" event in 1996, which continued there annually through 1999, after which the annual event moved to Clear Lake, California, 170 miles southwest of Shasta Lake. Boardstock drew many professional wakeboard riders from around the world, with an average attendance of 15,000 people. The event lasted for three days each year, with several wakeboard contests held. Marinas There are a number of marinas on Shasta Lake offering a variety of services, including houseboat rentals. Bridge Bay Marina is the largest marina on Shasta Lake, with over 700 slips.
It has a restaurant, bar and lodging, as well as retail and other facilities. Visitors to Bridge Bay may rent one of 100 houseboats, as well as ski, fishing and patio boats, and personal watercraft, such as standup paddleboards, jet skis and Jetovators. Bridge Bay sees a busy summer season, with a gas dock, food, ice and all retail amenities. Digger Bay Marina has over 150 boat slips in the marina, as well as a retail store and small boat rental. It is located about 10 miles from Interstate 5. Shasta Marina Resort is located off of exit no. 693 from I-5, at 16814 Packers Bay Road, in Lakehead. It offers large luxury houseboats, ski and pontoon boats, Sea-Doos, standup paddleboards and kayaks for rent, with houseboat and covered moorage and a year-round store with a gas dock, food, ice and gifts. Antler's Marina is Shasta's northernmost marina. Silverthorn Marina is located on the eastern part of the lake and offers large houseboats for rent. Jones Valley Resort is the easternmost marina on the lake, tucked far into a cove, and features six different model rental houseboats, including the largest on the lake, the Titan. Holiday Harbor is located up the McCloud River arm, east of I-5. Sugarloaf Marina is located up the Sacramento River arm and offers a marina store, overnight slips and fuel. Climate Shasta Lake has a hot-summer Mediterranean climate (Csa) typical of the interior of Northern California, with hot, dry summers and cool, wet winters, along with great diurnal temperature variation. See also Shasta Dam - creates Shasta Lake by impounding the Sacramento River Shasta Unit — of the Whiskeytown–Shasta–Trinity National Recreation Area. List of dams and reservoirs in California List of largest reservoirs of California List of lakes in California References External links Current Conditions, Shasta Lake, California Department of Water Resources Shasta Sacramento River Central Valley Project Pit River Shasta-Trinity National Forest Trinity Mountains (California) Reservoirs in Northern California Reservoirs in California
Shasta Lake
[ "Engineering" ]
1,165
[ "Irrigation projects", "Central Valley Project" ]
958,449
https://en.wikipedia.org/wiki/Hyperbolic%20orthogonality
In geometry, the relation of hyperbolic orthogonality between two lines separated by the asymptotes of a hyperbola is a concept used in special relativity to define simultaneous events. Two events will be simultaneous when they are on a line hyperbolically orthogonal to a particular timeline. This dependence on a certain timeline is determined by velocity, and is the basis for the relativity of simultaneity. Geometry Two lines are hyperbolic orthogonal when they are reflections of each other over the asymptote of a given hyperbola. Two particular hyperbolas are frequently used in the plane: (A) xy = 1, with y = 0 as asymptote: when reflected in the x-axis, a line y = mx becomes y = −mx, so in this case lines are hyperbolic orthogonal if their slopes are additive inverses; (B) x² − y² = 1, with y = x as asymptote: for a line y = mx with −1 < m < 1, the point (1/m, 1) on the line is reflected across y = x to (1, 1/m), so the reflected line has slope 1/m and the slopes of hyperbolic orthogonal lines are reciprocals of each other. The relation of hyperbolic orthogonality actually applies to classes of parallel lines in the plane, where any particular line can represent the class. Thus, for a given hyperbola and asymptote A, a pair of lines (a, b) are hyperbolic orthogonal if there is a pair (c, d) such that a is parallel to c, b is parallel to d, and c is the reflection of d across A. Similar to the perpendicularity of a circle radius to the tangent, a radius to a hyperbola is hyperbolic orthogonal to a tangent to the hyperbola. A bilinear form is used to describe orthogonality in analytic geometry, with two elements orthogonal when their bilinear form vanishes. In the plane of complex numbers z = x + yi, the bilinear form is x1x2 + y1y2, while in the plane of hyperbolic numbers w = x + yj, the bilinear form is x1x2 − y1y2. The vectors z1 and z2 in the complex number plane, and w1 and w2 in the hyperbolic number plane, are said to be respectively Euclidean orthogonal or hyperbolic orthogonal if their respective inner products [bilinear forms] are zero. The bilinear form may be computed as the real part of the complex product of one number with the conjugate of the other. Then z1z̄2 + z̄1z2 = 0 entails perpendicularity in the complex plane, while w1w̄2 + w̄1w2 = 0 implies the ws are hyperbolic orthogonal. The notion of hyperbolic orthogonality arose in analytic geometry in consideration of conjugate diameters of ellipses and hyperbolas. If g and g′ represent the slopes of the conjugate diameters, then gg′ = −b²/a² in the case of an ellipse and gg′ = b²/a² in the case of a hyperbola. When a = b the ellipse is a circle and the conjugate diameters are perpendicular, while the hyperbola is rectangular and the conjugate diameters are hyperbolic-orthogonal. In the terminology of projective geometry, the operation of taking the hyperbolic orthogonal line is an involution. Suppose the slope of a vertical line is denoted ∞ so that all lines have a slope in the projectively extended real line. Then whichever hyperbola (A) or (B) is used, the operation is an example of a hyperbolic involution where the asymptote is invariant. Hyperbolically orthogonal lines lie in different sectors of the plane, determined by the asymptotes of the hyperbola, thus the relation of hyperbolic orthogonality is a heterogeneous relation on sets of lines in the plane. Simultaneity Since Hermann Minkowski's foundation for spacetime study in 1908, the concept of points in a spacetime plane being hyperbolic-orthogonal to a timeline (tangent to a world line) has been used to define simultaneity of events relative to the timeline, or relativity of simultaneity. In Minkowski's development the hyperbola of type (B) above is in use. Two vectors (x1, y1, z1, t1) and (x2, y2, z2, t2) are normal (meaning hyperbolic orthogonal) when c²t1t2 − x1x2 − y1y2 − z1z2 = 0. When c = 1 and the ys and zs are zero, x1 ≠ 0, t2 ≠ 0, then ct1/x1 = x2/(ct2). Given a hyperbola with asymptote A, its reflection in A produces the conjugate hyperbola. Any diameter of the original hyperbola is reflected to a conjugate diameter.
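A quick numerical check of the normality condition (an illustrative computation, not part of the original text): working in units where c = 1 and restricting to the (x, t) plane, take the timeline of a particle moving with speed v, with direction vector (x1, t1) = (v, 1), and the candidate simultaneity direction (x2, t2) = (1, v). Then

$$c^2 t_1 t_2 - x_1 x_2 = (1)(v) - (v)(1) = 0,$$

so the two directions are hyperbolic orthogonal. Viewed as slopes in the (x, t) plane, the timeline has slope 1/v and the simultaneity line has slope v, reciprocals of each other, which is exactly the rule obtained from hyperbola (B) above.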
The directions indicated by conjugate diameters are taken for space and time axes in relativity. As E. T. Whittaker wrote in 1910, "[the] hyperbola is unaltered when any pair of conjugate diameters are taken as new axes, and a new unit of length is taken proportional to the length of either of these diameters." On this principle of relativity, he then wrote the Lorentz transformation in the modern form using rapidity. Edwin Bidwell Wilson and Gilbert N. Lewis developed the concept within synthetic geometry in 1912. They note "in our plane no pair of perpendicular [hyperbolic-orthogonal] lines is better suited to serve as coordinate axes than any other pair". References G. D. Birkhoff (1923) Relativity and Modern Physics, pages 62–63, Harvard University Press. Francesco Catoni, Dino Boccaletti, & Roberto Cannata (2008) Mathematics of Minkowski Space, Birkhäuser Verlag, Basel. See page 38, Pseudo-orthogonality. Robert Goldblatt (1987) Orthogonality and Spacetime Geometry, chapter 1: A Trip on Einstein's Train, Universitext Springer-Verlag Minkowski spacetime Angle
Hyperbolic orthogonality
[ "Physics" ]
1,037
[ "Geometric measurement", "Scalar physical quantities", "Physical quantities", "Wikipedia categories named after physical quantities", "Angle" ]
958,752
https://en.wikipedia.org/wiki/Lipid%20raft
The plasma membranes of cells contain combinations of glycosphingolipids, cholesterol and protein receptors organized in glycolipoprotein lipid microdomains termed lipid rafts. Their existence in cellular membranes remains controversial. Indeed, Kervin and Overduin imply that lipid rafts are misconstrued protein islands, which they propose form through a proteolipid code. Nonetheless, it has been proposed that they are specialized membrane microdomains which compartmentalize cellular processes by serving as organising centers for the assembly of signaling molecules, allowing a closer interaction of protein receptors and their effectors to promote kinetically favorable interactions necessary for the signal transduction. Lipid rafts influence membrane fluidity and membrane protein trafficking, thereby regulating neurotransmission and receptor trafficking. Lipid rafts are more ordered and tightly packed than the surrounding bilayer, but float freely within the membrane bilayer. Although more common in the cell membrane, lipid rafts have also been reported in other parts of the cell, such as the Golgi apparatus and lysosomes. Properties One key difference between lipid rafts and the plasma membranes from which they are derived is lipid composition. Research has shown that lipid rafts contain 3 to 5-fold the amount of cholesterol found in the surrounding bilayer. Also, lipid rafts are enriched in sphingolipids such as sphingomyelin, which is typically elevated by 50% compared to the plasma membrane. To offset the elevated sphingolipid levels, phosphatidylcholine levels are decreased which results in similar choline-containing lipid levels between the rafts and the surrounding plasma membrane. Cholesterol interacts preferentially, although not exclusively, with sphingolipids due to their structure and the saturation of the hydrocarbon chains. Although not all of the phospholipids within the raft are fully saturated, the hydrophobic chains of the lipids contained in the rafts are more saturated and tightly packed than the surrounding bilayer. Cholesterol is the dynamic "glue" that holds the raft together. Due to the rigid nature of the sterol group, cholesterol partitions preferentially into the lipid rafts where acyl chains of the lipids tend to be more rigid and in a less fluid state. One important property of membrane lipids is their amphipathic character. Amphipathic lipids have a polar, hydrophilic head group and a non-polar, hydrophobic region. The figure to the right shows the inverted cone-like shape of sphingomyelin and the cone-like shape of cholesterol based on the area of space occupied by the hydrophobic and hydrophilic regions. Cholesterol can pack in between the lipids in rafts, serving as a molecular spacer and filling any voids between associated sphingolipids. Rietveld & Simons related lipid rafts in model membranes to the immiscibility of ordered (Lo phase) and disordered (Ld or Lα phase) liquid phases. The cause of this immiscibility is uncertain, but the immiscibility is thought to minimize the free energy between the two phases. Studies have shown there is a difference in thickness of the lipid rafts and the surrounding membrane which results in hydrophobic mismatch at the boundary between the two phases. This phase height mismatch has been shown to increase line tension which may lead to the formation of larger and more circular raft platforms to minimize the energetic cost of maintaining the rafts as a separate phase. 
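The energetic argument can be made concrete with a back-of-the-envelope estimate (an illustrative calculation, not taken from the article). If λ is the line tension, the boundary energy of a circular raft of radius r is

$$E = 2\pi r \lambda .$$

Merging two rafts of radius r into a single raft of the same total area 2πr² gives a radius R = √2·r, so the total boundary energy drops from 2 × 2πrλ ≈ 12.6 rλ to 2π√2 rλ ≈ 8.9 rλ, a reduction by a factor of √2. The same comparison shows that, for a fixed area, a circle minimizes the perimeter, which is why coalescence into larger, more circular platforms lowers the cost of maintaining the raft phase boundary.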
Other spontaneous events, such as curvature of the membrane and fusing of small rafts into larger rafts, can also minimize line tension. By one early definition, lipid rafts differ from the rest of the plasma membrane. In fact, researchers have hypothesized that lipid rafts can be extracted from a plasma membrane. The extraction would take advantage of lipid raft resistance to non-ionic detergents, such as Triton X-100 or Brij-98 at low temperatures (e.g., 4 °C). When such a detergent is added to cells, the fluid membrane will dissolve while the lipid rafts may remain intact and could be extracted. Because of their composition and detergent resistance, lipid rafts are also called detergent-insoluble glycolipid-enriched membrane (GEM) complexes, DIGs, or detergent-resistant membranes (DRMs). However, the validity of the detergent resistance methodology of membranes has recently been called into question due to ambiguities in the lipids and proteins recovered and the observation that detergents can also cause solid areas to form where there were none previously. Function Mediation of substrate presentation. Lipid rafts localize palmitoylated proteins away from the disordered region of the plasma membrane. Disruption of palmitate-mediated localization then allows for exposure of a protein to its binding partner or substrate in the disordered region, an activation mechanism termed substrate presentation. For example, a protein is often palmitoylated and binds phosphatidylinositol 4,5-bisphosphate (PIP2). PIP2 is polyunsaturated and does not reside in lipid rafts. When the levels of PIP2 increase in the plasma membrane, the protein traffics to PIP2 clusters where it can be activated directly by PIP2 (or another molecule that associates with PIP2). It is probable that other functions exist. History Until 1982, it was widely accepted that phospholipids and membrane proteins were randomly distributed in cell membranes, according to the Singer–Nicolson fluid mosaic model, published in 1972. However, membrane microdomains were postulated in the 1970s using biophysical approaches by Stier & Sackmann and Klausner & Karnovsky. These microdomains were attributed to the physical properties and organization of lipid mixtures by Stier & Sackmann and Israelachvili et al. In 1974, the effects of temperature on membrane behavior had led to the proposal of "clusters of lipids" in membranes, and by 1975, data suggested that these clusters could be "quasicrystalline" regions within the more freely dispersed liquid crystalline lipid molecules. In 1978, X-ray diffraction studies led to further development of the "cluster" idea, defining the microdomains as "lipids in a more ordered state". Karnovsky and co-workers formalized the concept of lipid domains in membranes in 1982. Karnovsky's studies showed heterogeneity in the lifetime decay of 1,6-diphenyl-1,3,5-hexatriene, which indicated that there were multiple phases in the lipid environment of the membrane. One type of microdomain is constituted by cholesterol and sphingolipids. They form because of the segregation of these lipids into a separate phase, demonstrated by Biltonen and Thompson and their coworkers. These microdomains ('rafts') were shown to exist also in cell membranes.
Later, Kai Simons at the European Molecular Biology Laboratory (EMBL) in Germany and Gerrit van Meer from the University of Utrecht, Netherlands, refocused interest on these membrane microdomains, enriched with lipids and cholesterol, glycolipids, and sphingolipids, present in cell membranes. Subsequently, they called these microdomains lipid "rafts". The original concept of rafts was used as an explanation for the transport of cholesterol from the trans Golgi network to the plasma membrane. The idea was more formally developed in 1997 by Simons and Ikonen. At the 2006 Keystone Symposium of Lipid Rafts and Cell Function, lipid rafts were defined as "small (10–200 nm), heterogeneous, highly dynamic, sterol- and sphingolipid-enriched domains that compartmentalize cellular processes. Small rafts can sometimes be stabilized to form larger platforms through protein-protein interactions". In recent years, lipid raft studies have tried to address many of the key issues that cause controversy in this field, including the size and lifetime of rafts. Other questions yet to be answered include: What are the effects of membrane protein levels? What is the physiological function of lipid rafts? What effect does flux of membrane lipids have on raft formation? What effect do diet and drugs have on lipid rafts? What effect do proteins located at raft boundaries have on lipid rafts? Common types Two types of lipid rafts have been proposed: planar lipid rafts (also referred to as non-caveolar, or glycolipid, rafts) and caveolae. Planar rafts are defined as being continuous with the plane of the plasma membrane (not invaginated) and by their lack of distinguishing morphological features. Caveolae, on the other hand, are flask-shaped invaginations of the plasma membrane that contain caveolin proteins and are the most readily observed structures in lipid rafts. Caveolins are widely expressed in the brain, micro-vessels of the nervous system, endothelial cells, astrocytes, oligodendrocytes, Schwann cells, dorsal root ganglia and hippocampal neurons. Planar rafts contain flotillin proteins and are found in neurons where caveolae are absent. Both types have similar lipid composition (enriched in cholesterol and sphingolipids). Flotillin and caveolins can recruit signaling molecules into lipid rafts, thus playing an important role in neurotransmitter signal transduction. It has been proposed that these microdomains spatially organize signaling molecules to promote kinetically favorable interactions which are necessary for signal transduction. Conversely, these microdomains can also separate signaling molecules, inhibiting interactions and dampening signaling responses. Role in signal transduction The specificity and fidelity of signal transduction are essential for cells to respond efficiently to changes in their environment. This is achieved in part by the differential localization of proteins that participate in signalling pathways. In the plasma membrane, one approach of compartmentalization utilizes lipid rafts. One reasonable way to consider lipid rafts is that small rafts can form concentrating platforms after ligand-binding activation for individual receptors. Lipid rafts have been found by researchers to be involved in many signal transduction processes, such as immunoglobulin E signalling, T cell antigen receptor signalling, B cell antigen receptor signalling, EGF receptor signalling, insulin receptor signalling and so on.
In order to illustrate these principles, detailed examples of signalling pathways that involve lipid rafts are described below. Epidermal growth factor signaling Epidermal growth factor (EGF) binds to the EGF receptor, also known as HER-1 or ErbB1, to initiate transmembrane signaling. Lipid rafts have been suggested to play a bipartite role in this process. Certain aspects of lipid rafts inhibit EGF receptor function: the ganglioside component of lipid rafts was shown to inhibit receptor activation the membrane dipole potential, which was shown to be higher in lipid rafts than in the rest of the membrane, was demonstrated to inhibit EGF binding to its receptor EGF binding was shown to be inhibited by non-caveolar lipid rafts due to a decrease in the number of receptors available for ligand binding EGF and ErbB2 (HER-2) were shown to migrate out of lipid rafts or caveolae during or after activation disruption of lipid rafts was shown to induce ligand-independent activation of the EGF receptor At the same time, lipid rafts seem to be necessary for, or to potentiate, transmembrane signaling: sequestration of ErbB2 from lipid rafts has been shown to inhibit EGF-induced signaling the membrane dipole potential, which is higher in lipid rafts than in the rest of the membrane, potentiates EGF-induced signaling EGF was shown to bring about coalescence of individual lipid rafts, similar to what has been suggested to play a role in the activation of the T-cell receptor localization of the EGF receptor to lipid rafts induces resistance to tyrosine kinase inhibitors Immunoglobulin E signaling Immunoglobulin E (IgE) signaling is the first convincingly demonstrated signaling process which involves lipid rafts. Evidence for this fact includes decreased solubility of Fc-epsilon receptors (FcεR) in Triton X-100 from the steady state to the crosslinking state, formation of patches large enough to be visualized by fluorescence microscopy from gangliosides and GPI-anchored proteins, abolition of IgE signaling by surface cholesterol depletion with methyl-β-cyclodextrin, and so on. This signaling pathway can be described as follows: IgE first binds to Fc-epsilon receptors (FcεR) residing in the plasma membrane of mast cells and basophils through its Fc segment. FcεR is a tetramer consisting of one α, one β and two γ chains. It is monovalent, binding a single IgE molecule. The α chain binds IgE and the other three chains contain immune receptor tyrosine-based activation motifs (ITAMs). Then oligomeric antigens bind to receptor-bound IgE to crosslink two or more of these receptors. This crosslinking then recruits the doubly acylated non-receptor Src-like tyrosine kinase Lyn to phosphorylate the ITAMs. After that, Syk family tyrosine kinases bind these phosphotyrosine residues of the ITAMs to initiate the signaling cascade. Syk can, in turn, activate other proteins such as LAT. Through crosslinking, LAT can recruit other proteins into the raft and further amplify the signal. T-cell antigen receptor signaling The T cell antigen receptor (TCR) is a molecule found on the surface of T lymphocytes (T cells). It is composed of αβ-heterodimers, the CD3 (γδε) complex and the ξ-homodimer. The α- and β- subunits contain extracellular binding sites for peptides that are presented by the major histocompatibility complex (MHC) class I and class II proteins on the surface of antigen-presenting cells (APCs). The CD3 and ξ- subunits contain cytoplasmic ITAM motifs. During the signaling process, MHCs binding to TCRs brings two or more receptors together.
This crosslinking, similar to IgE signaling, then recruits doubly acylated non-receptor Src-like tyrosine kinases to phosphorylate ITAM tyrosine residues. In addition to recruiting Lyn, TCR signaling also recruits Fyn. Following this procedure, ZAP-70 (which also differs from IgE signalling) binds to the phosphorylated ITAMs, which leads to its own activation and to LAT activation. LAT activation is the source of signal amplification. Another difference between IgE and T cell antigen receptor signalling is that Lck activation by the TCR could result in more extensive raft clustering and thus more signal amplification. One possible mechanism of down-regulating this signaling involves the binding of the cytosolic kinase Csk to the raft-associated protein CBP. Csk may then suppress the Src-family kinases through phosphorylation. B-cell antigen receptor signaling The B cell antigen receptor (BCR) is a complex between a membrane-bound Ig (mIg) molecule and a disulfide-linked Igα–Igβ heterodimer of two polypeptides. Igα and Igβ each contains an amino acid motif, called ITAM, whose sequence is D/ExxYxxL/Ix7YxxL/I. The process of B cell antigen receptor signalling is similar to immunoglobulin E signalling and T-cell antigen receptor signalling. It is commonly believed that, other than the BCR itself, lipid rafts play an important role in many of the cell surface events involved in B cell activation. Their functions include signaling by the BCR, modulation of that signaling by co-receptors, signaling by CD40, endocytosis of antigen bound to the BCR and its routing to late endosomes to facilitate loading of antigen-derived peptides onto class II MHC molecules, routing of those peptide/MHC-II complexes to the cell surface, and their participation in antigen presentation to T cells. As platforms for virus entry Viruses, as obligate intracellular parasites, must engage in specific interactions between the virus and a cellular receptor expressed at the plasma membrane in order to enter cells. Accumulated evidence supports the view that viruses enter cells via penetration of specific membrane microdomains, including lipid rafts. Nonenveloped virus The best-studied models of lipid raft-related nonenveloped viral entry are simian virus 40 (SV40, Papovaviridae) and echovirus type 1 (EV1, Picornaviridae). SV40 utilizes two different receptors to bind onto the cell surface: ganglioside GM1, located in lipid rafts, and major histocompatibility (MHC) class I molecules. Binding of SV40 with MHC class I molecules triggers receptor clustering and redistribution. SV40 may recruit more caveolae from the cytoplasm or even new caveolae formed at the site of entry. A cascade of virus-induced signaling events triggered by attachment results in caveolae-mediated endocytosis in about 20 min. In some cell types the virus can enter the caveosomes directly from lipid rafts in non-coated vesicles. EV1 uses α2β1-integrin as a cellular receptor. Multiple integrin heterodimers can bind to adjacent sites of the virus capsid. Similar to SV40, attachment and binding with cells triggers clustering and relocation of integrin molecules from lipid rafts to caveolae-like structures. Depletion of cholesterol in lipid rafts inhibits EV1 infection. There are also viruses that use non-caveolar raft-mediated endocytosis, such as echovirus 11 (EV11, picornavirus). However, detailed mechanisms still need to be further characterized. Enveloped virus Influenza viruses bind to the cellular receptor sialic acid, which links to glycoconjugates on the cell surface, to initiate endocytosis.
After transport into late endosomes, low-pH-dependent conformational changes of the viral hemagglutinin (HA) induce fusion, and viral ribonucleoprotein complexes (RNPs) are released by the proton influx of the viral ion channel M2 protein, which requires binding with cholesterol. Semliki Forest virus (SFV) and Sindbis virus (SIN) require cholesterol and sphingolipids in target membrane lipid rafts for envelope glycoprotein-mediated membrane fusion and entry. Human T-lymphotropic virus type I (HTLV-1) enters cells via glucose transporter 1 (GLUT-1). Ebola virus and Marburg virus use the folate receptor-α (FRα), which is a GPI-anchored protein, as a cellular receptor. Hepatitis B virus recognizes human complement receptor type 2 (CR2, also known as CD21). Human herpesvirus 6 (HHV-6) binds to human CD46 on the host cell surface. All these viral receptors are located in lipid rafts or are relocated into lipid rafts after infection. Human immunodeficiency virus (HIV), as a sexually transmitted animal virus, must first penetrate a barrier of epithelial cells, which do not express CD4 and chemokine receptors, to establish a productive infection. An alternative receptor for the HIV-1 envelope glycoprotein on epithelial cells is the glycosphingolipid galactosyl-ceramide (GalCer), which is enriched in lipid rafts. SARS-CoV-2 The SARS-CoV-2 virus that causes COVID-19 was shown to enter through endocytosis using lipid rafts. The omicron variant predominantly enters through endocytosis, presumably through lipid rafts. Hydroxychloroquine blocks the entry of SARS-CoV-2 by blocking ACE2 association with endocytic lipids. Visualization One of the primary reasons for the controversy over lipid rafts has stemmed from the challenges of studying lipid rafts in living cells, which are not in thermodynamic equilibrium. Lipid rafts are small microdomains ranging from 10 to 200 nm in size. Because their size is below the classical diffraction limit of a light microscope, lipid rafts have proved difficult to visualize directly. Currently synthetic membranes are studied; however, there are many drawbacks to using these membranes. First, synthetic membranes have a lower concentration of proteins compared to biomembranes. Also, it is difficult to model membrane–cytoskeletal interactions which are present in biomembranes. Other pitfalls include lack of natural asymmetry and inability to study the membranes in non-equilibrium conditions. Despite this, fluorescence microscopy is used extensively in the field. For example, fluorophores conjugated to the cholera-toxin B-subunit, which binds to the raft constituent ganglioside GM1, are used extensively. Also used are lipophilic membrane dyes which either partition between rafts and the bulk membrane, or change their fluorescent properties in response to membrane phase. Laurdan is one of the prime examples of such a dye. Rafts may also be labeled by genetic expression of fluorescent fusion proteins such as Lck-GFP. Manipulation of cholesterol is one of the most widely used techniques for studying lipid rafts. Sequestration (using filipin, nystatin or amphotericin), depletion and removal (using methyl-β-cyclodextrin) and inhibition of cholesterol synthesis (using HMG-CoA reductase inhibitors) are ways cholesterol is manipulated in lipid raft studies. These studies allow for the observation of effects on neurotransmitter signaling upon reduction of cholesterol levels.
Sharma and colleagues used a combination of high-resolution imaging and mathematical modeling to provide the view that raft proteins are organized into high-density nanoclusters with radii ranging over 5–20 nm. Using measurements of fluorescence resonance energy transfer between the same probes (homo-FRET or fluorescence anisotropy), Sharma and colleagues reported that a fraction (20–40%) of GPI-anchored proteins are organized into high-density clusters of 4–5 nm radius, each consisting of a few molecules and different GPI-anchored proteins. To combat the problems of small size and dynamic nature, single particle and molecule tracking using cooled, sensitive CCD cameras and total internal reflection fluorescence (TIRF) microscopy is coming to prominence. This allows information on the diffusivity of particles in the membrane to be extracted, as well as revealing membrane corrals, barriers and sites of confinement. Other optical techniques are also used: fluorescence correlation and cross-correlation spectroscopy (FCS/FCCS) can be used to gain information on fluorophore mobility in the membrane, fluorescence resonance energy transfer (FRET) can detect when fluorophores are in close proximity, and optical tweezer techniques can give information on membrane viscosity. Not only optical techniques, but also scanning probe techniques like atomic force microscopy (AFM) or scanning ion conductance microscopy (SICM) can be used to detect the topological and mechanical properties of synthetic lipids or native cell membranes isolated by cell unroofing. Also used are dual polarisation interferometry and nuclear magnetic resonance (NMR), although fluorescence microscopy remains the dominant technique. In the future it is hoped that super-resolution microscopy such as stimulated emission depletion (STED) or various forms of structured illumination microscopy may overcome the problems imposed by the diffraction limit. Other techniques used in the analysis of lipid rafts include ELISA, western blotting, and FACS. Controversy The role of rafts in cellular signaling, trafficking, and structure has yet to be determined despite many experiments involving several different methods, and their very existence is controversial despite all the above. Arguments against the existence of lipid rafts include the following: First, a line tension should exist between the Lα and Lo phases. This line has been seen in model membranes, but has not been readily observed in cell systems. Second, there is no consensus on lipid raft size, which has been reported anywhere between 1 and 1,000 nanometers. Third, the time scale of lipid raft existence is unknown. If lipid rafts exist, they may only occur on a time scale that is irrelevant to biological processes. Fourth, the entire membrane may exist in the Lo phase. A first rebuttal to this point suggests that the Lo phase of the rafts is more tightly packed due to the intermolecular hydrogen bonding exhibited between sphingolipids and cholesterol that is not seen elsewhere. A second argument questions the effectiveness of the experimental design when disrupting lipid rafts. Pike and Miller discuss potential pitfalls of using cholesterol depletion to determine lipid raft function. They noted that most researchers were using acute methods of cholesterol depletion, which disrupt the rafts, but also disrupt another lipid known as PI(4,5)P2.
PI(4,5)P2 plays a large role in regulating the cell's cytoskeleton, and disrupting PI(4,5)P2 causes many of the same results as this type of cholesterol depletion, including lateral diffusion of the proteins in the membrane. Because the methods disrupt both rafts and PI(4,5)P2, Kwik et al. concluded that loss of a particular cellular function after cholesterol depletion cannot necessarily be attributed solely to lipid raft disruption, as other processes independent of rafts may also be affected. Finally, while lipid rafts are believed to be connected in some way to proteins, Edidin argues that proteins attract the lipids in the raft by interactions of proteins with the acyl chains on the lipids, and not the other way around. References External links Database of proteins involved in lipid rafts "Lipid Rafts, Signalling and the Cytoskeleton" at University of Edinburgh Satyajit Mayor's Seminar: "Membrane Rafts" Membrane biology Lipids
Lipid raft
[ "Chemistry" ]
5,611
[ "Biomolecules by chemical classification", "Membrane biology", "Organic compounds", "Molecular biology", "Lipids" ]
958,884
https://en.wikipedia.org/wiki/Lifestyle%20drug
Lifestyle drug is an imprecise term commonly applied to medications which treat non–life-threatening and non-painful conditions such as baldness, wrinkles, erectile dysfunction, or acne, which the speaker perceives as either not medical problems at all or as minor medical conditions relative to others. It is sometimes intended as a pejorative, bearing the implication that the scarce medical research resources allocated to develop such drugs were spent frivolously when they could have been better spent researching cures for more serious medical conditions. Proponents, however, point out that improving the patient's subjective quality of life has always been a primary concern of medicine, and argue that these drugs are doing just that. The term finds broad use in both media and scholarly journals. Concept and impact on society Lifestyle drugs have a direct impact on society, particularly in the developing world. The implications associated with the labeling of indications and the product sales of these lifestyle drugs are varied. Drugs can, over time, switch from 'lifestyle' to 'mainstream' use. Bioethics and medical policy debate Though no precise widely accepted definition or criteria are associated with the term, there is much debate within the fields of pharmacology and bioethics around the propriety of developing such drugs, particularly after the commercial debut of Viagra. The German government's health insurance scheme has denied insurance coverage for some Lifestyle-Medikamente ("lifestyle drugs") which it deems spurious. Critics of pharmaceutical firms claim that pharmaceutical firms actively medicalize; that is, they invent novel disorders and diseases which were not recognized as such before their "cures" could be profitably marketed, in effect pathologizing what were widely regarded as normal conditions of human existence. The consequences are said to include generally greater worries about health, misallocation of limited medical research resources to comparatively minor conditions while many serious diseases remain uncured, and needless health care expenditure. This medicalization of some elements of the human condition has significance, in principle, as a matter for political discourse or dialogue in civil society concerning values or morals. Social critics also question the propriety of devoting huge research budgets towards creating these drugs when far more dangerous diseases like cancer and AIDS remain uncured. It is sometimes claimed that lifestyle drugs amount to little more than medically sanctioned recreational drug use. Examples of lifestyle drugs Modafinil (when used off-label or for shift-work); Modafinil is indicated for the treatment of shift-work sleep disorder, sometimes dubbed a lifestyle condition. In this respect, modafinil is already recognized as a lifestyle drug by the FDA. Modafinil's off-label use for increasing productivity is another example of its use as a lifestyle drug. Modafinil has been used to support lifestyles requiring long and irregular working hours (such as in shift-work), frequent memory recall throughout the day (amongst students and academics), and unfatigued decision-making capabilities (such as amongst U.S. Air Force pilots, where modafinil is an approved "go pill"). Compared to other productivity drugs, users of modafinil for this purpose more often report possessing prescriptions for modafinil.
Finasteride (when used to treat balding); The use of DHT-blocking medications like finasteride for the treatment of balding without the intent to treat dangerous pathologies can be considered lifestyle-enhancing use of the medication. Minoxidil; The use of minoxidil and other topical vasodilating medications for the treatment of balding without the intent to treat dangerous pathologies can be considered lifestyle-enhancing use of the medication. References External links Drugs
Lifestyle drug
[ "Chemistry" ]
755
[ "Pharmacology", "Chemicals in medicine", "Drugs", "Products of chemical industry" ]
958,950
https://en.wikipedia.org/wiki/Elektronika
Elektronika, also spelt Electronika and Electronica (Электроника, "Electronics"), is the brand name used for many different electronic products built by factories belonging to the Soviet Ministry of Electronic Industry, including calculators, electronic watches, portable games, and radios. Many Elektronika designs were the result of efforts by Soviet engineers, who were working for the Soviet military–industrial complex but were challenged with producing consumer goods that were in great shortage in the Soviet Union. The brand is still in use in Belarus. Calculators Most notable is a line of calculators, which started production in 1968. The Elektronika calculators were produced in a variety of sizes and function sets, ranging from large, bulky four-function calculators to smaller models designed for use in schools operating on a special, safer 42V standard (like the MK-SCH-2). As time progressed, Elektronika calculators were produced that supported more advanced calculations, with some of the most recent models even offering full programmability and functionality similar to today's American-designed graphing calculators. The Elektronika brand is now used by the Novosibirsk RPN programmable calculators Elektronika MK-152 (:ru:Электроника МК-152) and Elektronika MK-161 (:ru:Электроника МК-161). Computers The following Elektronika computers used a Soviet Intel-compatible CPU: MS 1502, MS 1504 – XT clone KR-series (01/02/03/04) – mass production of the popular Russian 8-bit homebrew RK-86 (:ru:Радио 86РК) The following Elektronika computers used a Soviet CPU compatible with the PDP-11: Elektronika 60 UKNC DVK – clone of SM EVM, stripped down for mass production to satisfy general scientific and R&D needs BK-0010 and BK-0011M – stripped-down and low-cost version of the DVK, targeted at teenagers and home users Electronic games & toys Most Elektronika-branded electronic toys were Nintendo Game & Watch clones. These used the KB1013VK1-2 microprocessor, a Soviet clone of the Sharp SM-5A used in Game & Watch consoles. The vast majority of the Elektronika electronic toys had model names that start with IM (ИМ, Игра Микропроцессорная, a Russian acronym for "microprocessor-based game"). Some model names for Elektronika-branded clones start with IE (ИЭ, Игра Электронная, a Russian acronym for "electronic game"). The Elektronika electronic toys that had model names beginning with MG were manufactured by Angstrem and were designed for export with English packaging and inserts. The known models include: IM-01 Chess computer (1986) – Designed and manufactured by Svetlana IM-01T Chess computer (1992) – Improved version of the IM-01, designed and manufactured by Svetlana IM-02 Well, Just You Wait! (1984) – Nintendo EG-26 Egg, a variation of Mickey Mouse without the Disney license. IM-03 Mysteries of the Ocean (1986) – Nintendo OC-22 Octopus IM-04 Merry Cook (1989) – Nintendo FP-24 Chef IM-05 Chess computer (1989) – Improved version of the IM-01, designed and manufactured by Svetlana MG-09 Space Bridge (1989) – Nintendo FR-27 Fire IM-10 Ice Hockey (1988) – Ice hockey-themed clone of Nintendo EG-26 Egg IM-11 Lunokhod (1983) – Milton Bradley Big Trak programmable battery-powered toy tank IM-12 Winnie the Pooh (1991) – Nintendo CJ-93 Donkey Kong Jr.
panorama MG-13 Explorers of Space, also known as Space Scouts (1989) IM-15 Electronic Football – Tomy World Cup Soccer IM-18 Fowling (1989) IM-19 Biathlon IM-22 Monkey Goalkeeper, also known as Merry Footballer (1989) IM-23 Autoslalom (1989) / Car Slalom (1991) 24-01 Mickey Mouse (1984) – Nintendo MC-25 Mickey Mouse IM-26 Interchangeable. Display cartridges included IM-02 Well, Just You Wait!, IM-22 Merry Footballer, IM-23 Autoslalom, IM-10 Ice Hockey, and IM-32 Cat Fisherman (1991) – Bandai Digi Casse IM-27 Space game IM-28 Electronic quiz game IM-29 Chess Partner (1991) – Mattel Electronics Computer Chess IM-30 Orpheus synthesizer (1991) – Designed and manufactured by Svetlana IM-32 Cat Fisherman (1991) IM-37 Football IM-45 English learning computer IM-50 Space Flight (1992) MG-50 Amusing Arithmetics (1989) IM-55 Basketball Post-1992 versions: I-01 Car Slalom I-02 Merry Cook I-03 Space Bridge I-04 Fisher Tom-Cat I-05 Naval Combat I-06 Just You Wait! I-07 Frog Boaster I-08 Fowling I-09 Explorers of Space I-10 Biathlon I-11 Circus I-12 Hockey I-13 Merry Footballer I-14 Night Thieves I-15 Mysteries of the Ocean I-20 (option 1) Air Shooting Range (1994) – Nintendo BU-201 Spitball Sparky I-20 (option 2) Supercubes (1994) – Tetris Tape recorders (audio) Reel-to-reel 100S (1970, portable stereo) ТА1-003 Stereo (1980) 004 Stereo MPK 007 S (1987) Cassette 203-S (1980, portable stereo) 204-S (1984, stereo deck) MH-205 stereo (1985, car stereo player) 206-stereo 211-S (1983, portable stereo) 301 (1972, portable) 302, 302-1, 302-2 (1974 till 1990s, portable) 305 (1984, portable) 306 (1986, portable stereo) 311-S (1977, portable stereo) 321/322 (1978, portable) 323/324 (1981, portable) M-327 (1987, portable) M-334S (1990, portable stereo component system with detachable recorder M-332S) М-402S (1990, pocket stereo) Elektronika-mini (199?, pocket stereo) External links Museum of Soviet Calculators On the Web (MOSCOW) Collection of Elektronika watches Article on Elektronika watches Soviet Digital Electronics Museum References Science and technology in the Soviet Union Computing in the Soviet Union Soviet brands Electronics companies of the Soviet Union Ministry of the Electronics Industry (Soviet Union)
Elektronika
[ "Technology" ]
1,457
[ "Computing in the Soviet Union", "History of computing" ]
958,988
https://en.wikipedia.org/wiki/History%20of%20astrology
Astrological beliefs in a relationship between celestial observations and terrestrial events have influenced various aspects of human history, including world-views, language and many elements of culture. It has been argued that astrology began as a study as soon as human beings made conscious attempts to measure, record, and predict seasonal changes by reference to astronomical cycles. Early evidence of such practices appears as markings on bones and cave walls, which show that the lunar cycle was being noted as early as 25,000 years ago; the first step towards recording the Moon's influence upon tides and rivers, and towards organizing a communal calendar. With the Neolithic Revolution, new needs were also being met by the increasing knowledge of constellations, whose appearances in the night-time sky change with the seasons, thus allowing the rising of particular star-groups to herald annual floods or seasonal activities. By the 3rd millennium BCE, widespread civilisations had developed sophisticated understanding of celestial cycles, and are believed to have consciously oriented their temples to create alignment with the heliacal risings of the stars. There is scattered evidence to suggest that the oldest known astrological references are copies of texts made during this period, particularly in Mesopotamia. Two, from the Venus tablet of Ammisaduqa (compiled in Babylon around 1700 BC), are reported to have been made during the reign of king Sargon of Akkad (2334–2279 BC). Another, showing an early use of electional astrology, is ascribed to the reign of the Sumerian ruler Gudea of Lagash (c. 2144–2124 BC). However, there is controversy over whether they were genuinely recorded at the time or merely ascribed to ancient rulers by posterity. The oldest undisputed evidence of the use of astrology as an integrated system of knowledge is attributed to records that emerge from the first dynasty of Mesopotamia (1950–1651 BC). Among West Eurasian peoples, the earliest evidence for astrology dates from the 3rd millennium BC, with roots in calendrical systems used to predict seasonal shifts and to interpret celestial cycles as signs of divine communications. Until the 17th century, astrology was considered a scholarly tradition, and it helped drive the development of astronomy. It was commonly accepted in political and cultural circles, and some of its concepts were used in other traditional studies, such as alchemy, meteorology and medicine. By the end of the 17th century, emerging scientific concepts in astronomy, such as heliocentrism, undermined the theoretical basis of astrology, which subsequently lost its academic standing and became regarded as a pseudoscience. Empirical scientific investigation has shown that predictions based on these systems are not accurate. In the 20th century, astrology gained broader consumer popularity through the influence of regular mass media products, such as newspaper horoscopes. Babylonian astrology Babylonian astrology is the earliest recorded organized system of astrology, arising in the 2nd millennium BC. There is speculation that astrology of some form appeared in the Sumerian period in the 3rd millennium BC, but the isolated references to ancient celestial omens dated to this period are not considered sufficient evidence to demonstrate an integrated theory of astrology. The history of scholarly celestial divination is therefore generally reported to begin with late Old Babylonian texts, continuing through the Middle Babylonian and Middle Assyrian periods.
By the 16th century BC the extensive employment of omen-based astrology can be evidenced in the compilation of a comprehensive reference work known as Enuma Anu Enlil. Its contents consisted of 70 cuneiform tablets comprising 7,000 celestial omens. Texts from this time also refer to an oral tradition – the origin and content of which can only be speculated upon. At this time Babylonian astrology was solely mundane, concerned with the prediction of weather and political matters, and prior to the 7th century BC the practitioners' understanding of astronomy was fairly rudimentary. Astrological symbols likely represented seasonal tasks, and were used as a yearly almanac of listed activities to remind a community to do things appropriate to the season or weather (such as symbols representing times for harvesting, gathering shell-fish, fishing by net or line, sowing crops, collecting or managing water reserves, hunting, and seasonal tasks critical in ensuring the survival of children and young animals for the larger group). By the 4th century BC, their mathematical methods had progressed enough to calculate future planetary positions with reasonable accuracy, at which point extensive ephemerides began to appear. Babylonian astrology developed within the context of divination. A collection of 32 tablets with inscribed liver models, dating from about 1875 BC, are the oldest known detailed texts of Babylonian divination, and these demonstrate the same interpretational format as that employed in celestial omen analysis. Blemishes and marks found on the liver of the sacrificial animal were interpreted as symbolic signs which presented messages from the gods to the king. The gods were also believed to present themselves in the celestial images of the planets or stars with whom they were associated. Evil celestial omens attached to any particular planet were therefore seen as indications of dissatisfaction or disturbance of the god that planet represented. Such indications were met with attempts to appease the god and find manageable ways by which the god's expression could be realised without significant harm to the king and his nation. An astronomical report to King Esarhaddon concerning a lunar eclipse of January 673 BC shows how the ritualistic use of substitute kings, or substitute events, combined an unquestioning belief in magic and omens with a purely mechanical view that the astrological event must have some kind of correlate within the natural world. Ulla Koch-Westenholz, in her 1995 book Mesopotamian Astrology, argues that this ambivalence between a theistic and mechanic worldview defines the Babylonian concept of celestial divination as one which, despite its heavy reliance on magic, remains free of implications of targeted punishment with the purpose of revenge, and so "shares some of the defining traits of modern science: it is objective and value-free, it operates according to known rules, and its data are considered universally valid and can be looked up in written tabulations". Koch-Westenholz also establishes the most important distinction between ancient Babylonian astrology and other divinatory disciplines as being that the former was originally exclusively concerned with mundane astrology, being geographically oriented and specifically applied to countries, cities and nations, and almost wholly concerned with the welfare of the state and the king as the governing head of the nation. Mundane astrology is therefore known to be one of the oldest branches of astrology.
It was only with the gradual emergence of horoscopic astrology, from the 6th century BC, that astrology developed the techniques and practice of natal astrology. Hellenistic Egypt In 525 BC Egypt was conquered by the Persians, so there is likely to have been some Mesopotamian influence on Egyptian astrology. Arguing in favour of this, historian Tamsyn Barton gives an example of what appears to be Mesopotamian influence on the Egyptian zodiac, which shared two signs – the Balance and the Scorpion, as evidenced in the Dendera Zodiac (in the Greek version the Balance was known as the Scorpion's Claws). After the occupation by Alexander the Great in 332 BC, Egypt came under Hellenistic rule and influence. The city of Alexandria was founded by Alexander after the conquest and during the 3rd and 2nd centuries BC, the Ptolemaic scholars of Alexandria were prolific writers. It was in Ptolemaic Alexandria that Babylonian astrology was mixed with the Egyptian tradition of Decanic astrology to create Horoscopic astrology. This contained the Babylonian zodiac with its system of planetary exaltations, the triplicities of the signs and the importance of eclipses. Along with this it incorporated the Egyptian concept of dividing the zodiac into thirty-six decans of ten degrees each, with an emphasis on the rising decan, the Greek system of planetary Gods, sign rulership and four elements. The decans were a system of time measurement according to the constellations. They were led by the constellation Sothis or Sirius. The risings of the decans in the night were used to divide the night into 'hours'. The rising of a constellation just before sunrise (its heliacal rising) was considered the last hour of the night. Over the course of the year, each constellation rose just before sunrise for ten days. When they became part of the astrology of the Hellenistic Age, each decan was associated with ten degrees of the zodiac. Texts from the 2nd century BC list predictions relating to the positions of planets in zodiac signs at the time of the rising of certain decans, particularly Sothis. The earliest Zodiac found in Egypt dates to the 1st century BC, the Dendera Zodiac. Particularly important in the development of horoscopic astrology was the Greco-Roman astrologer and astronomer Ptolemy, who lived in Alexandria in Roman Egypt. Ptolemy's work the Tetrabiblos laid the basis of the Western astrological tradition, and as a source of later reference is said to have "enjoyed almost the authority of a Bible among the astrological writers of a thousand years or more". It was one of the first astrological texts to be circulated in Medieval Europe after being translated from Arabic into Latin by Plato of Tivoli (Tiburtinus) in Spain, 1138. According to Firmicus Maternus (4th century), the system of horoscopic astrology was given early on to an Egyptian pharaoh named Nechepso and his priest Petosiris. The Hermetic texts were also put together during this period and Clement of Alexandria, writing in the Roman era, demonstrates the degree to which astrologers were expected to have knowledge of the texts in his description of Egyptian sacred rites: "This is principally shown by their sacred ceremonial. For first advances the Singer, bearing some one of the symbols of music. For they say that he must learn two of the books of Hermes, the one of which contains the hymns of the gods, the second the regulations for the king's life.
And after the Singer advances the Astrologer, with a horologe in his hand, and a palm, the symbols of astrology. He must have the astrological books of Hermes, which are four in number, always in his mouth." Greece and Rome The conquest of Asia by Alexander the Great exposed the Greeks to the cultures and cosmological ideas of Syria, Babylon, Persia and central Asia. Greek overtook cuneiform script as the international language of intellectual communication and part of this process was the transmission of astrology from cuneiform to Greek. Sometime around 280 BC, Berossus, a priest of Bel from Babylon, moved to the Greek island of Kos in order to teach astrology and Babylonian culture to the Greeks. With this, what historian Nicholas Campion calls "the innovative energy" in astrology moved west to the Hellenistic world of Greece and Egypt. According to Campion, the astrology that arrived from the Eastern World was marked by its complexity, with different forms of astrology emerging. By the 1st century BC two varieties of astrology were in existence, one that required the reading of horoscopes in order to establish precise details about the past, present and future; the other being theurgic (literally meaning 'god-work'), which emphasised the soul's ascent to the stars. While they were not mutually exclusive, the former sought information about the life, while the latter was concerned with personal transformation, where astrology served as a form of dialogue with the Divine. As with much else, Greek influence played a crucial role in the transmission of astrological theory to Rome. However, the earliest references demonstrating its arrival in Rome reveal its initial influence upon the lower orders of society, and display concern about uncritical recourse to the ideas of Babylonian 'star-gazers'. Among the Greeks and Romans, Babylonia (also known as Chaldea) became so identified with astrology that 'Chaldean wisdom' came to be a common synonym for divination using planets and stars. The first definite reference to astrology comes from the work of the orator Cato, who in 160 BC composed a treatise warning farm overseers against consulting with Chaldeans. The 2nd-century Roman poet Juvenal, in his satirical attack on the habits of Roman women, also complains about the pervasive influence of Chaldeans, despite their lowly social status, saying "Still more trusted are the Chaldaeans; every word uttered by the astrologer they will believe has come from Hammon's fountain, ... nowadays no astrologer has credit unless he has been imprisoned in some distant camp, with chains clanking on either arm". One of the first astrologers to bring Hermetic astrology to Rome was Thrasyllus, who, in the first century AD, acted as the astrologer for the emperor Tiberius. Tiberius was the first emperor reported to have had a court astrologer, although his predecessor Augustus had also used astrology to help legitimise his Imperial rights. In the second century AD, the astrologer Claudius Ptolemy was so obsessed with getting horoscopes accurate that he began the first attempt to make an accurate world map (maps before this were more relativistic or allegorical) so that he could chart the relationship between the person's birthplace and the heavenly bodies. While doing so, he has been credited with coining the term "geography". Even though some use of astrology by the emperors appears to have happened, there was also a prohibition on astrology to a certain extent as well.
In the 1st century AD, Publius Rufus Anteius was accused of the crime of funding the banished astrologer Pammenes, and requesting his own horoscope and that of then emperor Nero. For this crime, Nero forced Anteius to commit suicide. At this time, astrology was likely to result in charges of magic and treason. Cicero's De divinatione (44 BC), which rejects astrology and other allegedly divinatory techniques, is a fruitful historical source for the conception of scientificity in Roman classical Antiquity. The Pyrrhonist philosopher Sextus Empiricus compiled the ancient arguments against astrology in his book Against the Astrologers. Islamic world Astrology was taken up enthusiastically by Islamic scholars following the fall of Alexandria to the Arabs in the 7th century, and the founding of the Abbasid empire in the 8th century. The second Abbasid caliph, Al Mansur (754–775), founded the city of Baghdad to act as a centre of learning, and included in its design a library-translation centre known as Bayt al-Hikma 'Storehouse of Wisdom', which continued to receive development from his heirs and was to provide a major impetus for Arabic translations of Hellenistic astrological texts. The early translators included Mashallah, who helped to elect the time for the foundation of Baghdad, and Sahl ibn Bishr (a.k.a. Zael), whose texts were directly influential upon later European astrologers such as Guido Bonatti in the 13th century, and William Lilly in the 17th century. Knowledge of Arabic texts started to become imported into Europe during the Latin translations of the 12th century. In the 9th century, the Persian astrologer Albumasar was regarded as one of the greatest astrologers of his time. His practical manuals for training astrologers profoundly influenced Muslim intellectual history and, through translations, that of western Europe and Byzantium in the 10th century. Albumasar's Introductorium in Astronomiam was one of the most important sources for the recovery of Aristotle for medieval European scholars. Another was the Persian mathematician, astronomer, astrologer and geographer Al Khwarizmi. The Arabs greatly increased the knowledge of astronomy, and many of the star names that are commonly known today, such as Aldebaran, Altair, Betelgeuse, Rigel and Vega, retain the legacy of their language. They also developed the list of Hellenistic lots to the extent that they became historically known as Arabic parts, for which reason it is often wrongly claimed that the Arabic astrologers invented their use, whereas they are clearly known to have been an important feature of Hellenistic astrology. During the advance of Islamic science some of the practices of astrology were refuted on theological grounds by astronomers such as Al-Farabi (Alpharabius), Ibn al-Haytham (Alhazen) and Avicenna. Their criticisms argued that the methods of astrologers were conjectural rather than empirical, and conflicted with orthodox religious views of Islamic scholars through the suggestion that the Will of God can be precisely known and predicted in advance. Such refutations mainly concerned 'judicial branches' (such as horary astrology), rather than the more 'natural branches' such as medical and meteorological astrology, these being seen as part of the natural sciences of the time.
For example, Avicenna's 'Refutation against astrology' (Resāla fī ebṭāl aḥkām al-nojūm) argues against the practice of astrology while supporting the principle of planets acting as the agents of divine causation which express God's absolute power over creation. Avicenna considered that the movement of the planets influenced life on earth in a deterministic way, but argued against the capability of determining the exact influence of the stars. In essence, Avicenna did not refute the essential dogma of astrology, but denied our ability to understand it to the extent that precise and fatalistic predictions could be made from it. Medieval and Renaissance Europe While astrology in the East flourished following the break up of the Roman world, with Indian, Persian and Islamic influences coming together and undergoing intellectual review through an active investment in translation projects, Western astrology in the same period had become "fragmented and unsophisticated ... partly due to the loss of Greek scientific astronomy and partly due to condemnations by the Church." Translations of Arabic works into Latin started to make their way to Spain by the late 10th century, and in the 12th century the transmission of astrological works from Arabia to Europe "acquired great impetus". By the 13th century astrology had become a part of everyday medical practice in Europe. Doctors combined Galenic medicine (inherited from the Greek physician Galen, AD 129–216) with studies of the stars. By the end of the 1500s, physicians across Europe were required by law to calculate the position of the Moon before carrying out complicated medical procedures, such as surgery or bleeding. Influential works of the 13th century include those of the British monk Johannes de Sacrobosco (c. 1195–1256) and the Italian astrologer Guido Bonatti from Forlì (Italy). Bonatti served the communal governments of Florence, Siena and Forlì and acted as advisor to Frederick II, Holy Roman Emperor. His astrological text-book Liber Astronomiae ('Book of Astronomy'), written around 1277, was reputed to be "the most important astrological work produced in Latin in the 13th century". Dante Alighieri immortalised Bonatti in his Divine Comedy (early 14th century) by placing him in the eighth Circle of Hell, a place where those who would divine the future are forced to have their heads turned around (to look backwards instead of forwards). In medieval Europe, a university education was divided into seven distinct areas, each represented by a particular planet and known as the seven liberal arts. Dante attributed these arts to the planets. As the arts were seen as operating in ascending order, so were the planets in decreasing order of planetary speed: grammar was assigned to the Moon, the quickest moving celestial body, dialectic was assigned to Mercury, rhetoric to Venus, music to the Sun, arithmetic to Mars, geometry to Jupiter and astrology/astronomy to the slowest moving body, Saturn. Medieval writers used astrological symbolism in their literary themes. For example, Dante's Divine Comedy builds varied references to planetary associations within his described architecture of Hell, Purgatory and Paradise (such as the seven layers of Purgatory's mountain purging the seven cardinal sins that correspond to astrology's seven classical planets). Similar astrological allegories and planetary themes are pursued through the works of Geoffrey Chaucer.
Chaucer's astrological passages are particularly frequent and knowledge of astrological basics is often assumed through his work. He knew enough of his period's astrology and astronomy to write a Treatise on the Astrolabe for his son. He pinpoints the early spring season of the Canterbury Tales in the opening verses of the prologue by noting that the Sun "hath in the Ram his halfe cours yronne". He makes the Wife of Bath refer to "sturdy hardiness" as an attribute of Mars, and associates Mercury with "clerkes". In the early modern period, astrological references are also to be found in the works of William Shakespeare and John Milton. One of the earliest English astrologers to leave details of his practice was Richard Trewythian (b. 1393). His notebook demonstrates that he had a wide range of clients, from all walks of life, and indicates that engagement with astrology in 15th-century England was not confined to those within learned, theological or political circles. During the Renaissance, court astrologers would complement their use of horoscopes with astronomical observations and discoveries. Many individuals now credited with having overturned the old astrological order, such as Tycho Brahe, Galileo Galilei and Johannes Kepler, were themselves practicing astrologers. At the end of the Renaissance the confidence placed in astrology diminished, with the breakdown of Aristotelian Physics and rejection of the distinction between the celestial and sublunar realms, which had historically acted as the foundation of astrological theory. Keith Thomas writes that although heliocentrism is consistent with astrological theory, 16th and 17th century astronomical advances meant that "the world could no longer be envisaged as a compact inter-locking organism; it was now a mechanism of infinite dimensions, from which the hierarchical subordination of earth to heaven had irrefutably disappeared". Initially, amongst the astronomers of the time, "scarcely anyone attempted a serious refutation in the light of the new principles" and in fact astronomers "were reluctant to give up the emotional satisfaction provided by a coherent and interrelated universe". By the 18th century the intellectual investment which had previously maintained astrology's standing was largely abandoned. India The earliest use of astrology in India is recorded during the Vedic period. Astrology, or jyotiṣa, is listed as a Vedanga, or branch of the Vedas of the Vedic religion. The only work of this class to have survived is the Vedanga Jyotisha, which contains rules for tracking the motions of the sun and the moon in the context of a five-year intercalation cycle. The date of this work is uncertain, as its late style of language and composition, consistent with the last centuries BC, albeit pre-Mauryan, conflicts with some internal evidence of a much earlier date in the 2nd millennium BC. Indian astronomy and astrology developed together. The earliest treatise on Jyotisha, the Bhrigu Samhita, was compiled by the sage Bhrigu during the Vedic era. The sage Bhrigu is also called the 'Father of Hindu Astrology', and is one of the venerated Saptarishi or seven Vedic sages. The Saptarishis are also symbolized by the seven main stars in the Ursa Major constellation. The documented history of Jyotisha in the subsequent newer sense of modern horoscopic astrology is associated with the interaction of Indian and Hellenistic cultures through the Greco-Bactrian and Indo-Greek Kingdoms.
The oldest surviving treatises, such as the Yavanajataka or the Brihat-Samhita, date to the early centuries AD. The oldest astrological treatise in Sanskrit is the Yavanajataka ("Sayings of the Greeks"), a versification by Sphujidhvaja in 269/270 AD of a now lost translation of a Greek treatise by Yavanesvara during the 2nd century AD under the patronage of the Indo-Scythian king Rudradaman I of the Western Satraps. Written on pages of tree bark, the Bhrigu Samhita (Compilation) is said to contain five million horoscopes comprising all who have lived in the past or will live in the future. The first named authors writing treatises on astronomy are from the 5th century AD, the date when the classical period of Indian astronomy can be said to begin. Besides the theories of Aryabhata in the Aryabhatiya and the lost Arya-siddhānta, there is the Pancha-Siddhāntika of Varahamihira. China The Chinese astrological system is based on native astronomy and calendars, and its significant development is tied to that of native astronomy, which came to flourish during the Han dynasty (2nd century BC – 2nd century AD). Chinese astrology has a close relation with Chinese philosophy (the theory of the three harmonies: heaven, earth and man) and uses the principles of yin and yang, and concepts that are not found in Western astrology, such as the wu xing teachings, the 10 Celestial Stems, the 12 Earthly Branches, the lunisolar calendar (moon calendar and sun calendar), and the time calculation after year, month, day and shichen (時辰). Astrology was traditionally regarded highly in China, and Confucius is said to have treated astrology with respect, saying: "Heaven sends down its good or evil symbols and wise men act accordingly". The 60-year cycle combining the five elements with the twelve animal signs of the zodiac has been documented in China since at least the time of the Shang (Shing or Yin) dynasty (c. 1766 BC – c. 1050 BC). Oracle bones have been found dating from that period with the date according to the 60-year cycle inscribed on them, along with the name of the diviner and the topic being divined. Astrologer Tsou Yen lived around 300 BC, and wrote: "When some new dynasty is going to arise, heaven exhibits auspicious signs for the people". There is debate as to whether Babylonian astrology influenced early development of Chinese astrology. Later in the 6th century, the translation of the Mahāsaṃnipāta Sūtra brought the Babylonian system to China. Though it did not displace Chinese astrology, it was referenced in several poems. Mesoamerica The calendars of Pre-Columbian Mesoamerica are based upon a system which had been in common use throughout the region, dating back to at least the 6th century BC. The earliest calendars were employed by peoples such as the Zapotecs and Olmecs, and later by such peoples as the Maya, Mixtec and Aztecs. Although the Mesoamerican calendar did not originate with the Maya, their subsequent extensions and refinements to it were the most sophisticated. Along with those of the Aztecs, the Maya calendars are the best-documented and most completely understood. The distinctive Mayan calendar used two main systems, one plotting the solar year of 365 days, which governed the planting of crops and other domestic matters; the other, called the Tzolkin, of 260 days, which governed ritual use. Each was linked to an elaborate astrological system to cover every facet of life.
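Both the Chinese sexagenary cycle and the Maya Calendar Round described in the next section rest on the same arithmetic: two short cycles running in parallel realign only after their least common multiple. A minimal sketch in Python (standard library only) checking the figures quoted in this article:

from math import lcm  # available in Python 3.9+

# Chinese astrology: 10 Celestial Stems paired with 12 Earthly Branches
# repeat together every lcm(10, 12) steps -- the 60-year cycle.
print(lcm(10, 12))   # 60

# Maya calendars: the 365-day solar count and the 260-day Tzolkin
# realign after lcm(365, 260) days -- the 52-year Calendar Round.
days = lcm(365, 260)
print(days)          # 18980 days
print(days // 365)   # 52 solar years
print(days // 260)   # 73 Tzolkin rounds in the same period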
On the fifth day after the birth of a boy, the Mayan astrologer-priests would cast his horoscope to see what his profession was to be: soldier, priest, civil servant or sacrificial victim. A 584-day Venus cycle was also maintained, which tracked the appearance and conjunctions of Venus. Venus was seen as a generally inauspicious and baleful influence, and Mayan rulers often planned the beginning of warfare to coincide with when Venus rose. There is evidence that the Maya also tracked the movements of Mercury, Mars and Jupiter, and possessed a zodiac of some kind. The Mayan name for the constellation Scorpio was also 'scorpion', while the name of the constellation Gemini was 'peccary'. There is some evidence for other constellations being named after various beasts. The most famous Mayan astrological observatory still intact is the Caracol observatory in the ancient Mayan city of Chichen Itza in modern-day Mexico. The Aztec calendar shares the same basic structure as the Mayan calendar, with two main cycles of 365 days and 260 days. The 260-day calendar was called Tonalpohualli and was used primarily for divinatory purposes. Like the Mayan calendar, these two cycles formed a 52-year 'century', sometimes called the Calendar Round. See also Astrology and science Classical planets in Western alchemy Jewish views on astrology List of astrological traditions, types, and systems Worship of heavenly bodies Notes Sources Nicholas Campion, A History of Western Astrology Vol. 2, The Medieval and Modern Worlds, Continuum 2009. External links Astrology Obsolete scientific theories
History of astrology
[ "Astronomy" ]
6,047
[ "History of astrology", "History of astronomy" ]
958,996
https://en.wikipedia.org/wiki/Messier%2015
Messier 15 or M15 (also designated NGC 7078 and sometimes known as the Great Pegasus Cluster) is a globular cluster in the constellation Pegasus. It was discovered by Jean-Dominique Maraldi in 1746 and included in Charles Messier's catalogue of comet-like objects in 1764. At an estimated age of around 12 billion years, it is one of the oldest known globular clusters. Characteristics M15 is about 35,700 light-years from Earth, and 175 light-years in diameter. It has an absolute magnitude of −9.2, which translates to a total luminosity of 360,000 times that of the Sun. Messier 15 is one of the most densely packed globulars known in the Milky Way galaxy. Its core has undergone a contraction known as "core collapse" and it has a central density cusp with an enormous number of stars surrounding what may be a central black hole. Home to over 100,000 stars, the cluster is notable for containing a large number of variable stars (112) and pulsars (8), including one double neutron star system, M15-C. It also contains Pease 1, the first planetary nebula discovered within a globular cluster, in 1928. Just three others have been found in globular clusters since then. Amateur astronomy At magnitude 6.2, M15 approaches naked-eye visibility under good conditions and can be observed with binoculars or a small telescope, appearing as a fuzzy star. Telescopes with a larger aperture (at least 6 in. (150 mm)) will start to reveal individual stars, the brightest of which are of magnitude +12.6. The cluster appears 18 arc minutes in size (three tenths of a degree across). M15 is around 4° WNW of the brightest star of Pegasus, Epsilon Pegasi. X-ray sources The Earth-orbiting satellites Uhuru and Chandra X-ray Observatory have detected two bright X-ray sources in this cluster: Messier 15 X-1 (4U 2129+12) and Messier 15 X-2. The former appears to be the first astronomical X-ray source detected in Pegasus. Gallery See also List of Messier objects X-ray astronomy References External links Messier 15, SEDS Messier pages Messier 15, Galactic Globular Clusters Database page Globular Cluster Photometry With the Hubble Space Telescope. V. WFPC Study of M15's Central Density Cusp Wikisky.org SDSS image of M15 Messier 015 Astronomical X-ray sources X-ray astronomy
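The luminosity and diameter quoted above can be sanity-checked from the absolute magnitude and angular size. A rough sketch, assuming a solar absolute magnitude of about 4.74 and the small-angle approximation (the results land near, not exactly on, the quoted figures, as expected for rounded catalogue values):

import math

# Luminosity from absolute magnitude: L/Lsun = 10**((Msun - M) / 2.5)
M_SUN = 4.74                       # assumed solar absolute magnitude
M_M15 = -9.2                       # from the article
L = 10 ** ((M_SUN - M_M15) / 2.5)
print(f"{L:.2e}")                  # ~3.8e5, consistent with the quoted 360,000

# Physical diameter from angular size: D = d * theta (small angle)
d_ly = 35700                       # distance in light-years, from the article
theta = math.radians(18 / 60)      # 18 arcminutes in radians
print(round(d_ly * theta))         # ~187 ly, same order as the quoted 175 ly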
Messier 15
[ "Astronomy" ]
542
[ "Pegasus (constellation)", "Constellations", "X-ray astronomy", "Astronomical X-ray sources", "Astronomical objects", "Astronomical sub-disciplines" ]
959,018
https://en.wikipedia.org/wiki/Omega%20Nebula
The Omega Nebula is an H II region in the constellation Sagittarius. It was discovered by Philippe Loys de Chéseaux in 1745. Charles Messier catalogued it in 1764. It lies among some of the richest starfields of the Milky Way, in the northern two-thirds of Sagittarius. This feature is also known as the Swan Nebula, Checkmark Nebula, Lobster Nebula, and the Horseshoe Nebula, and catalogued as Messier 17 or M17 or NGC 6618. Characteristics The Omega Nebula is between 5,000 and 6,000 light-years from Earth and it spans some 15 light-years in diameter. The cloud of interstellar matter of which this nebula is a part is roughly 40 light-years in diameter and has a mass of 30,000 solar masses. The total mass of the Omega Nebula is an estimated 800 solar masses. It is considered one of the brightest and most massive star-forming regions of our galaxy. Its local geometry is similar to that of the Orion Nebula, except that it is viewed edge-on rather than face-on. The open cluster NGC 6618 lies embedded in the nebulosity and causes the gases of the nebula to shine due to radiation from these hot, young stars; however, the actual number of stars in the nebula is much higher – up to 800, 100 of spectral type earlier than B9, and 9 of spectral type O, plus over a thousand stars in formation in its outer regions. It is also one of the youngest clusters known, with an age of just 1 million years. The luminous blue variable HD 168607, in the south-east part of the nebula, is generally assumed to be associated with it; its close neighbor, the blue hypergiant HD 168625, may be too. The Swan portion of M17, the Omega Nebula in the Sagittarius nebulosity, is said to resemble a barber's pole. Early research The first attempt to accurately draw the nebula (as part of a series of sketches of nebulae) was made by John Herschel in 1833, and published in 1836. A second, more detailed sketch was made during his visit to South Africa in 1837. The nebula was also studied by Johann von Lamont and separately by an undergraduate at Yale College, Mr Mason, starting from around 1836. Herschel published his 1837 sketch in 1847. Sketches were also made by William Lassell in 1862 using his four-foot telescope at Malta, and by M. Trouvelot from Cambridge, Massachusetts, and Edward Singleton Holden in 1875 using the twenty-six inch Clark refractor at the United States Naval Observatory. Observations by SOFIA In January 2020, the Stratospheric Observatory for Infrared Astronomy (SOFIA) provided new insights into the Omega Nebula. SOFIA's composite image revealed that blue areas (20 microns) near the center indicate gas heated by massive stars, while green areas (37 microns) trace dust warmed by massive stars and newborn stars. Nine previously unseen protostars were discovered, primarily in the southern regions. Red areas near the edges represent cold dust detected by the Herschel Space Telescope (70 microns), and the white star field was observed by the Spitzer Space Telescope (3.6 microns). These observations suggest that parts of the nebula formed separately, contributing to its distinctive swan-like shape. Gallery See also List of Messier objects Messier object New General Catalogue References External links Messier 17, SEDS Messier pages Omega Nebula at ESA/Hubble Omega Nebula (Messier 17) at Constellation Guide Facts about Omega Nebula Carina–Sagittarius Arm H II regions Messier objects NGC objects Sagittarius (constellation) Sharpless objects
Star-forming regions
Omega Nebula
[ "Astronomy" ]
778
[ "Sagittarius (constellation)", "Constellations" ]
959,261
https://en.wikipedia.org/wiki/Perpetual%20stew
A perpetual stew, also known as forever soup, hunter's pot, or hunter's stew, is a pot into which foodstuffs are placed and cooked continuously. The pot is never or rarely emptied all the way, and ingredients and liquid are replenished as necessary. Such foods can continue cooking for decades or longer if properly maintained. The concept is often a common element in descriptions of medieval inns. Foods prepared in a perpetual stew have been described as being flavorful due to the manner in which the ingredients blend together. Various ingredients can be used in a perpetual stew, such as root vegetables, tubers (potatoes, yams, etc.), and various meats. Historical examples Perpetual stews are speculated to have been common in medieval cuisine, often as pottage or pot-au-feu. A batch of pot-au-feu was claimed by one writer to have been maintained as a perpetual stew in Perpignan from the 15th century until World War II, when it ran out of ingredients to keep the stew going due to the German occupation. Modern examples The tradition of perpetual stew remains prevalent in South and East Asian countries. Notable examples include the beef and goat noodle soup served by Wattana Panich in Bangkok, Thailand, which has been cooking continuously for decades, and the oden broth from Otafuku in Asakusa, Japan, which has served the same broth daily since 1945. Between August 2014 and April 2015, a New York restaurant served a master stock in the style of a perpetual stew for over eight months. In July 2023, a "Perpetual Stew Club" organized by social media personality Annie Rauwerda gained headlines for holding weekly gatherings in Bushwick, Brooklyn, to consume perpetual stew. Hundreds attended the event and brought their own ingredients to contribute to the stew. The stew lasted for 60 days. See also List of stews Master stock Ship of Theseus References Medieval cuisine Stews
Perpetual stew
[ "Physics" ]
396
[ "Spacetime", "Duration", "Physical quantities", "Time" ]
959,283
https://en.wikipedia.org/wiki/Satellite%20%28biology%29
A satellite is a subviral agent that depends on the coinfection of a host cell with a helper virus for its replication. Satellites can be divided into two major classes: satellite viruses and satellite nucleic acids. Satellite viruses, which are most commonly associated with plants, are also found in mammals, arthropods, and bacteria. They encode structural proteins to enclose their genetic material, which are therefore distinct from the structural proteins of their helper viruses. Satellite nucleic acids, in contrast, do not encode their own structural proteins, but instead are encapsulated by proteins encoded by their helper viruses. The genomes of satellites range upward from 359 nucleotides in length for satellite tobacco ringspot virus RNA (STobRV). Most viruses have the capability to use host enzymes or their own replication machinery to independently replicate their own viral RNA. Satellites, in contrast, are completely dependent on a helper virus for replication. The symbiotic relationship between a satellite and a helper virus to catalyze the replication of a satellite genome is also dependent on the host to provide components like replicases to carry out replication. A satellite virus of mamavirus that inhibits the replication of its host has been termed a virophage. However, the usage of this term remains controversial due to the lack of fundamental differences between virophages and classical satellite viruses. History and discovery The tobacco necrosis virus led to the discovery of the first satellite virus in 1962, when scientists found a second, dependent agent that nonetheless had the components to make its own protein shell. A few years later, in 1969, scientists discovered another symbiotic relationship, between the tobacco ringspot nepovirus (TobRV) and a satellite virus. Satellite RNAs are thought to have emerged either from the genome of the host or from that of co-infecting agents, and to spread via transmission vectors. A satellite virus important to human health that demonstrates the need for co-infection to replicate and infect within a host is the virus that causes hepatitis D. Hepatitis D or delta virus (HDV) was discovered in 1977 by Mario Rizzetto and is differentiated from hepatitis A, B, and C because it requires viral particles from hepatitis B virus (HBV) to replicate and infect liver cells. HBV provides a surface antigen, HBsAg, which is utilized by HDV to create a superinfection that can result in liver failure. HDV is found all over the globe but is most prevalent in Africa, the Middle East and southern Italy. Classification The classification of subviral agents is ongoing. The following uses an outline for subviral agents in a 2011 ICTV report. Many of the taxa have since been assigned more formal names in 2019, so these are included when possible. Satellite viruses Some satellite viruses have been assigned a taxon. The following reflects the results of a 2015 proposal that has since been accepted (Taxoprop 2015.009a).
Single-stranded RNA satellite viruses (unassigned to a family) Albetovirus – Tobacco necrosis satellite virus 1, 2, and C Aumaivirus – Maize white line mosaic satellite virus Papanivirus – Panicum mosaic satellite virus Virtovirus – Tobacco mosaic satellite virus, aka Tobacco necrosis satellite virus Family Sarthroviridae Macronovirus – Macrobrachium satellite virus 1 (extra small virus) (unnamed genus) – Nilaparvata lugens commensal X virus (unnamed genus) – Chronic bee-paralysis satellite virus Double-stranded DNA satellite viruses Family Lavidaviridae – Virophages Sputnik virophage Zamilon virophage Mavirus virophage Organic Lake virophage Single-stranded DNA satellite viruses Genus Dependoparvovirus – Adeno-associated virus group Satellite nucleic acids The following may not be comprehensive in its ICTV coverage. The nomenclature for satellite RNAs is to prefix the host virus name with "sat". Satellite-like nucleic acids resemble satellite nucleic acids, in that they replicate with the aid of helper viruses. However, they differ in that they can encode functions that can contribute to the success of their helper viruses; while they are sometimes considered to be genomic elements of their helper viruses, they are not always found within their helper viruses. Single-stranded satellite DNAs Family Alphasatellitidae (encoding a replication initiator protein) Family Tolecusatellitidae Genus Betasatellite (encoding a pathogenicity determinant βC1) Genus Deltasatellite (appears defective in βC1, but forms its own group) Double-stranded satellite RNAs Saccharomyces cerevisiae virus satellite M Saccharomyces cerevisiae M1 virus satellite Saccharomyces cerevisiae M2 virus satellite Saccharomyces cerevisiae M28 virus satellite Ustilago maydis virus H satellite M Ustilago maydis M-P1 virus satellite Ustilago maydis M-P4 virus satellite Ustilago maydis M-P6 virus satellite Trichomonas vaginalis T1 virus satellite Partitiviridae-associated virus satellites dsRNA1 dsRNA1 Zygosaccharomyces bailii virus satellite M / Zybavirus balii satellite M Single-stranded satellite RNAs Large linear satellite RNAs Arabis mosaic virus large satellite RNA Bamboo mosaic virus satellite RNA (satBaMV) Chicory yellow mottle virus large satellite RNA Grapevine Bulgarian latent virus satellite RNA Grapevine fanleaf virus satellite RNA Myrobalan latent ringspot virus satellite RNA Tomato black ring virus satellite RNA Beet ringspot virus satellite RNA Beet necrotic yellow vein virus RNA5 Small linear satellite RNAs Cucumber mosaic virus satellite RNA Cymbidium ringspot virus satellite RNA Pea enation mosaic virus satellite RNA Groundnut rosette virus satellite RNA Panicum mosaic virus small satellite RNA Peanut stunt virus satellite RNA Turnip crinkle virus satellite RNA Tomato bushy stunt virus satellite RNA, B10 Tomato bushy stunt virus satellite RNA, B1 Tobacco bushy top virus satellite RNA Circular satellite RNAs or "virusoids" Arabis mosaic virus small satellite RNA Tobacco ringspot virus satellite RNA (satTRsV) (the above two form a clade) Chicory yellow mottle virus satellite RNA (satCYMoV) Solanum nodiflorum mottle virus satellite RNA Subterranean clover mottle virus satellite RNA Velvet tobacco mottle virus satellite RNA (the above four form a clade) Lucerne transient streak virus satellite RNA (satLTSV) Cereal yellow dwarf virus-RPV satellite RNA Cherry small circular viroid-like RNA Realm Ribozyviria / Family Kolmioviridae – Deltavirus-like satellite-like RNAs Genus Deltavirus – Hepadnavirus-associated satellite-like RNAs
Polerovirus-associated RNAs See also Virus Virusoid Viroid Virophage References External links ICTV Subcellular Life Forms Virology Subviral agents
Satellite (biology)
[ "Biology" ]
1,463
[ "Viruses", "Satellite viruses" ]
959,404
https://en.wikipedia.org/wiki/Electrofax
Electrofax was a type of electrostatic copying process. It involved electrostatic printer and copier technology, where an image was formed directly on the paper, instead of first on a drum and then transferred to paper, as it would be in xerography. It was used in the United States from the 1950s through the 1980s. The paper used in this process was coated with a zinc oxide powder, adhered with a resin, enabling it to hold an electrostatic charge and absorb toner, so as to form an image and allow the evaporation of toner dispersants. Users of electrofax machines purchased paper with the coating already applied. Copiers typically feed paper from a roll, where it is given a static electric charge, after which it is exposed to light reflected from the original document and focused through a lens; zinc oxide particles either preserve or discharge the electric charge, depending on the amount of light reaching them. After exposure, the paper is passed through a toner station: toner is typically carbon black suspended in an organic liquid known as a dispersant, then spread over the paper. Black toner particles adhere to areas on the paper that remain charged; discharged areas do not attract toner particles. A knife then cuts the paper to the proper length (typically letter or legal size). The now-separate sheet of paper passes out of the toner station, where excess dispersant is wiped off. Typically, the paper is finally sent to an output tray where any remaining dispersant evaporates, leaving copies with a faint "kerosene" odor. In printers and plotters, paper is typically electrostatically charged by passing it over a bar containing hundreds or thousands of charging contacts. As paper passes over it, an image is formed by either applying or removing a charge at each individual contact. The result is a grid of charged dots on the paper. Toner is then applied as described above. In the early 1950s, this technology was first developed at RCA (Radio Corporation of America). Subsequently, many office machine companies, including SCM (Smith Corona Marchant), Savin, etc., introduced copiers that utilized this technology. Versatec was a brand of computer printers and plotters using this process; electrostatic printers like the Versatec were important stepping-stones to later laser printers. Programs such as vtroff, by Tom Ferrin at UCSF, later helped justify the expense of early laser printers. Copying machines using electrofax technology were common from the 1960s through the 1980s. They were less expensive to manufacture than xerographic copiers, although the paper was slightly more costly than the plain paper used by xerography. Electrofax fell out of favor when other copier technologies could produce markedly better quality copies at less expense. By comparison, electrofax suffered a number of drawbacks, including: weak blacks in the image (most machines could only produce a dark gray), dampness and odor of the copies, the need for special paper, and multiple-bottle liquid toner replacement. Similarly, the need for electrofax-based printers and plotters faded as laser printers became cheaper, followed by inkjet printers. See also List of duplicating processes Printing devices
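The charged-dot grid laid down by an electrostatic printer's charging bar is, in effect, a one-bit raster image. A minimal sketch of the idea in Python — the pixel data and threshold are illustrative assumptions, not any historical controller's actual interface:

# Turning a grayscale raster into the on/off charge pattern an
# electrostatic write head would lay down, one row of contacts at a time.
grayscale = [
    [0.9, 0.8, 0.2, 0.1],
    [0.7, 0.3, 0.2, 0.6],
]  # illustrative pixel values: 0.0 = black, 1.0 = white

THRESHOLD = 0.5  # assumed cutoff; darker pixels keep a charge and attract toner

for row in grayscale:  # each row corresponds to one pass over the charging bar
    charges = [pixel < THRESHOLD for pixel in row]
    print("".join("#" if charged else "." for charged in charges))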
Electrofax
[ "Physics", "Technology" ]
662
[ "Physical systems", "Machines", "Printing devices" ]
959,407
https://en.wikipedia.org/wiki/List%20of%20adductors%20of%20the%20human%20body
Adduction is an anatomical term of motion referring to a movement which brings a part of the anatomy closer to the middle sagittal plane of the body.

Upper limb

Arm and shoulder
Adduction of the arm at the shoulder (lowering the arm):
Subscapularis
Teres major
Pectoralis major
Triceps brachii (long head)
Latissimus dorsi
Coracobrachialis

Hand and wrist
Adduction of the hand at the wrist:
Flexor carpi ulnaris
Extensor carpi ulnaris
Adduction of the fingers:
Palmar interossei
Adduction of the thumb:
Adductor pollicis

Lower limb

Adduction of the thigh at the hip (medial compartment of thigh/adductor muscles of the hip):
Adductor longus
Adductor brevis
Adductor magnus
Pectineus
Gracilis

Foot and toes
Adduction of the toes (S2–S3):
Adductor hallucis
Plantar interossei

Other

Eyeball:
Superior rectus muscle
Inferior rectus muscle
Medial rectus muscle

Jaw (muscles of mastication; the closing of the jaw is adduction):
Masseter
Pterygoid muscles (lateral and medial)
Temporalis

Vocal folds:
Lateral cricoarytenoid muscle

References

See also Anatomical terms of motion
List of adductors of the human body
[ "Biology" ]
244
[ "Behavior", "Anatomical terms of motion", "Motor control" ]
959,658
https://en.wikipedia.org/wiki/Silhouette
A silhouette is the image of a person, animal, object or scene represented as a solid shape of a single colour, usually black, with its edges matching the outline of the subject. The interior of a silhouette is featureless, and the silhouette is usually presented on a light background, usually white, or none at all. The silhouette differs from an outline, which depicts the edge of an object in a linear form, while a silhouette appears as a solid shape. Silhouette images may be created in any visual artistic medium, but were first used to describe pieces of cut paper, which were then stuck to a backing in a contrasting colour, and often framed. Cutting portraits, generally in profile, from black card became popular in the mid-18th century, though the term silhouette was seldom used until the early decades of the 19th century, and the tradition has continued under this name into the 21st century. They represented a cheap but effective alternative to the portrait miniature, and skilled specialist artists could cut a high-quality bust portrait, by far the most common style, in a matter of minutes, working purely by eye. Other artists, especially from about 1790, drew an outline on paper, then painted it in, which could be equally quick. From its original graphic meaning, the term silhouette has been extended to describe the sight or representation of a person, object or scene that is backlit and appears dark against a lighter background. Anything that appears this way, for example, a figure standing backlit in a doorway, may be described as "in silhouette". Because a silhouette emphasises the outline, the word has also been used in fields such as fashion, fitness, and concept art to describe the shape of a person's body or the shape created by wearing clothing of a particular style or period. Etymology and origins The word silhouette is derived from the name of Étienne de Silhouette, a French finance minister who, in 1759, was forced by France's credit crisis during the Seven Years' War to impose severe economic demands upon the French people, particularly the wealthy. Because of de Silhouette's austere economies, his name became synonymous with anything done or made cheaply, and so with these outline portraits. Prior to the advent of photography, silhouette profiles cut from black card were the cheapest way of recording a person's appearance. The term silhouette, although existing from the 18th century, was not applied to the art of portrait-making until the 19th century. In the 18th and early 19th century, "profiles" or "shades", as they were called, were made by one of three methods: painted on ivory, plaster, paper, card, or in reverse on glass; "hollow cut", where the negative image was traced and then cut away from light-colored paper which was then laid on a dark background; and "cut and paste", where the figure was cut out of dark paper (usually freehand) and then pasted onto a light background. History Greek origins Pliny the Elder recounts the history of painting in books 34 and 35 of his Natural History (ca. 77 CE). In book 35, chapter 5, he writes of silhouette as a starting point in the development of painting: "We have no certain knowledge as to the commencement of the art of painting, nor does this enquiry fall under our consideration. The Egyptians assert that it was invented among themselves, six thousand years before it passed into Greece; a vain boast, it is very evident.
As to the Greeks, some say that it was invented at Sicyon, others at Corinth; but they all agree that it originated in tracing lines round the human shadow [omnes umbra hominis lineis circumducta]." In chapter 15, he tells the story of Butades of Corinth as an originator of this modeling technique: "Butades, a potter of Sicyon, was the first who invented, at Corinth, the art of modelling portraits in the earth which he used in his trade. It was through his daughter that he made the discovery, who, being deeply in love with a young man about to depart on a long journey, traced the profile of his face, as thrown upon the wall by the light of the lamp [umbram ex facie eius ad lucernam in pariete lineis circumscripsit]. Upon seeing this, her father filled in the outline, by compressing clay upon the surface, and so made a face in relief, which he then hardened by fire along with other articles of pottery." Greek black-figure pottery painting, also known as the black-figure style or black-figure ceramic (Greek, μελανόμορφα, melanomorpha), common between the 7th and 5th centuries BCE, employs the silhouette and characteristic profile views of figures and objects on pottery forms. The pots themselves exhibit strong forms in outline that are indicators of their purpose, as well as being decorative. Profile portraits For the depiction of portraits, the profile image has a marked advantage over a full-face image in many circumstances, because it depends strongly upon the proportions and relationship of the bony structures of the face (the forehead, nose and chin), making the image clear and simple. For this reason, profile portraits have been employed on coinage since the Roman era. The early Renaissance period saw a fashion for painted profile portraits, and people such as Federico da Montefeltro and Ludovico Sforza were depicted in profile portraits. The profile portrait is strongly linked to the silhouette. Recent research at Stanford University indicates that where previous studies of face recognition have been based on frontal views, studies with silhouettes show humans are able to extract accurate information about gender and age from the silhouette alone. This is an important concept for artists who design characters for visual media, because the silhouette is the most immediately recognisable and identifiable shape of the character. Profile portrait techniques A silhouette portrait can be painted or drawn. However, the traditional method of creating silhouette portraits is to cut them from lightweight black cardboard and mount them on a pale (usually white) background. This was the work of specialist artists, often working out of booths at fairs or markets, whose trade competed with that of the more expensive miniaturists patronised by the wealthy. A traditional silhouette portrait artist would cut the likeness of a person, freehand, within a few minutes. Some modern silhouette artists also make silhouette portraits from photographs of people taken in profile. These profile images are often head and shoulder length (bust) but can also be full length. Nineteenth-century popularity and development The work of the physiognomist Johann Caspar Lavater, who used silhouettes to analyse facial types, is thought to have promoted the art. The 19th-century silhouette artist August Edouart cut thousands of portraits in duplicate. His subjects included French and British nobility and US presidents. Much of his personal collection was lost in a shipwreck.
In England, the best known silhouette artist, a painter not a cutter, was John Miers, who travelled and worked in different cities, but had a studio on the Strand in London. He advertised "three minute sittings", and the cost might be as low as half a crown around 1800. Miers' superior products could be in grisaille, with delicate highlights added in gold or yellow, and some examples might be painted on various backings, including gesso, glass or ivory. The size was normally small, with many designed to fit into a locket, but otherwise a bust some 3 to 5 inches high was typical, with half- or full-length portraits proportionately larger. In America, silhouettes were highly popular from about 1790 to 1840. The physionotrace apparatus invented by Frenchman Gilles-Louis Chrétien in 1783-84 facilitated the production of silhouette portraits by deploying the mechanics of the pantograph to transmit the tracing (via an eyepiece) of the subject's profile silhouette to a needle moving on an engraving plate, from which multiple portrait copies could be printed. The invention of photography signaled the end of the silhouette as a widespread form of portraiture. Maintaining the tradition The skill was not lost, and travelling silhouette artists continued to work at state fairs into the 20th century. E. J. Perry and Dai Vernon were artists active in Coney Island at this time as well. The popularity of the silhouette portrait is being reborn in a new generation of people who appreciate the silhouette as a nostalgic way of capturing a significant occasion. In the United States and the UK silhouette artists have websites advertising their services at weddings and other such functions. In England there is an active group of silhouette artists. In Australia, S. John Ross plied his scissors at agricultural shows for 60 years until his death in 2008. Other artists such as Douglas Carpenter produce silhouette images using pen and ink. In art, media and illustrations Since the late 18th century, silhouette artists have also made small scenes cut from card and mounted on a contrasting background like the portraits. These pictures, known as "paper cuts", were often, but not necessarily, silhouette images. European paper cuts traditionally have differed from Asian paper cuts, which are often made of several layers of brightly coloured and patterned paper, with many formal decorative elements such as flower petals. Among 19th century artists to work with papercutting was the author Hans Christian Andersen. The modern artist Robert Ryan creates intricate images by this technique, sometimes using them to produce silk-screen prints. In the late 19th and early 20th century several illustrators employed designs of similar appearance for making book illustrations. Silhouette pictures could easily be printed by blocks that were cheaper to produce and longer lasting than detailed black and white illustrations. Silhouette pictures sometimes appear in books of the early 20th century in conjunction with colour plates. (The colour plates were expensive to produce and each one was glued into the book by hand.) Illustrators who produced silhouette pictures at this time include Arthur Rackham and William Heath Robinson. In breaking with literal realism, artists of the Vorticist, Futurist and Cubist movements employed the silhouette. Illustrators of the late 20th century to work in silhouette include Jan Pienkowski and Jan Ormerod. 
In the early 1970s, French artist Philippe Derome used the black cut silhouette in his portraits of black people. In the 21st century, American artist Kara Walker has developed this use of silhouette to present racial issues in confronting images. Shadow theatre Originating in Asia with traditions such as the shadow theatres (wayang) of Indonesia, the shadow play became a popular entertainment in Paris during the 18th and 19th centuries. In late 19th-century Paris, shadow theatre was particularly associated with the cabaret Le Chat Noir, where Henri Rivière was the designer. Movies Since their pioneering use by Lotte Reiniger in silent films, silhouettes have been used for a variety of iconic, graphic, emotional or, conversely, distancing effects in many movies. These include many of the opening credit sequences of the James Bond films. The opening sequence of the television series Alfred Hitchcock Presents features a silhouetted profile of Alfred Hitchcock stepping into a caricatured outline of himself, and in his movie Psycho, the killer in the shower scene manifests as a terrifying silhouette. A scene from E.T. showing the central characters on a flying bicycle silhouetted against the full moon became a well-known movie poster. Harry Potter and the Deathly Hallows – Part 1 contains an animated sequence in silhouette illustrating a short story, The Tale of the Three Brothers, that is embedded in the film. The sequence was produced by Ben Hibon for Framestore, with artwork by Alexis Lidell. Silhouettes have also been used by recording artists in music videos. One example is the video for "Buttons" by The Pussycat Dolls, in which Nicole Scherzinger is seen in silhouette. Michael Jackson used his own distinctive silhouette both on stage and in videos such as "You Rock My World". Early iPod commercials portrayed silhouetted dancers wearing an iPod and earbuds. The cult television program Mystery Science Theater 3000 features the three main characters of the series watching a movie as silhouettes at the bottom of the screen. Architecture The discipline of architecture that studies the shadows cast by or upon buildings is called sciography. The play of shadows upon buildings was very much in vogue a thousand years ago as evidenced by the surviving examples of muqarnas decoration, where the shadows of three-dimensional ornamentation with stone masonry around the entrance of mosques form pictures. As outright pictures were avoided in Islam while tessellations and calligraphic designs were allowed, such "accidental" silhouettes offered a creative alternative. Photography Many photographers use the technique of photographing people, objects or landscape elements against the light, to achieve an image in silhouette. The background light might be natural, such as a cloudy or open sky, mist or fog, sunset or an open doorway (a technique known as contre-jour), or it might be contrived in a studio; see low-key lighting. Silhouetting requires that the exposure be adjusted so that there is no detail (underexposure) within the desired silhouette element, and that the background is overexposed to render it bright; a lighting ratio of 16:1 or greater is therefore the ideal. The Zone System was an aid to film photographers in achieving the required exposure ratios. High contrast film, adjustment of film development, and/or high contrast photographic paper may be used in chemical-based photography to enhance the effect in the darkroom. 
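The 16:1 ratio can be restated in photographic stops: each stop represents a doubling of light, so the ratio corresponds to log2(16) = 4 stops between background and subject. A minimal Python check of this arithmetic (the function name is illustrative, not a standard API):

```python
import math

def lighting_ratio_to_stops(ratio):
    """Express a background:subject lighting ratio in photographic stops."""
    return math.log2(ratio)  # each stop doubles the amount of light

print(lighting_ratio_to_stops(16))  # 4.0 stops, the suggested minimum separation
```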
With digital processing the contrast may be enhanced through the manipulation of the contrast curve for the image. Graphic design In media the term "to silhouette" is used for the process of separating or masking a portion of an image (such as the background) so that it does not show. Traditionally silhouettes have often been used in advertising, particularly in poster design, because they can be cheaply and effectively printed. Other uses Fashion and fitness The word "silhouette", because it implies the outline of a form, has been used in both fashion and fitness to describe the outline shape of the body from a particular angle, as altered by clothing in fashion usage, and clothed or unclothed where fitness is concerned (a usage applied, for example, by the Powerhouse Museum). Advertising for both these fields urges people, women in particular, to achieve a particular appearance, either by corsetry, diet or exercise. The term was in use in advertising by the early 20th century. Many gyms and fitness studios use the word "silhouette" either in their name or in their advertising. Historians of costume also use the term when describing the effect achieved by the clothes of different periods, so that they might describe and compare the silhouette of the 1860s with that of the other decades of the 19th century. A desirable silhouette could be influenced by many factors. The invention of crinoline steel influenced the silhouette of women in the 1850s and 60s. The posture of Princess Alexandra influenced the silhouette of English women in the Edwardian period. Icons Because silhouettes give a very clear image, they are often used in any field where the speedy identification of an object is necessary. Silhouettes have many practical applications. They are used for traffic signs. They are used to identify towns or countries with silhouettes of monuments or maps. They are used to identify natural objects such as trees, insects and dinosaurs. They are used in forensic science. Journalism For interviews, some individuals choose to be videotaped in silhouette to mask their facial features and protect their anonymity, typically accompanied by a dubbed voice. This is done when the individuals may be endangered if it is known they were interviewed. Computer modelling Computer vision researchers have been able to build computational models for perception that are capable of generating and reconstructing 3D shapes from single or multi-view depth maps or silhouettes. Business documents Silhouettes have also been used to create images that serve as business documents. Slave owners had silhouettes made of the people they enslaved to document them as property and to accompany other business documents such as a bill of sale. Military and firearms Silhouettes of ships, planes, tanks, and other military vehicles are used by soldiers and sailors for learning to identify different craft. Notable examples Rachel Creefield silhouette Osbourne bull Kara Walker See also Silhouette artists Clipping path References Bibliography Film Reiniger, Lotte: Homage to the Inventor of the Silhouette Film. Dir. Katja Raganelli. DVD. Milestone Film, 1999. External links GAP Guild of American Papercutters Profile Likenesses of the Executive and Legislature of Georgia (Silhouette Book), by William H. Brown, 1855 from the collection of the Georgia Archives. 
Kara Walker's A Horrible Beautiful Beast Kara Walker's 2007 Whitney Exhibit Paper art Photographic techniques Composition in visual art Shadows
Silhouette
[ "Physics" ]
3,430
[ "Optical phenomena", "Physical phenomena", "Shadows" ]
959,928
https://en.wikipedia.org/wiki/Getting%20Things%20Done
Getting Things Done (GTD) is a personal productivity system developed by David Allen and published in a book of the same name. GTD is described as a time management system. Allen states "there is an inverse relationship between things on your mind and those things getting done". The GTD method rests on the idea of moving all items of interest, relevant information, issues, tasks and projects out of one's mind by recording them externally and then breaking them into actionable work items with known time limits. This allows one's attention to focus on taking action on each task listed in an external record, instead of recalling them intuitively. The book was first published in 2001; a revised edition was released in 2015 to reflect the changes in information technology during the preceding decade. Themes Allen first demonstrates stress reduction from the method with the following exercise, centered on a task that has an unclear outcome or whose next action is not defined. Allen calls these sources of stress "open loops", "incompletes", or "stuff". The most annoying, distracting, or interesting task is chosen, and defined as an "incomplete". A description of the successful outcome of the "incomplete" is written down in one sentence, along with the criteria by which the task will be considered completed. The next step required to approach completion of the task is written down. A self-assessment is made of the emotions experienced after completing the steps of this process. He claims stress can be reduced and productivity increased by putting reminders about everything one is not working on into a trusted system external to one's mind. In this way, one can work on the task at hand without distraction from the "incompletes". The system in GTD requires one to have the following tools within easy reach:
An inbox
A trash can
A filing system for reference material
Several lists (detailed below)
A calendar (either a paper-based or digital calendar)
These tools can be physical or electronic as appropriate (e.g., a physical "in" tray or an email inbox). Then, as "stuff" enters one's life, it is captured in these tools and processed with the following workflow. Workflow The GTD workflow consists of five stages: capture, clarify, organize, reflect, and engage. (The first edition used the names collect, process, organize, plan, and do; the descriptions of the stages are similar in both editions.) Once all the material ("stuff") is captured (or collected) in the inbox, each item is clarified and organized by asking and answering a series of questions about it in turn. As a result, each item ends up at one of eight end points:
in the trash
on the someday/maybe list
in a neat reference filing system
on a list of tasks, with the outcome and next action defined, if the "incomplete" is a "project" (i.e., if it will require two or more steps to complete it)
immediately completed and checked off, if it can be completed in under two minutes
delegated to someone else and, if one wants a reminder to follow up, added to a "waiting for" list
on a context-based "next action" list, if there is only one step to complete it
on one's calendar
Empty one's inbox or inboxes daily or at least weekly ("in" to empty). Do not use one's inbox as a "to do" list. Do not put clarified items back into the inbox. Emptying one's inbox does not mean finishing everything. 
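This clarify-and-organize routing is essentially a small decision tree. The following Python sketch is only a minimal illustration under assumed field names; the item flags, list names and the do_now helper are inventions for this example, not terminology from Allen's book:

```python
def clarify_and_organize(item, system):
    """Route one captured item to one of the eight GTD end points.

    `item` is a dict describing a piece of "stuff"; `system` holds the
    lists, calendar and trash. All field names here are illustrative.
    """
    if not item.get("actionable"):
        if item.get("incubate"):
            system["someday_maybe"].append(item)      # might do it later
        elif item.get("reference"):
            system["reference_files"].append(item)    # file for reference
        else:
            system["trash"].append(item)              # discard
    elif item.get("steps", 1) > 1:
        # A "project": record the desired outcome and the next action.
        system["projects"].append(item)
    elif item.get("minutes", 99) < 2:
        do_now(item)                                  # two-minute rule
    elif item.get("delegate_to"):
        system["waiting_for"].append(item)            # track delegated work
    elif item.get("date"):
        system["calendar"].append(item)               # time-specific
    else:
        context = item.get("context", "anywhere")
        system["next_actions"].setdefault(context, []).append(item)

def do_now(item):
    print("doing immediately:", item["description"])
```

Each captured item passes through this routing exactly once, which is what keeps the inbox empty without requiring that everything be finished.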
Emptying the inbox just means applying the "capture, clarify, organize" steps to all of one's "stuff". Next, reflection (termed planning in the first edition) occurs. Multi-step projects identified above are assigned a desired outcome and a single "next action". Finally, a task from one's task list is worked on ("engage" in the 2nd edition, "do" in the 1st edition) unless the calendar dictates otherwise. One selects which task to work on next by considering where one is (i.e., the "context", such as at home, at work, out shopping, by the phone, at one's computer, with a particular person), time available, energy available, and priority. Implementation Because hardware and software are changing so rapidly, GTD is deliberately technologically neutral. (In fact, Allen advises people to start with a paper-based system.) Many task management tools claim to implement GTD methodology and Allen maintains a list of some technology that has been adopted in or designed for GTD. Some are designated "GTD Enabled", meaning Allen was involved in the design. Perspective Allen emphasizes two key elements of GTD: control and perspective. The workflow is the center of the control aspect. The goal of the control processes in GTD is to get everything except the current task out of one's head and into this trusted system external to one's mind. He borrows a simile used in martial arts termed "mind like water". When a small object is thrown into a pool of water, the water responds appropriately with a small splash followed by quiescence. When a large object is thrown in, the water again responds appropriately with a large splash followed by quiescence. The opposite of "mind like water" is a mind that never returns to quiescence but remains continually stressed by every input. With a trusted system and "mind like water" one can have a better perspective on one's life. Allen recommends reflection from six levels, called "Horizons of Focus":
Horizon 5: Life
Horizon 4: Long-term visions
Horizon 3: 1–2 year goals
Horizon 2: Areas of focus and accountability
Horizon 1: Current projects
Ground: Current actions
Unlike some theories, which focus on top-down goal-setting, GTD works in the opposite direction. Allen argues that it is often difficult for individuals to focus on big picture goals if they cannot sufficiently control the day-to-day tasks that they frequently must face. By developing and using the trusted system that deals with day-to-day inputs, an individual can free up mental space to begin moving up to the next level. Allen recommends scheduling a weekly review, reflecting on the six different levels. The perspective gained from these reviews should drive one's priorities at the project level. Priorities at the project level in turn determine the priority of the individual tasks and commitments gathered during the workflow process. During a weekly review, determine the context for the tasks and put each task on its appropriate list. An example of grouping together similar tasks would be making a list of outstanding telephone calls, or the tasks/errands to perform while out shopping. Context lists can be defined by the set of tools available or by the presence of individuals or groups for whom one has items to discuss or present. Summary GTD is based on storing, tracking, and retrieving the information related to each thing that needs to get done. Mental blocks we encounter are caused by insufficient 'front-end' planning. 
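The engage step's selection logic described above, choosing a next action by context, available time, energy and priority, lends itself to a similar sketch. As before, the field names are assumptions for the example rather than anything prescribed by the book:

```python
def choose_next_action(next_actions, context, minutes_free, energy):
    """Pick a task from a context list, GTD-style (illustrative heuristic).

    Filters by current context, then by the time and energy available,
    and finally takes the highest-priority remaining candidate.
    """
    candidates = [task for task in next_actions.get(context, [])
                  if task.get("minutes", 0) <= minutes_free
                  and task.get("energy", 1) <= energy]
    return max(candidates, key=lambda t: t.get("priority", 0), default=None)
```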
Front-end planning involves thinking in advance and generating a series of actions which can later be undertaken without further planning. The mind's "reminder system" is inefficient and seldom (or too often) reminds us of what we need to do at the time and place when we can do it. Consequently, the "next actions" stored by context in the "trusted system" act as an external support which ensures that we are presented with the right reminders at the right time. As GTD relies on external reminders, it can be seen as an application of the theories of distributed cognition or the extended mind. Reception In 2004, James Fallows in The Atlantic described GTD's main promise as not only allowing the practitioner to do more work but to feel less anxious about what they can and cannot do. In 2005, Wired called GTD a "new cult for the info age", describing the enthusiasm for this method among information technology and knowledge workers as a kind of cult following. Allen's ideas have also been popularized through The Howard Stern Show (Stern referenced it daily throughout the summer of 2012) and the Internet, especially via blogs such as 43 Folders, Lifehacker, and The Simple Dollar. In 2005, Ben Hammersley interviewed David Allen for The Guardian article titled "Meet the man who can bring order to your universe", saying: "For me, as with the hundreds of thousands around the world who press the book into their friends' hands with fire in their eyes, Allen's ideas are nothing short of life-changing". In 2007, Time magazine called Getting Things Done the self-help business book of its time. In 2007, Wired ran another article about GTD and Allen, quoting him as saying "the workings of an automatic transmission are more complicated than a manual transmission ... to simplify a complex event, you need a complex system". A 2008 paper in the journal Long Range Planning by Francis Heylighen and Clément Vidal of the Free University of Brussels showed "recent insights in psychology and cognitive science support and extend GTD's recommendations". See also Human multitasking Life hack Pomodoro Technique Notes References Further reading External links Management books Self-help books Personal development Time management 2001 non-fiction books Penguin Books books
Getting Things Done
[ "Physics", "Biology" ]
1,949
[ "Personal development", "Behavior", "Physical quantities", "Time", "Time management", "Spacetime", "Human behavior" ]
960,235
https://en.wikipedia.org/wiki/Claudia%20Severa
Claudia Severa (born on 11 September, in the first century AD; fl. 97–105) was a literate Roman woman, the wife of Aelius Brocchus, commander of an unidentified fort near Vindolanda fort in northern England. She is known for a birthday invitation she sent around 100 AD to Sulpicia Lepidina, wife of Flavius Cerialis, commander at Vindolanda. This invitation, written in ink on a thin wooden tablet, was discovered in the 1970s and is probably the best-known item of the Vindolanda Tablets. The first part of the letter was written in formal style in a professional hand, evidently by a scribe; the last four lines are added in a different handwriting, thought to be Claudia's own. The translation is as follows: Claudia Severa to her Lepidina greetings. On 11 September, sister, for the day of the celebration of my birthday, I give you a warm invitation to make sure that you come to us, to make the day more enjoyable for me by your arrival, if you are present. Give my greetings to your Cerialis. My Aelius and my little son send him their greetings. (2nd hand) I shall expect you, sister. Farewell, sister, my dearest soul, as I hope to prosper, and hail. (Back, 1st hand) To Sulpicia Lepidina, (wife) of Cerialis, from Cl. Severa." The Latin, following the line divisions of the tablet, reads as follows:
Cl. Severá Lepidinae [suae] [sa]l[u]tem
iii Idus Septembres soror ad diem
sollemnem natalem meum rogó
libenter faciás ut venias
ad nos iucundiorem mihi
[diem] interventú tuo facturá si
aderis
Cerial[em t]uum salutá Aelius meus [...]
et filiolus salutant
sperabo te soror
vale soror anima
mea ita valeam
karissima et have
The Vindolanda Tablets also contain a fragment from another letter in Claudia's hand. These two letters are thought to be the oldest extant writing by a woman in Latin found in Britain, or perhaps anywhere. The letters show that correspondence between the two women was frequent and routine, and that they were in the habit of visiting one another, although it is not known at which fort Severa lived. There are several aspects of Severa's letters that should be regarded as literary, even though they were not written for a wide readership. In particular, they share several thematic and stylistic features with other surviving writings in Latin by women from Greek and Roman antiquity. Although Severa's name reveals that she is unlikely to be related to Sulpicia Lepidina, she refers frequently to Lepidina as her sister, and uses the word iucundus to evoke a strong and sensual sense of the pleasure Lepidina's presence would bring, creating a sense of affection through her choice of language. In the post-script written in her own hand, she appears to draw on another Latin literary model, from the fourth book of the Aeneid, in which at 4.8 Virgil characterises Anna as Dido's unanimam sororem, "sister sharing a soul", and at 4.31, she is "cherished more than life" (luce magis dilecta sorori). Although this is not proof that Severa and Lepidina were familiar with Virgil's writing, another letter in the archive, written between two men, directly quotes a line from the Aeneid, suggesting that the sentiments and language Severa used do indeed draw on a Virgilian influence. The Latin word that was chosen to describe the birthday festivities, sollemnis, is also noteworthy, as it means "ceremonial, solemn, performed in accordance with the forms of religion", and suggests that Severa has invited Lepidina to what was an important annual religious occasion. 
Display of letter The invitation was acquired in 1986 by the British Museum, where it holds registration number 1986,1001.64. The museum has a selection of the Vindolanda Tablets on display, and loans some to the museum at Vindolanda. References External links Vindolanda Tablets Online: Correspondence of Lepidina: tablets 291–294 1st-century births 1st-century Roman women 1st-century Romans 2nd-century Roman women 1st-century women writers 1st-century writers in Latin 2nd-century women writers 2nd-century writers in Latin Ancient Romans in Britain Hadrian's Wall Letter writers in Latin Ancient Roman women writers Date of death unknown Year of birth unknown Year of death unknown Claudii Silver Age Latin writers
Claudia Severa
[ "Engineering" ]
989
[ "Hadrian's Wall", "Fortification lines" ]
960,361
https://en.wikipedia.org/wiki/Transduction%20%28machine%20learning%29
In logic, statistical inference, and supervised learning, transduction or transductive inference is reasoning from observed, specific (training) cases to specific (test) cases. In contrast, induction is reasoning from observed training cases to general rules, which are then applied to the test cases. The distinction is most interesting in cases where the predictions of the transductive model are not achievable by any inductive model. Note that this is caused by transductive inference on different test sets producing mutually inconsistent predictions. Transduction was introduced in a computer science context by Vladimir Vapnik in the 1990s, motivated by his view that transduction is preferable to induction since, according to him, induction requires solving a more general problem (inferring a function) before solving a more specific problem (computing outputs for new cases): "When solving a problem of interest, do not solve a more general problem as an intermediate step. Try to get the answer that you really need but not a more general one." An example of learning which is not inductive would be in the case of binary classification, where the inputs tend to cluster in two groups. A large set of test inputs may help in finding the clusters, thus providing useful information about the classification labels. The same predictions would not be obtainable from a model which induces a function based only on the training cases. Some people may call this an example of the closely related semi-supervised learning, though Vapnik's motivation is quite different. The best-known example of a case-based learning algorithm is the k-nearest neighbor algorithm, which is related to transductive learning algorithms. Another example of an algorithm in this category is the Transductive Support Vector Machine (TSVM). A third possible motivation of transduction arises through the need to approximate. If exact inference is computationally prohibitive, one may at least try to make sure that the approximations are good at the test inputs. In this case, the test inputs could come from an arbitrary distribution (not necessarily related to the distribution of the training inputs), which would not be allowed in semi-supervised learning. An example of an algorithm falling in this category is the Bayesian Committee Machine (BCM). Historical Context The mode of inference from particulars to particulars, which Vapnik came to call transduction, was already distinguished from the mode of inference from particulars to generalizations in part III of the Cambridge philosopher and logician W.E. Johnson's 1924 textbook, Logic. In Johnson's work, the former mode was called 'eduction' and the latter was called 'induction'. Bruno de Finetti developed a purely subjective form of Bayesianism in which claims about objective chances could be translated into empirically respectable claims about subjective credences with respect to observables through exchangeability properties. An early statement of this view can be found in his 1937 La Prévision: ses Lois Logiques, ses Sources Subjectives and a mature statement in his 1970 Theory of Probability. Within de Finetti's subjective Bayesian framework, all inductive inference is ultimately inference from particulars to particulars. Example problem The following example problem contrasts some of the unique properties of transduction against induction. A collection of points is given, such that some of the points are labeled (A, B, or C), but most of the points are unlabeled (?). 
The goal is to predict appropriate labels for all of the unlabeled points. The inductive approach to solving this problem is to use the labeled points to train a supervised learning algorithm, and then have it predict labels for all of the unlabeled points. With this problem, however, the supervised learning algorithm will only have five labeled points to use as a basis for building a predictive model. It will certainly struggle to build a model that captures the structure of this data. For example, if a nearest-neighbor algorithm is used, then the points near the middle will be labeled "A" or "C", even though it is apparent that they belong to the same cluster as the point labeled "B" (compare semi-supervised learning). Transduction has the advantage of being able to consider all of the points, not just the labeled points, while performing the labeling task. In this case, transductive algorithms would label the unlabeled points according to the clusters to which they naturally belong. The points in the middle, therefore, would most likely be labeled "B", because they are packed very close to that cluster. An advantage of transduction is that it may be able to make better predictions with fewer labeled points, because it uses the natural breaks found in the unlabeled points. One disadvantage of transduction is that it builds no predictive model. If a previously unknown point is added to the set, the entire transductive algorithm would need to be repeated with all of the points in order to predict a label. This can be computationally expensive if the data is made available incrementally in a stream. Further, this might cause the predictions of some of the old points to change (which may be good or bad, depending on the application). A supervised learning algorithm, on the other hand, can label new points instantly, with very little computational cost. Transduction algorithms Transduction algorithms can be broadly divided into two categories: those that seek to assign discrete labels to unlabeled points, and those that seek to regress continuous labels for unlabeled points. Algorithms that seek to predict discrete labels tend to be derived by adding partial supervision to a clustering algorithm. Two classes of algorithms can be used: flat clustering and hierarchical clustering. The latter can be further subdivided into two categories: those that cluster by partitioning, and those that cluster by agglomerating. Algorithms that seek to predict continuous labels tend to be derived by adding partial supervision to a manifold learning algorithm. Partitioning transduction Partitioning transduction can be thought of as top-down transduction. It is a semi-supervised extension of partition-based clustering. It is typically performed as follows:
Consider the set of all points to be one large partition.
While any partition P contains two points with conflicting labels: partition P into smaller partitions.
For each partition P: assign the same label to all of the points in P.
Of course, any reasonable partitioning technique could be used with this algorithm. Max flow min cut partitioning schemes are very popular for this purpose. Agglomerative transduction Agglomerative transduction can be thought of as bottom-up transduction. It is a semi-supervised extension of agglomerative clustering. It is typically performed as follows:
Compute the pair-wise distances, D, between all the points.
Sort D in ascending order.
Consider each point to be a cluster of size 1.
For each pair of points {a,b} in D: if (a is unlabeled) or (b is unlabeled) or (a and b have the same label), merge the two clusters that contain a and b, and label all points in the merged cluster with the same label.
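The agglomerative procedure is compact enough to sketch directly. The following Python version is a minimal illustration of the steps above; the function name and the NumPy-based distance computation are implementation choices, not part of any published method:

```python
import itertools
import numpy as np

def agglomerative_transduction(points, labels):
    """Semi-supervised agglomerative clustering, following the steps above.

    points: (n, d) NumPy array; labels: one entry per point, a label for
    labeled points and None for unlabeled ones. Returns a label for every
    point (points whose cluster never meets a labeled point stay None).
    """
    labels = list(labels)
    n = len(points)
    cluster = list(range(n))                      # each point starts alone
    pairs = sorted(itertools.combinations(range(n), 2),
                   key=lambda p: np.linalg.norm(points[p[0]] - points[p[1]]))
    for a, b in pairs:
        if cluster[a] == cluster[b]:
            continue                              # already in one cluster
        la, lb = labels[a], labels[b]
        if la is None or lb is None or la == lb:  # no label conflict
            keep, absorb = cluster[a], cluster[b]
            merged_label = la if la is not None else lb
            for i in range(n):                    # merge, then propagate
                if cluster[i] == absorb:
                    cluster[i] = keep
                if cluster[i] == keep and merged_label is not None:
                    labels[i] = merged_label
    return labels
```

Because every merge propagates the cluster's label to all of its members, checking the two endpoint labels is equivalent to checking the two cluster labels. On the example problem above, the unlabeled points in the middle would inherit the label "B", since the nearest-pair merges absorb them into the "B" cluster before any conflicting merge is attempted.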
Manifold transduction Manifold-learning-based transduction is still a very young field of research. See also Epilogism References De Finetti, Bruno. "La prévision: ses lois logiques, ses sources subjectives." Annales de l'institut Henri Poincaré. Vol. 7. No. 1. 1937. de Finetti, Bruno (1970). Theory of Probability: A Critical Introductory Treatment. New York: John Wiley. W.E. Johnson. Logic, part III, CUP Archive, 1924. B. Russell. The Problems of Philosophy, Home University Library, 1912. V. N. Vapnik. Statistical learning theory. New York: Wiley, 1998. (See pages 339-371) V. Tresp. A Bayesian committee machine, Neural Computation, 12, 2000. External links A Gammerman, V. Vovk, V. Vapnik (1998). "Learning by Transduction." An early explanation of transductive learning. "A Discussion of Semi-Supervised Learning and Transduction," Chapter 25 of Semi-Supervised Learning, Olivier Chapelle, Bernhard Schölkopf and Alexander Zien, eds. (2006). MIT Press. A discussion of the difference between SSL and transduction. Waffles is an open source C++ library of machine learning algorithms, including transduction algorithms. SVMlight is a general purpose SVM package that includes the transductive SVM option. Machine learning
Transduction (machine learning)
[ "Engineering" ]
1,789
[ "Artificial intelligence engineering", "Machine learning" ]
960,428
https://en.wikipedia.org/wiki/Clevis%20fastener
A clevis fastener is a two-piece fastener system consisting of a clevis and a clevis pin head. Terms The clevis is a U-shaped piece that has holes at the end of the prongs to accept the clevis pin. The clevis pin is similar to a bolt, but is either partially threaded or unthreaded with a cross-hole for a split pin. A tang is a piece that is sometimes fitted in the space within the clevis and is held in place by the clevis pin. The combination of a simple clevis fitted with a pin is commonly called a shackle, although a clevis and pin is only one of the many forms a shackle may take. Usage Clevises are found in a wide variety of fasteners used in farming equipment and sailboat rigging, as well as the automotive, aircraft and construction industries. They are also widely used to attach control surfaces and other accessories to servo controls in airworthy model aircraft. As a part of a fastener, a clevis provides a method of allowing rotation in some axes while restricting rotation in others. Clevis pin There are two main types of clevis pins: threaded and unthreaded. Unthreaded clevis pins have a domed head at one end and a cross-hole at the other end. A cotter pin (US usage) or split pin is used to keep the clevis pin in place. Threaded clevis pins have a partially threaded shank on one end and a formed head on the other. The formed head has a lip, which acts as a stop when threading the pin into the shackle, and a flattened tab with a cross-hole. The flattened tab allows for easy installation of the pin and the cross-hole allows the pin to be moused. A bolt can function as a clevis pin, but a bolt is not intended to take the lateral stress that a clevis pin must handle. Normal bolts are manufactured to handle tension loads, whereas clevis pins and bolts are designed to withstand shearing forces. The shearing strength of a threaded bolt is determined by its inner thread diameter. Clevis pins should be closely fitted to the holes in the clevis to limit wear and reduce the failure rate of either the pin or the clevis. Twist clevis A twist shackle provides a loop at a right angle to the axis of rotation. Older farming implements intended to be pulled by a team of draft animals often require a twist shackle to be hitched. Clevis hanger A clevis hanger consists of one U-shaped clevis and a second V-shaped clevis with a hole in a flattened section at the base of the V, joined together with a bolt or pin. Clevis hangers are used as a pipe attachment providing vertical adjustment for pipes. Clevis bracket A clevis bracket generally takes the form of a solid metal piece with a flat rectangular base, fitted with holes for bolts or machine screws, and two rounded wings in parallel forming a clevis. Commonly used in aircraft and cars, clevis brackets allow mounting of rods to flat surfaces. Clevis hook A clevis hook is a hook, with or without a snap lock, with a clevis and bolt or pin at the base. The clevis is used to fasten the hook to a bracket or chain. Clevis rod end A clevis rod end is a folded or machined piece formed into a clevis and fitted with a hole at its base to which a rod is attached. In machined pieces, the hole is most often threaded. Twin clevis A twin clevis is a solid piece with two clevises directly opposite one another, each fitted with a pin. Twin clevises are commonly used to join two lengths of chain. References Fasteners
Clevis fastener
[ "Engineering" ]
821
[ "Construction", "Fasteners" ]
960,455
https://en.wikipedia.org/wiki/Yakub%20%28Nation%20of%20Islam%29
Yakub (also spelled Yacub or Yaqub) is a figure in the mythology of the Nation of Islam (NOI) and its offshoots. According to the NOI's doctrine, Yakub was a black Meccan scientist who lived 6,600 years ago and created the white race. According to the story, following his discovery of the law of attraction and repulsion, he gathered followers and began the creation of the white race through a form of selective breeding referred to as "grafting" on the island of Patmos; Yakub died at the age of 150, but his followers continued the process after his death. According to the NOI, the white race was created with an evil nature, and were destined to rule over black people for a period of 6,000 years through the practice of "tricknology", which ended in 1914. The story and idea of Yakub originated in the writings of the NOI's founder Wallace Fard Muhammad. Scholars have variously traced its origins in Fard's thought to the idea of the Yakubites propounded by the Moorish Science Temple, the Battle of Alarcos, or alternatively say it may have been created originally with little basis in any other tradition. Scholars have argued the tale is an example of a black theodicy, with similarities to gnosticism with Yakub as demiurge, as well as the story of Genesis. It has also been interpreted as a reversal of the contemporary racist ideas that asserted the inferiority of black people. The story has, throughout its history, caused disputes within the NOI. Under its current leader Louis Farrakhan, the NOI continues to assert that the story of Yakub is true, not a metaphor, and has been proven by modern science. Several other splinter groups and other black nationalist religious organizations, including the Nuwaubian Nation, the Five-Percent Nation and the United Nation of Islam, share a belief in Yakub. Summary Original version According to the story, at the start of human history, a variety of types of black people inhabited the moon; when a black "god-scientist" became frustrated that all those living on the moon did not speak one language, he blew up the moon. A piece of this destroyed moon became the Earth, which was then populated by a community of surviving, morally righteous black people, some of whom settled in the city of Mecca. Yakub was born a short distance outside the city, and was among the third of original black people who were discontented with life in this society. A member of the Meccan branch of the Tribe of Shabazz, Yakub acquired the nickname "big head" because of his unusually large head and arrogance. At the age of six, he discovered the law of attraction and repulsion by playing with magnets made of steel. He connected this to the rules of human attraction: the "unlike" people would attract and manipulate the original "like" people. By the age of 18, he had finished his education, had learned everything that Mecca's universities had to teach him, and became widely known as a successful scientist. He then discovered that the original black man contained both a "black germ" and a "brown germ", with the brown being the recessive one, and believed that if he could separate them by "grafting", he could graft the brown germ into a white germ. This insight led to a plan to create a new people, who, using tricks and lies, could rule the original black man and destroy them. He attracted a following but caused trouble, leading the Meccan authorities to exile him and his 59,999 followers. They then went to an isle in the Aegean Sea called Pelan, which Elijah Muhammad identified as modern-day Patmos. 
Yakub developed Christianity to fool the black people into supporting him and to trick them into not knowing their true history. Once there, he established a despotic regime, starting to breed out the black traits of his followers. This entailed breeding new children, with those who were too dark being killed at birth and their bodies being fed to wild animals or incinerated. Yakub died at the age of 150, but his followers carried on his work as he passed down his knowledge. After 600 years, the white race was created. All the races other than the black race were by-products of Yakub's work, as the "red, yellow and brown" races were created during the "bleaching" process, with the red germ coming out of the brown, the yellow coming from the red, and from the yellow the white. The brutal conditions of their creation determined the evil nature of the new race: "by lying to the black mother of the baby, this lie was born into the very nature of the white baby; and, murder for the black people was also born in them—or made by nature a liar and murderer". As a group of people distinct from the Original Asiatic Race, the white race are bereft of divinity, being intrinsically prone to lying, violence, and brutality. According to the Nation's teachings, Yakub's newly created white race sowed discord among the black race, and thus were exiled to live in the caves of Europe ("West Asia"). In this narrative, it was in Europe that the white race engaged in bestiality and degenerated, losing everything except their language. They were kept in Europe by guards. Elijah Muhammad also asserted that some of the new white race tried to become black, but failed. As a result, they became gorillas and other monkeys. To help the whites develop, the ruling Allah then sent prophets to them, the first of whom was Musa (Moses), who taught the whites to cook and wear clothes. Moses tried to civilize them, but eventually gave up and blew up 300 of the most troublesome white people with dynamite. According to the Nation, Jesus was also a prophet sent to try and civilize the white race. However, the whites had learned to use "tricknology": a plan to use their trickery and lack of empathy and emotion to usurp power and enslave the black population, bringing the first slaves to America. According to NOI doctrine, Yakub's progeny were destined to rule for 6,000 years before the original black peoples of the world regained dominance, a period that ended in 1914. 
The scientists of the planet were able to repair it with gold, but there wasn't enough gold on the planet, necessitating excursions into space on the Nibiru to mine gold from planet Earth, where colonies were established. The Riziquians did not want to mine gold, believing it was beneath their status as angels. They spliced genes of Homo erectus with their own genomes, producing mankind to do it for them. Humans originally had various psychic abilities, but after wars and Cain and Abel, the gland responsible for these psychic powers was removed from the human brain by the Riziquians. Yakub was born with two brains (the Nuwaubian explanation for the size of his large head), making him a genius capable of gene-splicing experiments, which resulted in white people. After his experiments were finished, one of his brains exploded, resulting in his death. Origins of the story The story of Yakub originated in the writings of Wallace Fard Muhammad, the founder of the Nation of Islam, in his doctrinal Q&A pamphlet Lost Found Moslem Lesson No. 2 from the early 1930s. It was developed by his successor Elijah Muhammad in several writings, most fully in a chapter entitled "The Making of Devil" in his book Message to the Blackman in America. The story of Yakub includes Jews as part of a wider artificially created "white" race. In the Book of Genesis, biblical patriarch Jacob makes a deal with his uncle Laban to divide livestock amongst themselves. The black goats and sheep will belong to Laban, while spotted, speckled or brown goats will belong to Jacob. After Laban agrees, Jacob places wood "with white streaks" in front of the strongest animals during breeding so as to produce spotted offspring. He further uses selective breeding to ensure "the feebler would be Laban's, and the stronger Jacob's". Leaders in the early-20th century Eugenics movement like James Barr cited the Jacob story in their literature, often from an anti-Semitic point of view. Knight opines: "The prominence of Jacob as not only a controller of animal heredity but a selfish, scheming deceiver presents him as a natural candidate for the engineer of the white race". In speeches by Malcolm X, Yakub is identified completely with the Jacob of Genesis. Referring to the story of Jacob wrestling with the angel, Malcolm X states that Elijah Muhammad told him that "Jacob was Yacub, and the angel that Jacob wrestled with wasn't God, it was the government of the day". This was because Yakub was seeking funds for his expedition to Patmos, "so when it says Jacob wrestled with an angel, 'angel' is only used as a symbol to hide the one he was really wrestling with". However, Malcolm X also states that John of Patmos was also Yakub, and that the Book of Revelation refers to his deeds: "John was Yacub. John was out there getting ready to make a new race, he said, for the word of the Lord". Ernest Allen argues that "the Yakub myth may have been created out of whole cloth by Prophet Fard". Allen says the Yakub story could conceivably have been influenced by a real historical event during the struggle between Muslims and Christians for control of Spain. Muslim leader Abu Yusuf Yaqub al-Mansur defeated the Franks at the Battle of Alarcos (1195). After the battle, 40,000 European prisoners of war were taken to Morocco to labor on Yaqub's building projects. They were then set free and "allowed to form a valley settlement located somewhere between Fez and Marrakesh. 
On his deathbed Ya'qub lamented his decision to allow these Shibanis (as they came to be called) to form an enclave on Moroccan soil, thereby posing a potential threat to the stability of the Moorish empire". Yusuf Nuruddin says that a more direct source was the doctrine of the "Yacobites" or "Yakubites" propounded by Timothy Drew's Moorish Science Temple, to which Fard may have belonged before he founded the NOI. According to Drew, early pre-Columbian civilizations were founded by a West African Moor "named Yakub who landed on the Yucatan Peninsula", whose people evolved into "a race of scientific geniuses with large heads". Drew's followers said this was supported by the large heads of the Olmec statues, which they claimed reflected African features; Nuruddin argues this indicated that the Yakub myth was influenced by the Moorish Science Temple's theology. Role in the Nation of Islam The Yakub story attempts to rationalize "black suffering" through the lens of Islamic theologies, trying to give it a religious meaning and understanding. Even for those members who refused to take the story literally, it provided a useful metaphor for racial relations and oppression. Elijah Muhammad repeatedly referred to whites as "the devil". The Nation maintains that most white people are unaware of their true origins, but that such knowledge is held by senior white Freemasons. The doctrine is not present or substantiated in mainstream Islam. As a result, it has led to controversy: Malcolm X in his Autobiography notes that, in his travels in the Middle East, many Muslims reacted with shock upon hearing about the doctrine of Yakub. When Malcolm founded his own religious organization, Muslim Mosque, Inc., he did not carry over the concept of Yakub. Louis Farrakhan reinstated the original Nation of Islam, and has reasserted his belief in the literal truth of the story of Yakub. In a 1996 interview, Henry Louis Gates, Chairman of Harvard University's Afro-American Studies Department, asked him whether the story was metaphorical or literal. Farrakhan claimed that aspects of the story had been proven accurate by modern genetic science and insisted that "Personally, I believe that Yakub is not a mythical figure—he is a very real scientist. Not a big-head silly thing, as they would like to say". However, he did later cease speaking of the related "white devil" concept. Farrakhan's periodical The Final Call continues to publish articles asserting the truth of the story, arguing that modern science supports the accuracy of Elijah Muhammad's account of Yakub. The NOI splinter groups the Five-Percent Nation and the United Nation of Islam also believe in the Yakub doctrine. Commentary Harold Bloom in his book The American Religion argues that Yakub combines elements of the biblical God and the Gnostic concept of the Demiurge, saying that "Yakub has an irksome memorability as a crude but pungent Gnostic Demiurge". Nathaniel Deutsch also notes that Fard and Muhammad draw on the concept of the Demiurge, along with traditions of esotericism in Biblical interpretation, absorbing aspects of Biblical tales into the new narrative, such as the swords of the Muslim warriors keeping the "white devils" from Paradise, like the flaming sword of the angel protecting the Garden of Eden in Genesis. Yusuf Nuruddin also compared the Yakub story to the Genesis story, with the opposing group to the initial utopian society being comparable to the snake in the Garden of Eden. 
In his view the story of the later expulsion of Yakub was comparable to the expulsion of Adam and Eve, as well as the fall of man. Edward Curtis calls the story "a black theodicy: a story grounded in a mythological view of history that explained the fall of black civilization, the Middle Passage from Africa to the Americas, and the practice of Christian religion among slaves and their descendants". Stephen C. Finley also called it a theodicy. Several commentators state that the story, by associating blacks with ancient high civilizations and whites with cave-dwelling barbarians and gorillas, both uses and spectacularly reverses the populist and scientific racism of the era, which identified Africans as primitive, or closer to apes than whites. This drew on earlier criticisms of white supremacist Nordicism, creating a mythic version of "attacks on Anglo-Saxon lineage and behavior that had been voiced by more mainstream black thinkers during the nineteenth century. [...] With these references the [NOI] Muslims replicated the images of European savagery in the Middle Ages that were so pervasive in nineteenth-century black racial thought". In popular culture The American author and playwright Amiri Baraka's play A Black Mass (1965) takes inspiration from the story of Yakub. In Baraka's version the experiment creates a single Frankenstein-like "white" monster who kills Jacoub and the other magician-scientists and bites a woman, transforming her in a vampire-like way into a white-devil mate for himself. From this monstrous couple the white race is descended. According to critic Melani McAlister, "the character of Yakub, now called Jacoub, is introduced as one of three 'Black Magicians' who together symbolize the black origin of all religions". McAlister argues that Baraka turns the Yakub story "into a reinterpretation of the Faust story and a simultaneous meditation on the role and function of art", saying that "As with Faust, Jacoub's individualism and egotism are his undoing, but his failings also signal the destruction of a community." She also compared Baraka's version of the story to Frankenstein, in its conflation of "the six hundred years of Elijah Muhammad's "history" into a single, terrible moment of the creation of a monster." According to Charise L. Cheney, who mentions several rappers, the doctrine of Yakub has had a significant influence on rap culture. She argues that the rapper Kam (a member of the NoI), in his 1995 song "Keep tha Peace", uses the Yakub doctrine in order to explain "the roots of black-on-black crime and gang violence in America's inner cities". She also notes Grand Puba's 1990 lyric, in which he announces that "his calling was to bring enlightenment to black people and an end to white domination", saying: "Here comes the god to send the devil right back to his cave. […] We're gonna drop the bomb on the Yakub crew". Chuck D of Public Enemy also refers to the story in his song "Party for Your Right to Fight", attributing the deaths of African American radicals to the "grafted devils" conspiring against the "Black Asiatic Man". See also Xenu Notes References Works cited Primary sources Academic articles Books Alleged extraterrestrial beings Antisemitic tropes Anti-white racism in the United States Jacob Legendary progenitors National mysticism Nation of Islam Nuwaubianism Patmos Pseudohistory Scientific racism Theodicy
Yakub (Nation of Islam)
[ "Biology" ]
3,867
[ "Biology theories", "Obsolete biology theories", "Scientific racism" ]
960,519
https://en.wikipedia.org/wiki/Square%20Leg
Square Leg was a 1980 British government home defence Command Post and field exercise, which tested the Transition to War and Home Defence roles of the Ministry of Defence and British government. Part of the exercise involved a mock nuclear attack on Britain. It was assumed that 131 nuclear weapons would fall on Britain, with a total yield of 205 megatons (69 ground bursts; 62 air bursts) and individual yields of 500 kt to 3 Mt. That was felt to be a reasonably realistic scenario, but the report stated that a total strike in excess of 1,000 megatons would be likely. The ratio of ground bursts to air bursts was increased to provide all the regional NBC cells with radioactive fallout challenges. Furthermore, the scenario was altered from official assessments as these were highly classified and many participants did not have the appropriate clearance to see them. Mortality was estimated at 29 million (53 percent of the population), serious injuries at 7 million (12 percent), and short-term survivors at 19 million (35 percent). Square Leg was criticised for a number of reasons: the weapons used were exclusively in the high-yield megaton range, with an average of 1.5 Mt per bomb, but a realistic attack based on known Soviet capabilities would have seen mixed weapons yields, including many missile-based warheads in the low-hundred-kiloton range. Also, no targets in Inner London were attacked (for example, Whitehall, the centre of British government), though collateral damage from strikes on Outer London targets and on Potters Bar and Ongar meant that much of the Inner London area was still destroyed; towns such as Eastbourne were hit for no obvious reason. All government and military bunkers were assumed to have survived for exercise purposes, although Kelvedon Hatch Sub-Regional Headquarters had difficulty in establishing regional control. In addition, the exercise only covered Great Britain, with Northern Ireland being ignored. The United Kingdom Warning and Monitoring Organisation was not a "live" participant, with the strike data it would have provided instead being pre-recorded and played into the exercise as it proceeded, an aspect that was criticised by participants after the exercise. The Lothian Regional Council refused to participate in Square Leg, and the exercise came under journalistic scrutiny after its details were leaked to the press, but otherwise it was not met with significant opposition in the way that the later Hard Rock exercise would be. Timeline of main events and civil and armed forces actions Transition to War The hypothetical pre-strike event list was drawn from the national Main Event List for Square Leg, testing the Transition to War stage. Survival The 'survival' stage covered the events that occurred after the nuclear attack, based on extracts from the War Diary of Warwickshire County that was used during Square Leg. Recovery The 'recovery' phase reports are drawn from the Gloucestershire log of requests for military support. See also Nuclear weapons and the United Kingdom World War III The Warsaw Pact operation Seven Days to the River Rhine RAF Greenham Common airfield United Kingdom Warning and Monitoring Organisation References Doomsday, Britain after Nuclear Attack, Stan Openshaw, Philip Steadman and Owen Greene, Basil Blackwell, 1983 War Plan UK, Duncan Campbell, The National Archives, FCO 46/2446 to 46/2448, HO 322/950 Footnotes Nuclear warfare United Kingdom nuclear command and control Cold War history of the United Kingdom 1980 in the United Kingdom
Square Leg
[ "Chemistry" ]
681
[ "Radioactivity", "Nuclear warfare" ]
960,522
https://en.wikipedia.org/wiki/Messier%2019
Messier 19 or M19 (also designated NGC 6273) is a globular cluster in the constellation Ophiuchus. It was discovered by Charles Messier on June 5, 1764 and added to his catalogue of comet-like objects that same year. It was resolved into individual stars by William Herschel in 1784. His son, John Herschel, described it as "a superb cluster resolvable into countless stars". The cluster is located 4.5° WSW of Theta Ophiuchi and is just visible as a fuzzy point of light using binoculars. Using a telescope with a aperture, the cluster shows an oval appearance with a core and a halo. M19 is one of the most oblate of the known globular clusters. This flattening may not accurately reflect the physical shape of the cluster because the emitted light is being strongly absorbed along the eastern edge. This is the result of extinction caused by intervening gas and dust. When viewed in the infrared, the cluster shows almost no flattening. It lies at a distance of about from the Solar System, and is quite near to the Galactic Center at only about away. This cluster contains an estimated 1,100,000 times the mass of the Sun and it is around 11.9 billion years old. The stellar population includes four Cepheids and RV Tauri variables, plus at least one RR Lyrae variable for which a period is known. Observations made during the ROSAT mission failed to reveal any low-intensity X-ray sources. See also List of Messier objects References External links Messier 19, SEDS Messier pages Messier 19, Galactic Globular Clusters Database page Messier 019 Messier 019 019 Messier 019 17640605 Discoveries by Charles Messier
Messier 19
[ "Astronomy" ]
366
[ "Ophiuchus", "Constellations" ]
960,559
https://en.wikipedia.org/wiki/Stromal%20cell-derived%20factor%201
The stromal cell-derived factor 1 (SDF-1), also known as C-X-C motif chemokine 12 (CXCL12), is a chemokine protein that in humans is encoded by the CXCL12 gene on chromosome 10. It is ubiquitously expressed in many tissues and cell types. Stromal cell-derived factors 1-alpha and 1-beta are small cytokines that belong to the chemokine family, members of which activate leukocytes and are often induced by proinflammatory stimuli such as lipopolysaccharide, TNF, or IL1. The chemokines are characterized by the presence of 4 conserved cysteines that form 2 disulfide bonds. They can be classified into 2 subfamilies. In the CC subfamily, the cysteine residues are adjacent to each other. In the CXC subfamily, they are separated by an intervening amino acid. The SDF1 proteins belong to the latter group. CXCL12 signaling has been observed in several cancers. The CXCL12 gene also contains one of 27 SNPs associated with increased risk of coronary artery disease. Structure Gene The CXCL12 gene resides on chromosome 10 at the band 10q11.21 and contains 4 exons. This gene produces 7 isoforms through alternative splicing. Protein This protein belongs to the intercrine alpha (chemokine CXC) family. SDF-1 is produced in two forms, SDF-1α/CXCL12a and SDF-1β/CXCL12b, by alternate splicing of the same gene. Chemokines are characterized by the presence of four conserved cysteines, which form two disulfide bonds. The CXCL12 proteins belong to the group of CXC chemokines, whose initial pair of cysteines are separated by one intervening amino acid. In addition, the first 8 residues of the CXCL12 N-terminus serve as a receptor binding site, though only Lys-1 and Pro-2 directly participate in activating the receptor. Meanwhile, the RFFESH motif (residues 12-17) in the loop region functions as a docking site for CXCL12 receptor binding. Function CXCL12 is expressed in many tissues in mice including brain, thymus, heart, lung, liver, kidney, spleen, platelets and bone marrow. CXCL12 is strongly chemotactic for lymphocytes. During embryogenesis, it directs the migration of hematopoietic cells from fetal liver to bone marrow and the formation of large blood vessels. It has also been shown that CXCL12 signalling regulates the expression of CD20 on B cells. CXCL12 is also chemotactic for mesenchymal stem cells and is expressed in the area of inflammatory bone destruction, where it mediates their suppressive effect on osteoclastogenesis. In adulthood, CXCL12 plays an important role in angiogenesis by recruiting endothelial progenitor cells (EPCs) from the bone marrow through a CXCR4 dependent mechanism. CXCR4, previously called LESTR or fusin, is the receptor for CXCL12. This CXCL12-CXCR4 interaction used to be considered exclusive (unlike for other chemokines and their receptors), but recently, it was suggested that CXCL12 may also bind the CXCR7 receptor (now called ACKR3). By blocking CXCR4, a major coreceptor for HIV-1 entry, CXCL12 acts as an endogenous inhibitor of CXCR4-tropic HIV-1 strains. CNS During embryonic development, CXCL12 plays a role in cerebellar formation through the migration of neurons. Within the CNS, CXCL12 contributes to cell proliferation, neurogenesis (nervous tissue development and growth), as well as neuroinflammation. Neural progenitor cells (NPCs) are stem cells that differentiate into glial and neuronal cells. CXCL12 promotes their migration to lesion sites within the brain, specifically over extensive ranges. 
Once at the site of damage, NPCs may begin stem-cell-based tissue repair of the lesion. The CXCL12/CXCR4 axis provides guidance cues for axons and neurites, hence promoting neurite outgrowth (neurons forming projections) and neurogenesis. Like other chemokines, CXCL12 is involved with cell migration that contributes to inflammation. Within the CNS, CXCL12 plays a role in neuroinflammation by attracting leukocytes across the blood–brain barrier. However, excessive production and accumulation of CXCL12 can become toxic, and the resulting inflammation may have serious consequences. Clinical significance In humans, CXCL12 has been implicated in a wide variety of biomedical conditions involving several organ systems. Furthermore, CXCL12 signaling in conjunction with CXCR7 signaling has been implicated in the progression of pancreatic cancer. In the urinary tract system, methylation of the CXCL12 promoter and expression of PD-L1 may be powerful prognostic biomarkers for biochemical recurrence in prostate carcinoma patients after radical prostatectomy, and further studies are ongoing to confirm whether CXCL12 methylation may aid in active surveillance strategies. In the field of oncology, melanoma-associated fibroblasts are stimulated via the A2B adenosine receptor, followed by stimulation of fibroblast growth factor and increased expression of CXCL12. Clinical marker A multi-locus genetic risk score study based on a combination of 27 loci, including the CXCL12 gene, identified individuals at increased risk for both incident and recurrent coronary artery disease events, as well as an enhanced clinical benefit from statin therapy. The study was based on a community cohort study (the Malmo Diet and Cancer study) and four additional randomized controlled trials of primary prevention cohorts (JUPITER and ASCOT) and secondary prevention cohorts (CARE and PROVE IT-TIMI 22). Multiple Sclerosis Multiple sclerosis (MS) is a neurological condition that results from a faulty interaction between the immune and nervous systems. MS is characterized by demyelination of nerves due to the body's immune system attacking the CNS. Elevated levels of CXCL12 are observed in the cerebrospinal fluid of patients with MS. CXCL12 crosses the blood–brain barrier and causes neuroinflammation that contributes to axonal damage and therefore to the progression of multiple sclerosis. Alzheimer's disease Though CXCL12 may be detrimental for those with MS, recent research suggests that this chemokine may be beneficial in slowing the progression of Alzheimer's disease. Alzheimer's is another neurological condition and the most common form of dementia, marked by a significant decline in cognition. One main characteristic of Alzheimer's is the accumulation of a brain plaque known as beta-amyloid. CXCL12 has shown neuroprotective effects in mice bearing these plaques. PAK is a protein associated with maintaining dendritic spines, which are essential at synapses for receiving information from axons. Mislocalization of PAK occurs in patients with Alzheimer's; however, pretreatment of neurons in mice with CXCL12 suppressed that mislocalization. Additionally, this pretreatment with CXCL12 decreased the prevalence of apoptosis and oxidative damage normally caused by the presence of the beta-amyloid plaque.
As a drug target Chemokines and chemokine receptors, among which CXCR4 stands out, regulate multiple processes such as morphogenesis, angiogenesis, and immune responses and are considered potential targets for drug development. Clinical samples indicate a high expression level of CXCR4 in idiopathic pulmonary fibrosis lungs, and experimental evidence further indicates that the CXCL12/CXCR4 axis is associated with the pathogenesis of lung fibrosis. In the gastrointestinal tract system, the CXCL12–CXCR4 axis is under investigation as an anti-fibrotic therapy in the treatment of chronic pancreatitis. For instance, blocking CXCR4, the receptor for CXCL12, with plerixafor (AMD-3100) increased the effectiveness of combretastatin in a mouse model of breast cancer, presumably by preventing macrophages from being recruited to tumours. AMD-3100 is also widely used in combination with G-CSF for mobilizing hematopoietic stem cells into the blood stream, allowing collection for bone marrow transplant. References Further reading Cytokines
Stromal cell-derived factor 1
[ "Chemistry" ]
1,869
[ "Cytokines", "Signal transduction" ]
960,581
https://en.wikipedia.org/wiki/Load-bearing%20wall
A load-bearing wall or bearing wall is a wall that is an active structural element of a building, which holds the weight of the elements above it by conducting their weight to a foundation structure below it. Load-bearing walls are one of the earliest forms of construction. The development of the flying buttress in Gothic architecture allowed structures to maintain an open interior space, transferring more weight to the buttresses instead of to central bearing walls. In housing, load-bearing walls are most common in the light construction method known as "platform framing". In the birth of the skyscraper era, the concurrent rise of steel as a more suitable framing system first designed by William Le Baron Jenney, and the limitations of load-bearing construction in large buildings, led to a decline in the use of load-bearing walls in large-scale commercial structures. Description A load-bearing wall or bearing wall is a wall that is an active structural element of a building; that is, it bears the weight of the elements above it, resting upon it, by conducting their weight to a foundation structure. The materials most often used to construct load-bearing walls in large buildings are concrete, block, or brick. By contrast, a curtain wall provides no significant structural support beyond what is necessary to bear its own materials or conduct such loads to a bearing wall. History Load-bearing walls are one of the earliest forms of construction. The development of the flying buttress in Gothic architecture allowed structures to maintain an open interior space, transferring more weight to the buttresses instead of to central bearing walls. The Notre Dame Cathedral is an example of a load-bearing wall structure with flying buttresses. Application Depending on the type of building and the number of floors, load-bearing walls are gauged to the appropriate thickness to carry the weight above them. If they are not, an outer wall could become unstable when the load exceeds the strength of the material used, potentially leading to the collapse of the structure. The primary function of this wall is to enclose or divide space of the building to make it more functional and useful. It provides privacy, affords security, and gives protection against heat, cold, sun or rain. Housing In housing, load-bearing walls are most common in the light construction method known as "platform framing", and each load-bearing wall sits on a wall sill plate which is mated to the lowest base plate. The sills are bolted to the masonry or concrete foundation. The top plate or ceiling plate is the top of the wall, which sits just below the platform of the next floor (at the ceiling). The base plate or floor plate is the bottom attachment point for the wall studs. Using a top plate and a bottom plate, a wall can be constructed while it lies on its side, allowing for end-nailing of the studs between two plates, and then the finished wall can be tipped up vertically into place atop the wall sill; this not only improves accuracy and shortens construction time, but also produces a stronger wall. Skyscrapers Due to the immense weight of skyscrapers, the base and walls of the lower floors must be extremely strong. Pilings are used to anchor the building to the bedrock underground. For example, the Burj Khalifa, the world's tallest building as well as the world's tallest structure, uses specially treated and mixed reinforced concrete.
Over of concrete, weighing more than , were used to construct the concrete and steel foundation, which features 192 piles, with each pile being 1.5 m in diameter and 43 m long, buried more than deep. See also Column – in most larger, multi-storey buildings, vertical loads are primarily borne by columns / pillars instead of structural walls Tube frame structure – Some of the world's tallest skyscrapers use load-bearing outer frames – be it single tube (e.g. the old WTC Twin Towers), or bundled tube (e.g. the Willis Tower or the Burj Khalifa) References Structural system Types of wall
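The sizing logic described under Application can be made concrete with a short worked check. The sketch below is a simplified, hypothetical calculation (not a design method taken from this article): it compares the average compressive stress at the base of a masonry wall with an assumed allowable stress, and every numeric value in it is an invented placeholder.

```python
# Simplified load-bearing wall check: compare the average compressive stress
# at the base of a wall against an assumed allowable stress for the material.
# All numbers below are illustrative assumptions, not real design values.

def wall_base_stress(load_n: float, thickness_m: float, length_m: float) -> float:
    """Average compressive stress (Pa) over the wall's horizontal cross-section."""
    return load_n / (thickness_m * length_m)

load = 900_000.0   # total vertical load carried by the wall, in newtons (assumed)
thickness = 0.30   # wall thickness in metres (assumed)
length = 5.0       # wall length in metres (assumed)
allowable = 2.0e6  # allowable compressive stress for the masonry, in Pa (assumed)

stress = wall_base_stress(load, thickness, length)
print(f"stress = {stress / 1e6:.2f} MPa, allowable = {allowable / 1e6:.2f} MPa")
print("OK" if stress <= allowable else "wall must be thickened")
```

A real design check would also account for slenderness, eccentric loading, and safety factors, which this sketch deliberately omits.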
Load-bearing wall
[ "Technology", "Engineering" ]
819
[ "Structural system", "Types of wall", "Structural engineering", "Building engineering" ]
960,583
https://en.wikipedia.org/wiki/Messier%2021
Messier 21 or M21, also designated NGC 6531 or Webb's Cross, is an open cluster of stars located to the north-east of Sagittarius in the night sky, close to the Messier objects M20 to M25 (except M24). It was discovered and catalogued by Charles Messier on June 5, 1764. This cluster is relatively young and tightly packed. A few blue giant stars have been identified in the cluster, but Messier 21 is composed mainly of small dim stars. With a magnitude of 6.5, M21 is not visible to the naked eye; however, with the smallest binoculars it can be easily spotted on a dark night. The cluster is positioned near the Trifid nebula (NGC 6514), but is not associated with that nebulosity. It forms part of the Sagittarius OB1 association. This cluster is located away from Earth with an extinction of 0.87. Messier 21 is around 6.6 million years old with a mass of . It has a tidal radius of 11.7 pc, with a nucleus radius of and a coronal radius of . There are at least members within the coronal radius down to visual magnitude 15.5, including many early B-type stars. An estimated 40–60 of the observed low-mass members are expected to be pre-main-sequence stars, with 26 candidates identified based upon hydrogen alpha emission and the presence of lithium in the spectrum. The stars in the cluster do not show a significant spread in ages, suggesting that the star formation was triggered all at once. As of January 2022, Messier 21 is one of the few remaining objects within the Messier Catalog to not have been photographed by the Hubble Space Telescope. Gallery See also List of Messier objects References External links Messier 21, SEDS Messier pages Messier 021 Carina–Sagittarius Arm Messier 021 021 Messier 021 17640605 Discoveries by Charles Messier
Messier 21
[ "Astronomy" ]
412
[ "Sagittarius (constellation)", "Constellations" ]
960,596
https://en.wikipedia.org/wiki/Messier%2022
Messier 22 or M22, also known as NGC 6656 or the Great Sagittarius Cluster, is an elliptical globular cluster of stars in the constellation Sagittarius, near the Galactic bulge region. It is one of the brightest globulars visible in the night sky. The brightest stars are 11th magnitude, with hundreds of stars bright enough to resolve with an 8" telescope. It is just south of the sun's position in mid-December, and northeast of Lambda Sagittarii (Kaus Borealis), the northernmost star of the "Teapot" asterism. M22 was one of the first globulars to be discovered, in 1665 by Abraham Ihle, and it was included in Charles Messier's catalog of comet-like objects in 1764. It was one of the first globular clusters to be carefully studied – first by Harlow Shapley in 1930. He placed within it roughly 70,000 stars and found it had a dense core. Then Halton Arp and William G. Melbourne continued studies in 1959. Due to the large color spread of its red giant branch (RGB) sequence, akin to that in Omega Centauri, it became the object of intense scrutiny starting in 1977 with James E. Hesser et al. M22 is one of the nearer globular clusters to Earth – at about 10,600 light-years away. It spans 32′ on the sky, which means its diameter (width across) is 99 ± 9 light-years, given its estimated distance. 32 variable stars have been recorded in M22. It is in front of part of the galactic bulge and is therefore useful for its microlensing effect on those background stars. Despite its relative proximity to us, this metal-poor cluster's light is limited by dust extinction, giving it an apparent magnitude of 5.5; even so, it is the brightest globular cluster visible from mid-northern latitudes (such as Japan, Korea, Europe and most of North America). From those latitudes, due to its declination of nearly 24° south of the (celestial) equator, its daily path is low in the southern sky. It thus appears less impressive to people in the temperate northern hemisphere than counterparts fairly near in angle (best viewed in the summer night sky) such as M13 and M5. M22 is one of only four globulars of our galaxy known to contain a planetary nebula (an expanding, glowing shell of gas ejected late in the life of a low- to intermediate-mass star, such as a red giant). The nebula was first noted as an object of interest, a pointlike light source, using the IRAS satellite by Fred Gillett and his associates in 1986, and its nature was determined in 1989 by Gillett et al. The planetary nebula's central star is a blue star. The nebula, designated GJJC1, is likely only about 6,000 years old. Two black holes of between 10 and 20 solar masses each were detected with the Very Large Array radio telescope in New Mexico and corroborated by the Chandra X-ray telescope in 2012. These imply that gravitational ejection of black holes from clusters is not as efficient as was previously thought, and lead to estimates of a total of 5 to 100 black holes within M22. Interactions between stars and black holes could explain the unusually large core of the cluster. Gallery See also New General Catalogue Messier object List of Messier objects List of globular clusters Footnotes and references Footnotes References External links Messier 22, SEDS Messier pages Messier 22, Galactic Globular Clusters Database page Messier 022 Messier 022 022 Messier 022 16650826
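The quoted linear diameter follows from the small-angle relation D = d·θ, with the angular size θ expressed in radians. The snippet below reproduces the figure from the article's own numbers (32 arcminutes across at about 10,600 light-years); it is a worked check added here for illustration, not part of the original text.

```python
import math

# Small-angle relation: linear size = distance * angular size (in radians).
distance_ly = 10_600       # distance to M22 in light-years (from the article)
angular_size_arcmin = 32   # apparent diameter on the sky (from the article)

theta_rad = math.radians(angular_size_arcmin / 60)  # arcmin -> degrees -> radians
diameter_ly = distance_ly * theta_rad

print(f"diameter ≈ {diameter_ly:.0f} light-years")  # ≈ 99, matching the article
```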
Messier 22
[ "Astronomy" ]
759
[ "Sagittarius (constellation)", "Constellations" ]
960,668
https://en.wikipedia.org/wiki/Ernst%20Leonard%20Lindel%C3%B6f
Ernst Leonard Lindelöf (7 March 1870 – 4 June 1946) was a Finnish mathematician, who made contributions in real analysis, complex analysis and topology. Lindelöf spaces are named after him. He was the son of mathematician Lorenz Leonard Lindelöf and brother of the philologist . He was secretary of the Finnish Society of Science and Letters (Societas Scientiarum Fennica) in its centenary year, 1938. Biography Lindelöf studied at the University of Helsinki, where he completed his PhD in 1893, became a docent in 1895 and professor of Mathematics in 1903. He was a member of the Finnish Society of Sciences and Letters. In addition to working in a number of different mathematical domains including complex analysis, conformal mappings, topology, ordinary differential equations and the gamma function, Lindelöf promoted the study of the history of Finnish mathematics. He is known for the Picard–Lindelöf theorem on differential equations and the Phragmén–Lindelöf principle, one of several refinements of the maximum modulus principle that he proved in complex function theory. He was the PhD supervisor for Lars Ahlfors at the University of Helsinki. Selected bibliography Le calcul des résidus et ses applications à la théorie des fonctions (Paris, 1905). Mémoire sur la théorie des fonctions entières d'ordre fini ("Acta Societatis Scientiarum Fennicae" 31, 1903). With Lars Edvard Phragmén: "Sur une extension d'un principe classique de l'analyse et sur quelques propriétés des fonctions monogènes dans le voisinage d'un point singulier", in: Acta Mathematica 31, 1908. References External links 1870 births 1946 deaths Scientists from Helsinki 20th-century Finnish mathematicians Topologists 19th-century Finnish mathematicians Academic staff of the University of Helsinki Members of the Royal Society of Sciences in Uppsala Mathematicians from the Russian Empire
Ernst Leonard Lindelöf
[ "Mathematics" ]
417
[ "Topologists", "Topology" ]
960,802
https://en.wikipedia.org/wiki/Dimitri%20Riabouchinsky
Dimitri Pavlovitch Riabouchinsky (6 November 1882 – 22 August 1962) was a Russian fluid dynamicist noted for his discovery of the Riabouchinsky solid technique. With the aid of Nikolay Zhukovsky he founded the Institute of Aerodynamics in 1904, the first in Europe. He also independently discovered results equivalent to the Buckingham Pi theorem in 1911. Riabouchinsky left Russia following the October Revolution and his short-term arrest, spending the rest of his life in Paris; yet he never accepted French citizenship and used his Nansen passport until his death. He was a member of Moscow State University, the University of Paris, and the French Academy of Sciences, as well as one of the co-founders of the Russian Higher Technical School in France. Over 200 of his scientific works were published during his lifetime. He was an Invited Speaker of the ICM in 1920 at Strasbourg, in 1928 at Bologna, and in 1932 at Zurich. Notes External links Рябушинский Дмитрий Павлович Воспоминания о Рябушинском. Лекция из цикла «Выдающиеся ученые — математики и механики» в мемориальном кабинете-музее Л. И. Седова «Российский научный некрополь за рубежом» РЯБУШИНСКИЙ (Riabouchinsky, Riaboushinsky) Дмитрий Павлович Dimitri Pavlovitch Riabouchinsky (1882-1962) 1882 births 1962 deaths Scientists from Moscow Naturalized citizens of France Scientists from Paris Fluid dynamicists Members of the French Academy of Sciences Academic staff of Moscow State University Academic staff of the University of Paris Aerodynamicists Aviation pioneers
Dimitri Riabouchinsky
[ "Chemistry" ]
452
[ "Fluid dynamicists", "Fluid dynamics" ]
960,826
https://en.wikipedia.org/wiki/Messier%2023
Messier 23, also known as NGC 6494, is an open cluster of stars in the northwest of the southern constellation of Sagittarius. It was discovered by Charles Messier in 1764. It can be found in good conditions with binoculars or a modestly sized telescope. It is in front of "an extensive gas and dust network", with which it may have no physical association. It is within 5° of the sun's position (which the sun occupies in mid-December), so it can be occulted by the moon. The cluster is centered about 2,050 light years away. Estimates for the number of its members range from 169 up to 414, with mass estimates derived both from direct counts and by application of the virial theorem. The cluster is around 330 million years old with a near-solar metallicity of [Fe/H] = −0.04. The brightest component (lucida) is of magnitude 9.3. Five of the cluster members are candidate red giants, while the orange variable VV Sgr, in the far south, is a candidate asymptotic giant branch star. A 6th-magnitude star, shown in the top-right corner, figures in the far north-west as a foreground star – HD 163245 (HR 6679). Its parallax shift, with proper motion taken into account, means it is about away. Gallery See also List of Messier objects Footnotes and References Footnotes References External links Messier 23, SEDS Messier pages Messier 023 Orion–Cygnus Arm Messier 023 023 Messier 023 17640620 Discoveries by Charles Messier
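The foreground status of a star such as HD 163245 rests on its trigonometric parallax: the distance in parsecs is simply the reciprocal of the parallax in arcseconds. Since the article's own parallax figure is not preserved here, the sketch below demonstrates the relation with an invented example value, clearly labeled as such.

```python
# Distance from trigonometric parallax: d [parsec] = 1 / p [arcsec].
# The parallax used below is a made-up example value, not the article's figure.

PARSEC_IN_LY = 3.26156  # light-years per parsec

def parallax_to_distance_ly(parallax_arcsec: float) -> float:
    """Convert an annual parallax in arcseconds to a distance in light-years."""
    return (1.0 / parallax_arcsec) * PARSEC_IN_LY

print(f"{parallax_to_distance_ly(0.010):.0f} ly")  # p = 10 mas -> about 326 ly
```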
Messier 23
[ "Astronomy" ]
338
[ "Sagittarius (constellation)", "Constellations" ]
960,890
https://en.wikipedia.org/wiki/Small%20Sagittarius%20Star%20Cloud
The Small Sagittarius Star Cloud (also known as Messier 24 and IC 4715) is a star cloud in the constellation of Sagittarius approximately 600 light years wide, which was catalogued by Charles Messier in 1764. The stars, clusters and other objects comprising M24 are part of the Sagittarius or Sagittarius-Carina arms of the Milky Way galaxy. Messier described M24 as a "large nebulosity containing many stars" and gave its dimensions as being some 1.5° across. Some sources improperly identify M24 as the small open cluster NGC 6603. The location of the Small Sagittarius Star Cloud is near the Omega Nebula (also known as M17) and the open cluster Messier 18, both north of M24. M24 is one of only three Messier objects that are not actual deep sky objects. M24 fills a space of significant volume to a depth of 10,000 to 16,000 light-years. The star cloud is the densest concentration of individual stars visible using binoculars, with around 1,000 stars visible within a single field of view. In telescopes it is best seen at low magnification, with a field of view of at least 2 degrees. Described as "a virtual carpet of stellar jewels", M24 is visible to the naked eye whenever the Milky Way itself is visible as well. It holds a collection of numerous types of stars that are visible through the galaxy's obscuring band of interstellar dust. The light of M24 is spread out over a large area, which makes estimating its brightness difficult. Older references give the star cloud's magnitude as 4.6, but more recent estimates place it a full two magnitudes brighter, at 2.5. HD 167356 is the brightest star within the Small Sagittarius Star Cloud, a white supergiant with an apparent magnitude of 6.05. This star is an Alpha-2 Canum Venaticorum variable, showing small changes in brightness as it rotates. There are three other stars in M24 with visual magnitudes between 6.5 and 7.0. The star cloud incorporates two prominent dark nebulae, which are vast clouds of dense, obscuring interstellar dust. This dust blocks light from the more distant stars, which keeps them from being seen from Earth. Lying on the northwestern side is Barnard 92, which is the darker of the two. Within the star field, the nebula appears as an immense round hole devoid of stars. American astronomer Edward Emerson Barnard discovered this dark nebula in 1913. Along the northeast side lies Barnard 93, as large as Barnard 92 though less obvious. There are also other dark nebulae within M24, including Barnard 304 and Barnard 307. The Small Sagittarius Star Cloud also contains two planetary nebulae, M 1-43 and NGC 6567. Located within a spiral arm of the Milky Way, Messier 24 shares some similarities with NGC 206, a bright, large star cloud within the Andromeda Galaxy. See also Messier object List of Messier objects New General Catalogue Large Sagittarius Star Cloud References External links Finder Chart for Messier 24 Messier 24, SEDS Messier pages Carina–Sagittarius Arm Star clouds Milky Way Sagittarius (constellation) Messier objects IC objects 17640620 Discoveries by Charles Messier
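The two-magnitude revision in the cloud's estimated brightness corresponds to a large jump in flux, because the magnitude scale is logarithmic: a difference of Δm magnitudes means a flux ratio of 100^(Δm/5). The check below plugs in the article's two estimates (4.6 and 2.5) and is purely illustrative.

```python
# Magnitude difference -> flux ratio: ratio = 100 ** (delta_m / 5).
old_mag, new_mag = 4.6, 2.5  # the two magnitude estimates quoted in the article
ratio = 100 ** ((old_mag - new_mag) / 5)
print(f"the newer estimate is about {ratio:.1f}x brighter")  # about 6.9x
```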
Small Sagittarius Star Cloud
[ "Astronomy" ]
700
[ "Sagittarius (constellation)", "Constellations" ]
960,927
https://en.wikipedia.org/wiki/NGC%206603
NGC 6603 is an open cluster discovered by John Herschel on July 15, 1830, located in the constellation Sagittarius. Situated within the brightest part of the star cloud Messier 24, it is classified by Shapley as type "g". This cluster consists of about 30 stars in a field of about 5 arc minutes in diameter, and is about 9,400 light years distant. Its linear diameter is thus about 14 light years. The hottest stars are of about spectral type B9, pointing to an intermediate age of several hundred million years, and the brightest is of photographic magnitude 14. Many sources improperly identify NGC 6603 as Messier 24. References External links 6603 Open clusters Astronomical objects discovered in 1830 Sagittarius (constellation)
NGC 6603
[ "Astronomy" ]
162
[ "Sagittarius (constellation)", "Constellations" ]
960,972
https://en.wikipedia.org/wiki/Schoenflies%20notation
The Schoenflies (or Schönflies) notation, named after the German mathematician Arthur Moritz Schoenflies, is a notation primarily used to specify point groups in three dimensions. Because a point group alone is completely adequate to describe the symmetry of a molecule, the notation is often sufficient and commonly used for spectroscopy. However, in crystallography, there is additional translational symmetry, and point groups are not enough to describe the full symmetry of crystals, so the full space group is usually used instead. The naming of full space groups usually follows another common convention, the Hermann–Mauguin notation, also known as the international notation. Although Schoenflies notation without superscripts is a pure point group notation, optionally, superscripts can be added to further specify individual space groups. However, for space groups, the connection to the underlying symmetry elements is much more clear in Hermann–Mauguin notation, so the latter notation is usually preferred for space groups. Symmetry elements Symmetry elements are denoted by i for centers of inversion, C for proper rotation axes, σ for mirror planes, and S for improper rotation axes (rotation-reflection axes). C and S are usually followed by a subscript number (abstractly denoted n) denoting the order of rotation possible. By convention, the axis of proper rotation of greatest order is defined as the principal axis. All other symmetry elements are described in relation to it. A vertical mirror plane (containing the principal axis) is denoted σv; a horizontal mirror plane (perpendicular to the principal axis) is denoted σh. Point groups In three dimensions, there are an infinite number of point groups, but all of them can be classified by several families. Cn (for cyclic) has an n-fold rotation axis. Cnh is Cn with the addition of a mirror (reflection) plane perpendicular to the axis of rotation (horizontal plane). Cnv is Cn with the addition of n mirror planes containing the axis of rotation (vertical planes). Cs denotes a group with only a mirror plane (s for Spiegel, German for mirror) and no other symmetry elements. Sn (for Spiegel, German for mirror) contains only an n-fold rotation-reflection axis. The index n should be even, because when it is odd an n-fold rotation-reflection axis is equivalent to a combination of an n-fold rotation axis and a perpendicular plane, hence Sn = Cnh for odd n. Cni has only a rotoinversion axis. This notation is rarely used because any rotoinversion axis can be expressed instead as a rotation-reflection axis: For odd n, Cni = S2n and C2ni = Sn = Cnh, and for even n, C2ni = S2n. Only the notation Ci (meaning C1i) is commonly used, and some sources write C3i, C5i etc. Dn (for dihedral, or two-sided) has an n-fold rotation axis plus n twofold axes perpendicular to that axis. Dnh has, in addition, a horizontal mirror plane and, as a consequence, also n vertical mirror planes each containing the n-fold axis and one of the twofold axes. Dnd has, in addition to the elements of Dn, n vertical mirror planes which pass between twofold axes (diagonal planes). T (the chiral tetrahedral group) has the rotation axes of a tetrahedron (three 2-fold axes and four 3-fold axes). Td includes diagonal mirror planes (each diagonal plane contains only one twofold axis and passes between two other twofold axes, as in D2d). This addition of diagonal planes results in three improper rotation operations S4. Th includes three horizontal mirror planes.
Each plane contains two twofold axes and is perpendicular to the third twofold axis, which results in an inversion center i. O (the chiral octahedral group) has the rotation axes of an octahedron or cube (three 4-fold axes, four 3-fold axes, and six diagonal 2-fold axes). Oh includes horizontal mirror planes and, as a consequence, vertical mirror planes. It also contains an inversion center and improper rotation operations. I (the chiral icosahedral group) indicates that the group has the rotation axes of an icosahedron or dodecahedron (six 5-fold axes, ten 3-fold axes, and 15 2-fold axes). Ih includes horizontal mirror planes and also contains an inversion center and improper rotation operations. All groups that do not contain more than one higher-order axis (order 3 or more) can be arranged as shown in a table below; symbols in red are rarely used. In crystallography, due to the crystallographic restriction theorem, n is restricted to the values of 1, 2, 3, 4, or 6. The noncrystallographic groups are shown with grayed backgrounds. D4d and D6d are also forbidden because they contain improper rotations with n = 8 and 12 respectively. The 27 point groups in the table plus T, Td, Th, O and Oh constitute 32 crystallographic point groups. Groups with n = ∞ are called limit groups or Curie groups. There are two more limit groups, not listed in the table: K (for Kugel, German for ball, sphere), the group of all rotations in 3-dimensional space; and Kh, the group of all rotations and reflections. In mathematics and theoretical physics they are known respectively as the special orthogonal group and the orthogonal group in three-dimensional space, with the symbols SO(3) and O(3). Space groups The space groups with a given point group are numbered 1, 2, 3, ... (in the same order as their international number) and this number is added as a superscript to the Schönflies symbol for the corresponding point group. For example, space groups numbers 3 to 5, whose point group is C2, have Schönflies symbols C2¹, C2², C2³. While in the case of point groups the Schönflies symbol defines the symmetry elements of the group unambiguously, the additional superscript for a space group carries no information about the translational symmetry of the space group (lattice centering, translational components of axes and planes); hence one needs to refer to special tables containing the correspondence between the Schönflies and Hermann–Mauguin notations. Such a table is given in the List of space groups page. See also Crystallographic point group Point groups in three dimensions List of spherical symmetry groups References Flurry, R. L., Symmetry Groups: Theory and Chemical Applications. Prentice-Hall, 1980. LCCN 79-18729. Cotton, F. A., Chemical Applications of Group Theory. John Wiley & Sons: New York, 1990. Harris, D., Bertolucci, M., Symmetry and Spectroscopy. New York, Dover Publications, 1989. External links Symmetry @ Otterbein Symmetry Spectroscopy Crystallography
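The rotoinversion equivalences quoted above (for odd n, Cni = S2n and C2ni = Sn = Cnh; for even n, C2ni = S2n) are mechanical enough to capture in a few lines of code. The following sketch is a hypothetical helper written for this article, not an existing library API; it rewrites a Cni symbol as the equivalent rotation-reflection or Cnh symbol.

```python
def normalize_rotoinversion(m: int) -> str:
    """Rewrite the rotoinversion group C_{m}i as an equivalent Schoenflies symbol,
    using the identities quoted in the text:
      odd m:          C_{m}i  = S_{2m}
      m = 2n, n odd:  C_{2n}i = S_n = C_{n}h
      m = 2n, n even: C_{2n}i = S_{2n}
    """
    if m % 2 == 1:         # odd order: C_{m}i = S_{2m}
        return f"S{2 * m}"
    n = m // 2
    if n % 2 == 1:         # m = 2n with n odd: equivalent to C_{n}h (= S_n)
        return f"C{n}h"
    return f"S{m}"         # m = 2n with n even: S_{2n} = S_m

for m in (1, 2, 3, 4, 6):
    print(f"C{m}i -> {normalize_rotoinversion(m)}")
# C1i -> S2 (the familiar Ci), C2i -> C1h (= Cs), C3i -> S6, C4i -> S4, C6i -> C3h
```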
Schoenflies notation
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,437
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Materials science", "Crystallography", "Condensed matter physics", "Geometry", "Spectroscopy", "Symmetry" ]
961,108
https://en.wikipedia.org/wiki/Messier%2025
Messier 25, also known as IC 4725, is an open cluster of stars in the southern constellation of Sagittarius. The first recorded observation of this cluster was made by Philippe Loys de Chéseaux in 1745 and it was included in Charles Messier's list of nebulous objects in 1764. The cluster is located near some obscuring features, with a dark lane passing near the center. M25 is at a distance of about away from Earth and is 67.6 million years old. The spatial dimension of this cluster is about across. It has an estimated mass of , of which about 24% is interstellar matter. A Delta Cephei type variable star designated U Sagittarii is a member of this cluster, as are two red giants, one of which is a binary system. New research indicates M25 may constitute a ternary star cluster together with NGC 6716 and Collinder 394. See also List of Messier objects References External links Messier 25, The Messier Catalog. Students for the Exploration and Development of Space (SEDS) Open Cluster M25. Astronomy Picture of the Day 2009 August 31 Messier 025 Orion–Cygnus Arm Messier 025 025 IC objects ?
Messier 25
[ "Astronomy" ]
255
[ "Sagittarius (constellation)", "Constellations" ]
961,138
https://en.wikipedia.org/wiki/Messier%2026
Messier 26, also known as NGC 6694, is an open cluster of stars in the southern constellation of Scutum. It was discovered by Charles Messier in 1764. This 8th-magnitude cluster is a challenge to find with typical binoculars; in ideal skies, however, it can be found with any modern minimum-aperture device. It is south-southwest of the open cluster Messier 11 and is across. About 25 stars are visible in a telescope with a aperture. M26 spans a linear size of 22 light years across with a tidal radius of , and is at a distance of 5,160 light years from the Earth. The brightest star is of magnitude 11 and the age of this cluster has been calculated to be 85.3 million years. It includes one known spectroscopic binary system. An interesting feature of M26 is a region of low star density near the nucleus. One hypothesis was that it is caused by an obscuring cloud of interstellar matter between us and the cluster, but a paper by James Cuffey suggested that this is not possible and that it really is a "shell of low stellar space density". In 2015, Michael Merrifield of the University of Nottingham said that there is, as yet, no clear explanation for the phenomenon. Gallery See also List of Messier objects NGC 1193 Footnotes and references Footnotes References External links Messier 26, SEDS Messier pages Messier 026 Carina–Sagittarius Arm Messier 026 026 Messier 026 17640620 Discoveries by Charles Messier
Messier 26
[ "Astronomy" ]
316
[ "Scutum (constellation)", "Constellations" ]
961,157
https://en.wikipedia.org/wiki/Messier%2028
Messier 28 or M28, also known as NGC 6626, is a globular cluster of stars in the center-west of Sagittarius. It was discovered by French astronomer Charles Messier in 1764. He briefly described it as a "nebula containing no star... round, seen with difficulty in 3-foot telescope; Diam 2′." In the sky it is less than a degree to the northwest of the 3rd magnitude star Kaus Borealis (Lambda Sagittarii). This cluster is faintly visible as a hazy patch with a pair of binoculars and can be readily found in a small telescope with an aperture, showing as a nebulous feature spanning 11.2 arcminutes. Using an aperture of , the core becomes visible and a few distinct stars can be resolved along the periphery. Larger telescopes will provide greater resolution, one of revealing a dense 2′ core, with more density within. It is about 18,300 light-years away from Earth. It is about , and its metallicity (averaging −1.32, meaning more than 10 times less than our own star's), coherence, and preponderance of older stellar-evolution objects support its dating to very roughly 12 billion years old. 18 RR Lyrae type variable stars have been found within. It bore the first discovery of a millisecond pulsar in a globular cluster – PSR B1821–24. This was made using the Lovell Telescope at Jodrell Bank Observatory, England. A further 11 have since been detected in it with the telescope at Green Bank Observatory, West Virginia. As of 2011, this was the third-highest count for any cluster tied to the Milky Way, following Terzan 5 and 47 Tucanae. Gallery See also List of Messier objects References and footnotes References Footnotes External links Globular Cluster M28 @ SEDS Messier pages Messier 28, Galactic Globular Clusters Database page Messier 028 Messier 028 Messier 028 028 17640727 Discoveries by Charles Messier
Messier 28
[ "Astronomy" ]
426
[ "Sagittarius (constellation)", "Constellations" ]
961,172
https://en.wikipedia.org/wiki/Messier%2029
Messier 29 or M29, also known as NGC 6913 or the Cooling Tower Cluster, is a quite small, bright open cluster of stars just south of Gamma Cygni, the central bright star of the northerly constellation Cygnus. It was discovered by Charles Messier in 1764, and can be seen from Earth by using binoculars. M29 is well within the several-degrees-wide band of the arms and bulge of the Milky Way. It is at least many hundreds of light years short of the yardstick distance to the Galactic Center, as it is between 4,000 and 7,200 light years away. A 1998 popular work gives a value within this range. Data from Gaia EDR3 gives a parallactic distance of about 5,240 light years. The uncertainty is due to poorly known absorption of the cluster's light. Its extinction is largely from faint surrounding nebulosity and other foreground interstellar matter in this cross-section of the spiral arms (see Orion–Cygnus Arm, which is our local arm). According to the Sky Catalogue, M29 is included in the Cygnus OB1 association, and the radial velocity component of its three-dimensional motion, which by convention factors in the solar system's current trajectory, is one of approach at 28 km/s (thus noted as negative). Its age is estimated at 10 million years, as its five hottest stars are all giants of spectral class B0. Kepple and his associates give the apparent brightness of the brightest star as eighth magnitude in the mid-wavelength (and frequency) "visual" band. The cluster's absolute magnitude is estimated at −8.2, a luminosity of 160,000 solar luminosities. The linear diameter was estimated at only 11 light years. Its Trumpler class is III,3,p,n (as it is associated with nebulosity), although Götz gives, differently, II,3,m, and Kepple gives I,2,m,n. The Sky Catalogue lists it with 50 member stars; earlier Becvar estimated 20 members. North of 47 degrees north, the cluster is above the horizon for part or all of the day. It can be made out in binoculars in a good sky. In telescopes, lowest powers are best. The brightest of its stars form a "stubby dipper", per Mallas. The four brightest stars form a quadrilateral, and others form a small triangle just north of the northernmost of the four. It is often known as the "cooling tower" due to its resemblance to the hyperboloid-shaped structures. A few fainter stars are around them, but the cluster appears quite isolated, especially in smaller telescopes. In photographs, many faint Milky Way background stars appear. Messier 29 can be found quite easily as it is about 1.7 degrees south of Gamma or 37 Cygni (Sadr). Angularly close, and almost certainly nearby in space, is diffuse nebulosity. The especially hot binary Wolf–Rayet star WR 143 (WC4+Be) (HD 195177) can be found near this cluster. See also List of Messier objects References and footnotes External links Messier 29, SEDS Messier pages Messier 29 RGB Image Messier 29 LRGB image – 2 hrs total exposure Messier 029 Messier 029 029 Messier 029 Orion–Cygnus Arm 17640729 Discoveries by Charles Messier
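The jump from absolute magnitude −8.2 to roughly 160,000 solar luminosities follows from the standard magnitude-luminosity relation L/L☉ = 10^((M☉ − M)/2.5). The check below assumes a solar absolute visual magnitude of about 4.83, a standard value that the article itself does not state.

```python
# Absolute magnitude -> luminosity in solar units: L = 10 ** ((M_sun - M) / 2.5).
# Assumes a solar absolute visual magnitude of 4.83 (standard value, assumed here).
M_SUN_V = 4.83

def luminosity_solar(abs_mag: float) -> float:
    """Luminosity in solar luminosities for a given absolute visual magnitude."""
    return 10 ** ((M_SUN_V - abs_mag) / 2.5)

print(f"{luminosity_solar(-8.2):,.0f} L_sun")  # ~163,000, close to the quoted 160,000
```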
Messier 29
[ "Astronomy" ]
726
[ "Cygnus (constellation)", "Constellations" ]
961,184
https://en.wikipedia.org/wiki/Messier%2030
Messier 30 (also known as M30, NGC 7099, or the Jellyfish Cluster) is a globular cluster of stars in the southeast of the southern constellation of Capricornus, at about the declination of the Sun when the latter is at the December solstice. It was discovered by the French astronomer Charles Messier in 1764, who described it as a circular nebula without a star. In the New General Catalogue, compiled during the 1880s, it was described as a "remarkable globular, bright, large, slightly oval." It can be easily viewed with a pair of 10×50 binoculars, forming a patch of hazy light some 4 arcminutes wide that is slightly elongated along the east–west axis. With a larger instrument, individual stars can be resolved and the cluster will cover an angle of up to 12 arcminutes across, graduating into a compressed core about one arcminute wide that has further star density within. It is longest observable (opposite the Sun) in the first half of August. M30 is centered 27,100 light-years away from Earth with a roughly 2.5% margin of error, and is about 93 light-years across. The estimated age is roughly 12.9 billion years and it forms a mass of about 160,000 times the mass of the Sun. The cluster is following a retrograde orbit (against the general flow) through the inner galactic halo, suggesting that it was acquired from a satellite galaxy rather than forming within the Milky Way. It is in this epoch from the center of the galaxy, compared to an estimated for the Sun. The cluster has passed through a dynamic process called core collapse and now has a concentration of mass at its core of about a million times the Sun's mass per cubic parsec. This makes it one of the highest density regions in the Milky Way galaxy. Stars in such close proximity will experience a high rate of interactions that can create binary star systems, as well as a type of star called a blue straggler that is formed by mass transfer. A process of mass segregation may have caused the central region to gain a greater proportion of higher mass stars, creating a color gradient with increasing blueness toward the middle of the cluster. See also List of Messier objects References and footnotes Notes External links Globular Cluster M30 @ SEDS Messier pages Messier 30, Galactic Globular Clusters Database page Messier 030 Messier 030 030 Messier 030 17640803 August Discoveries by Charles Messier
Messier 30
[ "Astronomy" ]
526
[ "Capricornus", "Constellations" ]
961,217
https://en.wikipedia.org/wiki/Messier%2032
Messier 32 (also known as M32 and NGC 221) is a dwarf "early-type" galaxy about from the Solar System, appearing in the constellation Andromeda. M32 is a satellite galaxy of the Andromeda Galaxy (M31) and was discovered by Guillaume Le Gentil in 1749. The galaxy is a prototype of the relatively rare compact elliptical (cE) class. Half the stars concentrate within the inner core, which has an effective radius of . Densities in the central stellar cusp increase steeply, exceeding 3×10⁷ (that is, 30 million) pc⁻³ (that is, per parsec cubed) at the smallest sub-radii resolved by HST, and the half-light radius of this central star cluster is around . Like more ordinary elliptical galaxies, M32 contains mostly old faint red and yellow stars with practically no dust or gas and consequently no current star formation. It does, however, show hints of star formation in the relatively recent past. Origins The structure and stellar content of M32 are difficult to explain by traditional galaxy formation models. Theoretical arguments and some simulations suggest a scenario in which the strong tidal field of M31 can transform a spiral galaxy or a lenticular galaxy into a compact elliptical. As a small disk galaxy falls into the central parts of M31, much of its outer layers will be stripped away. The central bulge of the small galaxy is much less affected and retains its morphology. Gravitational tidal effects may also drive gas inward and trigger a star burst in the core of the small galaxy, resulting in the high density of M32 observed today. There is evidence that M32 has a faint outer disk, and as such is not a typical elliptical galaxy. Newer simulations find that an off-centre impact by M32 around 800 million years ago explains the present-day warp in M31's disk. However this feature only occurs during the first orbital passage, whereas it takes many orbits for tides to transform a normal dwarf into M32. The observed colours and stellar populations of M32's outskirts do not match the stellar halo of M31, indicating that tidal losses from M32 are not their source. Taken together, these circumstances may suggest that M32 already began in its compact state, and has retained most of its own stars. At least one similar cE galaxy has been discovered in isolation, without any massive companion to thresh it. Another hypothesis is that M32 is in fact the largest remnant of a former spiral galaxy, M32p, which was then the third largest member of the Local Group. According to this simulation, M31 (Andromeda) and M32p merged about two billion years ago, which could explain both the unusual makeup of the current M31 stellar halo, and the structure and content of M32. Distance measurements At least two techniques have been used to measure distances to M32. The infrared surface brightness fluctuations distance measurement technique estimates distances to spiral galaxies based on the graininess of the appearance of their bulges. The distance measured to M32 using this technique is 2.46 ± 0.09 million light-years (755 ± 28 kpc). However, M32 is close enough that the tip of the red giant branch (TRGB) method may be used to estimate its distance. The estimated distance to M32 using this technique is 2.51 ± 0.13 million light-years (770 ± 40 kpc). For several additional reasons, M32 is thought to be in the foreground of M31, rather than behind. Its stars and planetary nebulae do not appear obscured or reddened by foreground gas or dust.
Gravitational microlensing of M31 by a star in M32 was observed at the end of November 2000 in one event (with peak on 2 December 2000). Black hole M32 contains a supermassive black hole. Its mass has been estimated to lie between 1.5 and 5 million solar masses. A centrally located faint radio and X-ray source (now named M32* in analogy to Sgr A*) is attributed to gas accretion onto the black hole. See also List of Messier objects List of Andromeda's satellite galaxies List of galaxies References External links "StarDate: M32 Fact Sheet" "SEDS: Elliptical Galaxy M32" Messier 032 Messier 032 Messier 032 Messier 032 Messier 032 032 Messier 032 00452 002555 168 002555 17491029 Discoveries by Guillaume Le Gentil
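The two independent distance estimates quoted under Distance measurements (2.46 ± 0.09 and 2.51 ± 0.13 million light-years) can be combined with a textbook inverse-variance weighted mean. The snippet below applies that standard formula to the article's numbers; the combined value is an illustration added here, not a published result.

```python
# Inverse-variance weighted mean of two distance estimates (million light-years).
measurements = [(2.46, 0.09), (2.51, 0.13)]  # (value, 1-sigma), from the article

weights = [1 / sigma**2 for _, sigma in measurements]
mean = sum(w * v for w, (v, _) in zip(weights, measurements)) / sum(weights)
sigma = (1 / sum(weights)) ** 0.5

print(f"combined distance ≈ {mean:.2f} ± {sigma:.2f} million light-years")
# -> roughly 2.48 ± 0.07, consistent with both measurements
```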
Messier 32
[ "Astronomy" ]
925
[ "Andromeda (constellation)", "Constellations" ]
961,237
https://en.wikipedia.org/wiki/TransUnion
TransUnion LLC is an American consumer credit reporting agency. TransUnion collects and aggregates information on over one billion individual consumers in over thirty countries, including "200 million files profiling nearly every credit-active consumer in the United States". Its customers include over 65,000 businesses. Based in Chicago, Illinois, TransUnion's 2014 revenue was US$1.3 billion. It is the smallest of the three largest credit agencies, along with Experian and Equifax (known as the "Big Three"). TransUnion also markets credit reports and other credit and fraud-protection products directly to consumers. Like all credit reporting agencies, the company is required by U.S. law to provide consumers with one free credit report every year. Additionally, a growing segment of TransUnion's business is its business offerings that use advanced big data, particularly its TLOxp product. History TransUnion was originally formed in 1968 as a holding company for Union Tank Car Company, making TransUnion a descendant of Standard Oil through Union Tank Car Company. The following year, it acquired the Credit Bureau of Cook County, which possessed and maintained 3.6 million credit accounts. In 1981, a Chicago-based holding company, The Marmon Group, acquired TransUnion for approximately $688 million. In 2010, Goldman Sachs Capital Partners and Advent International acquired it from Madison Dearborn Partners. In 2014, TransUnion acquired Hank Asher's data company TLO. On June 25, 2015, TransUnion became a publicly traded company for the first time, trading under the symbol TRU. TransUnion eventually began to offer products and services for both businesses and consumers. For businesses, TransUnion updated its traditional credit score offering to include trended data that helps predict consumer repayment and debt behavior. This product, referred to as CreditVision, launched in October 2013. Its SmartMove™ service facilitates credit and background checks for landlords. The service also provides credit and background checks for partner companies, such as RentSpree. In September 2013, the company acquired eScan Data Systems of Austin, Texas, to provide post-service eligibility determination support to hospitals and healthcare systems. The technology was integrated into TransUnion's ClearIQ platform, which tracks patients' demographic and insurance-related information to support benefit verification. In November 2013, TransUnion acquired TLO LLC, a company that leverages data in support of its investigative and risk management tools. Its TLOxp technology aggregates data sets and uses a proprietary algorithm to uncover relationships between data. TLOxp also allows licensed investigators and law enforcement professionals to access personally identifiable information from credit header data. In 2014, a TransUnion analysis found that reporting rental payment information to credit bureaus can positively affect credit scores. As a result, TransUnion initiated a service called ResidentCredit, making it easy for property owners to report data about their tenants on a monthly basis. These reports include the amount each tenant pays, the timeliness of their last payment, and any remaining balance the tenant currently owes. As a result, some companies have started reporting rent payment information to TransUnion. In 2015, TransUnion acquired Trustev, a digital verification company specializing in online fraud, for $21 million minus debts.
In 2017, TransUnion acquired FactorTrust, a consumer reporting agency specializing in alternative credit data. In mid-April 2018, TransUnion announced it intended to buy UK-based CallCredit Information Group for $1.4 billion, subject to regulatory approval. In December 2021, TransUnion completed the acquisitions of Neustar, initially announced in September 2021 for $3.1 billion, and Sontiq, which included IdentityForce, initially announced in October 2021 for $638 million. In February 2023, TransUnion announced it was rebranding its "thousands of existing B2B products into seven business lines." These include: TruAudience, TruValidate, TruContact (all based on former offerings from Neustar), TruVision, TruIQ, TruEmpower, and TruLookup. Legal and regulatory issues In 2003, Judy Thomas of Klamath Falls, Oregon, was awarded $5.3 million in a successful lawsuit against TransUnion. The award was made on the grounds that it took her six years to get TransUnion to remove incorrect information from her credit report. In 2006, after spending two years trying to correct erroneous credit information that resulted from being a victim of identity theft, a fraud victim named Sloan filed suit against all three of the US's largest credit agencies. TransUnion and Experian settled out of court for an undisclosed amount. In Sloan v. Equifax, a jury awarded Sloan $351,000. "She wrote letters. She called them. They saw the problem. They just didn't fix it," said her attorney, A. Hugo Blankingship III. TransUnion has also been criticized for concealing charges. Many users complained of not being aware of a $17.95/month charge for holding a TransUnion account. In March 2015, following a settlement with the New York Attorney-General, TransUnion, along with the other credit reporting companies Experian and Equifax, agreed to help consumers with errors and red flags on credit reports. Under the new settlement, credit-reporting firms are required to use trained employees to respond when a consumer flags a mistake on their file. These employees are responsible for communicating with the lender and resolving the dispute. In January 2017, TransUnion was fined $5.5 million and ordered to pay $17.6 million in restitution, along with Equifax, by the Consumer Financial Protection Bureau (CFPB). The federal agency fined the companies "for deceiving consumers about the usefulness and actual cost of credit scores they sold to consumers". The CFPB also said the companies "lured consumers into costly recurring payments for credit-related products with false promises". Credit bureaus had the most complaints of all companies filed with the CFPB by consumers in 2018, with 34% of all complaints that year directed at TransUnion, Equifax, and Experian. In June 2017, a California jury returned a $60 million verdict against TransUnion, the largest Fair Credit Reporting Act (FCRA) verdict in history. The San Francisco federal court jury awarded $60 million in damages to consumers who were falsely reported on a government list of terrorists and other security threats. The plaintiffs' team of attorneys at Francis & Mailman, P.C. partnered with another California-based firm in the class action. Following up on this, in April 2022, the Consumer Financial Protection Bureau (CFPB) said TransUnion is "incapable of operating its businesses lawfully".
Security issues On 13 October 2017, the website for TransUnion's Central American division was reported to have been redirecting visitors to websites that attempted drive-by downloads of malware disguised as Adobe Flash updates. The attack had been performed by hijacking third-party analytics JavaScript from Digital River brand FireClick. On 17 March 2022, TransUnion South Africa disclosed that hackers had breached one of its servers and allegedly stolen the data of 54 million customers, demanding a ransom not to release it; the group N4ughtysecTU claimed responsibility. See also Equifax Experian TransUnion Canada TransUnion CIBIL References External links Financial services companies of the United States Companies listed on the New York Stock Exchange Companies based in Chicago American companies established in 1968 Financial services companies established in 1968 2015 initial public offerings Credit scoring Data collection 1968 establishments in Illinois Data companies Data brokers
TransUnion
[ "Technology" ]
1,607
[ "Data collection", "Data" ]
961,260
https://en.wikipedia.org/wiki/Messier%2018
Messier 18 or M18, also designated NGC 6613 and sometimes known as the Black Swan Cluster, is an open cluster of stars in the constellation Sagittarius. It was discovered by Charles Messier in 1764 and included in his list of comet-like objects. From the perspective of Earth, M18 is situated between the Omega Nebula (M17) and the Small Sagittarius Star Cloud (M24). This is a sparse cluster with a linear diameter of 8.04 pc and a tidal radius of 7.3 pc; it is centrally concentrated, with a core radius of 0.012 pc. It has a Trumpler class of . The cluster is 33 million years old with an estimated mass of . It has one definite Be star and 29 B-type stars in total. There are three supergiant stars, all of class A or earlier. The brightest component (lucida), designated HD 168352, is a B-type giant star with a class of B2 III and a visual magnitude of 8.65. Messier 18 is 1,296 pc from the Earth and 6,830 pc from the Galactic Center. It is orbiting the Milky Way core with a period of 186.5 million years and an eccentricity of 0.02. This carries it to as close as 6.5 kpc to, and as far as 6.8 kpc from, the galactic core. It passes vertically through the galactic plane once every 27.4 million years, ranging no more than 80 pc above or below. As of January 2022, Messier 18 is one of the few remaining objects within the Messier Catalog to not have been photographed by the Hubble Space Telescope. Gallery See also List of Messier objects References External links Messier 18, SEDS Messier pages Messier 018 Carina–Sagittarius Arm Messier 018 018 Messier 018 17640603 Discoveries by Charles Messier
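The quoted orbital eccentricity is consistent with the peri- and apogalactic distances given in the same paragraph, via the standard relation e = (r_apo − r_peri) / (r_apo + r_peri). The check below simply plugs in the article's own 6.5 and 6.8 kpc figures.

```python
# Orbital eccentricity from peri- and apogalactic distances:
# e = (r_apo - r_peri) / (r_apo + r_peri).
r_peri, r_apo = 6.5, 6.8  # kpc, from the article
e = (r_apo - r_peri) / (r_apo + r_peri)
print(f"e ≈ {e:.2f}")  # ≈ 0.02, matching the quoted eccentricity
```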
Messier 18
[ "Astronomy" ]
400
[ "Sagittarius (constellation)", "Constellations" ]
961,291
https://en.wikipedia.org/wiki/Glycocalyx
The glycocalyx (plural: glycocalyces or glycocalyxes), also known as the pericellular matrix and cell coat, is a layer of glycoproteins and glycolipids which surround the cell membranes of bacteria, epithelial cells, and other cells. Animal epithelial cells have a fuzz-like coating on the external surface of their plasma membranes. This viscous coating is the glycocalyx that consists of several carbohydrate moieties of membrane glycolipids and glycoproteins, which serve as backbone molecules for support. Generally, the carbohydrate portion of the glycolipids found on the surface of plasma membranes helps these molecules contribute to cell–cell recognition, communication, and intercellular adhesion. The glycocalyx is a type of identifier that the body uses to distinguish between its own healthy cells and transplanted tissues, diseased cells, or invading organisms. Included in the glycocalyx are cell-adhesion molecules that enable cells to adhere to each other and guide the movement of cells during embryonic development. The glycocalyx plays a major role in regulation of endothelial vascular tissue, including the modulation of red blood cell volume in capillaries. The term was initially applied to the polysaccharide matrix coating epithelial cells, but its functions have been discovered to go well beyond that. In vascular endothelial tissue The glycocalyx is located on the apical surface of vascular endothelial cells which line the lumen. When vessels are stained with cationic dyes such as Alcian blue stain, transmission electron microscopy shows a small, irregularly shaped layer extending approximately 50–100 nm into the lumen of a blood vessel. Another study used osmium tetroxide staining during freeze substitution, and showed that the endothelial glycocalyx could be up to 11 μm thick. It is present throughout a diverse range of microvascular beds (capillaries) and macrovessels (arteries and veins). The glycocalyx also consists of a wide range of enzymes and proteins that regulate leukocyte and thrombocyte adherence, since its principal role in the vasculature is to maintain plasma and vessel-wall homeostasis. These enzymes and proteins include: Endothelial nitric oxide synthase (endothelial NOS) Extracellular superoxide dismutase (SOD3) Angiotensin converting enzyme Antithrombin-III Lipoprotein lipase Apolipoproteins Growth factors Chemokines The enzymes and proteins listed above serve to reinforce the glycocalyx barrier against vascular and other diseases. Another main function of the glycocalyx within the vascular endothelium is that it shields the vascular walls from direct exposure to blood flow, while serving as a vascular permeability barrier. Its protective functions are universal throughout the vascular system, but its relative importance varies depending on its exact location in the vasculature. In microvascular tissue, the glycocalyx serves as a vascular permeability barrier by inhibiting coagulation and leukocyte adhesion. Leukocytes must not stick to the vascular wall because they are important components of the immune system that must be able to travel to a specific region of the body when needed. In arterial vascular tissue, the glycocalyx also inhibits coagulation and leukocyte adhesion, but through mediation of shear stress-induced nitric oxide release. Another protective function throughout the cardiovascular system is its ability to affect the filtration of interstitial fluid from capillaries into the interstitial space.
The glycocalyx, which is located on the apical surface of endothelial cells, is composed of a negatively charged network of proteoglycans, glycoproteins, and glycolipids. Along the luminal surface of the vascular glycocalyx exists an empty layer that excludes red blood cells. Disruption and disease Because the glycocalyx is so prominent throughout the cardiovascular system, disruption to this structure has detrimental effects that can cause disease. Certain stimuli that cause atheroma may lead to enhanced sensitivity of vasculature. Initial dysfunction of the glycocalyx can be caused by hyperglycemia or oxidized low-density lipoproteins (LDLs), which then causes atherothrombosis. In microvasculature, dysfunction of the glycocalyx leads to internal fluid imbalance, and potentially edema. In arterial vascular tissue, glycocalyx disruption causes inflammation and atherothrombosis. Experiments have been performed to test precisely how the glycocalyx can be altered or damaged. One particular study used an isolated perfused heart model designed to facilitate detection of the state of the vascular barrier, and sought to cause insult-induced shedding of the glycocalyx to ascertain the cause-and-effect relationship between glycocalyx shedding and vascular permeability. Hypoxic perfusion of the glycocalyx was thought to be sufficient to initiate a degradation mechanism of the endothelial barrier. The study found that flow of oxygen throughout the blood vessels did not have to be completely absent (ischemic hypoxia), but that minimal levels of oxygen were sufficient to cause the degradation. Shedding of the glycocalyx can be triggered by inflammatory stimuli, such as tumor necrosis factor-alpha. Whatever the stimulus is, however, shedding of the glycocalyx leads to a drastic increase in vascular permeability. Permeable vascular walls are disadvantageous, since they would enable passage of some macromolecules or other harmful antigens. Other sources of damage to the endothelial glycocalyx have been observed in several pathological conditions such as inflammation, hyperglycemia, ischemia-reperfusion, viral infections and sepsis. Some key components of the glycocalyx, such as syndecans, heparan sulphate, chondroitin sulphate and hyaluronan, can be shed from the endothelial layer by enzymes. Hyaluronidase, heparanase/heparinase, matrix and membrane-type matrix metalloproteases, thrombin, plasmin and elastase are some examples of enzymes that can induce shedding of the glycocalyx, and these sheddases can therefore contribute to degradation of the glycocalyx layer in several pathological conditions. Research shows that plasma hyaluronidase activity is decreased in experimental as well as in clinical septic shock and is therefore not considered to be a sheddase in sepsis. Concomitantly, the endogenous plasma inhibition of hyaluronidase is increased and could serve as a protection against glycocalyx shedding. Fluid shear stress is also a potential problem if the glycocalyx is degraded for any reason. This type of frictional stress is caused by the movement of viscous fluid (i.e. blood) along the lumen boundary. Another similar experiment was carried out to determine what kinds of stimuli cause fluid shear stress. The initial measurement was taken with intravital microscopy, which showed a slow-moving plasma layer, the glycocalyx, about 1 μm thick. Light dye damaged the glycocalyx minimally, but that small change increased capillary hematocrit.
Thus, fluorescence light microscopy should not be used to study the glycocalyx because that particular method uses a dye. The glycocalyx can also be reduced in thickness when treated with oxidized LDL. These stimuli, along with many other factors, can cause damage to the delicate glycocalyx. These studies are evidence that the glycocalyx plays a crucial role in cardiovascular system health. In bacteria and nature A glycocalyx, literally meaning "sugar coat" (glykys = sweet, kalyx = husk), is a network of polysaccharides that project from cellular surfaces of bacteria, which classifies it as a universal surface component of a bacterial cell, found just outside the bacterial cell wall. A distinct, gelatinous glycocalyx is called a capsule, whereas an irregular, diffuse layer is called a slime layer. This coat is extremely hydrated and stains with ruthenium red. Bacteria growing in natural ecosystems, such as in soil, bovine intestines, or the human urinary tract, are surrounded by some sort of glycocalyx-enclosed microcolony. It serves to protect the bacterium from harmful phagocytes by creating capsules or allowing the bacterium to attach itself to inert surfaces, such as teeth or rocks, via biofilms (e.g. Streptococcus pneumoniae attaches itself to either lung cells, prokaryotes, or other bacteria which can fuse their glycocalices to envelop the colony). In the digestive tract A glycocalyx can also be found on the apical portion of microvilli within the digestive tract, especially within the small intestine. It creates a meshwork 0.3 μm thick and consists of acidic mucopolysaccharides and glycoproteins that project from the apical plasma membrane of epithelial absorptive cells. It provides additional surface for adsorption and includes enzymes secreted by the absorptive cells that are essential for the final steps of digestion of proteins and sugars. Other generalized functions Protection: Cushions the plasma membrane and protects it from chemical injury Immunity to infection: Enables the immune system to recognize and selectively attack foreign organisms Defense against cancer: Changes in the glycocalyx of cancerous cells enable the immune system to recognize and destroy them. Transplant compatibility: Forms the basis for compatibility of blood transfusions, tissue grafts, and organ transplants Cell adhesion: Binds cells together so that tissues do not fall apart Inflammation regulation: Glycocalyx coating on endothelial walls in blood vessels prevents leukocytes from rolling/binding in healthy states. Fertilization: Enables sperm to recognize and bind to eggs Embryonic development: Guides embryonic cells to their destinations in the body See also Perineuronal net References External links Smart carbohydrate chemistry as a means to understand glycocalyx biology – Video by the Lindhorst group at Beilstein TV Cell biology Polysaccharides Glycobiology Polymers
Glycocalyx
[ "Chemistry", "Materials_science", "Biology" ]
2,253
[ "Carbohydrates", "Cell biology", "Polymer chemistry", "Biochemistry", "Glycobiology", "Polymers", "Polysaccharides" ]
961,303
https://en.wikipedia.org/wiki/Messier%2034
Messier 34 (also known as M34, NGC 1039, or the Spiral Cluster) is a large and relatively nearby open cluster in Perseus. It was probably discovered by Giovanni Batista Hodierna before 1654 and included by Charles Messier in his catalog of comet-like objects in 1764. Messier described it as, "A cluster of small stars a little below the parallel of γ (Andromedae). In an ordinary telescope of 3 feet one can distinguish the stars." Based on the distance modulus of 8.38, it is about 1,500 light-years away. The cluster contains about 400 stars ranging from 0.12 to 1 solar mass. It spans about 35′ on the sky, which translates to a true radius of about 7.5 light-years at that distance. The cluster is just visible to the naked eye in very dark conditions, well away from city lights. It is possible to see it in binoculars when light pollution is low. The age of this cluster lies between the ages of the Pleiades open cluster at 100 million years and the Hyades open cluster at 800 million years. Specifically, comparison between observed stellar spectra and the values predicted by stellar evolutionary models suggests an age of 200–250 million years. This is roughly the age at which stars with half a solar mass enter the main sequence. By comparison, stars like the Sun enter the main sequence after 30 million years. The average proportion of elements with higher atomic numbers than helium is termed the metallicity by astronomers. This is expressed by the logarithm of the ratio of iron to hydrogen, compared to the same proportion in the Sun. For M34, the metallicity has a value of [Fe/H] = +0.07 ± 0.04. This is equivalent to a 17% higher proportion of iron compared to the Sun. Other elements show a similar abundance, save for nickel, which is underabundant. At least 19 members are white dwarfs. These are stellar remnants of progenitor stars of up to eight solar masses that have evolved through the main sequence and no longer sustain thermonuclear fusion to generate energy. Seventeen of these are of spectral type DA or DAZ, while one is a type DB and the last is a type DC. See also List of Messier objects References External links Messier 34, SEDS Messier pages Messier 34 – Image by Donald P. Waid Messier 034 Orion–Cygnus Arm
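A hedged aside on the arithmetic (this uses only the standard definition of the distance modulus, nothing specific to this cluster): the quoted distance follows from

\mu = m - M = 5\log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right)
\quad\Longrightarrow\quad
d = 10^{\,1 + \mu/5}\,\mathrm{pc} = 10^{\,1 + 8.38/5}\,\mathrm{pc} \approx 474\,\mathrm{pc} \approx 1{,}500\ \text{light-years}.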
Messier 34
[ "Astronomy" ]
515
[ "Perseus (constellation)", "Constellations" ]
961,330
https://en.wikipedia.org/wiki/S-layer
An S-layer (surface layer) is a part of the cell envelope found in almost all archaea, as well as in many types of bacteria. The S-layers of both archaea and bacteria consist of a monomolecular layer composed of only one (or, in a few cases, two) identical proteins or glycoproteins. This structure is built via self-assembly and encloses the whole cell surface. Thus, the S-layer protein can represent up to 15% of the whole protein content of a cell. S-layer proteins are poorly conserved or not conserved at all, and can differ markedly even between related species. Depending on species, the S-layers have a thickness between 5 and 25 nm and possess identical pores 2–8 nm in diameter. The term “S-layer” was first used in 1976. Its general use was accepted at the "First International Workshop on Crystalline Bacterial Cell Surface Layers, Vienna (Austria)" in 1984, and in 1987 S-layers were defined at the European Molecular Biology Organization Workshop on “Crystalline Bacterial Cell Surface Layers” in Vienna as “two-dimensional arrays of proteinaceous subunits forming surface layers on prokaryotic cells” (see "Preface", page VI in Sleytr et al. 1988). For a brief summary of the history of S-layer research, see "References". Location of S-layers In Gram-negative bacteria, S-layers are associated with the lipopolysaccharides via ionic, carbohydrate–carbohydrate, protein–carbohydrate interactions and/or protein–protein interactions. In Gram-positive bacteria whose S-layers often contain surface layer homology (SLH) domains, the binding occurs to the peptidoglycan and to a secondary cell wall polymer (e.g., teichoic acids). In the absence of SLH domains, the binding occurs via electrostatic interactions between the positively charged N-terminus of the S-layer protein and a negatively charged secondary cell wall polymer. In Lactobacilli the binding domain may be located at the C-terminus. In Gram-negative archaea, S-layer proteins possess a hydrophobic anchor that is associated with the underlying lipid membrane. In Gram-positive archaea, the S-layer proteins bind to pseudomurein or to methanochondroitin. Biological functions of the S-layer For many bacteria, the S-layer represents the outermost interaction zone with their respective environment. Its functions are very diverse and vary from species to species. In many archaeal species the S-layer is the only cell wall component and, therefore, is important for mechanical and osmotic stabilization. The S-layer is considered to be porous, which contributes to many of its functions. Additional functions associated with S-layers include: protection against bacteriophages, Bdellovibrios, and phagocytosis resistance against low pH barrier against high-molecular-weight substances (e.g., lytic enzymes) adhesion (for glycosylated S-layers) stabilization of the membrane (e.g. the SDBC in Deinococcus radiodurans) resistance against electromagnetic stress (e.g. ionizing radiations and high temperatures) provision of adhesion sites for exoproteins provision of a periplasmic compartment in Gram-positive prokaryotes together with the peptidoglycan and the cytoplasmic membranes anti-fouling properties biomineralization molecular sieve and barrier function A notable example of a bacterium which utilizes the biological functions of the S-layer is Clostridioides difficile. In C. difficile, the S-layer contributes to biofilm formation, host cell adhesion, and immunomodulation through cell signaling of the host response.
S-layer structure While ubiquitous among archaea and common in bacteria, the S-layers of diverse organisms have unique structural properties, including symmetry and unit cell dimensions, due to fundamental differences in their constituent building blocks. Sequence analyses of S-layer proteins have predicted that S-layer proteins have sizes of 40–200 kDa and may be composed of multiple domains, some of which may be structurally related. Since the first evidence of a macromolecular array on a bacterial cell wall fragment in the 1950s, S-layer structure has been investigated extensively by electron microscopy, and medium-resolution images of S-layers from these analyses have provided useful information on overall S-layer morphology. High-resolution structures of an archaeal S-layer protein (MA0829 from Methanosarcina acetivorans C2A) of the Methanosarcinales S-layer Tile Protein family and a bacterial S-layer protein (SbsB), from Geobacillus stearothermophilus PV72, have recently been determined by X-ray crystallography. In contrast with existing crystal structures, which have represented individual domains of S-layer proteins or minor proteinaceous components of the S-layer, the MA0829 and SbsB structures have allowed high-resolution models of the M. acetivorans and G. stearothermophilus S-layers to be proposed. These models exhibit hexagonal (p6) and oblique (p2) symmetry, for M. acetivorans and G. stearothermophilus S-layers, respectively, and their molecular features, including dimensions and porosity, are in good agreement with data from electron microscopy studies of archaeal and bacterial S-layers. In general, S-layers exhibit either oblique (p1, p2), square (p4) or hexagonal (p3, p6) lattice symmetry. Depending on the lattice symmetry, each morphological unit of the S-layer is composed of one (p1), two (p2), three (p3), four (p4), or six (p6) identical protein subunits. The center-to-center spacing (or unit cell dimensions) between these subunits ranges from 4 to 35 nm. Self-assembly In vivo assembly Assembly of a highly ordered coherent monomolecular S-layer array on a growing cell surface requires a continuous synthesis of a surplus of S-layer proteins and their translocation to sites of lattice growth. Moreover, information concerning this dynamic process was obtained from reconstitution experiments with isolated S-layer subunits on cell surfaces from which they had been removed (homologous reattachment) or on those of other organisms (heterologous reattachment). In vitro assembly S-layer proteins have the natural capability to self-assemble into regular monomolecular arrays in solution and at interfaces, such as solid supports, the air-water interface, lipid films, liposomes, emulsomes, nanocapsules, nanoparticles or microbeads. S-layer crystal growth follows a non-classical pathway in which a final refolding step of the S-layer protein is part of the lattice formation. Application Native S-layer proteins were used as early as three decades ago in the development of biosensors and ultrafiltration membranes. Subsequently, S-layer fusion proteins with specific functional domains (e.g. enzymes, ligands, mimotopes, antibodies or antigens) have made it possible to investigate completely new strategies for functionalizing surfaces in the life sciences, such as in the development of novel affinity matrices, mucosal vaccines, biocompatible surfaces, microcarriers and encapsulation systems, or in the material sciences as templates for biomineralization.
S-layer
[ "Chemistry" ]
1,606
[ "Membrane biology", "Molecular biology" ]
961,381
https://en.wikipedia.org/wiki/Arcade%20cabinet
An arcade cabinet, also known as an arcade machine or a coin-op cabinet or coin-op machine, is the housing within which an arcade game's electronic hardware resides. Most cabinets designed since the mid-1980s conform to the Japanese Amusement Machine Manufacturers Association (JAMMA) wiring standard. Some include additional connectors for features not included in the standard. Parts of an arcade cabinet Because arcade cabinets vary according to the games they were built for or contain, they may not possess all of the parts listed below: A display output, on which the game is displayed. They may display either raster or vector graphics, raster being most common. Standard resolution is between 262.5 and 315 vertical lines, depending on the refresh rate (usually between 50 and 60 Hz). Slower refresh rates allow for better vertical resolution. Monitors may be oriented horizontally or vertically, depending on the game. Some games use more than one monitor. Some newer cabinets have monitors that can display high-definition video. An audio output for sound effects and music, usually produced from a sound chip. Printed circuit boards (PCB) or arcade system boards, the actual hardware upon which the game runs, hidden within the cabinet. Some systems, such as the SNK Neo-Geo MVS, use a mainboard with game carts. Some mainboards may hold multiple game carts as well. A power supply to provide DC power to the arcade system boards and low voltage lighting for the coin slots and lighted buttons. A marquee, a sign above the monitor displaying the game's title. They are often brightly colored and backlit. A bezel, which is the border around the monitor. It may contain instructions or artwork. A control panel, a level surface near the monitor, upon which the game's controls are arranged. Control panels sometimes have playing instructions. Players often pile their coins or tokens on the control panels of upright and cocktail cabinets. Coin slots, coin returns and the coin box, which allow for the exchange of money or tokens. They are usually below the control panel. Very often, translucent red plastic buttons are placed in between the coin return and the coin slot. When they are pressed, a coin or token that has become jammed in the coin mechanism is returned to the player. See coin acceptor. In some arcades, the coin slot is replaced with a card reader that reads data from a game card bought from the arcade operator. The sides of the arcade cabinet are usually decorated with brightly colored stickers or paint, representing the gameplay of their particular game. Types of cabinets There are many types of arcade cabinets, some being custom-made for a particular game; however, the most common are the upright, the cocktail or table, and the sit-down. Upright cabinets Upright cabinets are the most common in North America, with their design heavily influenced by Computer Space and Pong. While the futuristic look of Computer Space's outer fiberglass cabinet did not carry forward, both games did establish the separation of the arcade machine into areas for the cathode-ray tube (CRT) display, the game controllers, and the computer logic. Atari also placed the controls at a height suitable for most adult players to use, but close enough to the console's base to also allow children to play. Further, the cabinets were more compact than traditional electro-mechanical games and did not use flashing lights or other means to attract players.
The side panels of Atari's Pong had a simple wood veneer finish, making it easier to market to non-arcade venues, such as hotels, country clubs, and cocktail bars. In the face of growing competition, Atari started to include cabinet art and attraction panels around 1973–1974, which soon became a standard practice. Arcade cabinets today are usually made of wood and metal, about six feet or two meters tall, with the control panel set perpendicular to the monitor at slightly above waist level. The monitor is housed inside the cabinet, at approximately eye level. The marquee is above it, and often overhangs it. In Computer Space, Pong and other early arcade games, the CRT was mounted 90 degrees from the ground, facing directly outward. Arcade game manufacturers began incorporating design principles from older electro-mechanical games by using CRTs mounted at a 45-degree angle, facing upward and away from the player but towards a one-way mirror that reflected the display to the player. Additional transparent overlays could be added between the mirror and the player's view to include additional images and colorize the black-and-white CRT output, as is the case in Boot Hill. Other games, like Warrior, used a one-sided mirror and included an illuminated background behind the mirror, so that the on-screen characters would appear to the players as if they were on that background. With the advent of color CRT displays, the need for the mirror was eliminated. The CRT was subsequently positioned at an angle permitting a typical adult player to look directly at the screen. Controls are most commonly a joystick for as many players as the game allows, plus action buttons and "player" buttons which serve the same purpose as the start button on console gamepads. Trackballs are sometimes used instead of joysticks, especially in games from the early 1980s. Spinners (knobs for turning, also called "paddle controls") are used to control game elements that move strictly horizontally or vertically, such as the paddles in Arkanoid and Pong. Games such as Robotron: 2084, Smash TV and Battlezone use double joysticks instead of action buttons. Some versions of the original Street Fighter had pressure-sensitive rubber pads instead of buttons. If an upright is housing a driving game, it may have a steering wheel and throttle pedal instead of a joystick and buttons. If the upright is housing a shooting game, it may have light guns attached to the front of the machine, via durable cables. Some arcade machines had the monitor placed at the bottom of the cabinet with a mirror mounted at around 45 degrees above the screen facing the player. This was done to save space, as a large CRT monitor would otherwise poke out the back of the cabinet. To correct for the mirrored image, some games had an option to flip the video output using a dip switch setting. Other genres of games such as Guitar Freaks feature controllers resembling musical instruments. Upright cabinet shape designs vary from the simplest symmetric perpendicular boxes as with Star Trek to complicated asymmetric forms. Games are typically for one or two players; however, games such as Gauntlet feature as many as four sets of controls. Sit-down or table cabinets Cocktail cabinets Cocktail cabinets are shaped like low, rectangular tables, with the controls usually set at either of the broad ends, or, though not as common, at the narrow ends, and the monitor inside the table, the screen facing upward. 
Two-player games housed in cocktail cabinets were usually alternating, with each player taking turns. The monitor reverses its orientation (under game software control) between turns, so the game display is properly oriented for each player. This requires special programming of the cocktail versions of the game (usually set by dip switches). The monitor's orientation is usually in player two's favor only in two-player games when it is player two's turn, and in player one's favor all other times. Simultaneous four-player games built as cocktails include Warlords, among others. In Japan, many games manufactured by Taito from the 1970s to the early 1980s have the cocktail versions prefixed by "T.T" in their titles (e.g. T.T Space Invaders). Cocktail cabinet versions were usually released alongside the upright version of the same game. They were relatively common in the 1980s, especially during the Golden Age of Arcade Games, but have since lost popularity. Their main advantage over upright cabinets was their smaller size, making them seem less obtrusive, although they required more floor space (more so with players seated at each end). The top of the table was covered with a piece of tempered glass, making it convenient to set drinks on (hence the name), and they were often seen in bars and pubs. Candy cabinets These Japanese-style sit-down cabinets are made of smooth plastic and, owing to the plastic's resemblance to hard candy, are often known as "candy cabinets" by both arcade enthusiasts and people in the industry. They are also generally easier to clean and move than upright cabinets, but usually just as heavy, as most have 29" screens, as opposed to 20"–25". They are positioned so that the player can sit down on a chair or stool and play for extended periods. SNK sold many Neo-Geo MVS cabinets in this configuration, though most arcade games made in Japan that only use a joystick and buttons will come in a sit-down cabinet variety. In Japanese arcades, this type of cabinet is generally more prevalent than the upright kind, and they are usually lined up in uniform-looking rows. A variant of this, often referred to as "versus-style" cabinets, is designed to look like two cabinets facing each other, with two monitors and separate controls allowing two players to fight each other without having to share the same monitor and control area. Some newer cabinets can emulate these "versus-style" cabinets through networking. Deluxe cabinets Deluxe cabinets (also known as DX cabinets in Japan) are most commonly used for games involving gambling, long stints of gaming (such as fighting games), or vehicles (such as flight simulators and racing games). These cabinets typically have equipment resembling the controls of a vehicle (though some are merely large cabinets with premium features such as a larger screen or seats). Driving games may have a bucket seat, foot pedals, a stick shift, and even an ignition, while flight simulators may have a flight yoke or joystick, and motorcycle games may have handlebars and a seat shaped like a full-size bike. Often, these cabinets are arranged side-by-side, to allow players to compete together. Sega is one of the biggest manufacturers of these kinds of cabinets, while Namco released Ridge Racer Full Scale, in which the player sits in a full-size Mazda MX-5 road car. Cockpit or environmental cabinets A cockpit or environmental cabinet is a type of deluxe cabinet where the player sits inside the cabinet itself, typically within an enclosure.
Examples of this can be seen on the Killer List of Videogames, including shooter games such as Star Fire, Missile Command, SubRoc-3D, Star Wars, Astron Belt, Sinistar and Discs of Tron as well as racing games such as Monaco GP, Turbo and Pole Position. A number of cockpit or environmental cabinets incorporate hydraulic motion simulation, as covered in the section below. Motion simulator cabinets A motion simulator cabinet is a very elaborate type of deluxe cabinet, including hydraulics which move the player according to the action on screen. In Japan, they are known as "taikan" games, with "taikan" meaning "body sensation" in Japanese. Sega is particularly known for these kinds of cabinets, with various types of sit-down and cockpit motion cabinets that it has been manufacturing since the 1980s. Namco was another major manufacturer of motion simulator cabinets. Motorbike racing games since Sega's Hang-On have had the player sit on and move a motorbike replica to control the in-game actions (like a motion controller). Driving games since Sega's Out Run have had hydraulic motion simulator sit-down cabinets, while hydraulic motion simulator cockpit cabinets have been used for space combat games such as Sega's Space Tactics (1981) and Galaxy Force, rail shooters such as Space Harrier and Thunder Blade, and combat flight simulators such as After Burner and G-LOC: Air Battle. One of the most sophisticated motion simulator cabinets is Sega's R360, which simulates the full 360-degree rotation of an aircraft. Mini or cabaret cabinets Mini or cabaret cabinets are similar forms of arcade cabinet but are intended for different markets. Modern mini cabinets are sold directly to consumers and are not intended for commercial operation. They are styled just like a standard upright cabinet, often with full art and marquees, but are scaled down to more easily fit in a home environment or be used by children. The older form of mini or cabaret cabinet was marketed for commercial use and is no longer made. They were often thinner as well as shorter, lacked side art, and had smaller marquees and monitors. This reduced their cost, reduced their weight, made them better suited to locations with less space, and also made them less conspicuous in darker environments. In place of side art, they were often clad in faux wood-grain vinyl. Countertop cabinets Countertop or bartop cabinets are usually only large enough to house their monitors and control panels. They are often used for trivia and gambling-type games and are usually found installed on bars or tables in pubs and restaurants. These cabinets often have touchscreen controls instead of traditional push-button controls. They are also fairly popular with home use, as they can be placed upon a table or countertop. Large-scale satellite machines Usually found in Japan, these machines have multiple screens interconnected to one system, sometimes with one big screen in the middle. These often also dispense different types of cards: either a smartcard used to save stats and progress, or trading cards used in the game. Conversion kit An arcade conversion kit, also known as a software kit, is special equipment that can be installed in an arcade machine to change the game it plays into another one.
For example, a conversion kit can be used to reconfigure an arcade machine designed to play one game so that it would play its sequel or update instead, such as from Street Fighter II: Champion Edition to Street Fighter II Turbo. Restoration Since arcade games are becoming increasingly popular as collectibles, an entire niche industry has sprung up focused on arcade cabinet restoration. There are many websites (both commercial and hobbyist) and newsgroups devoted to arcade cabinet restoration. They are full of tips and advice on restoring games to mint condition. Artwork Game cabinets were often used to host a variety of games. After a cabinet's initial game was removed and replaced with another, the cabinet's side art was frequently painted over (usually black) so that the cabinet would not misrepresent the game contained within. The side art was also painted over to hide damaged or faded artwork. Of course, hobbyists prefer cabinets with original artwork in the best possible condition. Since machines with good quality art are hard to find, one of the first tasks is stripping any old artwork or paint from the cabinet. This is done with conventional chemical paint strippers or by sanding (preferences vary). Artwork that has been painted over normally cannot be preserved and is removed along with the covering paint. New paint can be applied in any manner preferred (roller, brush, spray). Paint used is often just conventional paint with a finish matching the cabinet's original paint. Many games had artwork that was silkscreened directly on the cabinets. Others used large decals for the side art. Some manufacturers produce replication artwork for popular classic games, each varying in quality. This side art can be applied over the new paint after it has dried. These appliques can be very large and must be carefully applied to prevent bubbles or wrinkles from developing. Spraying the surface with a slightly soapy water solution allows the artwork to be quickly repositioned if wrinkles or bubbles develop, as in window tinting applications. Control panels, bezels, marquees Acquiring these pieces is harder than installing them. Many hobbyists trade these items via newsgroups or sites such as eBay (the same is true for side art). As with side art, some replication art shops also produce replication artwork for these pieces that is indistinguishable from the original. Some even surpass the originals in quality. Once these pieces are acquired, they usually snap right into place. If the controls are worn and need replacing, replacements for popular games can be easily obtained. Rarer game controls are harder to come by, but some shops stock replacement controls for classic arcade games. Some shops manufacture controls that are more robust than originals and fit a variety of machines. Installing them takes some experimentation for novices, but they are usually not too difficult to place. Monitors While both use the same basic type of tube, raster monitors are easier to service than vector monitors, as the support circuitry is very similar to that which is used in CRT televisions and computer monitors, and is typically easy to adjust for color and brightness. On the other hand, vector monitors can be challenging or very costly to service, and some can no longer be repaired due to certain parts having been discontinued years ago. Even finding a drop-in replacement for a vector monitor is a challenge today, as few were produced after their heyday in the early 1980s.
CRT replacement is possible, but transferring the deflection yoke and other parts from one tube neck to the other entails a long process of positioning and adjusting the parts on the CRT for proper performance, a job that may prove too challenging for the typical amateur arcade collector. On the other hand, it may be possible to retrofit other monitor technologies to emulate vector graphics. Some electronic components are stressed by the hot, cramped conditions inside a cabinet. Electrolytic capacitors dry out over time, and if a classic arcade cabinet is still using its original components, it may be near the end of its service life. A common step in refurbishing vintage electronics (of all types) is "recapping": replacing certain capacitors (and other parts) to restore, or ensure, the continued safe operation of the monitor and power supplies. Because of the capacitance and voltage ratings of these parts, the work can be dangerous if not done properly, and should only be attempted by experienced hobbyists or professionals. If a monitor is broken, it may be easier to just source a drop-in replacement through coin-op machine distributors or parts suppliers. Wiring If a cabinet needs rewiring, some wiring kits are available over the Internet. An experienced hobbyist can usually solve most wiring problems through trial and error. Many cabinets are converted to host a game other than the original. In these cases, if both games conform to the JAMMA standard, the process is simple. Other conversions can be more difficult, but some manufacturers such as Nintendo have produced kits to ease the conversion process (Nintendo manufactured kits to convert a cabinet from Classic wiring to VS. wiring). See also Arcade controller Arcade game Slot machine Video arcade Arcade system board JAMMA MAME References External links Arcade hardware Commercial machines Video game terminology
Arcade cabinet
[ "Physics", "Technology" ]
3,849
[ "Machines", "Computing terminology", "Video game terminology", "Commercial machines", "Physical systems" ]
961,450
https://en.wikipedia.org/wiki/196%20%28number%29
196 (one hundred [and] ninety-six) is the natural number following 195 and preceding 197. In mathematics 196 is a square number, the square of 14. As the square of a Catalan number, it counts the number of walks of length 8 in the positive quadrant of the integer grid that start and end at the origin, moving diagonally at each step. It is part of a sequence of square numbers beginning 0, 1, 4, 25, 196, ... in which each number is the smallest square that differs from the previous number by a triangular number. There are 196 one-sided heptominoes, the polyominoes made from 7 squares. Here, one-sided means that asymmetric polyominoes are considered to be distinct from their mirror images. A Lychrel number is a natural number which cannot form a palindromic number through the iterative process of repeatedly reversing its digits and adding the resulting numbers. 196 is the smallest number conjectured to be a Lychrel number in base 10; the process has been carried out for over a billion iterations without finding a palindrome, but no one has ever proven that it will never produce one. See also 196 (disambiguation) References Arithmetic dynamics Integers
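Since the paragraph above describes the reverse-and-add process procedurally, a short sketch may make it concrete. This Python fragment (the function names are illustrative only, not from any reference implementation) runs the iteration on 196 for a bounded number of steps:

def reverse_and_add(n):
    # One Lychrel step: add a number to its own digit reversal.
    return n + int(str(n)[::-1])

def is_palindrome(n):
    s = str(n)
    return s == s[::-1]

n = 196
for step in range(1, 1001):       # a modest, arbitrary bound
    n = reverse_and_add(n)
    if is_palindrome(n):
        print("palindrome after", step, "steps:", n)
        break
else:
    print("no palindrome within 1000 iterations")  # expected for 196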
196 (number)
[ "Mathematics" ]
265
[ "Recreational mathematics", "Mathematical objects", "Arithmetic dynamics", "Elementary mathematics", "Integers", "Numbers", "Number theory", "Dynamical systems" ]
961,463
https://en.wikipedia.org/wiki/151%20%28number%29
151 (one hundred [and] fifty-one) is a natural number. It follows 150 and precedes 152. In mathematics 151 is the 36th prime number; the previous prime is 149, with which it forms a twin prime pair. 151 is also a palindromic prime, a centered decagonal number, and a lucky number. 151 appears in the Padovan sequence, preceded by the terms 65, 86, 114; it is the sum of the first two of these. 151 is a unique prime in base 2, since it is the only prime with period 15 in base 2. There are 151 4-uniform tilings: tilings by regular polygons whose symmetry groups have four orbits of vertices. 151 is the number of uniform paracompact honeycombs with infinite facets and vertex figures in the third dimension, which stem from 23 different Coxeter groups. Split into two consecutive whole numbers, 151 is the sum of 75 and 76, both relevant numbers in Euclidean and hyperbolic 3-space: 75 is the total number of non-prismatic uniform polyhedra (incorporating regular polyhedra, semiregular polyhedra, and star polyhedra), as well as the number of uniform compound polyhedra, inclusive of seven types of families of prisms and antiprisms; 76 is the number of unique uniform compact hyperbolic honeycombs that are solely generated from Wythoff constructions. While 151 is the 36th indexed prime, its twin prime 149 has a reciprocal whose repeating decimal expansion has a digit sum of 666, which is the magic constant in a prime reciprocal magic square and is equal to the sum of the first 36 non-zero integers, or equivalently the 36th triangular number. Furthermore, the sum of the twin primes 149 and 151 is 300, which in turn is the 24th triangular number. References Integers
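The digit-sum claim about the reciprocal of 149 can be checked by simulating decimal long division. The following Python fragment is an illustrative sketch (the helper name is made up for this example, not taken from any reference); if the statement above holds, it should report a period of 148 and a digit sum of 666:

def repetend_digits(p):
    # Long division of 1 by p: for a prime p other than 2 or 5, the
    # remainders cycle, and the quotient digits form the repetend.
    digits, r = [], 1
    for _ in range(p - 1):   # the period of 1/p divides p - 1
        r *= 10
        digits.append(r // p)
        r %= p
        if r == 1:           # remainder returned to 1: cycle closed
            break
    return digits

d = repetend_digits(149)
print(len(d), sum(d))        # expected output: 148 666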
151 (number)
[ "Mathematics" ]
377
[ "Elementary mathematics", "Integers", "Mathematical objects", "Numbers" ]
961,552
https://en.wikipedia.org/wiki/Television%20channel%20frequencies
The following tables show the frequencies assigned to analog broadcast television channels in various regions of the world, along with the ITU letter designator for the transmission system used. The frequencies shown are for the channel limits and for the analog video and audio carriers. The channel itself usually occupies 6, 7 or 8 megahertz of bandwidth depending on the television transmission system in use. For example, North American channel 1 occupies the spectrum from 44 to 50 MHz. See Broadcast television systems for a table of signal characteristics, including bandwidth, by ITU letter designator. Analog television broadcasts have been phased out in most regions, having been replaced by digital television broadcasts. International normalization for analog TV systems International television broadcasting frequencies are divided into two parts of the spectrum: the very high frequency (VHF) band and the ultra high frequency (UHF) band. VHF Americas (most countries), South Korea, Taiwan, Myanmar, and the Philippines During World War II, the frequencies originally assigned as channels 13 to 18 were appropriated by the U.S. military, which still uses them to this day. It was also decided to move the allocation for FM radio from the 42-50 MHz band to a larger 88-106 MHz band (later extended to the current 88-108 MHz FM band). This required a reassignment of the VHF channels to the plan currently in use. Assignments since February 25, 1946 System M 525 lines (most countries in the Americas and Caribbean, South Korea, Taiwan and the Philippines) System N 625 lines (used in Argentina, Paraguay and Uruguay) FM channel 200, 87.9 MHz, overlaps TV 6. This is used only by K200AA. TV 6 analog audio can be heard on FM 87.75 on most broadcast radio receivers as well as on a European TV tuned to channel E4A or channel IC, but at lower volume than wideband FM broadcast stations, because of the lower deviation. Channel 1 audio is the same as European Channel E2 audio and the video is the same as European Channel E2A. Channel 2 video is the same as European Channel E3 video. Japan The frequency spacing for each channel is 6 MHz, as in the countries above, except between channels 7 and 8 (which overlap). Channels 1 through 3 were reallocated for the expansion of the FM band. United Kingdom, Ireland, and Hong Kong Ireland Channel A was never used terrestrially. The only System I Band I transmitter on Channel B was RTÉ One from the Maghera, Co. Clare transmitter during 1963–1999. Channel A was initially intended for use at Maghera but Channel B was used instead because of the risk of interference to (overspill) reception of BBC 405 line transmissions. It was moved to Channel E due to interference from distant transmitters on channel E3 and Italian channel IA via certain atmospheric conditions and other reasons. Channel C was used by a relay transmitter in Glanmire, Co. Cork. Channel B video is the same as Italian Channel IA video and Channel C audio is the same as Channel E4 audio. There are currently no Band I Channels used in Ireland (except on cable TV, and these have mostly been phased out for DOCSIS use) and no plans to resume using them. Most Irish Cable TV systems do not follow the above channel plan as their analogue (video) carriers are usually at multiples of 8 MHz (i.e. 176, 184, 192 MHz etc. in Band III) Western Europe; Greenland; and most countries in Asia, Africa, and Oceania Channels 1 and 1A were used for early experimental broadcasts and are no longer allocated.
Channels 15 and 16 are allocated for use in the African Broadcasting Area only. Channel 2A was only ever used in Austria, for the Sendeturm Jauerling, to avoid interference with neighboring Eastern European TV stations. On Channel 3 in Belgium, RTBF 1 broadcast from the Liège transmitter with 100 kW until the switchover to DVB-T. Channel 12 was reserved by the military in some countries (like Germany (West Germany only)), so only relay transmitters operated on this frequency. The frequency of the Channel 4A audio carrier is very close to that of the US Channel 6 audio carrier and overlaps the FM band in Europe. France Channel 1 used an earlier 441-line system and was discontinued in 1956. French overseas departments and territories and former French African colonies Italy Channels A through H are indicated in many European TVs as Channels 13–20. Channels B, C, D, H, H1, and H2 are identical to Channels E4, E4A, E5, E10, E11, and E12, respectively. The Channel A video carrier is the same as the Channel E2 audio carrier, and thus it used to be common that audio from a distant TV station on channel E2, received via Sporadic E, interfered with Channel A video and vice versa. The Channel C audio carrier's frequency falls into the FM band in Europe, and is also identical to American A6 channel audio. Eastern Europe, North Korea East Germany (former DDR) In its very early days DFF made some test transmissions using the D/K standard (6.5 MHz audio) before reverting (around 1957) to System B/G (5.5 MHz audio) but using some unique frequencies. From 1960 onwards (West) European standard channels were adopted. Morocco Australia Channels 0, 1, 2, 3, 4, 5 and 5A have not been used since the transition to digital television. With the introduction of digital TV in 2001, the last two channels were moved up by 1 MHz to allow a full 7 MHz for a new channel 9A, and channel 12 was added following the new channel 11 (some existing services were affected; examples include AMV11 in the Upper Murray region of Victoria, Australia and VTV-11 in Western Victoria, due to the introduction of digital television in regional Victoria at the time). New Zealand and Indonesia VHF analog TV ceased in New Zealand on 1 December 2013. Channels 10 and 11 were not added until the late 1980s (except in Indonesia). VHF analog TV channel 1A was only used in Indonesia. VHF is no longer used for television in Indonesia (some regions retained it until 2022), and only UHF is used for both analog and digital television, as in the UK. Angola, Botswana, Lesotho, and South Africa China Vietnam UHF Americas (most countries), South Korea, Taiwan, Burma (Myanmar) and the Philippines For frequencies used in the Americas (most countries), South Korea, Taiwan and the Philippines, refer to Pan-American television frequencies. Notes The frequencies used by UHF channels 70 through 83 were reallocated to the Land Mobile Radio System (Public Safety and Trunked Radio) and mobile phones in a CCIR worldwide convention in 1982, and thus were never used for digital TV but are highlighted in cyan and listed here for theoretical use. In certain metropolitan areas of the United States, Channels 14 through 20 have been allocated to Land Mobile Radio (LMR) use. Channels 52 through 69 in the United States have been reallocated now that conversion to digital TV was completed on June 12, 2009. These channels are highlighted in yellow. Channels 70 through 83 in the United States and Canada were re-allocated to AMPS cellular phone use in 1983.
On August 22, 2011, the United States' Federal Communications Commission announced a freeze on all future applications for broadcast stations requesting to use channel 51, to prevent adjacent-channel interference to the A-Block of the 700 MHz band. Later that year (on December 16, 2011), Industry Canada and the CRTC followed suit in placing a moratorium on future television stations using Channel 51 for broadcast use, to prevent adjacent-channel interference to the A-Block of the 700 MHz band. Not all countries listed use ATSC, which has a single VSB carrier wave. Other countries use COFDM modulation for DVB-T (Taiwan, Colombia, Panama) or ISDB-Tb (Philippines and Latin America), which has dozens of carriers within the channel. Burma (Myanmar) uses DVB-T2 with 8 MHz channel spacing on the Western European/Asian DTV frequencies, along with other Southeast Asian countries (except the Philippines). Under ISDB-Tb, DTV channel 14 uses 473.142857 MHz, but ATSC 3.0, DVB-T/DVB-T2, and DTMB use 473.0 MHz. Channel 37 is reserved for radio astronomy in the United States, Canada, Bermuda, Belize, and the Bahamas, thus there are no television stations assigned to it. Mexico also informally observes a ban on transmitters using this channel. Due to the FCC repack in the United States, all TV stations that had been broadcasting on channels 38 to 51 were required to move to channel 36 or below by July 3, 2020. As a result, channels 38-51 are highlighted in magenta. These frequencies would later be used by U.S. mobile carriers like T-Mobile on Band 71. Japan Frequency spacing for each channel in Japan is the same as in the countries listed above, but the channel numbers are 1 lower than in those countries; for example, channel 13 in Japan is on the same frequency as channel 14 in North and South America (most countries), South Korea, Taiwan, and the Philippines. Channels 13-62 are used for analog and digital TV broadcasting. United Kingdom, Ireland, Hong Kong, Macau, Falkland Islands and Southern Africa Channels 21 to 60 are used for DVB-T digital TV broadcasting in the UK, with the exception of Channel 38, which is used for programme making and special events. Channels 61 to 69 are used for 4G LTE. Channel 69 was not used for TV broadcasting in the UK; it was used by the MOD and, until 2012, for programme making and special events. PAL I was withdrawn from broadcasting use in the UK during 2012 and 2013. Western Europe, Greenland, most countries in Asia and Africa, and most of Oceania Former channels 14 to 18 were renumbered as 21 to 25 in 1961. Channels 70 to 81 are no longer allocated to television. They were only used in Italy. France, Eastern Europe, Former Soviet Union, French overseas territories and former French colonies in Africa, North Korea, Vietnam Some cable television providers in Vietnam may use System G. DVB-T/DVB-T2/DTMB/ISDB-T Digital television frequencies (Western Europe, Eastern Europe most countries Asia, Africa and Oceania) Australia Channels 52–69 have been progressively phased out since the introduction of digital television and the rationalisation of the spectrum China See also Asian television frequencies Australasian television frequencies Autoroll Broadcast television systems ATSC DVB-T DVB-T2 NTSC NTSC-J PAL RCA SECAM Digital television transition Knife-edge effect Multichannel television sound Pan-American television frequencies References Television technology
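The North American UHF channel arithmetic described above (6 MHz spacing, with channel 14 occupying 470–476 MHz and a 473.0 MHz center) can be expressed compactly. The following Python fragment is a hedged sketch of that arithmetic only, with a made-up function name, not a reference implementation of any broadcast standard:

def uhf_channel_band_mhz(channel):
    # Lower edge, center, and upper edge for North American UHF
    # channels: 6 MHz wide, with channel 14 starting at 470 MHz.
    if not 14 <= channel <= 69:
        raise ValueError("UHF channels run from 14 to 69")
    lower = 470.0 + 6.0 * (channel - 14)
    return lower, lower + 3.0, lower + 6.0

print(uhf_channel_band_mhz(14))  # (470.0, 473.0, 476.0)
print(uhf_channel_band_mhz(37))  # the radio astronomy channel: (608.0, 611.0, 614.0)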
Television channel frequencies
[ "Technology" ]
2,212
[ "Information and communications technology", "Television technology" ]
961,605
https://en.wikipedia.org/wiki/ACES%20%28computational%20chemistry%29
Aces II (Advanced Concepts in Electronic Structure Theory) is an ab initio computational chemistry package for performing high-level quantum chemical ab initio calculations. Its major strength is the accurate calculation of atomic and molecular energies as well as properties using many-body techniques such as many-body perturbation theory (MBPT) and, in particular, coupled cluster techniques to treat electron correlation. The development of ACES II began in early 1990 in the group of Professor Rodney J. Bartlett at the Quantum Theory Project (QTP) of the University of Florida in Gainesville. There, the need for more efficient codes had been realized and the idea of writing an entirely new program package emerged. During 1990 and 1991 John F. Stanton, Jürgen Gauß, and John D. Watts, all of them at that time postdoctoral researchers in the Bartlett group, supported by a few students, wrote the backbone of what is now known as the ACES II program package. The only parts which were not new coding efforts were the integral packages (the MOLECULE package of J. Almlöf, the VPROP package of P.R. Taylor, and the integral derivative package ABACUS of T. Helgaker, P. Jorgensen, J. Olsen, and H.J. Aa. Jensen). The latter was modified extensively for adaptation with Aces II, while the others remained very much in their original forms. Ultimately, two different versions of the program evolved. The first was maintained by the Bartlett group at the University of Florida, and the other (known as ACESII-MAB) was maintained by groups at the University of Texas, Universitaet Mainz in Germany, and ELTE in Budapest, Hungary. The latter is now called CFOUR. Aces III is a parallel implementation that was released in the fall of 2008. The effort led to the definition of a new architecture for scalable parallel software called the super instruction architecture. The design and creation of software is divided into two parts: The algorithms are coded in a domain specific language called super instruction assembly language or SIAL (pronounced "sail" for easy communication). The SIAL programs are executed by an MPMD parallel virtual machine called the super instruction processor or SIP. The ACES III program consists of 580,000 lines of SIAL code, of which 200,000 lines are comments, and 230,000 lines of C/C++ and Fortran, of which 62,000 lines are comments. The latest version of the program was released on August 1, 2014. See also Quantum chemistry computer programs References ACES II Florida-Version Homepage ACES II Mainz-Austin-Budapest-Version Homepage (outdated) ACES III Homepage (outdated) CFOUR Homepage Computational chemistry software University of Florida
ACES (computational chemistry)
[ "Physics", "Chemistry" ]
551
[ "Computational chemistry software", "Chemistry software", "Theoretical chemistry stubs", "Quantum mechanics", "Computational chemistry stubs", "Computational chemistry", "Physical chemistry stubs", "Quantum physics stubs" ]
961,611
https://en.wikipedia.org/wiki/Lip%20balm
Lip balm or lip salve is a wax-like substance applied to the lips to moisturize and relieve chapped or dry lips, angular cheilitis, stomatitis, or cold sores. Lip balm often contains beeswax or carnauba wax, camphor, cetyl alcohol, lanolin, paraffin, and petrolatum, among other ingredients. Some varieties contain dyes, flavor, fragrance, phenol, salicylic acid, and sunscreen. Overview The primary purpose of lip balm is to provide an occlusive layer on the lip surface to seal moisture in lips and protect them from external exposure. Dry air, cold temperatures, and wind all have a drying effect on skin by drawing moisture away from the body. Lips are particularly vulnerable because the skin is so thin, and thus they are often the first to present signs of dryness. Occlusive materials like waxes and petroleum jelly prevent moisture loss and maintain lip comfort, while flavorings, colorants, sunscreens, and various medicaments can provide additional, specific benefits. Lip balms are produced from beeswax and natural candelilla and carnauba waxes. Lip balm can be applied by a finger to the lips, or in a lipstick-style tube from which it can be applied directly. In 2022, the global lip balm market was valued at US$732.76 million. The market is predicted to grow at a rate of 9.28% within the next five years and is likely to reach US$1,247.74 million by 2027. Production Production of lip balm includes the following stages: Raw materials are checked for quality (cosmetic products must comply with strict safety standards) The ingredients are dosed, melted, and mixed (this stage involves special equipment) This mixture is treated in a vacuum to remove bubbles The mixture is crystallized for about 48 hours The mixture is then remelted The mixture is cut into pieces which are shaped as required The lip balm is packaged into a casing History Early lip balms As early as 40 BC, the Egyptians made lip care treatments from a mixture of beeswax, olive oil, and animal fat. United States In the 1800s, Lydia Maria Child recommended earwax as a treatment for cracked lips in her highly popular book The American Frugal Housewife. Child observed that, "Those who are troubled with cracked lips have found this earwax remedy successful when others have failed. It is one of those sorts of cures, which are very likely to be laughed at; but I know of its having produced very beneficial results." Lip balm was first formally invented in the 1880s by physician Charles Brown Fleet, though its origins may be traced to earwax. Fleet later named his lip balm product "ChapStick". In 1872, chemist Robert Chesebrough discovered and sampled a new petroleum jelly, initially describing it as a "natural, waxy ingredient, rich in minerals from deep within the earth" which could be used as a solution for skin repair. He first distributed his product under the name "Wonder Jelly" before soon changing it to "Vaseline". In the early 1880s, Charles Brown Fleet created ChapStick. However, due to a lack of sales, Fleet sold his formula and rights to ChapStick to John Morton in 1912 for $5; Morton saw the marketing potential in the brand. After making the purchase, Morton commissioned Frank Wright, Jr. in 1936 to create a design for the ChapStick logo for $15. In 1972, ChapStick tubes concealing hidden microphones were used during the Watergate scandal. In 1937, Alfred Woelbing created Carmex in Milwaukee to treat cold sores, though World War II slowed production and sales due to a lack of lanolin.
In 1980, Carmex underwent a product change by converting its packaging into squeezable tubes. In 1973, Bonne Bell created the first flavored lip balm and marketed it under the Lip Smackers brand. The company would later collaborate on variously flavored lip balms with Dr. Pepper in 1975, The Wrigley Company in 2004, and The Coca-Cola Company in 2006. Bonne Bell also collaborated with Disney to produce lip balms with various princess characters in 2010. In 1991, Burt Shavitz and Roxanne Quimby created their first beeswax-based lip balm through their company, Burt's Bees. In 2020, it was reported that Burt's Bees had used 50 percent recycled material to package various products and that 100 percent of the products were recyclable. In 2011, Evolution of Smooth (commonly known as EOS) created a spherical lip balm and promoted its 95% organic ingredients. Cannabis infused lip balms With the gradual legalization of cannabis in the United States, some companies have produced lip balms containing doses of THC or CBD oil. The lip balms were infused with a low dosage of THC in order to prevent the occurrence of any psychoactive or related effect. Notable brands Burt's Bees Blistex Carmex ChapStick Labello Lip Smacker Lypsyl EOS Vaseline Aquaphor Nivea Dependency Addictive ingredients Some physicians have suggested that certain types of lip balm can be addictive or contain ingredients that actually cause drying, the accuracy of which has been debated by many professionals. Lip balm manufacturers sometimes state in their FAQs that there is nothing addictive in their products or that all ingredients are listed and approved by the FDA. Snopes found false the claim that Carmex contains irritant substances, such as ground glass, that necessitate reapplication. However, some experts, such as dermatologist Dr. Cynthia Bailey, state that some ingredients in lip balm directly cause sensitive lip skin, which may lead to addiction. Dermatology professor Marcia Driscoll adds to this argument, stating that aroma ingredients found in flavored or scented lip balms have the potential to irritate skin. Causes for Dependency According to a report, professor Brad Rohu states that it is natural for the lips to feel dry. Exposure to cold, dry, or windy environments can directly cause chapping of the lips, as can behaviors such as lip licking or mouth breathing. These factors may directly contribute to increased lip balm usage. According to dermatologist Amy Derick, those who have expressed dependencies on lip balm have developed a desire for how the lips feel after application. She also mentions that the variety of lip balm flavors may directly cause lip balm dependency, as a person may want to lick their lips to taste the flavor, which consequently removes the lip balm coating from the lips. This may also leave saliva on the lips, which can dry up and make the lips feel even drier than they initially were. Effects on lip barrier Human lips have an inadequate capability of holding moisture as well as an imperfect lip barrier function. A study published in the Journal of the American Academy of Dermatology sought to determine whether consistent use of lip balm would enhance the overall quality of the lips. The study used 32 female participants between the ages of 20 and 40; the participants had mildly to moderately dry lips and no history of health-related complications.
The participants underwent a procedure in which no lip treatment was provided for the first 3 days, followed by 2 weeks of consistent lip balm usage, and then a further period of no treatment for 3 days. The study assessed the quality of the lips based on their physical details and appearance throughout. The study showed a direct improvement in the physical condition of the lips, except for lip cracking, during the second week of treatment and after the period of no treatment. The study also showed that hydration of the lips lasted for approximately 8 hours after usage and that the lip balm improved the lip barrier function despite discontinued usage. The study concluded that lip balms assist the hydration of the lips, which consequently improves the lip barrier function and quality. This study was completely funded by Burt's Bees, a lip balm company. Mineral oil In 2015, German consumer watchdog Stiftung Warentest analyzed cosmetics containing mineral oils. After developing a new detection method, they found high concentrations of mineral oil aromatic hydrocarbons (MOAH) and even polyaromatics in products containing mineral oils, with Vaseline products containing the most MOAH of all tested cosmetics (up to 9%). The European Food Safety Authority sees MOAH and polyaromatics as possibly carcinogenic. Based on the results, Stiftung Warentest warns not to use Vaseline or any product that is based on mineral oils for lip care. Lip balm market United States In 2019, a research report by the Statista Research Department concluded that ChapStick was the leading lip balm brand in the United States, with approximately 55.8 million units sold. Carmex was the second leading brand with approximately 35.2 million units sold, and Burt's Bees was the third leading brand with approximately 32.3 million units sold. Trends Beezin' Beezin' is a trend dating back to 2013 in which a person applies Burt's Bees brand lip balm onto the eyelids. The practice is done in order to feel a sensation of being high or drunk, and even to increase the desired effects of alcohol and other substances. In 2022, Beezin' became a viral trend on the social media platform TikTok. Some ingredients, including peppermint oil, are known to be eye irritants, which can cause an unintentional inflammatory response that may require treatment, and may also cause dermatitis on the eyelids. References External links Skin care Drug delivery devices Dosage forms Lips Cosmetics
Lip balm
[ "Chemistry" ]
2,040
[ "Pharmacology", "Drug delivery devices" ]
961,616
https://en.wikipedia.org/wiki/PSI%20%28computational%20chemistry%29
Psi is an ab initio computational chemistry package originally written by the research group of Henry F. Schaefer, III (University of Georgia). Utilizing Psi, one can perform a calculation on a molecular system with various kinds of methods such as Hartree–Fock, post-Hartree–Fock electron correlation methods, and density functional theory. The program can compute energies, optimize molecular geometries, and compute vibrational frequencies. The major part of the program is written in C++, while a Python API is also available, which allows users to perform complex computations or automate tasks easily. Psi4 is the latest release of the program package; it is open source, released free of charge under the GPL through GitHub. Primary development of Psi4 is currently performed by the research groups of David Sherrill (Georgia Tech), T. Daniel Crawford (Virginia Tech), Francesco Evangelista (Emory University), and Henry F. Schaefer, III (University of Georgia), with substantial contributions by Justin Turney (University of Georgia), Andy Simmonett (NIH), and Rollin King (Bethel University). Psi4 is available on Linux releases such as Fedora and Ubuntu. Features The basic capabilities of Psi are concentrated around the following methods of quantum chemistry: Hartree–Fock method Density functional theory Møller–Plesset perturbation theory Coupled cluster CASSCF multireference configuration interaction methods symmetry-adapted perturbation theory Several methods are available for computing excited electronic states, including configuration interaction singles (CIS), the random phase approximation (RPA), time-dependent density functional theory (TD-DFT), and equation-of-motion coupled cluster (EOM-CCSD). Psi4 has introduced the density-fitting approximation in many portions of the code, leading to faster computations and reduced I/O requirements. Psi4 is the preferred quantum chemistry backend for the OpenFermion project, which seeks to perform quantum chemistry computations on quantum computers. In Psi4 1.4, the program was adapted to facilitate high-throughput workflows and can be connected to BrianQC to speed up calculations for Hartree–Fock and density functional theory methods. See also List of quantum chemistry and solid-state physics software References External links Psi4 Homepage Psi4 Source Code (GitHub) Computational chemistry software Free chemistry software Chemistry software for Linux
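As a brief illustration of the Python API mentioned above, the following minimal sketch computes a Hartree–Fock energy for water with Psi4; the geometry, basis set, and memory setting are arbitrary example choices, not values prescribed by the package.

```python
import psi4

# Give Psi4 some working memory (example value).
psi4.set_memory('500 MB')

# Define a water molecule with a Z-matrix geometry
# (example bond length in angstroms, bond angle in degrees).
h2o = psi4.geometry("""
O
H 1 0.96
H 1 0.96 2 104.5
""")

# Compute the Hartree-Fock (SCF) energy in the cc-pVDZ basis set;
# the result is returned in hartrees.
energy = psi4.energy('scf/cc-pvdz')
print(energy)
```

The same entry-point style covers the other capabilities noted above: psi4.optimize() performs geometry optimization and psi4.frequency() computes vibrational frequencies, each taking a method/basis string like the one passed to energy().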
PSI (computational chemistry)
[ "Chemistry" ]
507
[ "Free chemistry software", "Computational chemistry software", "Chemistry software", "Computational chemistry", "Chemistry software for Linux" ]
961,668
https://en.wikipedia.org/wiki/Goal%20difference
Goal difference, goal differential or points difference is a form of tiebreaker used to rank sports teams which finish on equal points in a league competition. Either "goal difference" or "points difference" is used, depending on whether matches are scored by goals (as in ice hockey and association football) or by points (as in rugby union and basketball). Goal difference is calculated as the number of goals scored in all league matches minus the number of goals conceded, and is sometimes known simply as plus–minus. Goal difference was first introduced as a tiebreaker in association football, at the 1970 FIFA World Cup, and was adopted by the Football League in England five years later. It has since spread to many other competitions, where it is typically used as either the first or, after tying teams' head-to-head records, second tiebreaker. Goal difference is zero sum, in that a gain for one team (+1) is exactly balanced by the loss for their opponent (–1). Therefore, the sum of the goal differences in a league table is always zero (provided the teams have only played each other). Goal difference has often replaced the older goal average, or goal ratio. Goal average is the number of goals scored divided by the number of goals conceded, and is therefore a dimensionless quantity. It was replaced by goal difference, which was thought to encourage more attacking play, encouraging teams to score more goals (or points) as opposed to defending against conceding. However, goal average is still used as a tiebreaker in Australia, where it is referred to as "percentage". This is calculated as points scored divided by points conceded, and then multiplied by 100. If two or more teams' total points scored and goal differences are both equal, then often goals scored is used as a further tiebreaker, with the team scoring the most goals winning. After this a variety of other tiebreakers may be used. Goal difference v. goal average The two tiebreakers can rank the same teams differently: with identical points totals, one team (Team A) might win under goal average while the other (Team B) would win under goal difference. Goal average was replaced by goal difference due to the former's encouragement of lower-scoring games. For example, a team that scores 70 while conceding 40 would have a lesser goal average (1.750) than another team that scores 69 while conceding 39 (1.769). Or, for the team that has scored 70 while conceding 40, conceding another would reduce the goal average by 0.043 (to 1.707), whereas scoring another would increase it by only 0.025 (to 1.775), making not conceding much more important than scoring again. The opposite effect occurs when the number of goals scored is less than the number of goals conceded, with goal difference encouraging more defensive play for teams in relegation battles. Consider a team that scores 10 while conceding 20. Under goal difference, an extra goal scored cancels out an extra goal conceded. However, under goal average, an extra goal would increase the goal average by 0.05, while conceding would reduce it by only 0.024. Another issue with goal average is that, if a team has conceded no goals (e.g. England in the 1966 FIFA World Cup Group 1), the value cannot be calculated, as division by zero is undefined. Titles decided on goal difference Netherlands top-flight 2007, PSV Eindhoven and Ajax Heading into the final day of the 2006–07 Eredivisie season, three teams were still in contention to win the title, and with it a guaranteed place in the 2007–08 UEFA Champions League. 
PSV, looking to win their third straight league title, was the only one of the three to play its final match at home, against Vitesse Arnhem. Ajax, looking to win their first title since 2004, traveled to Willem II, while AZ faced Excelsior looking to win its first league title since 1981, after finishing in the top three in the previous two seasons. These final matches were played on April 29, 2007. AZ struggled against Excelsior (who would have to go through a relegation play-off after the end of the game) as they played almost 72 minutes of the match with only 10 men, as goalkeeper Boy Waterman was red-carded in the 18th minute. AZ came from behind twice, with Danny Koevermans tying the match in the 70th minute with his 22nd goal of the season. AZ had a chance to take the lead after its numerical disadvantage was leveled as Excelsior's Rene van Dieren was sent off for yellow card accumulation. AZ never took advantage and a goal from Johan Voskamp in the 90th minute gave Excelsior a shock 3–2 win. Meanwhile, in Tilburg, Ajax took the lead in the 18th minute with a goal from Urby Emanuelson. Ajax added a second goal in the 69th minute as Klaas-Jan Huntelaar scored his 21st goal of the season. Meanwhile, PSV scored twice in the first 10 minutes, but gave up a goal three minutes later and led only 2–1 at half-time. In the second half, Ibrahim Afellay scored in the 58th minute before another goal from Jefferson Farfan made the score 4–1 to PSV. Following Huntelaar's 69th-minute goal, PSV and Ajax were level on points and goal difference, but Ajax were ahead on goals scored. But in the 77th minute, Philip Cocu put PSV up 5–1 and the team was ahead on goal difference (+50 to Ajax's +49). The scores stayed that way at full time, and so PSV won the 2006–07 Eredivisie in one of the most exciting finishes to a season in recent memory. Iceland top-flight 2010, Breiðablik UBK, ÍBV and FH Hafnarfjörður The 2010 Úrvalsdeild season concluded on September 25, 2010, and three teams were still in contention to win the league title. Leading the table was Breiðablik, based in Kópavogur, who knew that a win would give them their first ever league title. Trailing one point behind were ÍBV from Vestmannaeyjar, who were looking to win their fourth league title, but their first since 1998. In third place were the two-time defending champions FH, looking to win the league title, but trailing Breiðablik by only two points. Breiðablik traveled to Stjarnan and were held to a scoreless draw, but would get encouraging news. Playing their final game at Keflavík, ÍBV were losing 2–0 with 16 minutes remaining when Denis Sytnik scored for ÍBV to cut the deficit to 2–1. But two late goals from Keflavík's Magnús Þorsteinsson and Bojan Ljubicic denied ÍBV a chance to overtake Breiðablik, as ÍBV lost to Keflavík 4–1. Meanwhile, a draw opened the door for FH as they traveled to Reykjavík to face Fram needing to overturn an 11-goal difference. FH got two goals from Gunnar Kristjansson and a third from Atli Viðar Björnsson (which would tie him with two players for the league lead with 14 goals). However, the 3–0 victory was not enough to deny Breiðablik their first ever league title. Hungary top-flight 2014, Debreceni VSC and Győri ETO FC Ahead of the final day of the 2013–14 Nemzeti Bajnokság I season, Debrecen was on course to win its 7th league title since 2005 as its closest competitor Győr had to overturn a 14-goal swing on the final matchday. 
Despite losing its season-finale 2–0 to Budapest Honved FC, Debrecen won the title as Győr only won 5–0 against already-relegated Mezőkövesd-Zsóry SE. England top-flight 2012, Manchester City and Manchester United The 2011–12 Premier League was largely a two-horse race contested between Manchester City and Manchester United for most of the season, with both clubs finishing 19 points ahead of third-placed Arsenal. City and United went into their final matches of the season level on points, but with City in first place due to a goal difference that was eight goals better. The final matches were relegation-threatened Queens Park Rangers at home for City, and Sunderland away for United. City were strong favourites, with United's manager Alex Ferguson stating City would have to do 'something stupid' not to beat QPR. A Manchester City win would guarantee the title due to a realistically unassailable superior goal difference. If not a win, then City just needed to match United's result at the Stadium of Light against Sunderland or have United lose against Sunderland. United scored in the 20th minute, winning 1–0. City scored two goals in injury time to come from behind and win 3–2. 1989, Arsenal and Liverpool Arsenal won the league championship on goals scored, after finishing level on points and goal difference with Liverpool in the 1988–89 season. Arsenal defeated Liverpool 2–0 in the final game of the season to win the championship. England lower division titles decided on goal difference 1983–84, Second Division – Chelsea–Sheffield Wednesday Chelsea 88 points and goal difference 50, Sheffield Wednesday 88 points and goal difference 38. 1989–90, Second Division – Leeds United–Sheffield United Leeds United 85 points and goal difference 27, Sheffield United 85 points and goal difference 20. 1981–82, Third Division – Burnley–Carlisle United Burnley 80 points and goal difference 21, Carlisle United 80 points and goal difference 15. 2016–17, League 2 – Portsmouth-Plymouth Argyle Portsmouth 87 points and goal difference 39, Plymouth 87 points and goal difference 25. 2021–22, League 2 – Forest Green-Exeter City Forest Green 84 points and goal difference 31, Exeter City 84 points and goal difference 24. (N.B. in 1996–97 Wigan Athletic and Fulham finished level on 87 points at the top of the Third Division, but Wigan Athletic were awarded the championship on most goals scored, which was the first tiebreaker in use in the Football League between 1992 and 1999, although Fulham had the greater goal difference. Coincidentally Brighton and Hove Albion avoided relegation from the same division on goals scored at the expense of Hereford United, although Hereford had the better goal difference. The League reverted to the goal difference method from the start of the 1999–2000 season.) Scotland 1986, Premier Division – Hearts–Celtic In 1986, Hearts lost 2–0 at Dundee on the final day of the season, which allowed Celtic to win the league championship on goal difference. Had the first tiebreaker been goal average, Hearts would have won the championship. 2003, Premier League – Old Firm Rangers won the Scottish Premier League in 2003 on goal difference. In the final round of matches, Rangers played Dunfermline, while second-placed Celtic were playing at Kilmarnock. With Celtic and Rangers level on 94 points going into these matches, the Championship would be decided by which team, Celtic or Rangers, performed best during the final round of matches. 
If both teams won they would each finish on 97 points, and the League would be decided on goal difference. Rangers won 6–1 and Celtic won 4–0, which left Rangers with a goal difference of 73 (101 for and 28 against), and Celtic a goal difference of 72 (98 scored and 26 against), giving Rangers the title. Titles decided on goal average England top-flight 1924, First Division–Huddersfield Town-Cardiff City In the 1923–24 Football League Championship, Huddersfield Town and Cardiff City both finished on 57 points. Huddersfield Town won the title with 60 goals for to 33 against, for an average of 1.818. Cardiff City's 61 to 34 gave 1.794. They would have been tied on goal difference, but Cardiff would have won on goals scored. 1950, First Division–Portsmouth-Wolverhampton Wanderers In the 1949–50 Football League Championship, Portsmouth and Wolverhampton Wanderers both finished on 53 points. Portsmouth won the title with 74 goals for to 38 against, for an average of 1.947. Wolverhampton Wanderers' 76 to 49 gave 1.551. 1953, First Division–Arsenal-Preston North End In the 1952–53 Football League Championship, Arsenal and Preston North End both finished on 54 points. Arsenal won the title with 97 goals for to 64 against, for an average of 1.516. Preston's 85 to 60 gave 1.417. 1965, First Division–Manchester United-Leeds United In the 1964–65 Football League Championship, Manchester United and Leeds United both finished on 61 points. Manchester United won the title with 89 goals for to 39 against, for an average of 2.282. Leeds United's 83 to 52 gave 1.596, which was actually lower than third-placed Chelsea's, although they finished five points adrift of Leeds. England lower divisions 1950, Second Division–Sheffield United-Sheffield Wednesday Going into the last game of the 1949–50 season, Sheffield Wednesday needed a win against Tottenham Hotspur to be sure of second place and promotion at the expense of their local rivals Sheffield United. The resulting 0–0 draw nevertheless proved enough: Wednesday won promotion by a goal average margin of just 0.008 – a 1–1 draw would have left the two level on points and goal average, and a unique play-off match would have had to be played. 1927, Second Division – Portsmouth-Manchester City Going into the last game of the 1926–27 season, both clubs were on 52 points. Portsmouth had a goal average of 1.708, Manchester City's was 1.639. Manchester City won 8–0 and celebrated, thinking that would be good enough. The Portsmouth game had kicked off fifteen minutes later than City's; towards the end of the match Portsmouth were winning 4–1 and knew that another goal would see them promoted, which they duly scored. Portsmouth won 5–1, and the results meant Portsmouth won promotion by a goal average margin of just 0.006. Had the current goal difference rules applied at the time, City would have been promoted. Scotland 1953, Division A – Rangers–Hibernian Rangers drew their last match of the 1952–53 season, against Queen of the South, 1–1, to finish level with Hibernian on 43 points. They won the title with a goal average of 80–39 to 93–51 (2.051 to 1.824). 1965, First Division – Hearts–Kilmarnock Entering the final day of the 1964–65 season, Hearts were two points ahead of nearest rivals Kilmarnock, with two points awarded for a win. Hearts played Kilmarnock at Tynecastle in the last game, with Kilmarnock needing a 2–0 victory to win the league championship on goal average. 
Hearts could afford to lose 1–0 or 2–1, but lost 2–0 and Kilmarnock won the championship by a goal average of 1.88 to 1.84. Had goal difference been in use, Hearts would have been champions. Yugoslavia 1951, First League – Red Star Belgrade–Dinamo Zagreb Red Star Belgrade won the 1951 Yugoslav First League championship ahead of Dinamo Zagreb with a 0.013 better goal average. Dinamo's final match against BSK Belgrade ended in a 2–2 draw, and the following day Red Star defeated Partizan 2–0, meaning that both teams finished on 35 points. Red Star's 50 goals for and 21 against gave a goal average of 2.381, while Dinamo's 45 to 19 gave 2.368. 1958, First League – RNK Split–Budućnost In the 1957–58 Yugoslav First League championship, RNK Split and Budućnost finished the season level on points and goal average. Both teams had 25 points, with Budućnost's 30 goals for and 36 against giving a goal average of 0.833, the same as RNK Split's 35 goals for and 42 against. A two-legged play-off match between the two was needed to decide who would enter the relegation play-offs. The match in Split ended in a goalless draw, while in the return leg Budućnost defeated RNK Split 4–0. RNK Split entered the relegation play-offs and were relegated after their first season in the top flight. See also Net Run Rate, a similar tiebreaker in cricket 1992–93 Premier League, where goal difference was used to determine relegation References Association football terminology Ice hockey terminology Subtraction Tie-breaking in group tournaments
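As a concrete illustration of the two tiebreakers compared above, the following minimal sketch computes both measures for two invented records; the numbers are arbitrary examples, not taken from any of the seasons described.

```python
# Compare the goal-difference and goal-average tiebreakers for two
# teams level on points, using invented example records.
teams = {
    "Team A": {"scored": 70, "conceded": 40},
    "Team B": {"scored": 52, "conceded": 22},
}

for name, rec in teams.items():
    diff = rec["scored"] - rec["conceded"]   # goal difference
    avg = rec["scored"] / rec["conceded"]    # goal average (undefined if conceded == 0)
    print(f"{name}: goal difference {diff:+d}, goal average {avg:.3f}")

# Both records give a goal difference of +30, so that tiebreaker cannot
# separate the teams, while Team B's goal average (2.364) beats
# Team A's (1.750) and would decide the title under the older rule.
```

Note the division in the goal-average line: it fails for a team that has conceded no goals, which is exactly the division-by-zero problem raised in the comparison section above.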
Goal difference
[ "Mathematics" ]
3,455
[ "Sign (mathematics)", "Subtraction" ]
961,677
https://en.wikipedia.org/wiki/Krytron
The krytron is a cold-cathode gas-filled tube intended for use as a very high-speed switch, somewhat similar to the thyratron. It consists of a sealed glass tube with four electrodes. A small triggering pulse on the grid electrode switches the tube on, allowing a large current to flow between the cathode and anode electrodes. The vacuum version is called a vacuum krytron, or sprytron. The krytron was one of the earliest developments of the EG&G Corporation. Description Unlike most other gas switching tubes, the krytron conducts by means of an arc discharge, to handle very high voltages and currents (reaching several kilovolts and several kiloamperes), rather than the low-current glow discharge used in other thyratrons. The krytron is a development of the triggered spark gaps and thyratrons originally developed for radar transmitters during World War II. The gas used in krytrons is typically hydrogen; noble gases (usually krypton) or a Penning mixture can also be used. Operation A krytron has four electrodes. Two are a conventional anode and cathode. One is a keep-alive electrode, placed near the cathode. The keep-alive has a low positive voltage applied, which causes a small area of gas to ionize near the cathode. High voltage is applied to the anode, but primary conduction does not occur until a positive pulse is applied to the trigger electrode (the grid). Once started, arc conduction carries a considerable current. The fourth is a control grid, usually wrapped around the anode, except for a small opening on its top. In place of or in addition to the keep-alive electrode, some krytrons may contain a tiny amount of radioactive material (usually nickel-63), which emits beta particles (high-speed electrons) to make ionization easier. The radiation source serves to increase the reliability of ignition and formation of the keep-alive electrode discharge. The gas filling provides ions for neutralizing the space charge and allowing high currents at lower voltage. The keep-alive discharge populates the gas with ions, forming a preionized plasma. This can shorten the arc formation time by 3–4 orders of magnitude in comparison with non-preionized tubes, as time does not have to be spent on ionizing the medium during formation of the arc path. The electric arc is self-sustaining. Once the tube is triggered, it conducts until the arc is interrupted by the current falling too low for too long (under 10 milliamperes for more than 100 microseconds for the KN22 krytrons). Krytrons and sprytrons are triggered by a high voltage from a capacitor discharge via a trigger transformer, in a similar way to how flashtubes for photoflash applications are triggered. Devices integrating a krytron with a trigger transformer are available. Sprytron A sprytron, also known as vacuum krytron or triggered vacuum switch (TVS), is a vacuum, rather than a gas-filled, version. It is designed for use in environments with high levels of ionizing radiation, which might trigger a gas-filled krytron spuriously. It is also more immune to electromagnetic interference than gas-filled tubes. Sprytrons lack the keep-alive electrode and the preionization radioactive source. The trigger pulse must be stronger than for a krytron. Sprytrons are able to handle higher currents. Krytrons tend to be used for triggering a secondary switch, e.g., a triggered spark gap, while sprytrons are usually connected directly to the load. 
The trigger pulse has to be much more intense, as there is no preionized gas path for the electric current, and a vacuum arc must form between the cathode and anode. An arc first forms between the cathode and the grid, then a breakdown occurs between the cathode–grid conductive region and the anode. Sprytrons are evacuated to hard vacuum, typically 0.001 Pa. As kovar and other metals are somewhat permeable to hydrogen, especially during the 600 °C bake-out before evacuation and sealing, all external metal surfaces must be plated with a thick (25 microns or more) layer of soft gold. The same metallization is used for other switch tubes as well. Sprytrons are often designed similarly to trigatrons, with the trigger electrode coaxial to the cathode. In one design the trigger electrode is formed as metallization on the inner surface of an alumina tube. The trigger pulse causes surface flashover, which liberates electrons and vaporized surface discharge material into the inter-electrode gap, which facilitates formation of a vacuum arc, closing the switch. The short switching time suggests electrons from the trigger discharge and the corresponding secondary electrons knocked from the anode as the initiation of the switching operation; the vaporized material travels too slowly through the gap to play a significant role. The repeatability of the triggering can be improved by special coating of the surface between the trigger electrode and the cathode, and the jitter can be improved by doping the trigger substrate and modifying the trigger probe structures. Sprytrons can degrade in storage, by outgassing from their components, diffusion of gases (especially hydrogen) through the metal components, and gas leaks through the hermetic seals. An example tube manufactured with internal pressure of 0.001 Pa will exhibit spontaneous gap breakdowns when the pressure inside rises to 1 Pa. Accelerated testing of storage life can be done by storage at increased ambient pressure, optionally with added helium for leak testing, and at increased temperature (150 °C) for outgassing testing. Sprytrons can be made miniature and rugged. Sprytrons can also be triggered by a laser pulse. In 1999 the laser pulse energy needed to trigger a sprytron was reduced to 10 microjoules. Sprytrons are usually manufactured as rugged metal/ceramic parts. They typically have low inductance (10 nanohenries) and low electrical resistance when switched on (10–30 milliohms). After triggering, just before the sprytron switches fully on in avalanche mode, it briefly becomes slightly conductive (carrying 100–200 amperes); high-power MOSFET transistors operating in avalanche mode show similar behavior. SPICE models for sprytrons are available. Performance This design, dating from the late 1940s, is still capable of pulse-power performance that even the most advanced semiconductors (even IGBTs) cannot match easily. Krytrons and sprytrons are capable of handling high-current high-voltage pulses, with very fast switching times and a constant, low-jitter time delay between application of the trigger pulse and switching on. Krytrons can switch currents of up to about 3000 amperes and voltages up to about 5000 volts. Commutation time of less than 1 nanosecond can be achieved, with a delay between the application of the trigger pulse and switching as low as about 30 nanoseconds. The achievable jitter may be below 5 nanoseconds. 
The required trigger pulse voltage is about 200–2000 volts; higher voltages decrease the switching delay to some degree. Commutation time can be somewhat shortened by increasing the rate of rise of the trigger pulse. A given krytron tube will respond very consistently to identical trigger pulses (low jitter). The keep-alive current ranges from tens to hundreds of microamperes. The pulse repetition rate can range from one per minute to tens of thousands per minute. Switching performance is largely independent of the environment (temperature, acceleration, vibration, etc.). However, the formation of the keep-alive glow discharge is more sensitive, which necessitates the use of a radioactive source to aid its ignition. Krytrons have a limited lifetime, ranging, according to type, typically from tens of thousands to tens of millions of switching operations, and sometimes only a few hundred. Sprytrons have somewhat faster switching times than krytrons. Hydrogen-filled thyratrons may be used as a replacement in some applications. Applications Krytrons and their variations are manufactured by Perkin-Elmer Components and used in a variety of industrial and military devices. They are best known for their use in igniting exploding-bridgewire and slapper detonators in nuclear weapons, their original application, either directly (sprytrons are usually used for this) or by triggering higher-power spark gap switches. They are also used to trigger thyratrons, large flashlamps in photocopiers, lasers and scientific apparatus, and for firing ignitors for industrial explosives. Export restrictions in the United States Because of their potential for use as triggers of nuclear weapons, the export of krytrons is tightly regulated in the United States. A number of cases involving the smuggling or attempted smuggling of krytrons have been reported, as countries seeking to develop nuclear weapons have attempted to procure supplies of krytrons for igniting their weapons. One prominent case was that of Richard Kelly Smyth, who allegedly helped Arnon Milchan smuggle 15 orders of 810 krytrons total to Israel in the early 1980s. Of these, 469 were returned to the United States, with Israel claiming the remaining 341 were "destroyed in testing". Krytrons and sprytrons handling voltages of 2,500 V and above, currents of 100 A and above, and switching delays of under 10 microseconds are typically suitable for nuclear weapon triggers. In popular culture A krytron was the "MacGuffin" in Roman Polanski's 1988 film Frantic. The device in the film was actually a Krytron-Pac, which consisted of a Krytron tube along with a trigger transformer encased in black epoxy. The krytron, incorrectly called a "kryton", also appeared in the Tom Clancy nuclear terrorism novel The Sum of All Fears. The plot of Larry Collins' book The Road to Armageddon revolved heavily around American-made krytrons that Iranian mullahs wanted for three Russian nuclear artillery shells they had hoped to upgrade to full nuclear weapons. The term "krytron" appeared in season 3, episode 14 ("Provenance") of the television drama Person of Interest. In the season 3 NCIS episode "Kill Ari, Part 2", it was revealed that Ari Haswari, a rogue Mossad operative, had been tasked with acquiring a krytron trigger. Along with stolen plutonium from Dimona, these were key components for an Israeli sting operation. The krytron was also incorrectly called a "kryton". 
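Returning to the performance figures above, here is a back-of-the-envelope sketch (with invented circuit values, not the specifications of any actual device) of the peak current a capacitor discharge switched by such a tube could deliver.

```python
import math

# Invented example values for a capacitor-discharge firing circuit.
V0 = 5000.0    # charge voltage, V (near the ~5 kV switching limit cited above)
C = 0.1e-6     # storage capacitance, F
L = 250e-9     # total loop inductance, H (tube, leads, and load)

# For a lightly damped LC discharge, the peak current is roughly
# I_peak ~ V0 * sqrt(C / L), reached a quarter-cycle after closure.
I_peak = V0 * math.sqrt(C / L)
t_quarter = (math.pi / 2.0) * math.sqrt(L * C)
print(f"peak current ~ {I_peak / 1e3:.1f} kA after ~ {t_quarter * 1e9:.0f} ns")
```

With these example values the estimate comes out around 3.2 kA a few hundred nanoseconds after closure, which is consistent in scale with the roughly 3000-ampere figure quoted above.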
Further developments Optically triggered solid-state switches based on diamond are potential candidates for krytron replacement. Notes References EG&G Electronic Components Catalog, 1994. CBS/Hytron second-source documentation: "Krytron Trigger Tubes" spec sheets E-337, E-337A-1, E-337A-2 "7229 Cold-Cathode Trigger Tube" data sheet E287B "7230 Reliable Cold-Cathode Trigger Tube" data sheet E287C "7231 Subminiature Cold-Cathode Trigger Tube" data sheet E287D "7232 Reliable Subminiature Cold-Cathode Trigger Tube" data sheet E287E External links John Pasley's article about gas-filled switch tubes, Krytron section Photo of a small glass krytron 40-month sentence to illegal exporter (though the sentence was definitely related to the 'fugitive' details) Gas-filled tubes Nuclear weapons Pulsed power Switching tubes Vacuum tubes
Krytron
[ "Physics" ]
2,468
[ "Physical quantities", "Vacuum tubes", "Vacuum", "Power (physics)", "Pulsed power", "Matter" ]
961,680
https://en.wikipedia.org/wiki/Malayan%20tapir
The Malayan tapir (Tapirus indicus), also called Asian tapir, Asiatic tapir, oriental tapir, Indian tapir, piebald tapir, or black-and-white tapir, is the only living tapir species outside of the Americas. It is native to Southeast Asia from the Malay Peninsula to Sumatra. It has been listed as Endangered on the IUCN Red List since 2008, as the population is estimated to comprise fewer than 2,500 mature individuals. Taxonomy The scientific name Tapirus indicus was proposed by Anselme Gaëtan Desmarest in 1819, who referred to a tapir described by Pierre-Médard Diard. Tapirus indicus brevetianus was coined by a Dutch zoologist in 1926, who described a black Malayan tapir from Sumatra that had been sent to Rotterdam Zoo in the early 1920s. Phylogenetic analyses of 13 Malayan tapirs showed that the species is monophyletic. It was placed in the genus Acrocodia by Colin Groves and Peter Grubb in 2011. However, a comparison of mitochondrial DNA of 16 perissodactyl species revealed that the Malayan tapir forms a sister group together with the Tapirus species native to the Americas. It was the first Tapirus species that genetically diverged from the group, an event estimated to have occurred in the Late Oligocene. Description The Malayan tapir is easily identified by its markings, most notably the light-colored patch that extends from its shoulders to its hindquarters. Black hair covers its head, shoulders, and legs, while white hair covers its midsection, rear, and the tips of its ears; white edging around the rims of the outer ears occurs in other tapirs as well. The disrupted coloration breaks up its outline, providing camouflage by making the animal difficult to recognize against the varied terrain and dense flora of its habitat; potential predators may mistake it for a large rock, rather than prey, when it is lying down to sleep. The Malayan tapir is the largest of the four extant tapir species and grows to between in length, not counting a stubby tail of only in length, and stands tall. It typically weighs between , although some adults can weigh up to . The females are usually larger than the males. Like other tapir species, it has a small, stubby tail and a long, flexible proboscis. It has four toes on each front foot and three toes on each back foot. The Malayan tapir has rather poor eyesight, but excellent hearing and sense of smell. The tapir's unique proboscis is supported by several evolutionary adaptations of its skull. It has a large sagittal crest, unusually positioned orbits, an unusually shaped cranium with elevated frontal bones, and a retracted nasal incision as well as retracted facial cartilage. This evolutionary process is believed to have caused the loss of some cartilages, facial muscles, and the bony wall of the tapir's nasal chamber. Vision Malayan tapirs have very poor eyesight, both on land and in water, instead relying heavily on their excellent senses of smell and hearing to navigate and forage. Their eyes are small and, like many herbivores, positioned on the sides of the face. They have brown irises, but the corneas are often covered in a blue haze; this corneal cloudiness is thought to be caused by repetitive exposure to light. This loss of transparency impacts the ability of the cornea to transmit and focus outside light as it enters the eye, impairing the animal's overall vision. Because these tapirs are most active at night and have poor eyesight, it may be harder for them to search for food and avoid predators. 
Color variation Two melanistic Malayan tapirs were observed in Jerangau Forest Reserve in Malaysia in 2000. A black Malayan tapir was also recorded in Tekai Tembeling Forest Reserve in Pahang state in 2016. Distribution and habitat The Malayan tapir lives throughout the tropical lowland rainforests of Southeast Asia, including Sumatra in Indonesia, Peninsular Malaysia, Myanmar, and Thailand. Pleistocene fossils were found in Java and other locations accompanied by herbivores more typical of grasslands, indicating that it evolved in more open habitats and retreated to closed forests in later times. It was found in Borneo until at least 8,000 years ago during the early Holocene in the Niah Caves of Sarawak, and some 19th century writers mentioned it as a contemporary species in Borneo, likely based on native accounts. It has been proposed to reintroduce the tapir to the island as a conservation measure. On the mainland, the Malayan tapir was found in historical times as far north as China. Behaviour and ecology Malayan tapirs are primarily solitary, marking out large tracts of land as their territory, though these areas usually overlap with those of other individuals. Tapirs mark out their territories by spraying urine on plants, and they often follow distinct paths which they have bulldozed through the undergrowth. Exclusively herbivorous, the animal forages for the tender shoots and leaves of more than 115 species of plants, of which around 30 are particularly preferred, moving slowly through the forest and pausing often to eat and note the scents left behind by other tapirs in the area. The tapir can run quickly when threatened or frightened, and if forced to fight can defend itself with its strong jaws and sharp teeth. Malayan tapirs communicate with high-pitched squeaks and whistles. They usually prefer to live near water and often bathe and swim, and they are also able to climb steep slopes. Tapirs are mainly active at night, though they are not exclusively nocturnal; because they tend to eat soon after sunset or before sunrise, and they will often nap in the middle of the night, they are considered to be crepuscular animals. Life cycle The gestation period of the Malayan tapir is about 390–395 days, after which a single calf is born that weighs around . Malayan tapirs are the largest of the four tapir species at birth and tend to grow more quickly than their relatives. Young tapirs of all species have brown hair with white stripes and spots, a pattern that enables them to hide effectively in the dappled light of the forest. This baby coat fades into adult coloration between four and seven months after birth. Weaning occurs between six and eight months of age, at which time the babies are nearly full-grown, and the animals reach sexual maturity around age three. Breeding typically occurs in April, May or June, and females generally produce one calf every two years. Malayan tapirs can live up to 30 years, both in the wild and in captivity. Predators Because of its size, the Malayan tapir has few natural predators, and even reports of killings by tigers (Panthera tigris) are scarce. Malayan tapirs can defend themselves with their very powerful bite; in 1998, the bite of a captive female Malayan tapir severed a zookeeper's left arm at the mid-bicep, likely because the keeper stood between the tapir and her offspring. Threats The main threats to the Malayan tapir are loss and destruction of habitat through deforestation. 
Large tracts of forests in Thailand and Malaysia have been converted for planting oil palms. Habitat fragmentation in peninsular Malaysia caused displacement of 142 Malayan tapirs between 2006 and 2010; some were rescued and relocated, while 15 of them were killed in vehicle collisions. References External links ARKive – images and movies of the Asian tapir (Tapirus indicus) Tapir Specialist Group – Malayan Tapir Tapirs tapir Mammals of Malaysia Mammals of Indonesia Mammals of Myanmar Mammals of Thailand Endangered fauna of Asia EDGE species Mammals described in 1819 Taxobox binomials not recognized by IUCN
Malayan tapir
[ "Biology" ]
1,599
[ "EDGE species", "Biodiversity" ]
961,805
https://en.wikipedia.org/wiki/Algebra%20of%20sets
In mathematics, the algebra of sets, not to be confused with the mathematical structure of an algebra of sets, defines the properties and laws of sets, the set-theoretic operations of union, intersection, and complementation and the relations of set equality and set inclusion. It also provides systematic procedures for evaluating expressions, and performing calculations, involving these operations and relations. Any set of sets closed under the set-theoretic operations forms a Boolean algebra with the join operator being union, the meet operator being intersection, the complement operator being set complement, the bottom being ∅ and the top being the universe set under consideration. Fundamentals The algebra of sets is the set-theoretic analogue of the algebra of numbers. Just as arithmetic addition and multiplication are associative and commutative, so are set union and intersection; just as the arithmetic relation "less than or equal" is reflexive, antisymmetric and transitive, so is the set relation of "subset". It is the algebra of the set-theoretic operations of union, intersection and complementation, and the relations of equality and inclusion. For a basic introduction to sets see the article on sets, for a fuller account see naive set theory, and for a full rigorous axiomatic treatment see axiomatic set theory. Fundamental properties of set algebra The binary operations of set union (∪) and intersection (∩) satisfy many identities. Several of these identities or "laws" have well established names. Commutative property: A ∪ B = B ∪ A and A ∩ B = B ∩ A. Associative property: (A ∪ B) ∪ C = A ∪ (B ∪ C) and (A ∩ B) ∩ C = A ∩ (B ∩ C). Distributive property: A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C) and A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C). The union and intersection of sets may be seen as analogous to the addition and multiplication of numbers. Like addition and multiplication, the operations of union and intersection are commutative and associative, and intersection distributes over union. However, unlike addition and multiplication, union also distributes over intersection. Two additional pairs of properties involve the special sets called the empty set ∅ and the universe set U, together with the complement operator (A^C denotes the complement of A; this can also be written as A′, read as "A prime"). The empty set has no members, and the universe set has all possible members (in a particular context). Identity: A ∪ ∅ = A and A ∩ U = A. Complement: A ∪ A^C = U and A ∩ A^C = ∅. The identity expressions (together with the commutative expressions) say that, just like 0 and 1 for addition and multiplication, ∅ and U are the identity elements for union and intersection, respectively. Unlike addition and multiplication, union and intersection do not have inverse elements. However, the complement laws give the fundamental properties of the somewhat inverse-like unary operation of set complementation. The preceding five pairs of formulae—the commutative, associative, distributive, identity and complement formulae—encompass all of set algebra, in the sense that every valid proposition in the algebra of sets can be derived from them. Note that if the complement formulae are weakened to the rule (A^C)^C = A, then this is exactly the algebra of propositional linear logic. Principle of duality Each of the identities stated above is one of a pair of identities such that each can be transformed into the other by interchanging ∪ and ∩, while also interchanging ∅ and U. 
These are examples of an extremely important and powerful property of set algebra, namely, the principle of duality for sets, which asserts that for any true statement about sets, the dual statement obtained by interchanging unions and intersections, interchanging ∅ and U and reversing inclusions is also true. A statement is said to be self-dual if it is equal to its own dual. Some additional laws for unions and intersections The following proposition states six more important laws of set algebra, involving unions and intersections. PROPOSITION 3: For any subsets A and B of a universe set U, the following identities hold: idempotent laws: A ∪ A = A and A ∩ A = A; domination laws: A ∪ U = U and A ∩ ∅ = ∅; absorption laws: A ∪ (A ∩ B) = A and A ∩ (A ∪ B) = A. As noted above, each of the laws stated in proposition 3 can be derived from the five fundamental pairs of laws stated above. As an illustration, a proof is given below for the idempotent law for union. Proof: A ∪ A = (A ∪ A) ∩ U = (A ∪ A) ∩ (A ∪ A^C) = A ∪ (A ∩ A^C) = A ∪ ∅ = A, using in turn the identity, complement, distributive, complement, and identity laws. The following proof illustrates that the dual of the above proof is the proof of the dual of the idempotent law for union, namely the idempotent law for intersection. Proof: A ∩ A = (A ∩ A) ∪ ∅ = (A ∩ A) ∪ (A ∩ A^C) = A ∩ (A ∪ A^C) = A ∩ U = A, using the dual law at each step. Intersection can be expressed in terms of set difference: A ∩ B = A \ (A \ B). Some additional laws for complements The following proposition states five more important laws of set algebra, involving complements. PROPOSITION 4: Let A and B be subsets of a universe U, then: De Morgan's laws: (A ∪ B)^C = A^C ∩ B^C and (A ∩ B)^C = A^C ∪ B^C; double complement or involution law: (A^C)^C = A; complement laws for the universe set and the empty set: ∅^C = U and U^C = ∅. Notice that the double complement law is self-dual. The next proposition, which is also self-dual, says that the complement of a set is the only set that satisfies the complement laws. In other words, complementation is characterized by the complement laws. PROPOSITION 5: Let A and B be subsets of a universe U, then: uniqueness of complements: if A ∪ B = U and A ∩ B = ∅, then B = A^C. Algebra of inclusion The following proposition says that inclusion, that is the binary relation of one set being a subset of another, is a partial order. PROPOSITION 6: If A, B and C are sets then the following hold: reflexivity: A ⊆ A; antisymmetry: A ⊆ B and B ⊆ A if and only if A = B; transitivity: if A ⊆ B and B ⊆ C, then A ⊆ C. The following proposition says that for any set S, the power set of S, ordered by inclusion, is a bounded lattice; together with the distributive and complement laws above, this shows that it is a Boolean algebra. PROPOSITION 7: If A, B and C are subsets of a set S, then the following hold: existence of a least element and a greatest element: ∅ ⊆ A ⊆ S; existence of joins: A ⊆ A ∪ B, and if A ⊆ C and B ⊆ C, then A ∪ B ⊆ C; existence of meets: A ∩ B ⊆ A, and if C ⊆ A and C ⊆ B, then C ⊆ A ∩ B. The following proposition says that the statement A ⊆ B is equivalent to various other statements involving unions, intersections and complements. PROPOSITION 8: For any two sets A and B, the following are equivalent: A ⊆ B; A ∩ B = A; A ∪ B = B; A \ B = ∅; B^C ⊆ A^C. The above proposition shows that the relation of set inclusion can be characterized by either of the operations of set union or set intersection, which means that the notion of set inclusion is axiomatically superfluous. Algebra of relative complements The following proposition lists several identities concerning relative complements and set-theoretic differences. PROPOSITION 9: For any universe U and subsets A, B and C of U, the following identities hold: C \ (A ∩ B) = (C \ A) ∪ (C \ B); C \ (A ∪ B) = (C \ A) ∩ (C \ B); C \ (B \ A) = (A ∩ C) ∪ (C \ B); (B \ A) ∩ C = (B ∩ C) \ A = B ∩ (C \ A); (B \ A) ∪ C = (B ∪ C) \ (A \ C); A \ A = ∅; ∅ \ A = ∅; A \ ∅ = A; B \ A = A^C ∩ B; (B \ A)^C = A ∪ B^C; U \ A = A^C; A \ U = ∅. See also σ-algebra is an algebra of sets, completed to include countably infinite operations. Axiomatic set theory Field of sets List of set identities and relations Naive set theory Set (mathematics) Topological space — a collection of subsets of X, i.e. a subset of P(X), the power set of X, closed with respect to arbitrary union, finite intersection and containing ∅ and X. 
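As an informal check of several of the identities above, here is a small sketch using Python's built-in set type; the particular sets chosen are arbitrary examples, and passing on one example is of course illustration rather than proof.

```python
# Spot-check some set-algebra identities on small concrete sets.
U = set(range(10))          # a small universe set
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}
C = {4, 5, 6, 7}

comp = lambda S: U - S      # complement relative to U

assert A | B == B | A                        # commutativity of union
assert A & (B | C) == (A & B) | (A & C)      # distributivity
assert comp(A | B) == comp(A) & comp(B)      # De Morgan's law
assert A | (A & B) == A                      # absorption
assert A & B == A - (A - B)                  # intersection via set difference
assert (A <= B) == (A & B == A)              # one equivalence from Proposition 8
print("all checked identities hold on this example")
```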
References Stoll, Robert R.; Set Theory and Logic, Mineola, N.Y.: Dover Publications (1979). "The Algebra of Sets", pp. 16–23. Courant, Richard, Herbert Robbins, Ian Stewart, What is Mathematics?: An Elementary Approach to Ideas and Methods, Oxford University Press US, 1996. "Supplement to Chapter II: The Algebra of Sets". External links Operations on Sets at ProvenMath Basic concepts in set theory
Algebra of sets
[ "Mathematics" ]
1,444
[ "Basic concepts in set theory", "Operations on sets" ]
961,961
https://en.wikipedia.org/wiki/Gene%20gun
In genetic engineering, a gene gun or biolistic particle delivery system is a device used to deliver exogenous DNA (transgenes), RNA, or protein to cells. By coating particles of a heavy metal with a gene of interest and firing these micro-projectiles into cells using mechanical force, desired genetic information can be integrated into target cells. The technique involved with such micro-projectile delivery of DNA is often referred to as biolistics, short for "biological ballistics". This device is able to transform almost any type of cell and is not limited to the transformation of the nucleus; it can also transform organelles, including plastids and mitochondria. Gene gun design The gene gun was originally a Crosman air pistol modified to fire dense tungsten particles. It was invented by John C Sanford, Ed Wolf, and Nelson Allen at Cornell University along with Ted Klein of DuPont between 1983 and 1986. The original target was onions (chosen for their large cell size), and the device was used to deliver particles coated with a marker gene which would relay a signal if proper insertion of the DNA transcript occurred. Genetic transformation was demonstrated upon observed expression of the marker gene within onion cells. The earliest custom-manufactured gene guns (fabricated by Nelson Allen) used a 22 caliber nail gun cartridge to propel a polyethylene cylinder (bullet) down a 22 caliber Douglas barrel. A droplet of the tungsten powder coated with genetic material was placed onto the bullet and shot down into a Petri dish below. The bullet welded to the disk below the Petri plate, and the genetic material blasted into the sample with a doughnut effect: devastation in the middle of the sample with a ring of good transformation around the periphery. The gun was connected to a vacuum pump and was placed under a vacuum while firing. The early design was put into limited production by Rumsey-Loomis (a local machine shop then at Mecklenburg Road in Ithaca, NY, USA). Biolistics, Inc. sold DuPont the rights to manufacture and distribute an updated device with improvements including the use of helium as a non-explosive propellant and a multi-disk collision delivery mechanism to minimize damage to sample tissues. Other heavy metals such as gold and silver are also used to deliver genetic material, with gold being favored due to lower cytotoxicity in comparison to tungsten projectile carriers. Biolistic construct design Biolistic transformation involves the integration of a functional fragment of DNA—known as a DNA construct—into target cells. A gene construct is a DNA cassette containing all required regulatory elements for proper expression within the target organism. While gene constructs may vary in their design depending on the desired outcome of the transformation procedure, all constructs typically contain a combination of a promoter sequence, a terminator sequence, the gene of interest, and a reporter gene. Promoter Promoters control the location and magnitude of gene expression and function as "the steering wheel and gas pedal" of a gene. Promoters precede the gene of interest in the DNA construct and can be changed through laboratory design to fine-tune transgene expression. The 35S promoter from Cauliflower mosaic virus is an example of a commonly used promoter that results in robust constitutive gene expression within plants. 
Terminator Terminator sequences are required for proper gene expression and are placed after the coding region of the gene of interest within the DNA construct. A common terminator for biolistic transformation is the NOS terminator derived from Agrobacterium tumefaciens. Due to the high frequency of use of this terminator in genetically engineered plants, strategies have been developed to detect its presence within the food supply to monitor for unauthorized GE crops. Reporter gene A gene encoding a selectable marker is a common element within DNA constructs and is used to select for properly transformed cells. The selectable marker chosen will depend on the species being transformed, but it will typically be a gene granting cells a detoxification capacity for certain herbicides or antibiotics such as kanamycin, hygromycin B, or glyphosate. Additional elements Optional components of a DNA construct include elements such as cre-lox sequences that allow for controlled removal of the construct from the target genome. Such elements are chosen by the construct developer to perform specialized functions alongside the main gene of interest. Application Gene guns are mostly used with plant cells. However, there is much potential use in humans and other animals as well. Plants The target of a gene gun is often a callus of undifferentiated plant cells or a group of immature embryos growing on gel medium in a Petri dish. After the DNA-coated gold particles have been delivered to the cells, the DNA is used as a template for transcription (transient expression) and sometimes it integrates into a plant chromosome ('stable' transformation). If the delivered DNA construct contains a selectable marker, then stably transformed cells can be selected and cultured using tissue culture methods. For example, if the delivered DNA construct contains a gene that confers resistance to an antibiotic or herbicide, then stably transformed cells may be selected by including that antibiotic or herbicide in the tissue culture media. Transformed cells can be treated with a series of plant hormones, such as auxins and gibberellins, and each may divide and differentiate into the organized, specialized, tissue cells of an entire plant. This capability of total regeneration is called totipotency. The new plant that originated from a successfully transformed cell may have new traits that are heritable. The use of the gene gun may be contrasted with the use of Agrobacterium tumefaciens and its Ti plasmid to insert DNA into plant cells. See transformation for different methods of transformation in different species. Humans and other animals Gene guns have also been used to deliver DNA vaccines. The delivery of plasmids into rat neurons, specifically DRG neurons, through the use of a gene gun is also used as a pharmacological precursor in studying the effects of neurodegenerative diseases such as Alzheimer's disease. The gene gun has become a common tool for labeling subsets of cells in cultured tissue. In addition to being able to transfect cells with DNA plasmids coding for fluorescent proteins, the gene gun can be adapted to deliver a wide variety of vital dyes to cells. Gene gun bombardment has also been used to transform Caenorhabditis elegans, as an alternative to microinjection. Advantages Biolistics has proven to be a versatile method of genetic modification and it is generally preferred for engineering transformation-resistant crops, such as cereals. Notably, Bt maize is a product of biolistics. 
Plastid transformation has also seen great success with particle bombardment when compared to other current techniques, such as Agrobacterium mediated transformation, which have difficulty targeting the vector to and stably expressing in the chloroplast. In addition, there are no reports of a chloroplast silencing a transgene inserted with a gene gun. Additionally, with only one firing of a gene gun, a skilled technician can generate two transformed organisms in certain species. This technology has even allowed for modification of specific tissues in situ, although this is likely to damage large numbers of cells and transform only some, rather than all, cells of the tissue. Limitations Biolistics introduces DNA randomly into the target cells. Thus the DNA may be transformed into whatever genomes are present in the cell, be they nuclear, mitochondrial, plasmid or any others, in any combination, though proper construct design may mitigate this. The delivery and integration of multiple templates of the DNA construct is a distinct possibility, resulting in potential variable expression levels and copy numbers of the inserted gene. This is due to the ability of the constructs to give and take genetic material from other constructs, causing some to carry no transgene and others to carry multiple copies; the number of copies inserted depends on both how many copies of the transgene an inserted construct has, and how many were inserted. Also, because eukaryotic constructs rely on illegitimate recombination—a process by which the transgene is integrated into the genome without similar genetic sequences—and not homologous recombination, they cannot be targeted to specific locations within the genome, unless the transgene is co-delivered with genome editing reagents. References Further reading External links John O'Brien presents...Gene Gun Barrels for more information about biolistics Molecular biology Molecular genetics Laboratory techniques Gene delivery 1983 introductions Nanotechnology
Gene gun
[ "Chemistry", "Materials_science", "Engineering", "Biology" ]
1,769
[ "Genetics techniques", "Materials science", "Molecular biology techniques", "Molecular genetics", "nan", "Molecular biology", "Biochemistry", "Nanotechnology", "Gene delivery" ]
962,035
https://en.wikipedia.org/wiki/Seismic%20refraction
Seismic refraction is a geophysical principle governed by Snell's Law of refraction. The seismic refraction method utilizes the refraction of seismic waves by rock or soil layers to characterize the subsurface geologic conditions and geologic structure. Seismic refraction is exploited in engineering geology, geotechnical engineering and exploration geophysics. Seismic refraction traverses (seismic lines) are performed using an array of seismographs or geophones and an energy source. The methods depend on the fact that seismic waves have differing velocities in different types of soil or rock. The waves are refracted when they cross the boundary between different types (or conditions) of soil or rock. The methods enable the general soil types and the approximate depth to strata boundaries, or to bedrock, to be determined. P-wave refraction P-wave refraction evaluates the compression wave generated by the seismic source located at a known distance from the array. The wave is generated by vertically striking a striker plate with a sledgehammer, shooting a seismic shotgun into the ground, or detonating an explosive charge in the ground. Since the compression wave is the fastest of the seismic waves, it is sometimes referred to as the primary wave and is usually more readily identifiable within the seismic recording as compared to the other seismic waves. S-wave refraction S-wave refraction evaluates the shear wave generated by the seismic source located at a known distance from the array. The wave is generated by horizontally striking an object on the ground surface to induce the shear wave. Since the shear wave is the second fastest wave, it is sometimes referred to as the secondary wave. When compared to the compression wave, the shear wave travels at approximately one-half the velocity, though this may vary significantly depending on the medium. Two horizontal layers For a single layer of velocity V0 and thickness h0 over a half-space of velocity V1 (with V1 > V0), the head wave refracted along the interface arrives at time t = x/V1 + T01, where x is the source–receiver offset, the intercept time is T01 = 2h0·cos(ic0)/V0, and the critical angle ic0 satisfies sin(ic0) = V0/V1. Here ic0 is the critical angle, V0 the velocity of the first layer, V1 the velocity of the second layer, h0 the thickness of the first layer, and T01 the intercept. Several horizontal layers The same relations generalize recursively to several horizontal layers: the head wave refracted along the top of layer n arrives at t = x/Vn plus the sum of 2hi·cos(ici)/Vi over the overlying layers i, where sin(ici) = Vi/Vn. Inversion methods The General Reciprocal method The Plus minus method Refraction inversion modeling (refraction tomography) Monte Carlo simulation Genetic algorithms Applications Seismic refraction has been successfully applied to tailings characterisation through P- and S-wave travel time tomographic inversions. See also Reflection seismology References US Army Corps of Engineers EM 1110-1-1802 Central Federal Lands Highway Division Exploration geophysics Geophysics Seismology
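As a worked illustration of the two-layer relations above, the following sketch estimates the first-layer thickness from an intercept time; the velocities and intercept are invented example values, not field data.

```python
import math

# Invented example values: slow soil over faster bedrock.
V0 = 600.0    # first-layer P-wave velocity, m/s
V1 = 2000.0   # second-layer P-wave velocity, m/s
T01 = 0.030   # intercept time read from a travel-time plot, s

# Critical angle from Snell's law: sin(ic0) = V0 / V1.
ic0 = math.asin(V0 / V1)

# Invert the intercept relation T01 = 2 * h0 * cos(ic0) / V0 for h0.
h0 = T01 * V0 / (2.0 * math.cos(ic0))
print(f"critical angle: {math.degrees(ic0):.1f} deg, layer thickness: {h0:.1f} m")
```

With these example numbers the critical angle is about 17.5 degrees and the inferred layer thickness is roughly 9 m, showing how a single intercept reading converts to a depth estimate once both velocities are known.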
Seismic refraction
[ "Physics" ]
492
[ "Applied and interdisciplinary physics", "Geophysics" ]
962,043
https://en.wikipedia.org/wiki/Messier%2035
Messier 35 or M35, also known as NGC 2168 or the Shoe-Buckle Cluster, is a relatively close open cluster of stars in the west of Gemini, at roughly the declination the Sun reaches at the June solstice. It was discovered by Philippe Loys de Chéseaux around 1745 and independently discovered by John Bevis before 1750. It is scattered over a region of sky almost the size of the full Moon. The compact open cluster NGC 2158 lies directly southwest of it. Leonard & Merritt (1989) computed the mass of M35 using a statistical technique based on the proper motion velocities of its stars. The mass within the central region was found to be between 1600 and 3200 solar masses, consistent with the mass of a realistic stellar population within the same radius. Bouy et al. (2015) also estimated the mass of the central region. There are 305 stars that can be shown to be extremely likely members, and up to 4,349 when stars of at least 50% membership probability are counted, based on the kinematic (such as parallax and proper motion) and spectral data published before 2015. The cluster's metallicity has been measured on the [Fe/H] scale, on which −1 would be ten times less metallic than the Sun. Of 418 probable members, Leiner et al. (2015) found 64 with variable radial velocities, and thus binary star systems. Four probable members are chemically peculiar stars, while HD 41995, which lies in the cluster's field of view, shows emission lines. Hu et al. (2005) found 13 variable stars in the field; at least three are suspected cluster members. Membership means a gravitational tie to the cluster or, for a recently freed star, a common origin in the same formation event. See also List of Messier objects References and footnotes External links Messier 35, SEDS Messier pages M35 – Nightskyinfo.com - featured M35 Messier 035 Orion–Cygnus Arm
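For readers curious how a cluster mass can be estimated from internal stellar velocities, the sketch below applies a generic virial-style estimator. This is not Leonard & Merritt's actual method, which weights individual proper-motion velocities; the dispersion, radius, and structure factor eta here are assumed illustrative values.

# Generic virial-style cluster mass estimate, M ~ eta * sigma^2 * r / G.
# NOT the Leonard & Merritt estimator itself; eta, sigma and r below
# are assumed values chosen only to illustrate the scaling.
G = 4.301e-3   # gravitational constant in pc * (km/s)^2 / solar mass

def virial_mass(sigma_kms, r_pc, eta=2.5):
    return eta * sigma_kms ** 2 * r_pc / G

# A ~0.7 km/s dispersion inside a ~3.75 pc radius gives roughly 10^3 Msun,
# the same order of magnitude as the 1600-3200 Msun range quoted above.
print(f"{virial_mass(0.7, 3.75):.0f} solar masses")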
Messier 35
[ "Astronomy" ]
430
[ "Gemini (constellation)", "Constellations" ]
962,058
https://en.wikipedia.org/wiki/Messier%2036
Messier 36 or M36, also known as NGC 1960 or the Pinwheel Cluster, is an open cluster of stars in the northern constellation Auriga. It was discovered by Giovanni Batista Hodierna before 1654, who described it as a nebulous patch. The cluster was independently re-discovered by Guillaume Le Gentil in 1749, then Charles Messier observed it in 1764 and added it to his catalogue. It is about 1,330 pc (4,340 light years) away from Earth. The cluster is very similar to the Pleiades cluster (M45): if the two lay at the same distance, they would be of similar apparent magnitude. Estimates have been published for the cluster's angular diameter, core radius, mass, and linear tidal radius. Based upon photometry, Wu et al. (2009) estimated the age of the cluster as 25.1 Myr, and Bell et al. (2013) published a comparable estimate. The luminosity of the stars that have not yet depleted their lithium implies an age in good agreement with these earlier estimates. M36 includes ten stars with a visual magnitude brighter than 10, and 178 down to magnitude 14. 38 members display an infrared excess, with one being particularly high. There is one candidate B-type variable star, of 9th magnitude. A 2020 study of the variable stars in the cluster estimated a new, closer distance of 3,800 light years from Earth. A young stellar object with an outflow, associated with the infrared source IRAS 05327+3404, was discovered in optical observations of M36. The outflow is nicknamed "Holoea", Hawaiian for "flowing gas". Despite appearing close to M36, it is probably not a part of the cluster; it may instead be a member of the more distant S235 region. The young star driving the outflow was classified as transitional between class I and class II and appears to be surrounded by large amounts of circumstellar material. Map See also List of Messier objects References External links Messier 36, SEDS Messier pages Messier 036 Perseus Arm
Messier 36
[ "Astronomy" ]
452
[ "Auriga", "Constellations" ]
962,075
https://en.wikipedia.org/wiki/Messier%2037
Messier 37 (also known as M37, NGC 2099, or the Salt and Pepper Cluster) is the brightest and richest open cluster in the constellation Auriga. It was discovered by the Italian astronomer Giovanni Battista Hodierna before 1654. M37 was missed by French astronomer Guillaume Le Gentil when he rediscovered M36 and M38 in 1749. French astronomer Charles Messier independently rediscovered M37 in September 1764, but all three of these clusters had been recorded by Hodierna. It is classified as Trumpler type I,1,r or I,2,r. M37 lies in the antipodal direction from the Galactic Center as seen from Earth, toward the nearby outer arms, though it is still close enough to lie within our own arm. Estimates of its age range from 347 million to 550 million years. It has 1,500 times the mass of the Sun and contains over 500 identified stars, with roughly 150 stars brighter than magnitude 12.5. M37 has at least a dozen red giants, and its hottest surviving main sequence star is of stellar classification B9 V. The abundance of elements other than hydrogen and helium, what astronomers term metallicity, is similar to, if not slightly higher than, the abundance in the Sun. As of 2022, it contains only the third known planetary nebula associated with an open cluster. At its estimated distance from Earth, the cluster's angular diameter of 24 arcminutes corresponds to a physical extent of several parsecs. The tidal radius of the cluster is the distance beyond which external gravitational perturbations begin to have a significant influence on the orbits of its member stars. This cluster is following an orbit through the Milky Way with a period of 219.3 Ma and an eccentricity of 0.22, which carries it alternately closer to and farther from the Galactic Center. It periodically reaches a peak distance above the galactic plane and crosses the plane with a period of 31.7 Ma. Sky charts See also List of Messier objects References External links Messier 37, SEDS Messier pages Messier 037 Perseus Arm
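The conversion between an angular diameter and a physical extent quoted above is a small-angle calculation, sketched below in Python. The 1,400 pc distance is an assumed round figure for illustration only, since the article's own distance value is not reproduced here.

import math

def physical_size(diameter_arcmin, distance_pc):
    # Small-angle approximation: linear size = distance * angle in radians.
    theta = math.radians(diameter_arcmin / 60.0)
    size_pc = distance_pc * theta
    return size_pc, size_pc * 3.2616   # (parsecs, light-years)

pc, ly = physical_size(24.0, 1400.0)   # 1,400 pc is an assumed distance
print(f"24 arcmin at 1400 pc is about {pc:.1f} pc ({ly:.0f} ly) across")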
Messier 37
[ "Astronomy" ]
451
[ "Auriga", "Constellations" ]
962,091
https://en.wikipedia.org/wiki/Messier%2038
Messier 38 or M38, also known as NGC 1912 or the Starfish Cluster, is an open cluster of stars in the constellation of Auriga. It was discovered by Giovanni Batista Hodierna before 1654 and independently found by Le Gentil in 1749. The open clusters M36 and M37, also discovered by Hodierna, are often grouped together with M38. It lies about 1,066 pc (roughly 3,480 light years) from Earth. The open cluster NGC 1907 lies nearby on the sky, but the two are most likely just experiencing a fly-by, having originated in different parts of the galaxy. The cluster's brightest stars form a pattern resembling the Greek letter Pi or, according to Webb, an "oblique cross". Walter Scott Houston described its appearance as follows: Photographs usually show a departure from circularity, a feature quite evident to visual observers. Older reports almost always mention a cross shape, which seems more pronounced with small instruments. A view with a 24-inch reflector on a fine Arizona night showed the cluster as irregular, and the host of stars made fruitless any effort to find a geometrical figure. At its distance of 1,066 pc, its angular diameter of about 20 arc minutes corresponds to about 6.2 parsecs (20 light years), similar to that of its more distant neighbor M37. It is of intermediate age, at about 290 million years. From a population of about 100 stars, this open cluster features a prominent yellow giant with apparent magnitude +7.9 and spectral type G0 as its brightest member. This corresponds to an absolute magnitude of about −1.5, or a luminosity of 900 Suns. For comparison, the Sun would appear as a faint magnitude +15.3 star from the distance of M38. Components See also List of Messier objects Messier 36 Messier 37 References External links Messier 38, SEDS Messier pages Open clusters NGC objects Auriga Perseus Arm
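The absolute-magnitude figures above follow from the distance modulus, m − M = 5 log10(d / 10 pc). Below is a minimal Python sketch using the article's 1,066 pc distance and neglecting interstellar extinction, which is why its outputs differ slightly from the quoted −1.5 and +15.3.

import math

def distance_modulus(d_pc):
    return 5.0 * math.log10(d_pc / 10.0)

mu = distance_modulus(1066.0)   # distance quoted in the article
giant_abs = 7.9 - mu            # brightest member, extinction neglected
sun_app = 4.83 + mu             # the Sun (M_V = +4.83) seen from M38
print(f"modulus {mu:.2f}: giant M_V ~ {giant_abs:.1f}, Sun m_V ~ {sun_app:.1f}")
# Prints roughly -2.2 and +15.0; the article's -1.5 and +15.3 plausibly
# include some interstellar extinction, which this bare sketch ignores.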
Messier 38
[ "Astronomy" ]
398
[ "Auriga", "Constellations" ]
962,118
https://en.wikipedia.org/wiki/Messier%2039
Messier 39 or M39, also known as NGC 7092, is an open cluster of stars in the constellation of Cygnus, sometimes referred to as the Pyramid Cluster. It is positioned two degrees south of the star Pi Cygni and around 9° east-northeast of Deneb. The cluster was discovered by Guillaume Le Gentil in 1749, then Charles Messier added it to his catalogue in 1764. When observed in a small telescope at low power the cluster shows around two dozen members, but it is best observed with binoculars. It has a total integrated magnitude (brightness) of 4.6 and spans an angular diameter about the size of the full Moon. It is a relatively nearby cluster, and estimates of its mass and linear tidal radius have been published. Of the 15 brightest components, six form binary star systems; one more is suspected. HD 205117 is a probable eclipsing binary system with a period of 113.2 days that varies by 0.051 in visual magnitude. Both members seem to be subgiants. Within the cluster are at least five chemically peculiar stars and ten suspected short-period variable stars. Map See also List of Messier objects References External links Messier 39, SEDS Messier pages Messier 039 Orion–Cygnus Arm
Messier 39
[ "Astronomy" ]
276
[ "Cygnus (constellation)", "Constellations" ]
962,123
https://en.wikipedia.org/wiki/Superframe
In telecommunications, superframe (SF) is a T1 framing standard. In the 1970s it replaced the original T1/D1 framing scheme of the 1960s, in which the framing bit simply alternated between 0 and 1. Superframe is sometimes called D4 Framing to avoid confusion with single-frequency signaling. It was first supported by the D2 channel bank, but it was first widely deployed with the D4 channel bank. In order to determine where each channel is located in the stream of data being received, each set of 24 channels is aligned in a frame. The frame is 192 bits long (8 × 24), and is terminated with a 193rd bit, the framing bit, which is used to find the end of the frame. In order for the framing bit to be located by receiving equipment, a predictable pattern is sent on this bit. Equipment will search for a bit which has the correct pattern, and will align its framing based on that bit. The pattern sent is 12 bits long, so every group of 12 frames is called a superframe. The pattern used in the 193rd bit is 100011 011100. Each channel sends two bits of call supervision data during each superframe using robbed-bit signaling during frames 6 and 12 of the superframe. More specifically, after the 6th and 12th bits in the superframe pattern, the least significant data bit of each channel (bit 8; T1 data is sent big-endian and uses 1-origin numbering) is replaced by a "channel-associated signalling" bit (bits A and B, respectively). Superframe remained in service in many places through the turn of the century, though it was replaced by the improved extended superframe (ESF) of the 1980s in applications where ESF's additional features were desired. Extended superframe In telecommunications, extended superframe (ESF) is a T1 framing standard. ESF is sometimes called D5 Framing because it was first used in the D5 channel bank, invented in the 1980s. It is preferred to its predecessor, superframe, because it includes a cyclic redundancy check (CRC) and 4000 bit/s of channel capacity for a data link channel (used to pass out-of-band data between equipment). It requires less frequent synchronization than the earlier superframe format, and provides on-line, real-time monitoring of circuit capability and operating condition. Structure An extended superframe is 24 frames long, and the framing bit of each frame is used in the following manner: All odd-numbered frames (1, 3, ..., 23) are used for the data link (totalling 4000 bits per second), Frames 2, 6, 10, 14, 18, and 22 are used to pass the CRC total of the previous extended superframe (all 4632 bits, framing and data), and Frames 4, 8, 12, 16, 20, and 24 are used to send the fixed framing pattern, 001011. The CRC is computed using the polynomial x^6 + x + 1 over all 24 × 193 = 4632 bits (framing and data) of the previous superframe, but with its framing bits forced to 1 for the purpose of CRC computation. The purpose of this small CRC is not to take any immediate action, but to keep statistics on the performance of the link. Like the predecessor superframe, every sixth frame's least-significant data bit can be used for robbed-bit signaling of call supervision state. However, there are four such bits (ABCD) per channel per extended superframe, rather than the two bits (AB) provided per superframe. (Specifically, the robbed bits follow framing bits 6, 12, 18 and 24.) Unlike the superframe, it is possible to avoid robbed-bit signalling and send call supervision over the data link instead. References Multiplexing Telephony signals Synchronization
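A minimal sketch of the ESF check-bit computation described above, done bit-at-a-time in Python. The generator polynomial and the forcing of framing bits to 1 follow the text; the exact bit ordering and the six augmenting zeros are common CRC conventions assumed here, and the governing standard (ANSI T1.403) specifies the real details.

def crc6_esf(bits):
    # CRC-6 for a T1 extended superframe, generator x^6 + x + 1
    # (binary 1000011 = 0x43). `bits` holds one ESF: 24 frames x 193
    # bits, most significant bit first.
    assert len(bits) == 24 * 193
    # Force every framing bit (here the first bit of each 193-bit frame)
    # to 1 before the division, as the text describes.
    msg = [1 if i % 193 == 0 else b for i, b in enumerate(bits)]
    reg = 0
    for b in msg + [0] * 6:       # six augmenting zeros for the 6 check bits
        reg = (reg << 1) | b
        if reg & 0x40:            # when the x^6 term appears, reduce
            reg ^= 0x43           # modulo x^6 + x + 1
    return reg & 0x3F             # check bits carried in frames 2, 6, ..., 22

print(f"CRC-6 of an all-zero payload: {crc6_esf([0] * (24 * 193)):06b}")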
Superframe
[ "Engineering" ]
790
[ "Telecommunications engineering", "Synchronization" ]
962,130
https://en.wikipedia.org/wiki/Winnecke%204
Winnecke 4 (also known as Messier 40 or WNC 4) is an optical double star consisting of two unrelated stars in a northerly zone of the sky, in Ursa Major. The pair were discovered by Charles Messier in 1764 while he was searching for a nebula that had been reported in the area by Johannes Hevelius. Not seeing any nebulae, Messier catalogued this apparent pair instead. The pair were rediscovered by Friedrich August Theodor Winnecke in 1863, and included in the Winnecke Catalogue of Double Stars as number 4. Burnham calls M40 "one of the few real mistakes in the Messier catalog," faulting Messier for including it when all he saw was a double star, not a nebula of any sort. In 1991 the separation between the components was measured at 51.7″, an increase since 1764. Data gathered by astronomers Brian Skiff (2001) and Richard L. Nugent (2002) strongly suggested the subject was merely an optical double star rather than a physically connected (binary) system. The A component, which appears the brighter of the two, is over twice as far away as B. Parallax measurements from the Gaia satellite confirm that the two stars, HD 238107 and HD 238108, lie at very different distances. HD 238108 is itself a genuine binary star, with an 18th magnitude white dwarf companion 5 arcseconds away at a consistent parallax distance. See also List of Messier objects References External links SEDS: Messier Object 40 Messier 40 CCD LRGB image with 2 hrs total exposure Double stars Winnecke 4 Ursa Major Orion–Cygnus Arm G-type main-sequence stars K-type giants Discoveries by Charles Messier
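The Gaia distances mentioned above come from the simple parallax relation d(pc) = 1/p(arcsec). A short Python sketch with hypothetical parallax values, since the measured figures are not reproduced in the text:

def parallax_distance_pc(parallax_mas):
    # d(pc) = 1000 / parallax(milliarcseconds)
    return 1000.0 / parallax_mas

# Hypothetical parallaxes only; a star at 2 mas is twice as far as
# one at 4 mas, the kind of contrast seen between the two components.
for p_mas in (2.0, 4.0):
    print(f"{p_mas} mas -> {parallax_distance_pc(p_mas):.0f} pc")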
Winnecke 4
[ "Astronomy" ]
365
[ "Ursa Major", "Constellations" ]
962,148
https://en.wikipedia.org/wiki/Beehive%20Cluster
The Beehive Cluster (also known as Praesepe (Latin for "manger", "cot" or "crib"), M44, NGC 2632, or Cr 189), is an open cluster in the constellation Cancer. One of the nearest open clusters to Earth, it contains a larger population of stars than other nearby bright open clusters, holding around 1,000 stars. Under dark skies, the Beehive Cluster looks like a small nebulous object to the naked eye, and has been known since ancient times. The classical astronomer Ptolemy described it as a "nebulous mass in the breast of Cancer". It was among the first objects that Galileo studied with his telescope. Its age and proper motion coincide with those of the Hyades, suggesting the two clusters may share similar origins. Both clusters also contain red giants and white dwarfs, which represent later stages of stellar evolution, along with many main sequence stars. The distance to M44 is often cited as between 160 and 187 parsecs (520–610 light years), but the revised Hipparcos parallaxes (2009) for Praesepe members and the latest infrared color-magnitude diagram favor a distance of about 182 pc. The best age estimates are around 600 million years (compared to about 625 million years for the Hyades). The diameter of the bright inner cluster core is about 7.0 parsecs (23 light years). At 1.5° across, the cluster easily fits within the field of view of binoculars or low-powered small telescopes. Regulus, Castor, and Pollux are guide stars. History In 1609, Galileo first telescopically observed the Beehive and was able to resolve it into 40 stars. Charles Messier added it to his famous catalog in 1769 after precisely measuring its position in the sky. Along with the Orion Nebula and the Pleiades cluster, Messier's inclusion of the Beehive has been noted as curious, as most of Messier's objects were much fainter and more easily confused with comets. Another possibility is that Messier simply wanted to have a larger catalog than his scientific rival Lacaille, whose 1755 catalog contained 42 objects, and so he added some well-known bright objects to boost his list. Wilhelm Schur, as director of the Göttingen Observatory, drew a map of the cluster in 1894. Ancient Greeks and Romans saw this object as a manger from which two donkeys, the adjacent stars Asellus Borealis and Asellus Australis, are eating; these are the donkeys that Dionysos and Silenus rode into battle against the Titans. Hipparchus (c.130 BC) refers to the cluster as Nephelion ("Little Cloud") in his star catalog. Claudius Ptolemy's Almagest includes the Beehive Cluster as one of seven "nebulae" (four of which are real), describing it as "The Nebulous Mass in the Breast (of Cancer)". Aratus (c.260–270 BC) calls the cluster Achlus or "Little Mist" in his poem Phainomena. Johann Bayer showed the cluster as a nebulous star on his Uranometria atlas of 1603, and labeled it Epsilon. The letter is now applied specifically to the brightest star of the cluster, Epsilon Cancri, of magnitude 6.29. This perceived nebulous object is in the Ghost (Gui Xiu), the 23rd lunar mansion of ancient Chinese astrology. Ancient Chinese skywatchers saw this as a ghost or demon riding in a carriage and likened its appearance to a "cloud of pollen blown from willow catkins". It was also known by the somewhat less romantic name of Jishi qi (積屍氣, also transliterated Tseih She Ke), the "Exhalation of Piled-up Corpses". It is also known simply as Jishi (積屍), "cumulative corpses". Morphology and composition Like many star clusters, Praesepe has experienced mass segregation.
This means that bright massive stars are concentrated in the cluster's core, while dimmer and less massive stars populate its halo (sometimes called the corona). The cluster's core radius is estimated at 3.5 parsecs (11.4 light years); its half-mass radius is about 3.9 parsecs (12.7 light years); and its tidal radius is about 12 parsecs (39 light years). However, the tidal radius also includes many stars that are merely "passing through" and not bona fide cluster members. Altogether, the cluster contains at least 1000 gravitationally bound stars, for a total mass of about 500–600 Solar masses. A recent survey counts 1010 high-probability members, of which 68% are M dwarfs, 30% are Sun-like stars of spectral classes F, G, and K, and about 2% are bright stars of spectral class A. Also present are five giant stars, four of which have spectral class K0 III and the fifth G0 III. So far, eleven white dwarfs have been identified, representing the final evolutionary phase of the cluster's most massive stars, which originally belonged to spectral type B. Brown dwarfs, however, are rare in this cluster, probably because they have been lost by tidal stripping from the halo. A brown dwarf has been found in the eclipsing binary system AD 3116. The cluster has a visual brightness of magnitude 3.7. Its brightest stars are blue-white and of magnitude 6 to 6.5. 42 Cancri is a confirmed member. Planets In September 2012, two planets orbiting separate stars were discovered in the Beehive Cluster. The discovery was significant because these were the first planets detected orbiting stars like Earth's Sun that were situated in a stellar cluster. Planets had previously been detected in such clusters, but not orbiting stars like the Sun. The planets have been designated Pr0201 b and Pr0211 b, the 'b' at the end of their names indicating that the bodies are planets. Both are what have been termed hot Jupiters: massive gas giants that, unlike the planet Jupiter, orbit very close to their parent stars. The announcement describing the planetary finds, with Sam Quinn as lead author, was published in the Astrophysical Journal Letters. Quinn's team worked with David Latham of the Harvard–Smithsonian Center for Astrophysics, utilizing the Smithsonian Astrophysical Observatory's Fred Lawrence Whipple Observatory. In 2016 additional observations found a second planet in the Pr0211 system, Pr0211 c. This made Pr0211 the first multi-planet system to be discovered in an open cluster. The Kepler space telescope, in its K2 mission, discovered planets around several more stars in the Beehive Cluster. The stars K2-95, K2-100, K2-101, K2-102, K2-103, and K2-104 host a single planet each, and K2-264 has a two-planet system. See also List of Messier objects Cancer (Chinese astronomy) List of open clusters Messier object New General Catalogue Open cluster family Open cluster remnant References External links M44 Photo detail Dark Atmospheres Messier 44, SEDS Messier pages NightSkyInfo.com – M44, the Beehive Cluster Praesepe (M44) at Constellation Guide Cancer (constellation) Orion–Cygnus Arm Open clusters Beehive Cluster NGC objects Astronomical objects known since antiquity Dionysus Silenus
Beehive Cluster
[ "Astronomy" ]
1,558
[ "Cancer (constellation)", "Constellations" ]
962,152
https://en.wikipedia.org/wiki/Messier%2043
Messier 43 or M43, also known as De Mairan's Nebula and NGC 1982, is a star-forming nebula with a prominent H II region in the equatorial constellation of Orion. It was discovered by the French scientist Jean-Jacques d'Ortous de Mairan some time before 1731, then catalogued by Charles Messier in 1769. It is physically part of the Orion Nebula (Messier 42), separated from that main nebula by a dense lane of dust known as the northeast dark lane. It is part of the much larger Orion molecular cloud complex. The main ionizing star in this nebula is HD 37061 (variable star designation NU Ori), at the focus of the H II region. This is a triple star system, the brighter component being a single-lined spectroscopic binary. The main component is a blue-white hued B-type main-sequence star with a stellar classification of B0.5V or B1V. It is far more massive and larger than the Sun, and it radiates over 26,000 times the Sun's luminosity from its photosphere at an effective temperature of 31,000 K. It is spinning rapidly, with a projected rotational velocity of around 200 km/s. The H II region is a roundish volume of ionized hydrogen, much smaller in extent than the main Orion Nebula. The net hydrogen alpha luminosity of the region (that is, omitting the star's contribution) has also been measured. There is a dark lane crossing the whole west-centre strip from north to south, known as the M43 dark lane, which forms a swirling belt extension to the south linking to Orion's northeast dark lane. All of these resemble, at many wavelengths, a mixture of smoke rising from a chimney and, in watercolour, broad and fine dark brushstrokes. Gallery See also List of Messier objects References and footnotes External links Messier 043 Orion–Cygnus Arm
Messier 43
[ "Astronomy" ]
431
[ "Constellations", "Orion (constellation)" ]
962,171
https://en.wikipedia.org/wiki/Stern%E2%80%93Gerlach%20experiment
In quantum physics, the Stern–Gerlach experiment demonstrated that the spatial orientation of angular momentum is quantized. Thus an atomic-scale system was shown to have intrinsically quantum properties. In the original experiment, silver atoms were sent through a spatially-varying magnetic field, which deflected them before they struck a detector screen, such as a glass slide. Particles with non-zero magnetic moment were deflected, owing to the magnetic field gradient, from a straight path. The screen revealed discrete points of accumulation, rather than a continuous distribution, owing to their quantized spin. Historically, this experiment was decisive in convincing physicists of the reality of angular-momentum quantization in all atomic-scale systems. After its conception by Otto Stern in 1921, the experiment was first successfully conducted with Walther Gerlach in early 1922. Description The Stern–Gerlach experiment involves sending silver atoms through an inhomogeneous magnetic field and observing their deflection. Silver atoms were evaporated using an electric furnace in a vacuum. Using thin slits, the atoms were guided into a flat beam and the beam sent through an inhomogeneous magnetic field before colliding with a metallic plate. The laws of classical physics predict that the collection of condensed silver atoms on the plate should form a thin solid line in the same shape as the original beam. However, the inhomogeneous magnetic field caused the beam to split in two separate directions, creating two lines on the metallic plate. The results show that particles possess an intrinsic angular momentum that is closely analogous to the angular momentum of a classically spinning object, but that takes only certain quantized values. Another important result is that only one component of a particle's spin can be measured at one time, meaning that the measurement of the spin along the z-axis destroys information about a particle's spin along the x and y axis. The experiment is normally conducted using electrically neutral particles such as silver atoms. This avoids the large deflection in the path of a charged particle moving through a magnetic field and allows spin-dependent effects to dominate. If the particle is treated as a classical spinning magnetic dipole, it will precess in a magnetic field because of the torque that the magnetic field exerts on the dipole (see torque-induced precession). If it moves through a homogeneous magnetic field, the forces exerted on opposite ends of the dipole cancel each other out and the trajectory of the particle is unaffected. However, if the magnetic field is inhomogeneous then the force on one end of the dipole will be slightly greater than the opposing force on the other end, so that there is a net force which deflects the particle's trajectory. If the particles were classical spinning objects, one would expect the distribution of their spin angular momentum vectors to be random and continuous. Each particle would be deflected by an amount proportional to the dot product of its magnetic moment with the external field gradient, producing some density distribution on the detector screen. Instead, the particles passing through the Stern–Gerlach apparatus are deflected either up or down by a specific amount. This was a measurement of the quantum observable now known as spin angular momentum, which demonstrated possible outcomes of a measurement where the observable has a discrete set of values or point spectrum. 
Although some discrete quantum phenomena, such as atomic spectra, were observed much earlier, the Stern–Gerlach experiment allowed scientists to directly observe separation between discrete quantum states for the first time. Theoretically, quantum angular momentum of any kind has a discrete spectrum, which is sometimes briefly expressed as "angular momentum is quantized". Experiment using particles with +1/2 or −1/2 spin If the experiment is conducted using charged particles like electrons, there will be a Lorentz force that tends to bend the trajectory in a circle. This force can be cancelled by an electric field of appropriate magnitude oriented transverse to the charged particle's path. Electrons are spin-1/2 particles. These have only two possible spin angular momentum values measured along any axis, +ħ/2 or −ħ/2, a purely quantum mechanical phenomenon. Because its value is always the same, it is regarded as an intrinsic property of electrons, and is sometimes known as "intrinsic angular momentum" (to distinguish it from orbital angular momentum, which can vary and depends on the presence of other particles). If one measures the spin along a vertical axis, electrons are described as "spin up" or "spin down", based on the magnetic moment pointing up or down, respectively. To mathematically describe the experiment with spin-1/2 particles, it is easiest to use Dirac's bra–ket notation. As the particles pass through the Stern–Gerlach device, they are deflected either up or down, and observed by the detector which resolves to either spin up or spin down. These outcomes are described by the spin projection quantum number, which can take on one of two possible allowed values, either +1/2 or −1/2. The act of observing (measuring) the momentum along the z axis corresponds to the z-axis angular momentum operator, often denoted Sz. In mathematical terms, the initial state of the particles is |ψ⟩ = a|↑⟩ + b|↓⟩, where the constants a and b are complex numbers. This initial state spin can point in any direction. The squares of the absolute values, |a|² and |b|², are respectively the probabilities for a system in the state |ψ⟩ to be found in |↑⟩ and |↓⟩ after the measurement along the z axis is made. The constants a and b must also be normalized in order that the probability of finding either one of the values be unity; that is, we must ensure that |a|² + |b|² = 1. However, this information is not sufficient to determine the values of a and b, because they are complex numbers. Therefore, the measurement yields only the squared magnitudes of the constants, which are interpreted as probabilities. Sequential experiments If we link multiple Stern–Gerlach apparatuses (the rectangles containing S-G), we can clearly see that they do not act as simple selectors, i.e. filtering out particles with one of the states (pre-existing to the measurement) and blocking the others. Instead they alter the state by observing it (as in light polarization). In the figure below, x and z name the directions of the (inhomogeneous) magnetic field, with the x-z-plane being orthogonal to the particle beam. In the three S-G systems shown below, the cross-hatched squares denote the blocking of a given output, i.e. each of the S-G systems with a blocker allows only particles with one of two states to enter the next S-G apparatus in the sequence. Experiment 1 The top illustration shows that when a second, identical, S-G apparatus is placed at the exit of the first apparatus, only z+ is seen in the output of the second apparatus.
This result is as expected, since all particles at this point have z+ spin: only the z+ beam from the first apparatus entered the second apparatus. Experiment 2 The middle system shows what happens when a different S-G apparatus is placed at the exit of the z+ beam resulting from the first apparatus, the second apparatus measuring the deflection of the beams on the x axis instead of the z axis. The second apparatus produces x+ and x- outputs. Now, classically, we would expect to have one beam with the x characteristic oriented + and the z characteristic oriented +, and another with the x characteristic oriented - and the z characteristic oriented +. Experiment 3 The bottom system contradicts that expectation. The output of the third apparatus, which measures the deflection on the z axis, again shows an output of z- as well as z+. Given that the input to the second S-G apparatus consisted only of z+, it can be inferred that a S-G apparatus must be altering the states of the particles that pass through it. This experiment can be interpreted to exhibit the uncertainty principle: since the angular momentum cannot be measured along two perpendicular directions at the same time, the measurement of the angular momentum in the x direction destroys the previous determination of the angular momentum in the z direction. This is why the third apparatus again measures both z+ and z- beams: the intervening x measurement effectively erased the earlier preparation of a pure z+ output. History The Stern–Gerlach experiment was conceived by Otto Stern in 1921 and performed by him and Walther Gerlach in Frankfurt in 1922. At the time of the experiment, the most prevalent model for describing the atom was the Bohr–Sommerfeld model, which described electrons as going around the positively charged nucleus only in certain discrete atomic orbitals or energy levels. Since the electron was quantized to be only in certain positions in space, the separation into distinct orbits was referred to as space quantization. The Stern–Gerlach experiment was meant to test the Bohr–Sommerfeld hypothesis that the direction of the angular momentum of a silver atom is quantized. The experiment was first performed with an electromagnet that allowed the non-uniform magnetic field to be turned on gradually from a null value. When the field was null, the silver atoms were deposited as a single band on the detecting glass slide. When the field was made stronger, the middle of the band began to widen and eventually to split into two, so that the glass-slide image looked like a lip-print, with an opening in the middle, and closure at either end. In the middle, where the magnetic field was strong enough to split the beam into two, statistically half of the silver atoms had been deflected by the non-uniformity of the field. Note that the experiment was performed several years before George Uhlenbeck and Samuel Goudsmit formulated their hypothesis about the existence of electron spin in 1925. Even though the result of the Stern−Gerlach experiment later turned out to be in agreement with the predictions of quantum mechanics for a spin-1/2 particle, the experimental result was also consistent with the Bohr–Sommerfeld theory. In 1927, T.E. Phipps and J.B. Taylor reproduced the effect using hydrogen atoms in their ground state, thereby eliminating any doubts that may have been caused by the use of silver atoms. However, in 1926 the non-relativistic scalar Schrödinger equation had incorrectly predicted the magnetic moment of hydrogen to be zero in its ground state.
To correct this problem, Wolfgang Pauli considered a spin-1/2 version of the Schrödinger equation using the 3 Pauli matrices which now bear his name, which was later shown by Paul Dirac in 1928 to be a consequence of his relativistic Dirac equation. In the early 1930s, Stern, together with Otto Robert Frisch and Immanuel Estermann, improved the molecular beam apparatus sufficiently to measure the magnetic moment of the proton, a value nearly 2000 times smaller than the electron moment. In 1931, theoretical analysis by Gregory Breit and Isidor Isaac Rabi showed that this apparatus could be used to measure nuclear spin whenever the electronic configuration of the atom was known. The concept was applied by Rabi and Victor W. Cohen in 1934 to determine the spin of sodium atoms. In 1938 Rabi and coworkers inserted an oscillating magnetic field element into their apparatus, inventing nuclear magnetic resonance spectroscopy. By tuning the frequency of the oscillator to the frequency of the nuclear precessions they could selectively tune into each quantum level of the material under study. Rabi was awarded the Nobel Prize in 1944 for this work. Importance The Stern–Gerlach experiment was the first direct evidence of angular-momentum quantization in quantum mechanics, and it strongly influenced later developments in modern physics: In the decade that followed, scientists showed, using similar techniques, that the nuclei of some atoms also have quantized angular momentum. It is the interaction of this nuclear angular momentum with the spin of the electron that is responsible for the hyperfine structure of the spectroscopic lines. Norman F. Ramsey later modified the Rabi apparatus to improve its sensitivity (using the separated oscillatory field method). In the early sixties, Ramsey, H. Mark Goldenberg, and Daniel Kleppner used a Stern–Gerlach system to produce a beam of polarized hydrogen as the source of energy for the hydrogen maser. This led to the development of an extremely stable clock based on a hydrogen maser. Since 1967, the second has been defined on the basis of the 9,192,631,770 Hz hyperfine transition of the cesium-133 atom; the atomic clock used to set this standard is an application of Ramsey's work. The Stern–Gerlach experiment has become a prototype for quantum measurement, demonstrating the observation of a discrete value (eigenvalue) of a physical property, previously assumed to be continuous. Entering the Stern–Gerlach magnet, the direction of the silver atom's magnetic moment is indefinite, but when the atom is registered at the screen, it is observed to be at either one spot or the other, and this outcome cannot be predicted in advance. Because the experiment illustrates the character of quantum measurements, The Feynman Lectures on Physics use idealized Stern–Gerlach apparatuses to explain the basic mathematics of quantum theory. See also Photon polarization Stern–Gerlach Medal German inventors and discoverers References Further reading External links Stern–Gerlach Experiment Java Applet Animation Stern–Gerlach Experiment Flash Model Detailed explanation of the Stern–Gerlach Experiment Animation, applications and research linked to the spin (Université Paris Sud) Wave Mechanics and Stern–Gerlach experiment at MIT OpenCourseWare Quantum measurement Foundational quantum physics Physics experiments Spintronics 1922 in science Articles containing video clips
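The three sequential experiments lend themselves to a compact numerical check. The following Python sketch applies the Born rule with spin-1/2 basis states to reproduce the outcomes described: a repeated z measurement is deterministic, an intervening x measurement gives an even split, and that x measurement erases the earlier z preparation.

import numpy as np

# Spin-1/2 basis states in the z and x measurement bases.
up_z, down_z = np.array([1.0, 0.0]), np.array([0.0, 1.0])
up_x = np.array([1.0, 1.0]) / np.sqrt(2.0)
down_x = np.array([1.0, -1.0]) / np.sqrt(2.0)

def prob(state, outcome):
    # Born rule: probability of the measurement collapsing onto `outcome`.
    return abs(np.vdot(outcome, state)) ** 2

# Experiment 1: prepare z+, measure z again -> certainty.
print(prob(up_z, up_z))                          # 1.0
# Experiment 2: prepare z+, measure x -> an even split.
print(prob(up_z, up_x), prob(up_z, down_x))      # 0.5 0.5
# Experiment 3: prepare z+, select x+ (state collapses to the x+ state),
# then measure z -> z- reappears, as described in the text.
print(prob(up_x, up_z), prob(up_x, down_z))      # 0.5 0.5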
Stern–Gerlach experiment
[ "Physics", "Materials_science" ]
2,814
[ "Physics experiments", "Spintronics", "Foundational quantum physics", "Quantum mechanics", "Quantum measurement", "Experimental physics", "Condensed matter physics" ]